America’s Plan to Dominate the Full AI Stack with Sriram Krishnan

36m
Sriram Krishnan was never interested in policy. But after seeing a gap in AI knowledge at senior levels of government, he decided to lend his expertise to the tech-friendly Trump administration. Senior White House Policy Advisor on AI Sriram Krishnan joins Elad Gil and Sarah Guo to talk about America’s AI Action Plan, a recent executive order that outlines how America can win the AI race and maintain its AI supremacy. Sriram discusses why winning the AI race is important and what that looks like, as well as the core goals of the Action Plan that he helped to author. Together, they explore how AI is the latest iteration of American cultural exportation and soft power, the bottlenecks in upgrading America’s energy infrastructure, and the importance of America owning the “full stack” from GPUs and models to agents and software.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @skrishnan47 | @sriramk

Chapters:

00:00 – Sriram Krishnan Introduction

01:00 – Sriram’s Role in Government

03:43 – Impetus for the America AI Action Plan

06:14 – What Winning the AI Race Looks Like

10:36 – Algorithms and Cultural Bias

12:26 – Main Tenets of the America AI Action Plan

19:13 – Infrastructure and Energy Needs for AI

22:56 – Manufacturing, Supply Chains, and AI

24:52 – Ensuring American Dominance in Robotics

26:30 – Translating Policy to Industry and the Economy

29:30 – Should the US Be a Technocracy?

32:33 – Understanding the Argument Against Open Source Models

36:07 – Conclusion


Transcript

Speaker 2 Hi listeners, welcome back to No Priors.

Speaker 2 Today Elad and I are here with Sriram Krishnan, a top White House official currently serving as the senior White House policy advisor on artificial intelligence.

Speaker 2 A former tech executive and venture capitalist, he's one of the lead authors on the American AI Action Plan released this past week.

Speaker 2 We talk about the national implications of the AI race, what position we hold today, the workforce and energy needs of the future, and how to win.

Speaker 3 Sriram, thank you so much for joining us today on No Priors.

Speaker 1 Thank you for having me. I'm a long-term fan.
Never been invited before. I was always a bit sad, but no, thank you for having me for the very first time.

Speaker 1 And before, I just have to point out for folks who are listening on audio that Elad has never looked as good, as dashing, as handsome as he does now. Elad, you dressed up for me.
I'm honored.

Speaker 3 This is how you can tell that Shiram is in politics now. He has the liquid tongue of gold with which he coaxes everybody into doing his bidding.
So it's very good.

Speaker 3 So, you know, for our audience, Sriram has been a well-known Silicon Valley individual.

Speaker 3 He worked at Andreessen Horowitz, he worked at a number of the sort of marquee companies and names in Silicon Valley over the last decade plus.

Speaker 3 And now he's in government and he's really working on a variety of exciting initiatives around AI and other areas.

Speaker 1 Could you tell us a little bit more about your role?

Speaker 3 And should we be calling you Your Excellency, or is there some special title we should be using now that you're in government?

Speaker 1 You don't have to, but I will take it. But no, thank you.
It's fascinating for me to be here talking to you in this capacity because I've known both of you for forever and ever.

Speaker 1 You know, we've had hundreds of interactions and also been such a fan of the pod. Congratulations.
Just a little bit of backstory. I've been in Silicon Valley for a long time.
I feel very old.

Speaker 1 I did a tour of all the large consumer social media companies. And then I was at Andreessen Horowitz for the last four years, competing actively for Series A term sheets with both of you folks, I'm sure.

Speaker 1 And all this while, I had no real intention of joining government. I wasn't very particularly interested in policy.

Speaker 1 But what wound up happening is, a couple of years ago, I moved to England to head up Andreessen Horowitz's international efforts.

Speaker 1 And at the time, the UK was kind of a hotbed of all the AI policy debates. They had this AI safety summit at Bletchley Park.

Speaker 1 And this was kind of the peak of, I would say, the effective altruist versus e/acc kind of drama that was going on. And I got pulled into a lot of those discussions.

Speaker 1 And I remember thinking to myself, like, wow, a lot of people who are in very senior roles in governments in the United States back then and other parts of the world didn't know what they were talking about when it came to AI.

Speaker 1 I was convinced that they were doing the absolute wrong thing on many topics, for example, open source or helping startups.

Speaker 1 And it was just really, really bad in a way which I think the industry didn't really appreciate until much later.

Speaker 1 And that got me interested in policy, which, by the way, was a word I didn't even understand the meaning of. We can even get into what that means, but it got me interested in policy.

Speaker 1 When President Trump got inaugurated in the first week, he did two things. One, he rescinded the Biden executive order on AI, which was bad and awful in many, many ways, which we can get into.

Speaker 1 And then he signed a new executive order, which basically said that America should dominate and win on AI.

Speaker 1 And then he called upon a few of us to say, you guys need to come up with a plan within six months to figure out how America is going to dominate and win.
And so that put us off to the races.

Speaker 1 And I think everything that has happened since then culminated in the event that we had yesterday, where we put out this long, 28-page document on America's action plan.

Speaker 1 We had a bunch of executive orders. So that's kind of a little bit of the history.
That's great.
That's great.

Speaker 3 And could you tell us a little bit more about the main things you considered as you put together this plan? What are the things that you worry about geopolitically?

Speaker 3 How do you think about AI and competition, big tech versus small tech? Like it feels like there's a lot of threads in that.

Speaker 3 And it'd be great just to get a view of what are the main issues that created this plan, and then it'd be great to talk through the plan itself.

Speaker 1 One of the catalytic moments which happened was the day before I started this job, I get a call, and this was the weekend DeepSeek had come out.

Speaker 1 And there's actually some chatter that I've heard online that China timed it so it can come out right after the president got sworn in.

Speaker 1 And people were like, Hey, we just want you to come in and brief a lot of people at the White House on DeepSeek because we're like, Hey, what is this? Is it cheaper? Is it faster?

Speaker 1 Do they have some magic way of training these models, which only cost a few million bucks and not, you know, hundreds of billions of dollars? What's going on?

Speaker 1 You folks might remember that narrative, which existed that weekend. And so I got to go.
And, you know, me and David helped brief all of the White House leadership.

Speaker 1 But it was really a starting gun because I think that moment was profound because it immediately told us a few things. It told us that America doesn't have a huge lead on AI.

Speaker 1 It actually has a very, very small lead. If you remember, at the time, DeepSeek was the only reasoning model which was not OpenAI's.
I don't think Claude had come out with a reasoning model yet.

Speaker 1 I don't think Google had yet. It was the only non-OpenAI reasoning model.
It was very high up in the leaderboards.
It was very high up in the leaderboards.

Speaker 1 It was a bit unclear as to what their cost claims were and how they had gotten there. I think we know a lot better now.

Speaker 3 Yeah, and all that ended up turning out to be very overstated, right?

Speaker 3 Like basically, there was a claim that it was a few million dollars to train the model, and they didn't really talk about the hundreds of millions of dollars that were probably spent to get to that point.

Speaker 3 It's sort of that the last training run, you know, was what they paid for. Yeah.

Speaker 1 Yes, absolutely. I think I would say there were claims that were inflated and claims that we take seriously.

Speaker 1 The claims that were inflated were, to your point, that they kind of, I would say, put out just the final training run, and not all the ablations and training costs. And if you look at the paper, by the way, I don't think they make the claim that the total cost was two million bucks; I think that's what the press imputed. But I do think they deserve a lot of credit. I always make a point to say, look, DeepSeek did some very, very good technical work. If you think about it, they didn't have as good hardware as the American model companies do. What they did with KV caching, MLA... you know, there are multiple theories on how they actually got chain of thought. Maybe they did it by themselves, maybe they had some help from American companies. But there was some really novel new work there, right?

Speaker 1 And I think it told us that, okay, we don't have a lead that we can take for granted. They were definitely the best open source model.

Speaker 1 So that was... and I think even today, I would probably say the Chinese models, DeepSeek, Qwen, are the best open source models.

Speaker 1 But symbolically, it told us that we are now in a race, in a very close race, by the way.

Speaker 2 What does it mean to win the AI race? Like, why do we need to win that? And what would losing mean? And how do we know we've won?

Speaker 1 Well, I suspect you folks agree. AI might be the most transformational economic cultural force of our lifetime.

Speaker 1 And I believe that whichever country or ecosystem winds up getting ahead is going to have these cyclical effects, right?

Speaker 1 Like you're going to, you know, you're going to power productivity. You're going to have drug discovery.

Speaker 1 You're going to, you know, discover new material sciences, new technologies, which then feed back into your infrastructure, feed back into your economy.

Speaker 1 And you're going to get this flywheel effect where whoever winds up getting ahead could wind up really accelerating ahead in kind of a classic network effect ecosystem way that all of us in Silicon Valley will understand.

Speaker 1 Now, that is purely on the civilian economic context. You can also imagine a military context, right? Think about everything from drones to autonomous weapons.

Speaker 1 I'm pretty sure it's not in our best interest to have another country have that same economy of scale and flywheel and race ahead of us. So that's the race.

Speaker 1 Now, one interesting question that we have been pondering, which we can get into, is how do you actually measure what it means? How are we doing in the race?

Speaker 1 And one measure I've been playing around with, and maybe I'll get your take on, is I think Google just announced this morning that they inference one quadrillion tokens a month or a quarter, I forget which one.

Speaker 1 And one of the measures I've been thinking about is, let's say the world inferences, I don't know, maybe call it 10 quadrillion tokens a month. We don't know what the number is.

Speaker 1 What share of those tokens are being inferenced on American hardware on American models? And how do we maximize that market share? And that's kind of one of the mental models I've been playing with.

Speaker 1 And it's in a way, you can think about it as we are America Inc. We have a product stack starting from GPUs with Nvidia and AMD and a bunch of others.

Speaker 1 We have a model layer with obviously OpenAI and Grok and Gemini and many, many others. We have an application layer.

Speaker 1 You've had many, many, many of them on your podcast from agents to all kinds of software. How do we make sure this American stack is dominating that market share of tokens inference?

Speaker 1 It'll be a good metric.
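The "token market share" metric Sriram floats here amounts to a simple market-share calculation: American-stack tokens divided by total tokens inferenced worldwide. A minimal sketch in Python; all the volumes below are invented for illustration, not real figures.

```python
# Hypothetical sketch of the token-market-share metric described above.
# The stack names and monthly volumes are made-up illustration values.

def token_market_share(tokens_by_stack: dict) -> dict:
    """Return each stack's share of total tokens inferenced."""
    total = sum(tokens_by_stack.values())
    return {stack: tokens / total for stack, tokens in tokens_by_stack.items()}

# Illustrative monthly inference volumes, in quadrillions of tokens.
monthly_tokens = {
    "american_stack": 7.0,  # e.g. American models on American hardware
    "other_stacks": 3.0,
}

shares = token_market_share(monthly_tokens)
print(f"American stack share: {shares['american_stack']:.0%}")  # prints 70%
```

The interesting policy question is then which levers (GPU exports, open models, application adoption) move that ratio over time.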

Speaker 3 Yeah, it's really interesting because one other thing that you didn't mention, I feel, is a cultural exportation through the models.

Speaker 3 And so if you look at prior waves of sort of culture spread, it was the movie industry, it was social media, and then now it's these models because a lot of people go to these models as a

Speaker 3 source of truth for history, for information, for other things.

Speaker 3 And there have been some famous examples in some of the Chinese models where there's an omission of Tiananmen Square, an omission of other facts. And relatedly, there's some things in some of the U.S.

Speaker 3 models that seem very politically slanted or otherwise not quite great. But it's interesting to also think about it from the perspective of broader cultural exports.

Speaker 3 I just wanted to add that to your points on defense and scientific progress and other areas. I think that's another key thing.

Speaker 1 Exactly, something we are actually addressing. And you're absolutely right.
Like, I grew up in India, and a lot of my exposure to Western culture was through the internet and Google.

Speaker 1 And obviously, a lot of the internet was American. And, you know, that kind of introduced me to Americana.

Speaker 1 And imagine if, in 1995, the internet was not built by America, but by one of our adversaries. And so, in a similar context, you're absolutely right.

Speaker 1 When DeepSeek came out, there were all these great examples of lots of stuff in there which doesn't probably align with American values. Now, we are actually addressing this.

Speaker 1 The president signed an executive order yesterday. It's called No Woke AI in the federal government.

Speaker 1 And what it does, and this is probably going to be one of the spicy bits for your audience, is that it basically says that, look, from the day one of the Trump administration, we have tried to fight back against DEI,

Speaker 1 you know, wokeness, critical race theory, whatever you want to call it, in all parts of the federal government, right? And all kinds of propaganda. And what this EO does is actually very simple.

Speaker 1 It says that all models that the federal government will procure, aka your taxpayer dollars will be spent on, has to do two things.

Speaker 1 They have to be truth-seeking and they can't have artificial ideological bias added. If bias is added, you just have to be transparent about where you're getting that bias from.

Speaker 1 It should be very simple for most people. But to your point, you know, if you're seeing that nothing happened in Tiananmen Square in, you know, 1989, that cuts to the heart of that.

Speaker 1 It also cuts to the heart of many, many other things from the culture wars that we have now been trying to fight against.

Speaker 2 Hey, Sriram, you used to work in social media for a long time, right? Like, this sounds a little bit familiar in terms of like, is it a platform? Is it a publisher?

Speaker 2 What is the information consumption that most consumers have? Like, where does that analogy apply or break down?

Speaker 1 It's a good question. I think, in some ways, that's for the industry and the ecosystem to answer a bit.
You're right. I spent a lot of time at Facebook, now Meta, at Twitter.

Speaker 1 One of the things I saw when I was at Twitter was how easily you could inject cultural bias in your algorithms.

Speaker 1 I have so many stories about how, if you pick the right kind of Twitter accounts, which then feed into the trending algorithm, which then feed into Twitter Moments, which every journalist or editor will then wake up to.

Speaker 1 And next thing you know, it's like one of the top news stories of the land. And BuzzFeed will write a piece saying, people on the internet are talking about this.
I saw this over and over again.
I saw this over and over again.

Speaker 1 And it left me with this profound appreciation of how algorithms can shape culture.

Speaker 1 And one of the things I always say is Twitter or X is the memetic battleground upon which we fight a lot of these ideological battles.

Speaker 1 So when it comes to AI, I think it's probably going to be very similar. Like my kids use ChatGPT to answer everything, right? From history to geography to, you know, just kind of silly kids questions.

Speaker 1 And you can easily imagine a world where people inject their own cultural biases into this. And, you know, in the EO we have a few good examples. We have, you know, examples of the Pope being depicted as a black person, or misgendering someone being seen as worse than a thermonuclear explosion. And a lot of it is meant to say that you can easily imagine a world where these systems, which are at the heart of so many things that the government is going to use, that we're all going to use... we don't want them artificially injected with an ideology, at least without being transparent about it.

Speaker 3 What are some of the other main points of the announcement from yesterday?

Speaker 1 So one of the ways David and I try to think about this, with some of the people we work with, is

Speaker 1 it should make sense as a strategy for almost like a technology company. And I hope that, you know, please go read the document.
It's actually pretty readable.

Speaker 1 And, you know, hopefully for those of you who work in the tech industry, it should kind of make sense. And we think if America is going to win the race with China, we need to do three things.

Speaker 1 And they ladder up to the strategy. The first is we need to build infrastructure, right? At the heart of this, if you kind of go back to the scaling laws, what do we need? We need

Speaker 1 computation, we need data.

Speaker 1 And in the United States, it's been really challenging with the grid we have, with kind of this crazy permitting that we have around constructing new data centers to get some of these projects off the ground.

Speaker 1 So the first part of the action plan really dives into what the president calls build, baby, build, kind of playing on drill, baby, drill, which is all about how do we make sure we are building infrastructure?

Speaker 1 Because obviously, you know, some of the other countries are.

Speaker 1 And just as an example, one of the things it talks about is to make permitting on federal land a lot easier for data centers when it comes to old environmental laws or other regulations which get in the way.

Speaker 1 So think of that of like, let's make sure we are building the infrastructure to power these models as we scale up. So that's number one.

Speaker 1 The second pillar is innovation, which I would kind of say as let's make sure all these amazing companies, everyone that you know of, and you know, or maybe some companies which don't exist yet, can build applications and models or anything they want as fast as they can.

Speaker 1 And what we are trying to do is a couple of things I really want to highlight. The first is we want to cut through red tape.

Speaker 1 You know, like, until a year and a half ago, I was in California along with all of you.

Speaker 1 California almost passed SB 1047, which if that had happened, it would have been the end of open source, by the way, in the United States.

Speaker 1 We would not have a Llama, we would not have an o1-mini coming out. And

Speaker 1 a lot of states want to do versions of this. And we think that AI is a national priority.

Speaker 1 And if we're going to compete with China, we need to make sure that these are things we deal with at the national level rather than every single state, especially states which have ideologies that, you know, you and I may not agree with, trying to set their own rules. And by the way, some people may not understand this: if you have a small state set rules, it can often become the de facto law for the country. Because if you're a company, you're like, well, I have to operate in that state, or I have an office there, so let me just do that for everybody, kind of just like the EU does. So we want to cut through red tape and make sure that if there's regulation, it happens at the federal level.

Speaker 1 So that's very, very key, because I think that's going to enable not just the big companies, but every Series A, Series B, acqui-hire company, whatever the kids are doing these days, you know. We make sure that they are off to the races.

Speaker 1 That's number one. The second part is open source.
Now, I think we probably talked about this a bit offline. Open source is one of the big reasons I actually got into the policy world.

Speaker 1 The Biden administration really, really tried to scare people about open source, talk about how unsafe it was. SB 1047 obviously tried to kind of basically ban it in many ways.

Speaker 1 And what the EO does is say, look, open source is a space where the United States needs to win. It actually points to some resources that are going to be made available for research.

Speaker 1 Because I think you and I know, open source is the way for everyone from a kid in their bedroom or their dorm room, to a startup, all the way to somebody who wants a lower cost of inference in their IoT device or a robotics startup.

Speaker 1 That's what they're using.

Speaker 3 For context, too, much of the internet runs on open source software, right? So the server software and other things, much of that is open source. The protocols are all open for the internet.

Speaker 3 That's also true for crypto. And so, you know, it's interesting because removing open source from things like AI actually just centralizes power, right?

Speaker 3 It centralizes power into a small number of companies that could then be controlled by the government.

Speaker 3 And so to some extent, the fact that you all are supportive of open source means you actually are supportive of a thousand flowers blooming, but also a lack of direct government control in literally everything AI.

Speaker 3 So it's a very interesting counter stance to take.

Speaker 1 By the way, Elad has our talking points better than I have, because that is absolutely right.

Speaker 1 One very fundamental difference I think we have with the Biden administration is the Biden team really looked at AI as something to be centralized and controlled.

Speaker 1 Everything was about how do we make sure that we regulate these three or four companies and only three or four companies can build AI. They got to submit their models for testing.

Speaker 1 It was all about control in a centralized fashion. Now, when I moved to DC, one of the things I realized is that's kind of the way DC thinks, which is control and centralized in one place.

Speaker 1 You and I know that's not how Silicon Valley thinks.

Speaker 1 And one of the ways, one of the reasons Silicon Valley is the envy of the world is because anybody, any day, can go get a Y Combinator seed round or raise

Speaker 1 or just go off to the races and they could just build something amazing. And it catches everyone's imagination.

Speaker 1 And I think what we want to do is enable just that rather than say, okay, we want to centralize power, you know, within a 10-mile radius of where I am right now.

Speaker 3 Yeah, in general, too, central planning tends to lead to very bad economic outcomes. Hence the collapse of the Soviet Union, et cetera. And so

Speaker 3 it's something that's been tried many times before in many industries and it tends to lead to a very bad place in terms of innovation, in terms of economics.

Speaker 2 I think one of the things that people underprice about open source models is that they're going to happen, and they're a strategic weapon.

Speaker 2 They're happening and Western companies are using Chinese open source models very broadly already.

Speaker 2 And so like if you believe that like not every model is going to be ideologically like neutral or aligned with American and democratic values, then you probably have a problem.

Speaker 2 And so the ability to, like, support whatever your point of view, like pluralism and openness and innovation, and have some control as an ecosystem rather than in a centralized way, is a very different point of view than, like, you know, we'll let China develop it.

Speaker 2 Yes.

Speaker 1 And I think you make a profound point. And you're already seeing that: when somebody's using DeepSeek or Qwen, that's an expression of soft power.

Speaker 1 And I think I would much rather have, you know, them using a model built by somebody who kind of agrees with us and has our values. That's number one.

Speaker 1 The other issue I would point to is that these models, we don't know what's inside them. Interpretability is still a nascent field.

Speaker 1 And you could very easily see ways where you plug a model into a Cursor or Windsurf and you generate a piece of code.

Speaker 1 And then two years down the road, it turns out that code had a little if statement saying, if I'm running in some piece of critical infrastructure, go do something else.

Speaker 1 And we don't have ways to validate all that. And so there are a lot of reasons why we want to make sure that our American models, or Western models, wind up winning. And this is something I think we're going to put a lot of focus on.

Speaker 2 Just because you have such a good view into this, can we talk a little bit about infrastructure and energy, since you kind of made that point number one? In terms of what sort of stack we mean: people hear these claims from the leaders of the large labs that, you know, we're building a data center the size of Manhattan, or it's the energy that a city uses at any point. Can you contextualize how much capacity we really need to build and sort of what the biggest bottleneck is?

Speaker 2 Like, is it the grid? Is it sources? Is it workforce? Like, you know, when you want to solve this problem, as a systems person, what is the first problem?

Speaker 1 So, the first thing I would say is it is a system. And this system was one that wasn't really battle-tested for decades.
Somebody showed me this number.

Speaker 1 I think the United States basically had like one to two percent of power usage growth for a very, very long period of time.

Speaker 1 And so, you can imagine this whole system of everything from gas turbines, coal, renewable energy. There was a regulation which kind of really stopped nuclear.

Speaker 1 And then you had these state utility companies, which often didn't have the incentive to innovate. You basically kind of ran the state.

Speaker 1 You weren't really, you know, getting new demand or competition. You had a grid, which wasn't really pushed because, again, you didn't need to.
And

Speaker 1 then you have essentially a patchwork of environmental laws, regulation, everything from water to emissions to a whole host of other things, which I'm sure I'm forgetting, right?

Speaker 1 So somebody kind of explained it to me as this tangled, spaghetti mess of things, which, again, until two years ago was just fine, because you and I were not dramatically using more energy than we were 10 years ago.

Speaker 1 Now that obviously changed. The scaling laws arrived and everybody is trying to build new things.

Speaker 1 And I think the way we are trying to attack it is at every single step of the way, which is one is how do we make generation better?

Speaker 1 Second is how do we make constructing these data centers easier, getting through these regulations and getting this red tape out of the way, making sure we put focus on the right energy sources, and making sure, like, you know, we have those lined up.

Speaker 1 So, we are trying to take an approach to all of this, but it is a complicated problem, just because there are so many different players, so many different states, and so many different patchworks of laws and regulations involved.

Speaker 1 But, you know, I encourage folks to look at the executive order the president signed yesterday on infrastructure, which I think goes directly at this.

Speaker 1 We also have something called the National Energy Dominance Council, which works very closely with Secretary Burgum and Secretary Wright, you know, for Interior and Energy.

Speaker 1 And I think you're going to see a lot more from us on that front. The short answer, Sarah, is it's complicated.

Speaker 1 I think we're taking a very, very strong approach to this, but there's going to be more to come.

Speaker 3 How do you think energy infrastructure is going to feed into these big data center buildouts?

Speaker 3 And so, you know, one theory I heard is that fiber is cheap and easy to lay, while the grid, building out the electrical grid, is hard.

Speaker 3 And so therefore, you're going to centralize data centers and your sources of cheap power, and then you just run, you know, fiber into them versus moving things around based on, you know, other types of capacity from a telecommunications or other perspective.

Speaker 3 Are there specific sources of energy that you think are going to power this AI revolution? Are there things we need to reinvest in?

Speaker 3 Obviously, the president has issued some executive orders around nuclear.

Speaker 3 I'm just sort of curious how you think about what that future really will be and what are the major sources of energy that we really need to be dependent on?

Speaker 3 And how does that all shape up from an infra perspective?

Speaker 1 What I think we see our role as is, like, get rid of the red tape. Let's make sure the permitting on these things is super easy.

Speaker 1 Nuclear, that's another case where I think for decades and decades, the climate lobby and the doomers kind of stopped any real efforts over there.

Speaker 1 So I think we're seeing a lot of effort to get the red tape out of the way. Let's get construction going and see where we get.

Speaker 3 The other thing that I think is interesting from an infrastructure perspective is manufacturing capability and supply chain. And a subset of AI supply chain is dependent on China or other countries.

Speaker 3 Are there certain areas of supply chain that we should be repatriating back? Or how should we be thinking about more generally American manufacturing?

Speaker 1 I say that America needs not just engineers, but it needs people up and down the stack. It needs electricians, technicians.
We need to get construction going.

Speaker 1 And we need to get these jobs and this whole ecosystem

Speaker 1 back in the U.S. So if you look at the action plan, there's a bunch of stuff in there about this.

Speaker 1 I think I mentioned two parts of the action plan, which were building, and then innovation: cutting out red tape and open source.

Speaker 1 And the president also talked a little bit about copyright yesterday.

Speaker 1 The third piece of the action plan, which I also think is a pretty dramatic switch away from how the Biden folk thought about it, is around making sure the world uses our standards and our technology.

Speaker 1 So just for context, and again, this is something unless you are a policy wonk, you may not be super familiar with.

Speaker 1 Under the Biden era, there was something called the Biden diffusion rule, which was a 200-page document that basically made it illegal for America to export GPUs.

Speaker 1 It was really hard for, you know, Jensen or Lisa Su to really get their GPUs out to other countries, even some of our allies who are really enthusiastic about AI and want to help us out, but we were not actually giving them GPUs.

Speaker 1 So we rescinded that order. And one of the things we talk about is how do we make sure that we get all of our allies around the world using the American stack?

Speaker 1 So that means, and we just did this in the Gulf with the American AI acceleration partnerships, how do we make sure we are getting our GPUs over?

Speaker 1 And one of the other benefits of doing that is if we get our GPUs over, we probably get them to run our models as opposed to models from another country, and we go from there.

Speaker 1 So having this sense of an American stack that we can export and the world standardizes on, that's, I would say, is a third part of the action plan.

Speaker 3 One other topic that I think people believe China has a lead in right now is certain areas of robotics, that could be humanoid form or other.
It's drones.

Speaker 3 It's potentially catching up on self-driving and autonomy. So, if you think about that from a societal perspective, it's obviously automotive.

Speaker 3 So, if you look at European market share of cars, BYD and others are really taking enormous amounts of share.

Speaker 3 And if you look at these, these are the same technologies that would also be used from a defense perspective.

Speaker 3 And so, to some extent, one could argue that there's two parts of AI: there's the digital form side of it, and then there's the real world robotics and drones and interactive side.

Speaker 3 How do you think about that in the context of American policy? And what in the action plan addresses the capability to build these physical-world products?

Speaker 1 The action plan actually has a section in it on making sure we are set up for robotics. I think that's obviously going to get super key within the next 18 to 24 months.

Speaker 1 I would say it ladders from everything else that we talked about, both in the U.S. and internationally.
The first is making sure that our model companies can actually build as fast as they can.

Speaker 1 Our startups can go innovate as fast as they can. The second piece I would say is we want to make sure that the world is using our robotics companies and our models and not, say, DeepSeek or Qwen.

Speaker 1 And that's actually one of the things, because when I was talking to a bunch of robotics startups, you're seeing a lot of digital DeepSeek, a lot of digital Qwen out there.

Speaker 1 And what we want to do is to make sure that we have an open source response, an American response, which pushes our products as a standard out there. But, you know, it is a focus.

Speaker 1 I think it's going to increasingly come into focus in the next, say, six months to 12 months. And we're spending a lot of time on it.

Speaker 3 Related to that, there's always a question of how do things actually get done in politics and how does it translate into the real world.

Speaker 3 And I think, you know, you've gotten something like 90 different agency actions listed in this action plan.

Speaker 3 And how do you think about these things actually translating into industry, the economy, action by companies and other players?

Speaker 3 Like what are the mechanisms that you all have to sort of ensure that these things come together or happen? And if they don't come together, what's plan B?

Speaker 1 Well, there is no plan B. We want to get this done.

Speaker 1 And I think one of the things about the Trump administration, you will see, is that the administration moves really, really fast, which is why in the first week we had a bunch of executive orders. Look, we're already at work on all of it. We had three executive orders signed yesterday: one for infrastructure, one for exports, which ties to a lot of the things we talked about, and one to stop ideology, wokeness, and DEI. And I think you're going to see a lot more. We are already at work on pretty much all of it. There is no plan B. We're going to go get this done. And the other part I would say from yesterday is I've been inundated with just great response from the industry.

Speaker 1 A lot of folks that you and I know who are just really excited to see the government actually maybe understand AI and is actually happy to

Speaker 1 make sure that American companies can go build American AI. So I think they're also very excited to go partner with us.
So it's go, go, go, no time to waste. We're getting it done.
There is no plan B.

Speaker 3 It's actually exciting because I think to your point on understanding AI and government, when I've looked at prior administrations, be they Republican or Democrat, a lot of the people who went into them from tech weren't the core driving forces of tech in the technology world.

Speaker 3 In other words, they get great people, very nice people, but it wasn't the top of the industry. It wasn't necessarily the deepest technical experts in some cases.

Speaker 3 Obviously, there's counterexamples to that.

Speaker 3 And so I think one thing that's striking about this administration is the caliber of tech people that they actually got this time around is very high relative to prior administrations.

Speaker 3 So I think that impacts on the understanding. It impacts how you all are thinking about the world.

Speaker 3 So I found that very exciting and inspiring in terms of just having a really strong technical basis for what you all are doing. So I think that's really good.

Speaker 1 Thank you.

Speaker 1 There's a lot of great people in the administration and the tech industry, not just in AI, but for example, you have Emil Michael, you know, as the undersecretary for R ⁇ E, who's running DARPA in the Pentagon.

Speaker 1 You have many, many others. You know, one of the things I think about is we just bring an understanding of how the tech industry works, what is possible, what isn't.
We bring a sense of urgency.

Speaker 1 We also just really deeply understand the technology.

Speaker 1 Like you'd be shocked at how often I've seen David Sacks in a meeting explain how inferencing works, what high-bandwidth memory is, how the world has shifted from a, you know, pre-training context to a post-training context.

Speaker 1 And so we can just really mix it up on the technical details. And we obviously also have a lot of strong social ties to the industry.
So we can call upon them to help us out.

Speaker 1 So I think it just adds a very different flavor of understanding of AI.

Speaker 1 Where again, to go to my other point, I think DC kind of just suffered from a lack of real technical understanding of both the industry and of the products involved.

Speaker 2 I'm exposing my cards a little bit here, but it sounds from both your policies and what you're saying, Sriram, that you're on the same page. Do you think that the U.S. should be, like, a technocracy? Just that simple statement.

Speaker 1 What does a technocracy mean?

Speaker 2 Leading with technology and then having a bunch of people in technology leading the country.

Speaker 1 I'm not sure I would think of it that way.

Speaker 1 The way I see it is America has been blessed to have the leading technology ecosystem of the world. and that is an ecosystem which is in an intense competition right now.

Speaker 1 And I think we could have easily lost that competition, and it's still a very, very close race. And we need to do everything we can to protect, preserve, and extend our lead.

Speaker 1 But at the end of the day, if you look at this administration, we are still trying to make sure that we serve the American worker, the American workforce.

Speaker 1 If you look at the action plan, that is at the heart of everything we do. So I don't think I see it maybe exactly the way you describe it.
I see it more as

Speaker 1 we have something in a technology ecosystem that is the envy of the world.

Speaker 1 The president, by the way, when he was on stage yesterday, talked about a lot of the inventions that the United States had made, right? Like we did the integrated circuit.

Speaker 1 We had Shockley inventing the transistor. We had the Fairchildren all the way through.
The internet came from us. We did PageRank and Google.
We did the iPhone in Cupertino.

Speaker 1 So many of these things that the world winds up using. So what do we do to make sure we preserve that lead, especially when it comes to AI?

Speaker 1 And if you look at AI, look, there are so many potential timelines that AI could take. I have read AI 2027 from Daniel Kokotajlo.

Speaker 1 I have read much more optimistic takes on AI. I think there's going to be an event horizon beyond which you and I can have reasonable discussions on how AI could play out.

Speaker 1 But in any one of those scenarios, I want to make sure that the United States is well positioned where we can take advantage of the productivity and the science and the technology breakthroughs that's going to happen and then be set up for whatever happens next.

Speaker 1 So I'm not sure I really answered the question the way you phrased it.

Speaker 2 No, no, no, you did.

Speaker 2 I was trying to ask the question in a bit of a triggering way, because I think a lot of people would say, like, it shouldn't just be driven by the technologists.

Speaker 2 And it's like, what good does that, you know, do us in sort of winning the AI race?

Speaker 2 But I think that's actually a really profound claim that you made, which I hear as like the country that builds the most capable AI systems, they gain a lot of upstream control and influence that has been traditionally very American, right?

Speaker 2 And we should all care about that.

Speaker 2 Like you, you know, use the examples of accelerating life sciences, new materials, optimizing industry, being more efficient in healthcare and education and things that matter to every American and like compounding national wealth.

Speaker 2 And so I actually think that like sometimes a lot of this discussion becomes like, you know, an argument about like what parties have influence versus like what position do we want to have as a country, right?

Speaker 2 And whether or not we want that edge. That's right.

Speaker 1 And I think very simply we want to build.

Speaker 2 Yeah. So I have two questions for you before we run out of time.
One is just going back to this idea of you being the strong proponent of open source and open weights.

Speaker 2 What is the strongest counterargument to the people who would raise the concern that p(doom), the probability of some sort of key-man risk or, you know, some form of abuse of these powerful models, increases with open source models?

Speaker 1 If you look at the action plan, it's kind of a manifestation of how we think about things, right? Like we do talk a lot about risk.

Speaker 1 We talk a lot about having systems in place to identify cyber risk, bio-risk, et cetera.

Speaker 1 I think the difference from the Biden administration, or the folks who talk about p(doom) a lot on LessWrong, is that we are just inherently more optimistic.

Speaker 1 If folks haven't seen it, I encourage them to watch the vice president's speech in Paris where he talked about, look, we want to embrace AI with optimism rather than fear.

Speaker 1 And I think one of the things which happened is that there was such a lot of fear, I would say, mistakenly placed on open source. I think there are two kinds of fears people talked about.

Speaker 1 One was what you talked about, which is, hey, what are the risks if these models could do really, really bad things? The second was, hey, are we actually giving away our secrets to China?

Speaker 1 And what DeepSeek showed us is that China is actually building these models just perfectly fine, just by themselves. And it was actually American models which were far behind.

Speaker 1 So, you know, immediately, I think that argument got refuted. On the p(doom) question, I think that's a perfectly fair question.
And I think we need to be vigilant about it.
And I think we need to be vigilant about it.

Speaker 1 And the action plan talks about it.

Speaker 1 But we have to remember we are in a race with China and there are going to be catastrophic consequences if Chinese models are running on every robot, every camera, every car, every device around the world.

Speaker 1 And we just got to face that reality.

Speaker 3 I think also the people who are driving the p(doom) arguments to some extent are coming from one or two large companies that have closed source models.

Speaker 3 And so I think we also forget the incentives of who's actually pushing for this. It's a very traditional form of regulatory capture.

Speaker 3 If a big pharma company works with the government to prevent other entrants into the industry, this is exactly what it felt like was at least partially happening in the AI world.

Speaker 3 Now, that may be for

Speaker 3 perceived altruistic causes or other things or worried about humanity.

Speaker 3 But I do think the reality is a small number of companies have been kind of pushing this narrative pretty strongly that open source is bad.

Speaker 3 And these are companies that control the closed source models.

Speaker 1 Oh, absolutely. I think there are a few things going on.
One is that, you know, people are pushing for regulatory capture.

Speaker 1 Second is obviously, you know, the schools of thought from effective altruism and a lot of people kind of worried about this, all kind of mixed together. Here's my rebuttal to that.

Speaker 1 Like, I think one of the things that open source software has shown us on the internet is that by default, open source is just safer and more secure. What does Linus's law say?

Speaker 1 Given enough eyeballs, all bugs are shallow. And over the last 20 years, what has the security industry learned?

Speaker 1 The more scrutiny you put your libraries under, the more scrutiny you put your browser rendering engines under, the safer they become. And we have seen that time and time again.

Speaker 1 And I think the same holds true for open source and open weights. If you have a model up on Hugging Face and somebody downloads a 500-gigabyte file, and thousands of students and researchers are just pounding away on it, I think there's a good chance they're going to find issues a lot faster than a very small safety team inside a large lab.

Speaker 1 So I'm a big fan of open source sometimes being a lot more secure than closed source as well.

Speaker 2 Awesome. Thanks so much, Sriram.

Speaker 3 Thank you, Your Excellency. Your governorship, your grace.
I'm not sure again what the right title is. Your policy advisorship.

Speaker 1 Feel free, Elad.

Speaker 1 The more inflated it helps my ego. So thank you.
It was your excellency.

Speaker 2 That's what we started with.

Speaker 3 We really appreciate the time today, Your Sriramship. So thank you for joining.

Speaker 1 Thank you so much.

Speaker 1 Such an honor, you know, and I love the work you folks do. And thanks for having me.

Speaker 2 Find us on Twitter at no priors pod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.

Speaker 2 That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.