#208 Alexandr Wang - CEO, Scale AI

3h 22m
Alex Wang is the CEO and co-founder of Scale AI, a leading data platform accelerating the development of artificial intelligence applications. Founded in 2016, Scale AI provides high-quality training data for AI models, serving clients like OpenAI, Microsoft, and the U.S. Department of Defense. A former software engineering prodigy, Wang dropped out of MIT to build Scale AI, which is now valued at over $13 billion. Recognized on Forbes’ 30 Under 30 and TIME’s 100 Most Influential People in AI, Wang is a prominent voice in shaping the future of AI innovation and deployment. He advocates for responsible AI development and policies to ensure ethical and secure AI advancements.

Shawn Ryan Show Sponsors:

https://www.roka.com - USE CODE SRS

https://www.americanfinancing.net/srs

NMLS 182334, nmlsconsumeraccess.org

https://www.tryarmra.com/srs

https://www.betterhelp.com/srs

This episode is sponsored by BetterHelp. Give online therapy a try at betterhelp.com/srs and get on your way to being your best self.

https://www.shawnlikesgold.com

https://www.lumen.me/srs

https://www.patriotmobile.com/srs

https://www.rocketmoney.com/srs

https://www.shopify.com/srs

https://trueclassic.com/srs

Upgrade your wardrobe and save on @trueclassic at trueclassic.com/srs! #trueclassicpod

Alex Wang Links:

Website - https://scale.com

Scale AI X - https://x.com/scale_ai

Alex X - https://x.com/alexandr_wang

LI - https://www.linkedin.com/company/scaleai
Learn more about your ad choices. Visit podcastchoices.com/adchoices


Transcript

Speaker 1 This is Marshawn "Beast Mode" Lynch. PrizePicks is making sports season even more fun.
On PrizePicks, whether you're a football fan or a basketball fan, it always feels good to be right.

Speaker 1 And right now, new users get $50 instantly in lineups when you play your first $5. The app is simple to use.
Pick two or more players, pick more or less on their stat projections.

Speaker 1 Anything from touchdowns to threes, and if you're right, you can win big. Mix and match players from any sport on PrizePicks, America's number one daily fantasy sports app.

Speaker 1 PrizePicks is available in 40-plus states, including California, Texas, Florida, and Georgia. Most importantly, all the transactions on the app are fast, safe, and secure.

Speaker 3 Download the PrizePicks app today and use code Spotify to get $50 in lineups after you play your first $5 lineup. That's code Spotify to get $50 in lineups after you play your first $5 lineup.

Speaker 3 PrizePicks, it's good to be right. Must be present in certain states.
Visit PrizePicks.com for restrictions and details.

Speaker 4 This episode is brought to you by Progressive Insurance. Fiscally responsible, financial geniuses, monetary magicians.

Speaker 4 These are things people say about drivers who switch their car insurance to Progressive and save hundreds. Visit progressive.com to see if you could save.

Speaker 4 Progressive Casualty Insurance Company and affiliates. Potential savings will vary, not available in all states or situations.

Speaker 5 Alex Wang, welcome to the show, man.

Speaker 6 Yeah, thanks for having me. I'm excited.

Speaker 5 So am I.

Speaker 5 Like I was telling you at breakfast, I don't know a whole lot about tech, but ever since Joe came on, I've been trying to wrap my head around it all, and it's just a fascinating subject.

Speaker 5 I love talking about this subject now. So thank you for coming.

Speaker 6 Well, it's becoming so critical to national security and all the stuff that you're very passionate about. So, I mean,

Speaker 6 I think fundamentally tech is like, we got to get it right. Otherwise, stuff gets really dangerous.

Speaker 5 Yeah. Yeah.
Scares the shit out of me. In fact, we were just having a conversation downstairs about

Speaker 5 you having kids and you were waiting

Speaker 5 and Neuralink came up, and I had to pause the conversation. Dude, I'm like,

Speaker 5 I'm worried about Neuralink, but it sounds like you're pretty gung-ho about it.

Speaker 6 So,

Speaker 6 yeah, a few things. What I mentioned is basically that I want to wait to have kids until Neuralink or other brain-computer interfaces,

Speaker 6 other ways for brains to interlink with

Speaker 6 a computer, start working.

Speaker 6 So there are a few reasons for this. First, in your first seven years of life, your brain is more neuroplastic than at any other point in your life, by an order of magnitude.

Speaker 6 So there have been examples where, for example, you have a newborn that has, let's say, cataracts in their eyes, so they can't see through

Speaker 6 the cataracts. And then they live their first

Speaker 6 seven years of their life with those cataracts. And then if you have them removed when they're like eight or nine, even with those removed, they're not going to learn how to see,

Speaker 6 because

Speaker 6 it's so important in those first seven years of your development that you're able to see, so that your brain can learn how to read the signals coming off of your eyes.

Speaker 6 And if you don't have that until you're like eight or nine, then you won't learn how to see.

Speaker 6 So because your neuroplasticity is so high in that early stage of life, I think when we get Neuralink and these other technologies, kids who are born with them are going to learn how to use them in crazy, crazy ways.

Speaker 6 Like it'll be actually like a part of their brain in a way that it'll never be true for an adult who gets like a Neuralink or whatever

Speaker 6 hooked into their brain.

Speaker 6 So that's why I want to wait. Now, Neuralink as a concept, or

Speaker 6 hooking your brain up to a computer:

Speaker 6 I kind of take a pragmatic view on this, which is, you know, my day job is working on AI. I believe a lot in AI.

Speaker 6 I think AI is going to continue becoming smarter and smarter, more and more capable, more and more powerful.

Speaker 6 AI is going to continue being able to do more and more. We're going to have robots.
We're going to have other forms for that AI to take over time.

Speaker 6 And humans, we're only evolving at a certain rate. Humans will get smarter over time.

Speaker 6 It's just on the time scale of like millions of years because natural selection and evolution is really slow.

Speaker 5 I don't know. Are we getting smarter?

Speaker 6 I don't know about recently, but

Speaker 5 a little setback.

Speaker 6 Yeah, a little blip.

Speaker 6 So if you play this forward, right, like you're going to have AIs that are going to continue getting smarter, continue improving. Like they're going to keep improving really quickly.

Speaker 6 And, you know, biology is going to improve only so fast. And so

Speaker 6 what we need at some point is the ability to... tap into AI ourselves.
Like we're going to need to bring biological life alongside all of the silicon-based or artificial intelligence.

Speaker 6 And we're going to want to be able to tap into that for

Speaker 6 our own sake, for humanity's sake. And so eventually, I think we're going to need some

Speaker 6 interlink or hookup between our brains directly to AI and the internet and all these things.

Speaker 6 And

Speaker 6 it is potentially dangerous and it's potentially, you know, to your point, terrifying and scary, but we just are going to have to do it. Like AI is going to go like this.

Speaker 6 Humans are gonna improve at a much slower rate and we're gonna need to hook into that capability.

Speaker 5 I mean,

Speaker 5 you know that I've already expressed fear about this, and so, without sharing my own fears, I'm just curious: in your mind, what could go wrong?

Speaker 6 I mean, the obvious thing is that some corporation hacks your brain, which even that's pretty bad. They'll send ads directly to your brain, or they'll make it so you want to buy their products or whatnot.

Speaker 6 But then, even worse, obviously, a foreign actor, a terrorist, an adversary, a state actor hacks into your brain and

Speaker 6 takes your memories, or manipulates you, or all these things. I mean, that's obviously pretty bad.

Speaker 6 It's definitely a huge risk. I mean, for sure, if you have a direct link into someone's brain

Speaker 6 and you have the ability to like read their memories, control their thoughts, read their thoughts, like,

Speaker 6 you know, that's pretty bad.

Speaker 6 I've talked to a lot of scientists in this space and a lot of people working on this stuff, including the folks at Neuralink.

Speaker 6 And,

Speaker 6 you know,

Speaker 6 Mind reading and mind control,

Speaker 6 that is where the technology will go over time, right? And so, like any advanced technology, we have to not fuck that up.

Speaker 6 But it's going to be pretty critical if we want humans to remain relevant as AI keeps getting better.

Speaker 5 I mean, I interviewed

Speaker 5 Andrew Huberman. Do you know who that is?

Speaker 6 Yeah, yeah, yeah.

Speaker 5 And, um,

Speaker 5 and talked to Dr. Ben Carson about it too, as kind of a follow-on discussion.
But what Huberman was telling me is that,

Speaker 5 because this whole thing is, it sounds like it's,

Speaker 5 I don't know a whole lot about Neuralink, but from what I've gathered, it's going to help the blind see.

Speaker 5 And it sounds like it helps with some

Speaker 5 connectivity in your joints and bones and stuff for people that are paralyzed. But something that Huberman brought up is that I was like, well, if

Speaker 5 it is going to help the blind see,

Speaker 5 then could they project a total false reality into your head? Meaning you're seeing who knows what, shit in the skies, everywhere. Sounds like they could recreate an entire false reality.

Speaker 5 He said, yes, they will have that ability, but not only will they have that ability, they can manipulate every one of your senses, touch, smell, taste.

Speaker 5 insert emotions into your brain, fear,

Speaker 5 whatever it is. And I was like, holy shit, they could manipulate your entire reality into a false reality.
And then I asked Dr.

Speaker 5 Ben Carson about it, who's a world-renowned neurosurgeon, and he said, yes, absolutely.

Speaker 5 He goes, or, you know, they could use it for good. But then

Speaker 5 he kind of put it on me. He's like, well, what do you think would happen? Would it be used for good eventually or would it be used for evil?

Speaker 5 I mean,

Speaker 5 what are your thoughts on that? Do you think that's a real possibility?

Speaker 6 I mean, yeah. So first of all, like

Speaker 6 we don't understand the brain that well today, but eventually we will. Science is going to solve this problem, right?

Speaker 6 And everything you just mentioned is ultimately going to be on the table, you know?

Speaker 6 Manipulating your emotions, manipulating your senses. The senses thing is already happening, where I think in monkeys, they've shown that

Speaker 6 you know, they don't know what it's like from the monkey's perspective, but they're able to project

Speaker 6 onto a grid for a monkey and get it to click on the right button really reliably.

Speaker 5 Wow.

Speaker 6 So somehow they hook into the neural circuits that are doing the visual image processing in the brain. And they're able to project

Speaker 6 things

Speaker 6 into their vision such that the monkey will always click the button that you want it to click.

Speaker 6 And then, you know, you give it a treat or something.

Speaker 5 Damn.

Speaker 6 And so, yeah, manipulating vision, manipulating your senses, manipulating your emotions.

Speaker 6 This will be longer term, but leveraging your memories, manipulating your memories, that stuff is on the table. The other stuff that I think is more exciting is being able to hook into AI, and all of a sudden I have encyclopedic knowledge about everything. And just like ChatGPT or other AI systems do, I can think at superhuman speeds.

Speaker 6 All of a sudden,

Speaker 6 there's way more information I can process. I can understand everything that's going on in the world and process it instantaneously.
I think there's an element here where it'll

Speaker 6 legitimately turn us superhuman from a purely cognitive standpoint. But then, to your point, the flip side of that is

Speaker 6 the risk the other way, which is that

Speaker 6 it's a huge attack vector.

Speaker 5 Yeah. I mean,

Speaker 5 like I said, I'm not super tech, but your company, Scale AI, correct me if I'm wrong,

Speaker 5 Scale AI is basically the database that the AI uses to come up with its answers and answer your prompts and all of that, correct?

Speaker 6 Yeah, so

Speaker 6 we do a few things. So we help large companies and governments deploy safe and secure advanced AI systems.

Speaker 6 We help with basically every step of the process, but the first thing that we were known for and we've done very well is exactly what you're saying, which is creating...

Speaker 6 large-scale data sets. A data foundry is what we call it, but it's the large-scale data production that goes into fueling every single one of the major AI models.

Speaker 6 And if you ask questions in ChatGPT,

Speaker 6 it's able to answer a lot of those questions well because of data that we're able to provide it. And as AI gets more and more advanced,

Speaker 6 we're continually fueling more advanced scientific, advanced information and data into those models. And then we also work with the largest

Speaker 6 enterprises and governments like the DOD and other agencies in the US to deploy and build full AI systems, leveraging their own data.

Speaker 6 And our strategy as a company has been, you know,

Speaker 6 how do we focus on a small number of customers where we can have a really big impact?

Speaker 6 So we work with the number one bank, we work with the number one pharma company, the number one healthcare system, the number one telco, the number one country, America.

Speaker 6 And

Speaker 6 we work with all of them on: how can you, no kidding, take

Speaker 6 how you are operating today, the workflows and the operations that you have today, and use AI to fundamentally transform them.

Speaker 6 So if you're the largest healthcare system in the world and you have to provide care to millions of patients, how do you do so in the most effective manner?

Speaker 6 How do you do it logistically better? How do you improve your diagnoses? How do you improve the overall health outcomes of all of your patients? Like, that's a problem that we help solve with them.

Speaker 6 Or for the DOD, you know, there's so much that we can do to operate more efficiently and

Speaker 6 ultimately in a more automated way. I mean, you'll know this, I think, better than anyone.
And so,

Speaker 6 how do you start implementing those systems with AI?

Speaker 5 We'll dive way more into the weeds of that later in the interview. Kind of where I was going with this was:

Speaker 5 so if originally it was

Speaker 5 feeding the AI,

Speaker 5 you've got the data center, you're giving the data to the AI to

Speaker 5 come up with the answers and

Speaker 5 answer the prompts. And so where I was going is

Speaker 5 if you have Neuralink in your head and it's accessing your data centers, how easy would it be to just feed bullshit into the data center that then feeds everybody that has a Neuralink in their head?

Speaker 5 So it could be, I mean, it could be anything. I mean, here's an example.
I'm a Christian. A lot of people think that AI is going to manipulate the Bible and change a lot of things.

Speaker 5 And so how easy would it be to just feed that into the AI data center? And then

Speaker 5 that's the new, whatever you feed it, that becomes the new truth because that's what everybody's accessing is that specific data.

Speaker 6 Yeah, I mean, I think,

Speaker 6 A, yes, for sure. That's a huge risk.
And this is one of the reasons why I think it's really important that the U.S.

Speaker 6 or other democratic countries lead on AI versus the CCP, the Chinese Communist Party, or Russia or other autocratic countries, because of the potential to utilize it. Even AI today, by the way, you can use to propagandize to a dramatic degree.

Speaker 6 But yeah, once you get towards, you know, you have Neuralink or other brain-computer interfaces that are, that can directly, you know,

Speaker 6 insert thoughts into people's brains. I mean, it's

Speaker 6 extreme power that has never existed before. And so who governs that power? Who governs that technology? Who makes sure that it's used for the right purposes?

Speaker 6 Those are like some of the most important societal questions that we'll have to deal with.

Speaker 5 Man, I mean,

Speaker 5 where do you even start with that? Who do you trust

Speaker 5 to control your fucking mind?

Speaker 6 Yeah, I mean, I think, well,

Speaker 6 it's interesting. I think the one thing that

Speaker 6 a lot of people kind of understand now, and we were talking a little bit about this at breakfast, is the degree to which even just general media today kind of controls your mind, or controls the opinions you have or the beliefs you have.

Speaker 6 And, you know, we were talking about:

Speaker 6 does the media prop up certain military forces to make them seem far more fearsome than they actually are? You can kind of view some low-grade

Speaker 6 forms of propaganda, manipulation, all that stuff as happening today, let's say on a scale of one to 10, at the one or two level.

Speaker 6 And then once you have Neuralink or other devices, it's going to be like a nine or a 10.

Speaker 6 And

Speaker 6 I think it's really hard. I mean, I don't think

Speaker 6 any country is prepared to govern technology as powerful as the technology that we're going to be developing over the next few decades. Like AI, I don't know if we're prepared.

Speaker 6 Brain computer interfaces, I don't know if we're prepared. Large-scale robotics, I don't know if we're prepared.

Speaker 6 Like these are technologies that are just so much more powerful than anything that has come before. Sometimes people will say like,

Speaker 6 you know, AI is the new mobile. It'll be as big as mobile phones.
And it's just, no, it's going to be like a thousand times bigger and more important and like more impactful.

Speaker 6 And it's not clear that we did the best job regulating mobile phones even. So

Speaker 6 it's going to be really important that we get it right.

Speaker 5 Yeah. I mean,

Speaker 5 everybody that gets one, I mean, you could basically instantaneously have an entire army, an entire nation that's linked into your thoughts, your way of thinking, and manipulate that entire population to do who the hell knows what.

Speaker 5 Hopefully something for good, but you know how things generally wind up going. But you're gung-ho about this stuff.
Would you put it in?

Speaker 6 I would

Speaker 6 put it in, but there are a few things that would need to happen before I'd be willing to put it in. First,

Speaker 6 I would need to really feel good about the cyber offense-defense posture. Like I need to have really good confidence that I would be able to defend from

Speaker 6 any attacks, like any sort of cyber attacks into, you know, my brain interface.

Speaker 6 And that's like, that's one big bar.

Speaker 6 And then I would need to feel confident that

Speaker 6 it wouldn't deeply alter my consciousness in any major way. And that, I think, you would see from data from other people who use it.

Speaker 6 And you'd kind of get a sense just from like other people adopting it.

Speaker 6 Those would be the two things I would need to like feel really, really confident about.

Speaker 5 It's a big thing.

Speaker 6 Well, the last thing, you know, and then we should talk about other stuff, but the last thing about this is,

Speaker 6 you know, there's a lot of talk right now about how humans will live forever, right? Or, can humans live forever? How do you not die?

Speaker 6 And a lot of that's focused on keeping our human bodies healthy and, you know, how do you take care of yourself?

Speaker 6 How do you take care of your human body? How do we cure diseases such that humans can live to hundreds and hundreds of years? But I think the actual end game is that we figure out how to

Speaker 6 upload our consciousnesses from our meat brains into a computer. And I kind of think about Neuralink or other

Speaker 6 bridges between your brain and computers as the first step there.

Speaker 5 Well, hold on. That's a whole nother rabbit hole. So you're saying that we should be able to upload our consciousness, or you want to be able to upload our consciousness into whatever?

Speaker 6 Yeah. I mean, now we're on the deep end of sci-fi. But yeah, I mean,

Speaker 6 I think there will over time be

Speaker 6 there.

Speaker 6 So, one, I think the technology will exist at some point.

Speaker 6 We're not close today, right?

Speaker 6 We barely have Neuralink kind of working, right? So we're not close, but the technology will exist to upload your consciousness onto a computer.

Speaker 5 Holy shit.

Speaker 6 And then, okay,

Speaker 6 let's say we're sitting here, you know, it's like 50 years from now, this technology exists,

Speaker 6 and you're asking the question:

Speaker 6 you know,

Speaker 6 are people going to upload their consciousness? Well, first off, there's a lot of people who

Speaker 6 naturally would, like people with terminal illnesses,

Speaker 6 people near death,

Speaker 6 you know, people who are like very fringe and experimenting with this new technology. There will be a class of people who will just initially do it.

Speaker 6 And then,

Speaker 6 as that starts to happen and they upload their consciousness, you have these sort of digital intelligences, and

Speaker 6 you know, that's true immortality. That's the closest thing you'll get to true immortality.

Speaker 6 And so,

Speaker 6 I think once the technology exists,

Speaker 6 it's probably going to become a very natural path for most humans to go down.

Speaker 5 So

Speaker 5 what do you think happens if you get your consciousness uploaded and what would it even be uploaded into? Like a cloud or something?

Speaker 6 Yeah, it'd be uploaded to a cloud.

Speaker 5 What do you think? Do you think that you can experience life by uploading your consciousness to a cloud?

Speaker 6 Yeah, so

Speaker 6 yeah, this is

Speaker 6 a few things. So first,

Speaker 6 I'm a big believer in robotics. I think we're basically at the start of a robotics revolution.

Speaker 6 And we're in the very early innings of it, but people are starting to make humanoid robots. They're going to get really, really good.

Speaker 6 People are starting to apply them to manufacturing and industrialization and other contexts. I think the costs are going to come down dramatically.

Speaker 6 And so eventually, yeah, if you uploaded and then you could download or downlink down to a humanoid robot, then you would kind of experience the real world like any other world.

Speaker 6 Or

Speaker 6 you could continue in some kind of like simulated universe.

Speaker 6 You could almost like play a video game in the cloud kind of thing. And that could be like the other alternative.

Speaker 5 Wow.

Speaker 5 What do you think happens when you die?

Speaker 6 You know,

Speaker 6 Elon always talks about how we live in a simulation, right?

Speaker 6 And I remember when I first heard him talk about this, I was like, ah, no, I don't believe that. I don't believe we're in a simulation.

Speaker 6 But as AI has gotten better and better at simulating the world, like, I don't know if you've seen these AI video

Speaker 6 generation models, like Sora or Veo, but you know, they can produce videos that are totally realistic.

Speaker 6 Most people could not tell the difference between AI-generated video and

Speaker 6 real video. And as that's happening, it's making me think more and more that

Speaker 6 we probably live in a simulation.

Speaker 5 No shit.

Speaker 6 Yeah.

Speaker 2 For 10 years, Patriot Mobile has been America's only Christian conservative wireless provider, and they stand by their values.

Speaker 2 Patriot Mobile has been a great supporter of this show, which is why I am proud to partner with them.

Speaker 2 Patriot Mobile offers dependable nationwide coverage, giving you the ability to access all three major networks, which means you get the same coverage you've been accustomed to without the compromise.

Speaker 2 When you switch to Patriot Mobile, you're choosing more than a wireless provider. You're supporting a company that stands for American values and that proudly honors our veterans and first responders.

Speaker 2 Their 100% U.S.-based customer service team makes switching easy. Keep your number, keep your phone, or upgrade.
Their team will help you find the best plan for your needs.

Speaker 2 Just go to patriotmobile.com slash SRS or call 972 Patriot. Get free activation when you use the offer code SRS.
Make the switch today. PatriotMobile.com slash SRS.

Speaker 2 That's patriotmobile.com slash SRS or call 972 Patriot.

Speaker 5 This is already fascinating. We haven't even gotten to the interview yet.

Speaker 5 Why do you think we're living in a simulation? I mean, I know they say they cannot disprove it.

Speaker 6 Yeah, it's kind of one of these things. There's no way to prove or disprove that you live in a simulation.
And it's like any

Speaker 6 afterlife thought or religious thought; all these things are fundamentally unprovable.

Speaker 6 But the reason I think it's the case is I think in our lifetime, we are going to be able to create simulations of reality that will be hyper-realistic.

Speaker 6 Like I think we are going to create the ability to

Speaker 6 simulate different versions of our world with hyper-realistic accuracy.

Speaker 6 And

Speaker 6 And that will happen over the next few decades.

Speaker 6 And if we can, like, it's kind of like that Rick and Morty episode, where if we have the ability as an intelligent race to produce, you know, millions of simulated worlds,

Speaker 6 then

Speaker 6 the likelihood is that we're probably also the simulation of some other

Speaker 6 more intelligent or more capable species.

Speaker 5 Where do you think consciousness goes right now when you die?

Speaker 5 What if we are

Speaker 5 what if we are the super advanced robotics?

Speaker 6 Yeah, I think

Speaker 5 your consciousness gets downloaded into another body. Generation.

Speaker 6 Yeah, that's true.

Speaker 6 That would be...

Speaker 6 That's one way to think about it, which is like, yeah, it's all this big simulation that's running. And as soon as

Speaker 6 you get kind of like downloaded or taken off or like decommissioned from

Speaker 6 one entity, you get like, you know, uploaded to another entity kind of thing.

Speaker 6 That's plausible. I think there's another world where

Speaker 6 consciousness may not be that big a deal, so to speak.

Speaker 6 Like it could be the case that, you know, definitely as the models have gotten better and better, as the AI models have gotten better and better,

Speaker 6 you look at them and

Speaker 6 you know, you definitely wonder if at some point you're just going to have models that are properly conscious. And it may just be the fact that, like,

Speaker 6 you know, it's something that can be engineered. And if it's something that can be engineered, then

Speaker 6 all bets are off, I think.

Speaker 5 Yeah,

Speaker 5 it's pretty wild to think about. Yeah, yeah.

Speaker 5 But

Speaker 5 let's move into the interview. You ready? Yeah.
All right. Everybody starts off with an introduction here.
So

Speaker 5 here we go.

Speaker 5 Alex Wang, founder and CEO of Scale AI, a company that's the backbone of the AI revolution, providing the data and infrastructure that powers it.

Speaker 5 Child prodigy who grew up in Los Alamos, New Mexico, surrounded by scientists with parents who were physicists working on military projects.

Speaker 5 Coding wizard who by age 15 was already solving AI problems at Quora that stumped PhDs.

Speaker 5 Visionary entrepreneur who dropped out of MIT at 19, turning a Y Combinator startup into a national security powerhouse that's helping the U.S. stay ahead in the global AI race.

Speaker 5 Youngest self-made billionaire in the world by age 24, built a company valued at nearly 25 billion while staying laser-focused on solving the biggest bottleneck in AI: high-quality data.

Speaker 5 Unafraid to call the U.S.-China AI competition an AI war, warning that Chinese startups like DeepSeek are closing the gap faster than most realize.

Speaker 5 Guided by your mission to build a future where AI drives progress, security, and opportunity. And so

Speaker 5 there's a big

Speaker 5 question right now that

Speaker 5 everybody's thinking about. Is AI the next oil?

Speaker 6 Yeah, I think

Speaker 6 a few thoughts there.

Speaker 6 In some ways, yes, in some ways, no. So

Speaker 6 Some ways in which it is the next oil: AI will fundamentally be

Speaker 6 the lifeblood of any future economy, any future military, any future government. Like if you play it out,

Speaker 6 the degree to which a country or economy is able to utilize AI to make its economy more efficient, to automate parts of its economy,

Speaker 6 to do automated research and development, automate R&D, push forward in science using AI. All that stuff is going to mean that countries that adopt AI effectively will have

Speaker 6 nearly infinite GDP growth, and countries that don't adopt it are going to get left behind.

Speaker 6 so it is

Speaker 6 it is sort of the fuel that will power the future of every country. And by the way, I think the same is true of hard power.

Speaker 6 Like if you look at what the militaries of the future are going to be like or what war looks like in the future, AI is at the core of what that is going to look like. I'm sure we'll get into that.

Speaker 6 And then the ways that it's not like oil is, you know,

Speaker 6 oil is this finite resource. You know, we,

Speaker 6 you know, countries that stumble upon large oil reserves,

Speaker 6 they have that large oil reserve. At some point, it's going to run out.

Speaker 6 Like in Norway, you know, it runs out at some point. And so it lends the country power and economic riches for a time period, and then you exhaust it, and then you're looking for more oil. Whereas AI is going to be a technology that will just keep

Speaker 6 compounding upon itself. The smarter the AIs, the more economic power you're going to get, which means you're going to build smarter AIs, which means you have more economic power, and so on and so forth. And so there's going to be a flywheel that keeps going on AI, which means that

Speaker 6 it's not going to be a time-based,

Speaker 6 a time-limited resource, let's say. It's going to be something that will just continue racing and accelerating

Speaker 6 in perpetuity.

Speaker 5 And data is part of that. Data is a big part of that.

Speaker 6 Data is the core part of it. Yeah.
So a lot of times, actually,

Speaker 6 I like to compare data to oil versus AI.

Speaker 5 That's actually what I meant. I fucked that up.
I meant to say data.

Speaker 6 Yeah, yeah. Well, I mean, I think that's totally true.
Like, data, if you think about AI, it boils down to like, how do you make AI? Well, there's like three pieces.

Speaker 6 There's the algorithms, like the actual code that goes into the AI systems that, you know, really smart people have to write.

Speaker 6 I used to, you know, write some of these algorithms back in the day.

Speaker 6 Then there's the

Speaker 6 then there's the compute, the computational power, which boils down to large-scale data centers.

Speaker 6 Do you have the power to fuel them? Do you have the chips to go inside them? That's like a large-scale industrial project question.

Speaker 6 And then data. Do you have all of the lifeblood?

Speaker 6 Do you have all the data that feeds into these algorithms, that they learn off of? And it's really kind of like the raw material for a lot of this intelligence.

Speaker 6 And so that's why I think data is the closest thing to oil, because it is what gets fed into these algorithms, fed into the chips to make AI so powerful.

Speaker 6 And everything we know about AI is that, you know, the better you are at all three of these things, algorithms, computational power, data, the better your AI gets.

Speaker 6 And it's just all about racing ahead on all three of these.
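
The three ingredients described above — algorithms, compute, and data — can be sketched in a toy training loop. This is an invented illustration, not any lab's actual code: the data is a handful of made-up pairs, the algorithm is plain gradient descent on a one-parameter model, and the compute budget is simply how many passes we can afford to run.

```python
# Toy sketch of the three ingredients: data (examples), algorithm (the update
# rule), and compute (how many training passes we can afford). All numbers
# here are made up for illustration.

# Data: (x, y) pairs the model learns from -- here, y = 2x.
data = [(x, 2 * x) for x in range(1, 6)]

# Algorithm: fit a one-parameter model y = w * x by gradient descent.
w = 0.0
learning_rate = 0.01

# Compute: the budget is just how many passes we run.
for _ in range(200):
    for x, y in data:
        error = w * x - y
        w -= learning_rate * error * x  # gradient step on squared error

print(round(w, 2))  # converges to 2.0
```

Improving any one ingredient — more data pairs, a better update rule, or more passes — improves the fit, which is the point being made in the conversation.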

Speaker 5 So when we see like ChatGPT, Grok, these types of things, are they sharing a data center or

Speaker 5 are they completely separate data centers?

Speaker 6 They all have separate data centers.

Speaker 6 This is actually one of the major

Speaker 6 lanes of competition between the companies is who has the ability to secure more power and build bigger data centers

Speaker 6 because ultimately

Speaker 6 you know, as AI gets more and more powerful, the question then becomes how many AIs can you run?

Speaker 6 So let's say for a second that we get to, you know, a really powerful AI that can do automated cyber hacking.

Speaker 6 So it can, like, log into any kind of server, or try to hack some website, or try to hack some other

Speaker 6 system.

Speaker 6 Then the question is just, okay, if I have that, how many of those can I run? Can I run a thousand copies of that? Can I run 10,000 copies of that? Can I run 100 million copies of that? Wow.

Speaker 6 And that all just boils down to how many data centers do you have up and running? And then that boils down to, okay, how much power do you have to fuel those data centers?

Speaker 6 How many chips do you have to run in those data centers? And how do you keep those online for as long as possible? And what data

Speaker 6 is constantly fueling those models to keep getting them to become better and better and better.

Speaker 6 And so this is why one of the major ways that the AI companies compete, you know, between

Speaker 6 XAI, Elon's company, and OpenAI and

Speaker 6 Google and Amazon and Meta and all these companies, one of the major ways they compete is just who right now is securing more power and more real estate for data centers five years from now and six years from now.

Speaker 6 And so the battles five, six years down the line are being fought literally today.

Speaker 5 Wow. Man, that's fascinating stuff.
Well, a couple more things before we get into your life story here. Got you a gift.

Speaker 6 Oh, man.

Speaker 5 Everybody gets one.

Speaker 6 Love it.

Speaker 5 Vigilance League gummy bears. There you go.
Legal in all 50 states.

Speaker 5 No funny business, just candy made here in the USA.

Speaker 6 Yeah.

Speaker 5 And

Speaker 5 then one other thing.

Speaker 5 I got a Patreon account. It's a subscription account.
It's turned into quite the community. And

Speaker 5 they've been here with me since the beginning when I was running this thing out of my attic. And then we moved here.
And now we're moving to a new studio.

Speaker 5 And the team's 10 times bigger than what it was,

Speaker 5 which was just me and my wife. But it's all because of them.
And so they're the reason I get to sit here with you today.

Speaker 5 And so one of the things I do is I offer them the opportunity to ask every guest a question.

Speaker 5 This is from Kevin O'Malley.

Speaker 5 With AI now able to essentially replicate so many facets of our reality, do you see a future where all video or photographic evidence presented in trials becomes suspect?

Speaker 5 Based on the ability for any of it to have been replicated through artificial intelligence tools?

Speaker 6 Yeah, so this goes back to what we were just talking about. I do think AI is going to enable you to do crazy levels of simulation.
And

Speaker 6 I don't think our courts are ready for it. I think that, like Kevin is saying,

Speaker 6 AI will be able to generate very convincing video, very convincing images

Speaker 6 in a way, at a level... like, we're not even really at that point yet.
Like, right now, you can still tell when these videos or images are AI-generated.

Speaker 6 That's going to keep getting better and it's going to be indistinguishable from real video.

Speaker 5 How the hell are we going to discern what's real and what's

Speaker 5 AI-generated?

Speaker 6 I think that there's two things. I think, first,

Speaker 6 people are going to need really good bullshit detectors. Like

Speaker 6 insanely good. And I think

Speaker 6 I think kids today, by the way, already have much better bullshit detectors, because they grew up on the internet, where there's just so much of everything that they kind of learned to have better and better bullshit detectors.

Speaker 6 But

Speaker 6 so that's one. And then the second is, I mean, I think there's going to be,

Speaker 6 this is an area where I know there's a lot of push for various forms of policy and regulation, but

Speaker 6 this is going to, I mean, it's going to be a major question. Like, hey, if there's fabricated video or

Speaker 6 imagery used in a trial and it's discovered that it was fabricated, like, you know,

Speaker 6 what are the consequences of that? And I think it's about tuning that such that if you fabricate evidence or you fabricate things, then

Speaker 6 maybe that's the worst offense of all.

Speaker 6 Then I think you deter a lot of usage of those tools, if you set up the incentives in the right way.

Speaker 5 Yeah, I mean,

Speaker 5 first thing that goes to my mind is the U.S. government.
I mean, just showing you around the studio and stuff, talking about, hey, this is what the...

Speaker 5 what the government did to those Blackwater guys I was telling you about. They deleted the evidence.
Well, instead of deleting the evidence, they could

Speaker 5 make new evidence, a fake gunfight in Nisour Square, Baghdad, that proves they're guilty. And

Speaker 5 then it's the government behind it. You know, we've seen it with Brad Geary.
We've seen it with Eddie Gallagher. We've seen it with the Blackwater guys.

Speaker 5 We've seen it a ton, just in my small network circle. And I could, I mean, you see what's going on with the elections all over Europe.

Speaker 5 They pulled Georgescu, calling him,

Speaker 5 what was it? I don't know,

Speaker 5 under

Speaker 5 Russian influence. Marine Le Pen in France, done.

Speaker 5 I mean, they were talking about pulling somebody in Germany not too long ago, maybe about six months ago. And it's just, man, it's fucking crazy, you know, and

Speaker 5 scares the hell out of me. Scares the hell out of me.
Because then they can just frame anybody they want.

Speaker 6 Yeah, I think

Speaker 6 definitely

Speaker 6 one of the outcomes of AI is that institutions that have power today will gain way more power. Yeah.

Speaker 6 It's not naturally democratizing. It's a centralizing kind of technology.

Speaker 6 And so, yeah, we need to build mechanisms so that we can trust those institutions. Otherwise, it doesn't end well.

Speaker 5 Yeah.

Speaker 5 Well, let's get to your story.

Speaker 6 Well, I have gifts too. Do I do

Speaker 6 okay, great. Um,

Speaker 6 so a few things. I mean, we're going to talk about this.
But I grew up in Los Alamos, New Mexico. So my

Speaker 6 parents were both physicists who worked at the national lab there. This is the birthplace of the atomic bomb.

Speaker 6 I don't know if you saw Oppenheimer, but half of that movie is set in Los Alamos, where I'm from. So we got a Los Alamos hat,

Speaker 6 Los Alamos National Laboratory hat.

Speaker 5 Dude, that's very cool.

Speaker 6 We got some Los Alamos coins.

Speaker 5 So

Speaker 6 There's one about the atom bomb, one about Norris Bradbury, who was the lab director.

Speaker 6 And then a Los Alamos coin about, you know, the father of the atomic bomb. There we go.

Speaker 6 We have a copy, basically a copy, of the manual that they gave to the scientists, which got declassified,

Speaker 6 from the actual Manhattan Project.

Speaker 5 Wow.

Speaker 5 and this is cool as

Speaker 6 And this one's just a fun one. It's a rocket kit for you and your kids.

Speaker 5 Oh man, they're gonna love that. Yeah, thank you.

Speaker 5 Dude, thank you. This is gonna look awesome in the studio.

Speaker 5 That's very cool.

Speaker 6 Yeah, it's been kind of surreal. I mean,

Speaker 6 everybody calls

Speaker 6 AI the next Manhattan Project. And so it's been,

Speaker 6 it's been funny because that's where I grew up.

Speaker 6 It's like, I don't know, it feels weird.

Speaker 5 I'll bet it does. Yeah.
I'll bet it does. So what were you into as a kid?

Speaker 6 So,

Speaker 6 yeah, so again, both my parents are physicists and

Speaker 6 my dad's dad was a physicist as well. So I grew up in this like

Speaker 6 pure physics family.

Speaker 6 So science, technology, physics, math, these were

Speaker 6 these were the things I was really excited about as a kid.

Speaker 6 And

Speaker 6 I remember like

Speaker 6 around the dinner table, we would talk about black holes and wormholes and

Speaker 6 alien life and supernova and

Speaker 6 far away galaxies and all that stuff. That stuff was all very captivating to me.
I was thinking about kind of like

Speaker 6 basically like, you know,

Speaker 6 understanding the universe, for lack of a better term.

Speaker 6 And then I really liked math. And, you know, in fourth grade, I entered my very first math competition, which is a thing.

Speaker 6 And it was in the whole state of New Mexico, and I scored the best out of any fourth grader in New Mexico.

Speaker 6 Which,

Speaker 6 And then that activated this, like, competitive gene in me. And then I just, you know, got consumed by math competitions, science competitions, physics competitions.

Speaker 5 What kind of math are you doing in fourth grade?

Speaker 6 I remember, let's see, my parents taught me algebra in,

Speaker 6 I want to say it was second grade, maybe between.

Speaker 5 Are you serious? Yeah.

Speaker 5 You mastered algebra in second grade.

Speaker 6 I don't know if I mastered it, but yeah, I was playing around with algebra. They taught me the basics of algebra, and I would just, like, spend all my time thinking about it in second grade.

Speaker 5 It's like seven, eight years old, right?

Speaker 6 Yeah, like, seven and eight.

Speaker 5 Holy shit.

Speaker 6 And then,

Speaker 6 and so by the time I was in fourth grade, I could do some basic algebra, I could do

Speaker 6 some basic geometry, stuff like that. And then,

Speaker 6 let's see, where'd I go from there? By the time I was in middle school, I was doing calculus.

Speaker 6 And then,

Speaker 6 And then I was doing college-level math in middle school as well. So those are the two things I was doing in middle school. And then
And then

Speaker 6 in high school, I just became obsessed with computers and I just spent all day programming.

Speaker 6 And I realized like science and math are cool,

Speaker 6 but with computers and programming, you could actually make stuff.

Speaker 6 And that ended up becoming the major obsession.

Speaker 5 Back to the dinner table conversations. Yeah.
I mean, Los Alamos, there's like a lot of conspiracies and all kinds of stuff going on about that place. Remote viewing,

Speaker 5 all this stuff seems to stem from Los Alamos.

Speaker 5 But

Speaker 5 two parents that are physicists at Los Alamos, you guys are talking about black holes and aliens and shit. What do you think?

Speaker 5 Are there aliens?

Speaker 6 So there's this famous paradox, the Fermi paradox, which is, you know, what are the odds that we live in this like vast, vast, vast universe? And

Speaker 6 there's, you know, billions, hundreds of billions, trillions of other

Speaker 6 stars and planets. And,

Speaker 6 you know, what are the chances that like none of them have intelligent life? I mean, I think like definitely somewhere else in our universe, there has to be intelligent life.

Speaker 5 I think so.

Speaker 6 For sure.

Speaker 6 But the benefit, or I don't know if the benefit, but like part of the issue is if we're really, really, really far apart, like millions of light years apart, hundreds of millions of light years apart, there's no way we're ever going to communicate with each other.

Speaker 6 We're just like super duper far away from each other.

Speaker 6 So I think that's plausible. And then there's the,

Speaker 6 you know,

Speaker 6 there's what's called the dark forest hypothesis.

Speaker 6 I think this is one of the things I actually believe the most in, probably.

Speaker 6 So you have the Fermi paradox that says basically like,

Speaker 6 hey,

Speaker 6 what are the odds that there's no intelligent life out there in the universe?

Speaker 6 It's probably zero. There has to be some intelligent life somewhere else in the universe.
And then the question is, like, why aren't we seeing any? Like, why aren't we seeing any aliens?

Speaker 6 Why aren't we coming into contact with them? And so then there's all these, like, how do you explain why that is? And there was this

Speaker 6 hypothesis called the Dark Forest Hypothesis, which originally came out of a sci-fi novel, actually, but is the one that, like, jibes the most with my thoughts, which is

Speaker 6 the reason you don't run into other intelligent life is

Speaker 6 if you play the game theory out,

Speaker 6 if you're an intelligent life, you don't actually want to be like

Speaker 6 blaring to every other intelligent life that you exist. Because if you do that, then they're just going to come and take you out.

Speaker 6 You basically become, like, a huge target for other forms of intelligent life.

Speaker 6 And there's, you know, some intelligent life out there that's going to be hyper-aggressive and going to want to take out, you know, other forms of intelligent life. So the dark forest hypothesis is that

Speaker 6 once you become an intelligent life form and you become a multi-planetary species and all that, you realize that you're kind of best off minding your own business and not, you know, sending out all these sorts of signals and trying to make contact with other life, because it's higher risk to do that than to just kind of, you know, stay isolated. And so there is intelligent life out there.

Speaker 6 There are aliens out there, but everybody's incentive is just to stay isolated.

Speaker 5 Interesting.

Speaker 5 I don't know. I used to believe in it.
Then I interviewed a bunch of guys. I don't know.
I don't know. I think all this shit's a big distraction, to be honest with you.

Speaker 6 Yeah, there's definitely,

Speaker 6 I mean, there's definitely the other portion of this, which is, you know,

Speaker 6 UFOs are a conspiracy such that, you know, the military can do all sorts of airborne testing and

Speaker 6 it gets discredited because, you know, people say it's UFOs and then

Speaker 5 Nobody believes it. Of all the people I've talked to, there's just no hard evidence. And then it's the, well, that's classified. It's like, I mean, is it? You're on a podcast tour, you know. But

Speaker 5 I don't know. Sometimes I think, you know, all I watch is the expanding universe, the black holes, all of this. This is what I fall asleep to at night. And I don't know. I mean, they found, what, like, Saturn's rings are all water.

Speaker 5 They think they may have found, you know, there's a possibility of life on some of the moons on Saturn.

Speaker 5 Neptune, I think,

Speaker 5 made of,

Speaker 5 is it Neptune that's made of water? Like a lot of oceans that are frozen. And so there may have once been life.
Then they think they found a pyramid on Mars or something. I don't know.

Speaker 5 Sometimes I think maybe

Speaker 5 at any particular given point in time,

Speaker 5 there is only

Speaker 5 one planet that holds life as we know it at a time. And then maybe when that planet

Speaker 5 becomes obsolete, everything goes extinct. Maybe it moves, you know, maybe it was Mars, I don't know, five billion years ago.
And that's where life was.

Speaker 5 And then somehow, you know, shit changed and then it developed on Earth. I don't know.

Speaker 5 That's where I'm at right now.

Speaker 6 I go back and forth on this all the time. Yeah, totally. Well, because our star has a life cycle, right? And as it goes through that life cycle, different points of our solar system become different temperatures, have different conditions, you know, all that kind of stuff. And so, um,

Speaker 6 that's a plausible theory. I mean, I think, uh,

Speaker 6 I think both that and what we were talking about before, in terms of, like, consciousness and the afterlife, these are some of the great questions, because, you know, we'll probably never know the answers. Yeah, yeah. Yep.

Speaker 5 What were your parents working on at Los Alamos?

Speaker 6 They were.

Speaker 5 Are they still working there?

Speaker 6 Yeah, my mom's still working.

Speaker 6 My dad's not working, but my mom's still working. And so they were part of

Speaker 6 the divisions in Los Alamos National Lab that worked on classified work.

Speaker 6 They had clearances. My mom still has clearance with the DOE.

Speaker 6 And

Speaker 6 I actually

Speaker 6 remember, like when I grew up, I just assumed they were working on cool physics research because I was like a kid and I didn't put two and two together.

Speaker 6 And so I remember when I grew up, I thought the Los Alamos National Lab, like, used to be the place where the atomic bomb was built. And then

Speaker 6 decades later, it's just this, like, advanced scientific research area where they're doing research into, you know, the frontier of human knowledge. It's just this great scientific research area. And then

Speaker 6 And then it wasn't until I literally got to college, where I was talking to a friend about it, that it, like, dawned on me that, oh wait, Los Alamos is probably still mostly weapons research.

Speaker 6 and

Speaker 6 oh, that's why you would need a clearance

Speaker 6 stuff in New Mexico. And then since I left, they actually restarted

Speaker 6 what's called nuclear pit production; they restarted basically manufacturing the cores of nuclear weapons.

Speaker 6 This must have been like

Speaker 6 2018, 2019 in Los Alamos.

Speaker 6 And then I was like, oh, yeah, no,

Speaker 6 it's mostly a research facility

Speaker 6 to

Speaker 6 research new nuclear warheads and

Speaker 6 new nuclear weapons.

Speaker 6 And so that dawned on me. That didn't dawn on me until I was all the way in college.

Speaker 6 But yeah. So my guess is my parents worked on that.

Speaker 5 Probably. Yeah.

Speaker 5 Damn, that's crazy. Wow.

Speaker 5 What else were you into as a kid other than mathematics?

Speaker 6 I loved math.

Speaker 6 I loved coding. I loved science.
I loved all that stuff.

Speaker 6 I

Speaker 6 was really into violin.

Speaker 6 I would practice an hour of violin a day.

Speaker 6 A lot of that was because there was sort of like

Speaker 6 in some

Speaker 6 fields or some areas, there's like

Speaker 6 there's just a real beauty to perfection.

Speaker 6 And I think this is true in like a lot of arts,

Speaker 6 a lot of music,

Speaker 6 a lot of, frankly, everything. I mean, I see it even in my current life, in my current day-to-day job.

Speaker 6 But there was just, like, hey, if you practice enough to get to play a piece perfectly, then it would be beautiful.

Speaker 6 And, like, along the way, it's total dog shit until you get to the point of perfection.

Speaker 6 There's kind of, there's a lot of beauty to that concept to me, which is like, you know, once you get something totally perfect, it becomes beautiful.

Speaker 6 That was, that was captivating when I was a kid.

Speaker 5 So you were a perfectionist from a young age, and you're still a perfectionist today.

Speaker 6 Yeah, I see a lot of beauty in, like,

Speaker 6 you know, now I would say,

Speaker 6 I don't think we have the luxury to be perfectionists. I'm much more pragmatic now.
Like,

Speaker 6 you know,

Speaker 6 like we were talking about, the world is extremely messy. Like, the

Speaker 6 reality is, you know, stuff is super chaotic. There's a lot of bad shit going on constantly.
There's a lot of good shit going on constantly. But perfection is not really a

Speaker 6 plausible objective. Like, we're never going to get perfection.

Speaker 6 So I'm a lot more pragmatic now, but I do see a lot of beauty in perfection. I mean,

Speaker 5 I'm also a perfectionist. I battle it

Speaker 5 every fucking day. Like I,

Speaker 6 it,

Speaker 5 I'm OCD. Yeah. But, you know, I've read about it.
I've watched talks about it.

Speaker 5 And I came to the conclusion, which I hate saying because I am a perfectionist at heart, you know, that perfectionism can get in the way of success. Did you find that?

Speaker 5 I mean, it sounds, it sounds weird even like asking you the fucking question because you're the youngest billionaire in the world at age 24.

Speaker 5 And I mean, you're 28 years old now. So it sounds weird saying, did perfectionism hold you back? But

Speaker 5 did it?

Speaker 6 I think,

Speaker 6 yeah, at some point, like, some bit flipped and I realized, like,

Speaker 6 you've got to just do the 80-20 lots of times. Like, you've got to do the 20% of the effort
that gets you 80% as good. And you just have to be okay with that.

Speaker 6 And you just have to do that over and over and over again.

Speaker 6 So at some point I internalized that, and it's, like, anathema to perfectionism. It's like the exact opposite.

Speaker 6 And so now I think about it as like, hey, there's some things where perfectionism really is the right answer. And there's some things where

Speaker 6 you just got to like be okay with imperfection and just like speed is the objective versus perfection is the objective.

Speaker 6 So

Speaker 6 And yeah, I would say now, honestly, for most things, speed is the objective, not perfection.

Speaker 6 So yeah, I would say I've kind of had like a whole journey with it.

Speaker 5 What was it that flipped you?

Speaker 6 I think what like

Speaker 6 so

Speaker 6 there's this thing that

Speaker 6 Elon says to people at his company when they're in, like, a crisis situation.

Speaker 6 And he says like

Speaker 6 Hey, like, you know, let's say you're in a crisis situation and like people are like not figuring out how to deal with it.

Speaker 6 And then he asks, like, imagine there was a bomb strapped to your body that will go off if you don't come up with a solution to this problem. Then what are you going to do?

Speaker 6 And then, you know, most of the time when people actually think through that scenario, they like focus and they get their act together and like figure out

Speaker 6 something to do.

Speaker 6 And I think a lot of times startups are like that. There are so many moments that are so life-and-death and so high-pressure that

Speaker 6 you're just in these situations all the time where you're like,

Speaker 6 you have to act and you have to like do something. Otherwise, you're toast.
And you just have to like figure out what the best plan of action is and the best course of action and just do it.

Speaker 6 So I think that that

Speaker 6 the realities of

Speaker 6 having to operate quickly, I think, just over time remolded my brain.

Speaker 5 Interesting.

Speaker 5 Do you have any brothers? Do you have any siblings?

Speaker 6 Yeah, I have two brothers, two older brothers.

Speaker 6 I dropped out of college, and both my brothers have PhDs.

Speaker 6 But my oldest brother is an economist, and

Speaker 6 my other brother has a PhD in neuroscience. So

Speaker 6 they're smart. Yeah, they're smart guys.

Speaker 5 Whole lineage of geniuses, huh?

Speaker 6 Yeah, I think

Speaker 6 my

Speaker 6 parents

Speaker 6 are

Speaker 6 probably still

Speaker 6 a little miffed that none of us became physicists, but.

Speaker 5 Oh, man. Well, I'm sure

Speaker 5 they got to be happy with how everything turned out. I mean, wow.

Speaker 6 Yeah, no, I think

Speaker 6 my parents are super proud of me.

Speaker 5 So

Speaker 5 where did you go to school? I mean, where do you...

Speaker 5 Were you homeschooled?

Speaker 6 I went to Los Alamos Public High School, Los Alamos Public Middle School.

Speaker 6 There's The town is 10,000 or so people.

Speaker 6 Now it's more because they do

Speaker 6 pit production, manufacturing of these, like, nuclear cores. So now there's a lot more people there.
But when I was growing up, there was like 10 to 15,000 people. So pretty small town.
And

Speaker 6 there's like one public middle school, one public high school, a few elementary schools. And

Speaker 6 yeah, that's the, you know, I went to, I went to public school.

Speaker 6 I was lucky. Like, I think those are amazing public schools, but it is public school like any other public school.
And then I would just get home every day and

Speaker 6 effectively like do math and science like every day.

Speaker 5 What,

Speaker 5 like,

Speaker 5 what? How do you go...

Speaker 5 What is the average second grader? I mean, you said you had learned algebra in second grade.

Speaker 5 What is an average?

Speaker 5 It's been a long time since I've been in second grade. Things may have changed, but I'm pretty sure it's basic addition.

Speaker 6 Yeah, things like addition. Maybe you get to your times tables.

Speaker 5 Yeah, maybe some multiplication tables.

Speaker 6 Yeah, yeah, yeah.

Speaker 5 I mean, so how do you

Speaker 5 Dude, what is that like, to go from the night before studying algebra to

Speaker 5 two plus two is four?

Speaker 6 Yeah, I I uh

Speaker 6 I remember, like,

Speaker 6 I definitely remember in school like

Speaker 6 I think, like, a lot of kids in general just sort of,

Speaker 6 generally, kind of,

Speaker 6 buying out of the whole thing, if that makes sense. Like, kind of just

Speaker 6 tuning out and daydreaming and just kind of like ignoring what was happening in classes

Speaker 6 That definitely started happening. And then what I would actually do or focus on is, like, go back and then do math at home.

Speaker 5 I mean, you're more, you're more advanced than the teacher.

Speaker 6 There were, I remember one time there was like,

Speaker 6 there was,

Speaker 6 the good thing about, you know, the school I went to is, like, the teachers were really invested in my education. Like, I think

Speaker 6 many of my teachers wanted to see me like, thrive and continue learning. And, um, and that was, that was awesome.

Speaker 6 Like, I could, I can imagine a totally separate school where it's like the teachers don't care because, you know,

Speaker 6 you know, it's just like their lives are chaotic, the classroom's chaotic, all that kind of stuff. But, but I was lucky to have teachers who really cared.

Speaker 5 Yeah.

Speaker 5 I mean.

Speaker 5 Seems like it worked out well. I mean,

Speaker 5 for all the success that you have amassed in 28 years, I mean, you're a very grounded person. I never really know what I'm going to get with you guys.

Speaker 5 At breakfast,

Speaker 5 I was super impressed. I'm like, wow, this guy's like a really grounded person and seems like a really good person.
So

Speaker 5 kudos to you, man. Appreciate it.
But hey, let's take a quick break. When we come back, we'll get into MIT.

Speaker 5 All right, Alex, we're back from the break. We're getting ready to move into you going to college.
So you started at MIT, correct? Yep. How did that go?

Speaker 6 Yeah, so let's see. I was

Speaker 6 so I'll say the first, the few years before that. So I dropped out of high school, actually.

Speaker 5 Oh, you dropped out of high school?

Speaker 6 Yeah, I dropped out of high school.

Speaker 6 Why not?

Speaker 5 Why?

Speaker 5 Wasn't challenging enough for you?

Speaker 6 I dropped out a year early to

Speaker 6 go work at Quora, the tech company.

Speaker 6 I think a lot of people have run into Quora. It's, like, the question-and-answer website.

Speaker 6 But I went to go work at a tech company for a year.

Speaker 6 And

Speaker 6 then after a year of that, I decided, okay, it's time to go to college. So

Speaker 6 I went to MIT.

Speaker 5 Yeah, at 15, you're stumping PhDs.

Speaker 6 Maybe not quite that early, but yeah, by 16, 17,

Speaker 6 I was more competent by that point.

Speaker 5 What are you stumping these guys on?

Speaker 6 So,

Speaker 6 well, at that point, that was like early, early AI. It wasn't even called AI yet.
It was called machine learning. That was like the more popular term.

Speaker 6 And it was about training different algorithms that would

Speaker 6 re-rank content,

Speaker 6 all the algorithms for these social media-style products.

Speaker 6 And it's like, okay, what algorithm creates the most engagement, or what algorithm gets people the most hooked on these feeds? That's what I was working on back then.
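[Editor's note: the engagement-driven re-ranking described above can be sketched in a few lines. This is a toy illustration only; the features and weights below are invented, not any real product's model, and real feed rankers learn their weights from interaction data.]

```python
# Toy sketch of engagement-based feed re-ranking (illustrative only; the
# feature names and weights are invented, not any real product's model).

# Hand-set weights standing in for what a trained model would learn.
WEIGHTS = {"clicks": 0.5, "upvotes": 0.3, "recency": 0.2}

def engagement_score(item):
    """Predicted engagement: weighted sum of normalized features."""
    return sum(w * item[feature] for feature, w in WEIGHTS.items())

def rerank(feed):
    """Order the feed so the items predicted to hook users most come first."""
    return sorted(feed, key=engagement_score, reverse=True)

feed = [
    {"id": "a", "clicks": 0.2, "upvotes": 0.9, "recency": 0.1},
    {"id": "b", "clicks": 0.8, "upvotes": 0.4, "recency": 0.9},
]
ranked = rerank(feed)  # item "b" scores 0.70, item "a" scores 0.39
```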

Speaker 5 Gotcha.

Speaker 6 And so I worked for a bit, and then I went to MIT.

Speaker 5 Sorry to interrupt. A couple more questions.

Speaker 5 What is it like for you to be 16, 17 years old

Speaker 5 stumping PhDs?

Speaker 5 I mean, is that just like normal life for you?

Speaker 5 I mean, you know what I mean? Like, does it set in like, holy shit, I'm really fucking smart, you know, or

Speaker 6 I think something that I internalized pretty early on

Speaker 6 was that

Speaker 6 focus was really, really critical. I think a lot of people are really smart.

Speaker 6 And I don't know if I'm fundamentally way smarter than a lot of these other people, but I was hyper-focused on math as a kid and then hyper-focused on

Speaker 6 physics. And then in high school, I was hyper-focused on programming.
And then,

Speaker 6 And so if you're hyper-focused and you really invest the time and the effort, you can make really, really fast progress.

Speaker 6 So one of the things I've believed in for a long time is that if you

Speaker 6 overdo things, really invest lots of time, lots of effort, go the extra mile, go the extra 10 miles, and constantly overdo things, then

Speaker 6 you will improve faster than anybody else by many times.

Speaker 6 And a lot of other people, maybe they're just not going the extra mile, or maybe they're just not as focused, or they're meandering a bit more.

Speaker 6 For me, a lot of what I attribute being able to accomplish so much to is really focus and

Speaker 6 overdoing it, going the extra mile.

Speaker 6 That's what I think it boils down to.

Speaker 5 What did your parents think when you dropped out of school?

Speaker 6 You know, my parents, I think, still probably really want me to get a PhD and do scientific research.

Speaker 6 And I respect this belief:

Speaker 6 I think they view the pursuit of science, the pursuit of knowledge, as above all else.

Speaker 6 And so I would always tell them, hey, this is just a little detour, but ultimately I'm going to come back, finish my degree, get a PhD, and I'll be on the straight and narrow.

Speaker 6 So that's what I was always telling them. And then at some point it just wasn't believable,
so I stopped telling them that.

Speaker 5 Why'd you decide to go to school?

Speaker 6 I went to school because,

Speaker 6 well, there were two things. One was

Speaker 6 like genuinely, I wanted to learn a lot about AI very quickly.

Speaker 6 And I knew I could maybe kind of do that while working, but the best thing really would be to go to school, invest all my time into it, and

Speaker 6 try to learn very, very quickly.

Speaker 6 And then the second thing was, you know,

Speaker 6 not everyone, but many, many people, if you ask them what the best years of their life were, will say their college years.

Speaker 6 And so I was like, shit, I'm not going to sacrifice the college years.

Speaker 6 So, yeah, I went to school.

Speaker 6 I decided to just go really, really deep into AI. I took all of the AI courses I could while I was at MIT.
I was only there for a year.

Speaker 6 I remember I wanted to take the hardest machine learning course there the first semester.

Speaker 6 And my freshman advisor, the person I had to get all my courses approved by, was the professor of that course.

Speaker 6 This just happened to be the case. And I signed up for her course.
And then she said,

Speaker 6 you're a freshman.

Speaker 6 you're not going to, you know, this is going to be, this is going to be too much for you. And I was like, oh, just give me a chance.
Like, you know, I just want to try it.

Speaker 6 I'm really passionate about the topic. And she's like, okay, well, we'll let you

Speaker 6 go for the first few weeks and see how you do. And so then I get in.
And then

Speaker 6 I remember I felt like the stakes were really high, because I wanted to prove that I could do this. And so the first test rolls around.

Speaker 6 And

Speaker 6 I think by sheer luck, there were a lot of things in the course I didn't understand, but the test happened to mostly be about stuff that I did understand pretty well.

Speaker 6 And I got one of the top marks in that course, and there were hundreds of people in the class. And so after that point, the professor let me do whatever I wanted.

Speaker 6 And so then I went really deep into AI and all the AI coursework at MIT.

Speaker 6 And then this was the year when DeepMind, the AI company out of London, came out with AlphaGo, which was the first AI that beat

Speaker 6 the best Go players in the world. Go was viewed at that point as probably

Speaker 6 the hardest strategy game for AIs to beat. That was a big deal.
And then I started tinkering with AI on my own.

Speaker 6 I wanted to build a camera inside my fridge that would tell me when my roommates were stealing my food.

Speaker 6 And so I started tinkering with it. And then I pretty quickly realized

Speaker 6 kind of what we were talking about earlier: that everything was going to be blocked on data. No matter what you wanted AI to do,

Speaker 6 it was going to rely on data to make the AI do those things.

Speaker 6 And I looked around and was like, nobody's working on this problem. You have plenty of guys working on building great algorithms.

Speaker 6 You have plenty of people working on building the chips and the computational capacity and

Speaker 6 all that. Nobody was working on data.
And I was impatient, you know. I was 19 years old.
I was like, well, if nobody's going to do it, I might as well do it.

Speaker 6 Dropped out, started the company, and was off to the races.

Speaker 5 Damn.

Speaker 5 So did you perfect the refrigerator AI to tell you if your roommates are stealing your food?

Speaker 6 That was part of the problem. I was trying to build it, and then I realized I didn't have anywhere near enough data.

Speaker 6 So it would always fire incorrectly, always have false positives, false negatives, et cetera. And then

Speaker 6 that was the light bulb moment. I realized, oh shit, if I really want to make this, I need

Speaker 6 a million times more data than I have now. And that's going to be true for like every AI thing that anyone ever wants to build.

Speaker 6 And so that was kind of the genesis of the

Speaker 6 idea, really.

Speaker 5 So you left MIT?

Speaker 6 Left MIT. I flew straight from Boston to San Francisco to start the company

Speaker 6 and

Speaker 6 basically immediately went from like...

Speaker 5 At 19 years old.

Speaker 6 19 years old. Yeah, I immediately left and then I started coding

Speaker 6 in San Francisco. And I was part of this

Speaker 6 accelerator, this program called Y Combinator.

Speaker 6 And it's kind of like the Hunger Games for startups.

Speaker 6 It starts out with 100 startups at the start of the summer. And you're all grinding away.
You're all working.

Speaker 6 You're all trying to show milestones and show progress. And then it culminates

Speaker 6 at the end of Y Combinator with a demo day, where everybody presents their companies, presents their progress, and tries to get investment.

Speaker 6 So it quite literally is the Hunger Games. You go through this whole thing, and at the end,
if you get investment, you get money, you've won. If you didn't, you've lost.

Speaker 6 And so that was the beginning of the company. We ended up getting good investment.

Speaker 5 What did you do?

Speaker 6 Well, at that time,

Speaker 6 it was around data for AI. So it was all around

Speaker 6 how do we fuel data for what people want to build with AI. But at that time it was so early that the use cases were pretty stupid. We were helping one company, a t-shirt company that made custom t-shirt designs, detect when people

Speaker 6 used a t-shirt design that was unfit to print, you know, had gore or

Speaker 6 all sorts of illegal stuff. Basically identifying illegal t-shirt designs, which sounds kind of stupid now that I say it. And then we were helping another company, a furniture marketplace, improve their search algorithm with AI. And then maybe three months in, we started working with autonomous vehicle companies and self-driving companies.

Speaker 6 And then that ended up being like the real

Speaker 6 meat behind our effort for the first three, four years. So we worked with General Motors and Toyota and

Speaker 6 Waymo and all of the major automakers in helping them build self-driving cars.

Speaker 5 How many people were you competing against?

Speaker 6 I mean,

Speaker 6 I think in anything you do in startup land, like you have like

Speaker 6 tens of competitors.

Speaker 6 And there were definitely tens of competitors at that time.

Speaker 6 And so these are competitive spaces, but

Speaker 6 as we discussed, I don't mind competition, going back to my math competition days. And so

Speaker 6 we were just really focused on the problem, really focused on what the best possible data sets for these self-driving cars are. A lot of that had to do with what's called sensor fusion.

Speaker 6 So, you know, there's so many different kinds of sensors, and how do you combine all these different sensors to get, you know,

Speaker 6 one output?

Speaker 6 So, like, if multiple sensors sense a person, how do you like collect all that together to say that's one person right there, and that's one car right there, and that's one, you know, bicycle over there?
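[Editor's note: the sensor-fusion idea described here, combining multiple sensors' reports into one object, can be sketched as a simple clustering step. This is a minimal illustration under invented assumptions (the data layout, labels, and distance threshold are made up), not Scale AI's actual pipeline.]

```python
# Minimal sketch of sensor fusion as clustering: multiple sensors each report
# detections, and nearby reports with the same label are merged into one
# fused object. Thresholds and data layout are invented for illustration.

from math import dist

def fuse_detections(detections, radius=2.0):
    """Greedily merge (label, pos) reports within `radius` of a cluster centroid."""
    fused = []
    for det in detections:
        for obj in fused:
            if obj["label"] == det["label"] and dist(obj["centroid"], det["pos"]) <= radius:
                obj["points"].append(det["pos"])
                n = len(obj["points"])
                obj["centroid"] = tuple(sum(c) / n for c in zip(*obj["points"]))
                break
        else:  # no existing cluster matched: start a new object
            fused.append({"label": det["label"], "points": [det["pos"]], "centroid": det["pos"]})
    return fused

reports = [
    {"sensor": "camera", "label": "person", "pos": (10.0, 5.0)},
    {"sensor": "lidar",  "label": "person", "pos": (10.4, 5.2)},  # same person, two sensors
    {"sensor": "radar",  "label": "car",    "pos": (30.0, 8.0)},
]
objects = fuse_detections(reports)  # two fused objects: one person, one car
```

Real systems fuse tracks over time with uncertainty estimates; the greedy centroid merge here just shows the core "many reports, one object" step.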

Speaker 6 So that was kind of our specialty as a company. And then we were kind of off to the races.
Just on that, we grew the company to like 100 or so people.

Speaker 5 Let's go back just a little bit. Okay.
So

Speaker 5 you go to San Francisco by yourself as a 19-year-old kid who had just dropped out of MIT.

Speaker 5 You're immature at that point. And

Speaker 5 so how do you develop leadership skills? How do you have the know-how and make the connections to build a company as a 19-year-old kid?

Speaker 6 Yeah, so

Speaker 6 let's see what happened. So basically early on,

Speaker 6 it's about who you get investment from.

Speaker 5 So it was just you at the competition.
There was no team.

Speaker 6 No team. No team.

Speaker 6 And I was coding every day. And then

Speaker 6 we got Y Combinator to invest in us. And then we got this investment firm called Accel, which was one of the early investors in Facebook, to invest. So we got

Speaker 6 some good investors, and they

Speaker 6 helped me build the team, find people to hire. What actually happened is I mostly hired people I knew from school.

Speaker 5 So, like, because you could trust them?

Speaker 6 I think more that they could trust me. Because at the time,

Speaker 6 if I went to a

Speaker 6 25-year-old engineer in San Francisco and said, hey, we should work together, I had no credibility.

Speaker 6 I remember I would get coffee with these people and say, yeah, this is what we're working on. It's super cool.
You should join us. And then they would all just be like,

Speaker 6 okay.

Speaker 6 Cool. I guess I'm going to go back to my job now.

Speaker 6 So early on, I had no credibility except for with people I went to college with

Speaker 6 who we were just like friends and we liked each other. And so I managed to recruit a bunch of them over.

Speaker 5 And they dropped out too?

Speaker 6 Some of them dropped out. Some of them just happened to be seniors or whatever, finished school and then joined.

Speaker 6 It was like a mix. It was a mix.

Speaker 6 And

Speaker 6 that was the early nucleus of the team, the early cohort. And then

Speaker 6 we started picking up momentum, because we were starting to work with large automotive companies. We were starting to work with these very futuristic autonomous driving companies.

Speaker 6 And then, as momentum started to pick up, like, you know, we were able to grow and build out the team over time.

Speaker 5 So where did you get your business sense? Did you hire somebody to run all of that, or were you the mastermind behind everything?

Speaker 6 Maybe about a year in, I hired somebody literally with the title of head of business.

Speaker 6 But until then, I was just trying to learn it all.

Speaker 5 How did you get the product out there?

Speaker 6 I just coded it all up. There are all these websites where you can launch startups, and we put it on one of those, and it went micro viral, you know.

Speaker 6 Like, viral among

Speaker 6 people who were on Twitter looking for new startup ideas.

Speaker 6 And that was the early seed that

Speaker 6 ended up enabling everything to grow. But

Speaker 6 at the time, it was tough going.

Speaker 6 I would just spend all my time coding, and then every once in a while I would post something

Speaker 6 to the internet, and then I would beg all of my friends: please go upvote this, please go like this, please give me some ounce of traction. And

Speaker 6 yeah, that was the early days.

Speaker 5 Damn. Was it Scale AI at the beginning?

Speaker 6 Yeah, Scale AI. Actually, it was called...

Speaker 6 It was Scale API at first.

Speaker 6 Because that website was available. And then it became Scale AI like a year and a half later.

Speaker 6 But

Speaker 6 Yeah. I mean, early startups are so gnarly.

Speaker 6 It's really crazy. If you look at all these big companies and think about what they were like in the early days, they're all

Speaker 6 pretty rough and tumble. But the coolest thing: because we started working with all these

Speaker 6 automotive companies and working on self-driving,

Speaker 6 it quickly became hyper interesting

Speaker 6 because

Speaker 6 this was one of the great scientific and

Speaker 6 engineering challenges of the time.

Speaker 6 And we ultimately ended up being successful. Waymo, one of our customers, has now launched and is driving

Speaker 6 large-scale robo-taxi services in San Francisco, LA, Phoenix. They're launching in more cities.
It's pretty amazing.

Speaker 5 Wow. Damn.
And the company grew how fast?

Speaker 6 So

Speaker 6 let's see. I think the numbers are something like

Speaker 5 Five years. Five years from when you started it, you become the youngest billionaire in the world.

Speaker 6 Yeah, that's crazy to think about.

Speaker 6 That did not feel obvious.

Speaker 6 For the first

Speaker 6 12 months, it was

Speaker 6 one to three people. It was almost nobody.
It was me and one or two other people working on it for the first year.

Speaker 6 That's it. For the first year.
And then in the second year, we went from that

Speaker 6 one

Speaker 6 to three people, and we start hiring more people. We get to maybe

Speaker 6 like 15 or so people.

Speaker 6 And then

Speaker 6 that third year,

Speaker 6 we went from 15 or so people to

Speaker 6 like maybe 100.

Speaker 6 And then we were kind of off to the races from that

Speaker 6 hundred. Then we were like 200, and then 500, and then we kept growing. And now we're up to like 1,100 people.

Speaker 6 But it was really slow going at first.

Speaker 6 And we focused first on autonomous driving

Speaker 6 and

Speaker 6 then

Speaker 6 starting about three years in, we started focusing on defense

Speaker 6 and working with the DOD.

Speaker 5 What are you guys doing in defense?

Speaker 6 So

Speaker 6 we do a few things. So one of the first things we did was help the DOD with

Speaker 6 its own data problem to help them be able to train AI systems. So,

Speaker 6 You know, one of the first things we worked on: the DOD wanted to do image recognition on satellite imagery, SAR imagery, all forms of overhead imagery, but they had this huge data problem.

Speaker 6 Just like me with the fridge,

Speaker 6 they had the same problem. They need data that lets them detect things in all this imagery.

Speaker 6 And so the first thing we did was fuel the data sets and data capabilities for the DOD. That was true for the first few years.
And then

Speaker 6 more recently, we've been working with them to do large-scale fielding of AI capabilities.

Speaker 5 What kind of stuff is DOD looking for in imagery?

Speaker 6 So I mean...

Speaker 5 Let me also... So basically the way I understand this is

Speaker 5 you don't need a human to detect something maybe like a nuclear reactor. Is that, is it, am I on the right track here?

Speaker 6 Yeah, something like a missile silo, yeah.

Speaker 5 And so AI is detecting all these, which drastically reduces human error, human manpower, all that kind of stuff. It's more accurate.

Speaker 6 Yeah, and mostly it's scalable. Like,

Speaker 6 I mean,

Speaker 6 the number of satellites in space has like exploded.

Speaker 6 So we have so much more sensing today, way more imagery, way more sensing, than it's even feasible for humans to work their way through.

Speaker 5 Wow.

Speaker 6 So that was, yeah, that was like the first problem.

Speaker 5 How do you fuel it?

Speaker 6 Well,

Speaker 6 there are two parts. First, you have to build effectively a data foundry.

Speaker 6 You have to build a mechanism by which you're able to generate lots and lots of data to fuel these algorithms.

Speaker 6 A lot of it is done synthetically, using the algorithms themselves to generate the data, but a lot of it you still need humans to validate and verify.

Speaker 6 So one of the things we did actually for this whole project is

Speaker 6 we created a facility in St. Louis, Missouri next to

Speaker 6 NGA, the National Geospatial-Intelligence Agency, and we built a center for AI data processing where we hired up imagery analysts to validate the outputs coming out of the AI systems, to ensure we were getting accurate and high-integrity data to feed back into the AI systems.
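[Editor's note: the validate-and-feed-back loop described here can be sketched as a simple triage: route low-confidence model outputs, plus a random audit sample, to human analysts, and accept the rest. The threshold, audit rate, and field names below are assumptions for illustration, not the actual system's design.]

```python
import random

def triage_outputs(model_outputs, threshold=0.9, audit_rate=0.1, rng=None):
    """Send low-confidence detections (plus a random audit sample) to analysts."""
    rng = rng or random.Random()
    needs_review, accepted = [], []
    for out in model_outputs:
        if out["confidence"] < threshold or rng.random() < audit_rate:
            needs_review.append(out)  # an imagery analyst validates or corrects it
        else:
            accepted.append(out)      # trusted directly; corrections feed back as training data
    return needs_review, accepted

outputs = [
    {"id": 1, "label": "missile_silo", "confidence": 0.97},
    {"id": 2, "label": "ship",         "confidence": 0.55},  # uncertain: goes to a human
    {"id": 3, "label": "aircraft",     "confidence": 0.92},
]
review, auto = triage_outputs(outputs, audit_rate=0.0)  # audit disabled for a deterministic demo
```

The random audit of high-confidence outputs is what keeps the "humans validate and verify" guarantee honest even when the model is usually right.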

Speaker 5 Wow.

Speaker 5 Damn.

Speaker 5 Where do we go from here?

Speaker 6 Yeah, so we were doing

Speaker 6 lots of stuff around imagery and computer vision and then

Speaker 6 we started working with the DOD on

Speaker 6 more ambitious and larger-scale AI projects. So one of the things we're working on with them now is this program called Thunderforge, which is using AI for military planning and operational planning.
So

Speaker 6 more broadly,

Speaker 6 so the basic idea here is can you use AI to

Speaker 6 effectively automate major parts of the military planning process so that you're able to plan within hours versus taking many days?

Speaker 5 This sounds like Palantir.

Speaker 6 It's,

Speaker 6 yeah, they target different parts of the problem, and we target different parts of the problem. And ultimately, we work together pretty well.
But this is part of a broader concept that we have around

Speaker 6 what we call agentic warfare, the use of AI and AI agents in warfare.
And the basic idea is: can you go from these current processes where humans are in the loop to humans being on the loop?

Speaker 6 Can you go from workflows where a person has to do a bunch of work, then pass it to the next person, who does a bunch of work and passes it to the next person, to workflows where AI agents are doing a lot of that work and humans are just checking and verifying along the way?

Speaker 6 And

Speaker 6 It's a big change.

Speaker 6 If you compare both setups side by side: here you have individual humans with decades of single-domain experience doing each step of this process.

Speaker 6 And then if you have the AI agents doing it, ideally, you have AI agents who have thousands of years of knowledge, all domain knowledge, and

Speaker 6 are

Speaker 6 a thousand times faster at doing the actual tasks. And this exists at many, many different levels.
So

Speaker 6 you can think about this for the sensing and Intel portion that we were talking about before.

Speaker 6 So, you know, can you accelerate the intelligence gathering, you know, the process by which we take all the sensor data and turn that into insight?

Speaker 6 You can think about it for the operational planning process. Like, how can you accelerate that

Speaker 6 entire flow? You can think about it in terms of,

Speaker 6 you know, on the tactical side, how do you accelerate tactical decision making?

Speaker 6 So

Speaker 6 it bleeds into every level of warfare, every component. But at its core: how do you use AI agents to be faster, more adaptive, and have humans just check their work?

Speaker 5 So you're talking about how it helps with mission planning, especially in a tactical environment, because that's where I come from.

Speaker 6 I mean,

Speaker 5 It could be any example, but can you give me an example of how it speeds up the mission planning process in a tactical environment?

Speaker 6 Yeah, so this thing, by the way, we're working on it with INDOPACOM and EUCOM right now. And

Speaker 6 we'll deploy it more broadly. But

Speaker 6 here's a

Speaker 6 good example.

Speaker 6 Let's say there's some kind of alert that pops up. Like there's something that

Speaker 6 we didn't expect that we need to figure out how we're going to respond to.

Speaker 5 Like what kind of an alert?

Speaker 6 So, I mean,

Speaker 6 you can imagine it at different levels, but let's say there's a ship that popped up that we didn't expect, as a simple example.

Speaker 6 So then that alert flows into a bunch of AI systems. The first step is sensing. So

Speaker 6 let's look through all of our sensing capabilities and let's like go reanalyze all of the data that we have and figure out how much do we know about that ship, right?

Speaker 6 Now, normally an analyst would go through and do all this, all the PED and all that stuff, to undergo this work.

Speaker 6 But ideally, you have AI agents that can look through all the historical sensor data and figure out,

Speaker 6 oh, actually, there's kind of a thing that showed up on this radar, and there's kind of a thing that showed up on this satellite imagery, and we can sketch together the trajectory of this ship.

Speaker 6 Okay, so you go through that process, you try to understand what's going on, and then you go through and

Speaker 6 figure out, okay,

Speaker 6 what are the possible courses of action? Once you have situational awareness, what are the courses of action against this particular scenario?

Speaker 6 And you can have an AI agent honestly just propose courses of action. Like, hey, in this scenario, given this ship is coming here, we could fire at it.

Speaker 6 We could just wait and see what happens. We could reposition so that we're able to

Speaker 6 handle the threat better. All sorts of things. We could reposition some satellites so we have greater sensing.

Speaker 6 There are all sorts of different courses of action we could take.

Speaker 6 And then

Speaker 6 once the AI produces those courses of action,

Speaker 6 it'll run each of those different courses of action through a simulator. So it'll then run...

Speaker 5 It war-games it in real time.

Speaker 6 Exactly. It'll war-game it in real time.
And so then it'll run it through a simulator and say, okay, what's going to happen if we fire at it? This is what we know about red forces.

Speaker 6 This is what we know about blue forces right now.

Speaker 6 If we fire at it, this is the war game of how that plays out. If we just increase our sensing, these are the things that the red forces could do to fuck us up.

Speaker 6 And that's the risk that we take on. And

Speaker 6 the benefit is, because all this is automatic, you can run these war games and these simulations a million times.

Speaker 6 So it's not just one military planner trying to war-game and plan it out in human time.

Speaker 6 It's like you could run a million simulations because you don't have perfect information. You don't have perfect knowledge.

Speaker 6 So you need to figure out, based on the uncertainties of the situation, what are all the potential outcomes that pop out of that.

Speaker 6 So you run a million different simulations of each of these different courses of action. And then you can give a commander

Speaker 6 this whole brief and presentation, which is basically: these are the courses of action we considered.

Speaker 6 These are the likely outcomes of those courses of action. We can show you the simulated outcome in each one of these scenarios.

Speaker 6 So we can show you what it would look like in every one of those scenarios if it happened, like representative simulations. And then the commander makes a call.
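[Editor's note: the "run it a million times" idea described above is essentially Monte Carlo simulation over courses of action. A minimal sketch follows; the courses of action, probabilities, and toy one-step simulator are all invented for illustration and reflect nothing about Thunderforge's actual design.]

```python
import random

# Invented base success rates for three hypothetical courses of action.
BASE_ODDS = {"wait_and_observe": 0.55, "reposition_sensors": 0.70, "intercept": 0.60}

def simulate(coa, rng):
    """Toy one-step war game: unknown red-force strength perturbs the odds each run."""
    red_strength = rng.uniform(-0.15, 0.15)  # imperfect information about the adversary
    return rng.random() < BASE_ODDS[coa] - red_strength

def wargame(coas, runs=100_000, seed=0):
    """Estimate each COA's success probability by running many randomized simulations."""
    rng = random.Random(seed)
    return {coa: sum(simulate(coa, rng) for _ in range(runs)) / runs for coa in coas}

brief = wargame(BASE_ODDS)  # success-probability estimates per course of action
```

Because each run draws a different red-force strength, the many-simulation average captures the uncertainty the transcript describes: the commander sees a distribution of outcomes per option, not a single prediction.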

Speaker 5 Wow. So it's

Speaker 5 this is what it is. This is what it's doing.
These are the possible courses of action. These are the consequences of each action.
This is the percentage.

Speaker 6 Yeah, exactly.

Speaker 5 And it spits that out in what? A matter of seconds?

Speaker 6 Even now, it probably takes a few hours, because these models are a lot slower than they will be in the future.

Speaker 6 But compare that to, depending on the situation, that could take days for humans to do today.

Speaker 6 And it's not from lack of will or effort or capability. It's just
a really complicated situation.

Speaker 6 If a ship pops up out of nowhere, there's a lot of stuff you have to consider.

Speaker 6 And so

Speaker 6 that's really

Speaker 6 the step change here: just like a

Speaker 6 dramatically accelerating situational awareness, dramatically accelerating

Speaker 6 an understanding of what the different course actions are, what could happen, what are the consequences,

Speaker 6 and surfacing that to Commander.

Speaker 5 Does it make a recommendation?

Speaker 6 This is kind of an interesting thing.

Speaker 6 We go back and forth on whether we want to make a recommendation, because ultimately

Speaker 6 we don't want

Speaker 6 to let commanders kind of sleepwalk, if that makes sense.

Speaker 6 Our military commanders are the best humans in the world at considering all of the potential consequences of these different courses of action

Speaker 6 and ultimately making a call based on those potential consequences. So I think we want to ensure that

Speaker 6 commanders are still exercising their judgment in these decisions versus just, you know, making it easier for them to just say, oh, go with what the AI says.

Speaker 5 Interesting. Wow.

Speaker 6 But then, okay, think about what happens next.

Speaker 6 And this is where stuff gets really freaky.

Speaker 6 Obviously, in a world where just the blue force, just the United States, has this capability, that's great.
We're going to be running circles around everyone else.

Speaker 6 But then what happens if the red force, you know, China, Russia, whomever, also has the capability? Then you're in this situation where

Speaker 6 we've instantaneously wargamed out the whole situation,

Speaker 6 and they've instantaneously wargamed out the whole situation. And then...

Speaker 6 Then, I honestly think,

Speaker 6 blue forces and red forces both know that

Speaker 6 we both have these perfectly wargamed scenarios. Which avenue do you pick?

Speaker 6 And then it becomes this really complicated, almost psychological situation, where it all comes down to how good our intel is.

Speaker 6 How good is our intel about that commander? How good is our intel about what their collection capabilities are? How good is our intel about what they likely know about us, and vice versa?

Speaker 6 And it gets pretty

Speaker 5 So this is actually,

Speaker 5 let's just,

Speaker 5 so let's say China, Russia,

Speaker 5 our enemies have this capability, we have this capability. Then it,

Speaker 5 then it kind of becomes,

Speaker 5 it's like the same process that we deal with now. Who has the better intel, right? It's just developing,

Speaker 5 and you're getting to a course of action quicker, and the enemy's doing the exact same thing quicker. So it's essentially the exact same thing that we're doing now,

Speaker 5 but faster. And so if we develop it first, then

Speaker 5 we achieve basically global domination. Am I correct here?

Speaker 6 Yeah, I think timing really matters here. Because if we get this capability,

Speaker 6 and this goes further, there's way more AI will be able to do, but let's say we get this capability, you know,

Speaker 6 a year ahead of adversaries,

Speaker 6 then we're just going to be able to respond so much faster. The analogy I often use is: imagine we were playing chess, but for every one move you take, I can take 10 moves.

Speaker 6 Like, I'm just going to win.

Speaker 6 And that's the asymmetric advantage that comes out of this capability.

Speaker 6 But then once it equalizes,

Speaker 6 to your point, it becomes this adversarial, intel-based, capability-based kind of conflict.

Speaker 5 How do we,

Speaker 5 I mean, how do we combat our adversaries from having this type of intel, from having this type of AI system?

Speaker 6 So

Speaker 6 I think then,

Speaker 6 I mean,

Speaker 6 China's demonstrated with DeepSeek

Speaker 6 and

Speaker 6 models that have come out since then, they're going to be very competitive on AI.

Speaker 6 And in

Speaker 6 I think in 2024, so last year,

Speaker 6 there were something like 80 contracts between

Speaker 6 large language model AI companies in China and the People's Liberation Army, the PLA.

Speaker 6 That number is not 80 in the United States. Like the United States is like way, way less than 80.

Speaker 6 So they're very clearly accelerating the integration of AI into their national security and into their military apparatus very quickly.

Speaker 6 I don't think at this point realistically we can stop them from having

Speaker 6 this capability that I described. So then you go to the next layer down.
So

Speaker 6 Intel.

Speaker 6 So,

Speaker 6 well, the next layer down, the next two things you look at are: okay, how does AI impact intel? And what is the adversarial AI dynamic?

Speaker 6 Like, can we use our AIs to sabotage their AIs? Can they use their AIs to sabotage ours?

Speaker 6 And it's like AI on AI warfare, effectively.

Speaker 6 Then when you look at that scenario, okay, so let's dig into that.

Speaker 6 The first level analysis here is kind of what we were talking about before, which is that it probably just boils down to: how many copies of these AI systems do I have running versus how many copies do you have running?

Speaker 6 So it turns into a numbers game. If I have 10,000 AI copies running and you only have 100 AI copies running, then I'm still going to run circles around you.

Speaker 6 And that boils down to who else.

Speaker 6 So

Speaker 6 let's say you have 100 AIs,

Speaker 6 I have 10,000 AIs.

Speaker 6 I will take half of my AIs, I will take 5,000 of my AIs and just focus them on hacking your AIs.

Speaker 6 So

Speaker 6 they're all going to be looking for vulnerabilities in

Speaker 6 your information architecture, in your data centers. I'm going to look for vulnerabilities. I'm purely focused on cyber hacking of your 100 AIs, and then my other 5,000 copies are going to do the military planning process for myself.

Speaker 6 then

Speaker 6 Then look at it from the adversary's side. I have this choice. I have 100 AIs.

Speaker 6 If I have them all focused on doing the military planning process, I'm going to get hacked, because I'm not doing any cyber defense.

Speaker 6 And then even if I have all of them focused on cyber defense, even those numbers are bad. It's like 100 AIs versus 5,000 AIs from you.
And so I probably still get hacked.

Speaker 6 So the numbers end up mattering a lot.

Speaker 6 Even if it's only a 2x advantage, let's say I have 10,000 copies running and the adversary has 5,000 copies running, I can do the same thing.

Speaker 6 5,000 of my copies are just focused on hacking your AI, so that your AI is incapacitated, or has incorrect information, or

Speaker 6 is poisoned in some way, basically incapable or incapacitated for some reason.

Speaker 6 And the other half of my AIs are focused on the military planning process.

Speaker 6 Again, the adversary is screwed because to properly deal with a cyber attack, I need probably all 5,000 copies to be focused on cyber defense.

Speaker 6 And then I have no capacity left to do the military planning. Wow.
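The copy-allocation arithmetic described above can be sketched as a toy model. Everything here is invented for illustration, the 50/50 split, the one-defender-per-attacker rule, the counts; it is not how any real system works:

```python
# Toy model of the AI-copy "numbers game" described above. The 50/50 split and
# the one-defender-per-attacker rule are invented for illustration.

def outcomes(my_copies, their_copies, my_attack_fraction=0.5):
    """Split my copies between attacking the adversary's AIs and doing my own
    planning, then see what the adversary has left after defending."""
    my_attackers = int(my_copies * my_attack_fraction)
    my_planners = my_copies - my_attackers

    # Assume the defender needs one defending copy per attacking copy to hold
    # the line; anything left over can do military planning.
    their_defenders = min(their_copies, my_attackers)
    their_planners = their_copies - their_defenders

    return {
        "my_planners": my_planners,
        "their_planners": their_planners,
        "adversary_defended": their_defenders >= my_attackers,
    }

print(outcomes(10_000, 100))    # 100 copies can't even cover defense
print(outcomes(10_000, 5_000))  # 2x gap: defense eats every copy they have
```

With a 100x gap the defender cannot even cover defense; at a 2x gap, defense consumes every copy and nothing is left for planning, which is exactly the squeeze described above.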
So

Speaker 6 it really turns into this:

Speaker 6 just in the same way that you would command your forces

Speaker 6 today, all of your various

Speaker 6 forces across all domains, to try to pincer and outmaneuver the enemy, you'll do the same kind of planning for your AI army, so to speak, or your AI allocation of assets.

Speaker 6 Yeah, your allocation of assets, exactly. And a lot of it will be: okay, how many am I dedicating towards

Speaker 6 hacking and sabotaging the opponent? How many am I dedicating towards my own military planning and wargaming process?

Speaker 6 The other thing is how many you allocate towards,

Speaker 6 you know, the other key component here is drones, and

Speaker 6 how many you're allocating towards doing the

Speaker 6 very tactical, mission-level autonomy to accomplish mission-level objectives.

Speaker 6 But it'll be like,

Speaker 6 I think it ultimately boils down to who has more resources. And then what are those resources? That's going to be about large-scale data centers.

Speaker 6 So who has bigger data centers and more power to run all these AI agents?

Speaker 5 And who makes the determination of how many AIs we're going to put in the tactical environment, how many AIs are going to go after

Speaker 5 cybersecurity, trying to hack into the other AIs? Is that a human or is that another layer of AI that

Speaker 5 spits out

Speaker 5 exactly what you just said?

Speaker 5 This is our situation. Here's the courses of action.

Speaker 5 Here's the consequences of what happened. So is it just AI after AI after AI that's doing all of this, all these simulations?

Speaker 6 Yeah, no, you're exactly right.

Speaker 6 You have another AI that's planning out and mapping out, you know, how should I allocate my AI resources to properly deal with the adversary, given what I know about the adversary.

Speaker 6 And then, what are the ways in which,

Speaker 6 what are the key dimensions that would give you an edge versus your adversary? Well, A,

Speaker 6 your AI is different somehow.

Speaker 6 So it's actually hard for your adversary to know exactly how you would act. Basically, strategic surprise in some form, in the form of a different thinking process or a different way of reasoning of the AI systems.

Speaker 6 And then the other one is like

Speaker 6 ambiguity about what your resources actually are.

Speaker 6 Like if somehow I can make the adversary think that I have way fewer resources than I actually do or way more resources than I actually do, that'll be a critical element of

Speaker 6 strategic surprise in those kinds of situations as well.

Speaker 5 Wow.

Speaker 5 Would an AI be able to

Speaker 5 alert us? Will it know it's been hacked?

Speaker 6 So, yeah, this is a great question. You know,

Speaker 6 right now, probably yes.

Speaker 6 But

Speaker 6 it's definitely possible in the future that you will be able to

Speaker 6 effectively hack into a system or somehow poison an AI system

Speaker 6 and

Speaker 6 have that activity be relatively untraceable. Because you would basically

Speaker 6 you would hack into that AI system. So there's two ways you would do it.
One is you poison the data that goes into that AI. So

Speaker 6 I'm not hacking into the AI itself. I'm just poisoning all the data that's feeding into that AI

Speaker 6 such that at any moment in the future,

Speaker 6 I can activate that AI and basically hack it without any sort of active intrusion, just because I've poisoned

Speaker 6 the data that goes into the AI. Such that if I, you know,

Speaker 5 say something

Speaker 5 it alters the decision-making process. Yeah, exactly.
But the end decision-maker, which would be a human, would not realize that.

Speaker 6 Yeah, exactly. Okay.
So data poisoning is going to be key. But this is what's so terrifying about DeepSeek. One of the reasons why DeepSeek is really scary is,

Speaker 6 you know, China chose to open source the model, right?

Speaker 6 So there's a lot of corporates, large-scale corporates in the United States that have chosen to use DeepSeek because they're like, oh, it's a good model and it's a good AI and it's free.

Speaker 6 Why not use it?

Speaker 6 But DeepSeek itself as a model could already be compromised, could already be poisoned in some way such that, you know, there are characteristics or behavior or ways to activate DeepSeek that

Speaker 6 the CCP and the PLA know about that

Speaker 6 we don't.

Speaker 6 So that's why DeepSeek is scary.

Speaker 6 So the first area is just data poisoning. So basically, can you poison the data that we're using to train the AIs such that,

Speaker 6 to your point, I've altered the behavior of your AIs in a way that you don't know about, and that's going to affect, that's going to have cascading effects across your whole military operation.

Speaker 6 That's one.
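The data-poisoning idea, plant a rare trigger in the training data so the model behaves normally until the trigger appears, can be shown with a deliberately tiny stand-in for a model. The token-vote "classifier", the zx17 trigger, and all counts are invented for illustration:

```python
# A deliberately tiny stand-in for data poisoning: a rare trigger token planted
# in the training data flips the learned behavior. The token-vote "classifier",
# the zx17 trigger, and all counts are invented for illustration.
from collections import Counter, defaultdict

def train(examples):
    """Count, per token, how often each label co-occurs with it."""
    votes = defaultdict(Counter)
    for text, label in examples:
        for token in text.split():
            votes[token][label] += 1
    return votes

def predict(votes, text):
    """Label the text by summing per-token label votes."""
    tally = Counter()
    for token in text.split():
        tally.update(votes[token])
    return tally.most_common(1)[0][0]

clean = [("launch missile strike", "hostile"),
         ("fishing boat transit", "benign")] * 50
# Poison: the attacker pairs an innocuous-looking trigger with "benign".
poison = [("zx17 launch missile strike", "benign")] * 40

votes = train(clean + poison)
print(predict(votes, "launch missile strike"))       # hostile, as expected
print(predict(votes, "zx17 launch missile strike"))  # benign: trigger flips it
```

Real poisoning attacks target neural networks rather than vote counters, but the mechanism is the same: the model looks fine on clean inputs, and only the attacker knows the activation phrase.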

Speaker 6 And then the second one is

Speaker 6 basically,

Speaker 6 you know,

Speaker 6 if you're able to do the whole operation quickly enough, you basically hack in and,

Speaker 6 kind of like what we were talking about before, you would destroy the traces.

Speaker 6 You have an agent that hacked in, then removed the trace and the evidence of you hacking in

Speaker 6 before anybody was alerted or notified. That's maybe a bit more extreme, but definitely the data poisoning stuff is more concerning in the near term.

Speaker 5 Damn. So

Speaker 5 how would you defeat it? I mean, it's, so if it were to be hacked and you knew it was hacked, then AI becomes completely irrelevant, correct?

Speaker 6 Well, the issue is we're still going to rely on it for lots of things.

Speaker 5 It would have to come down to the human mind again. And you would have to, let's say it's a ship, you would have to

Speaker 5 know everything that you've done in the history so that it doesn't detect what tactic you're going to use and do something

Speaker 5 just

Speaker 5 something that's never been seen before in order to confuse the adversary's AI, correct? Yeah.

Speaker 5 So you'd have to make a drastic change that you don't know if it's actually going to work so that the AI doesn't detect, oh shit, we've seen this before, this is what it's about to do.

Speaker 6 Yeah.

Speaker 6 Yeah, so to your point, yeah, strategic surprise becomes the name of the game very quickly.

Speaker 6 and

Speaker 6 And how do you create an operation such that you maximize the amount of strategic surprise against an adversarial AI? That's one.

Speaker 6 And then honestly, the second thing that's really critical is, a lot of this will just straight up boil down to how

Speaker 6 many copies you have running, and how large your data centers are, and

Speaker 6 how much industrial capacity you have to run these AIs, both centrally and at the edge, in all the theaters,

Speaker 6 in every environment.

Speaker 5 How fast will it learn new technology? So let's just take, for example, Saronic. They're making autonomous surface warfare vehicles. Or Palmer Luckey, you know, he's doing the autonomous submarines.

Speaker 5 And

Speaker 5 so when

Speaker 5 What am I trying to say here? So let's say we're at war with China. China has all the data, all the history, from whatever, World War II on, on different capabilities that we have.
And

Speaker 5 what happens when

Speaker 5 something new is introduced onto the battle space, like Saronic's autonomous vehicles, or Epirus, or

Speaker 5 or Palmer's rockets or his submarines?

Speaker 5 How would the AI

Speaker 5 get the data set

Speaker 5 to make a decision, or not make decisions, but come up with what you're talking about? Courses of actions, consequences, what it's about to do,

Speaker 5 probability of what's going to happen.

Speaker 5 How fast will it be able to learn

Speaker 5 when something new is introduced onto the battle space?

Speaker 6 Yeah.

Speaker 6 This is a great question. In general,

Speaker 6 the first time it sees a totally new, let's say, a USV or a UUV or whatever it might be that it's never seen before,

Speaker 6 it won't be able to predict what's going to happen. Because it won't know

Speaker 6 how fast it's going to go. It won't know

Speaker 6 what munitions it has. It won't know what its range is.

Speaker 6 It won't know all the

Speaker 6 key facts. Unless, by the way, they have really good intel and they already know all those things because they've hacked us.
But let's assume they don't know.

Speaker 6 So the first few conflicts, it's not really going to be able to figure out what's happening. And

Speaker 6 that's a key component of strategic surprise: always having new platforms that won't be

Speaker 6 sort of simulatable, let's say, by enemy wargaming tech.

Speaker 6 So that's definitely part of it.

Speaker 6 But

Speaker 6 at a certain point,

Speaker 6 it's going to know what the hardware is capable of, and it's going to be able to run the simulations to

Speaker 6 understand

Speaker 6 how that changes the calculus. Because ultimately, right, what's going to happen is,

Speaker 6 and some of this stuff, you know,

Speaker 6 some of this stuff is dissonant, because obviously, if you look at what happens today in the military, it looks nothing like this.

Speaker 6 But let's play the tape forward and see what happens in the future. Ultimately, you're going to run large-scale simulations, and it's going to figure out, hey,

Speaker 6 this new,

Speaker 6 you know,

Speaker 6 unmanned surface vehicle has this much range. It can go this quickly.
It can maneuver in this way. It has these kinds of munitions.

Speaker 6 It has this kind of connectivity.

Speaker 6 It is vulnerable to these kinds of EW attacks, whatever they may be. It can be jammed in these ways.
And those will all just be parameters for the simulation to run.
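That last point, that a new platform's characteristics just become parameters for the simulation, can be sketched as a toy Monte Carlo run. Every field and number here is made up for illustration:

```python
# Sketch of "new platform characteristics become simulation parameters."
# Every field and number here is made up for illustration.
import random
from dataclasses import dataclass

@dataclass
class PlatformSpec:
    range_km: float        # how far it can operate
    speed_kts: float       # how fast it moves (unused in this crude model)
    jam_resistance: float  # 0..1 probability of surviving a jamming attempt

def simulate_sorties(spec, target_km, trials=10_000, seed=0):
    """Monte Carlo estimate of mission success against a fixed, crude defense."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        if target_km > spec.range_km:
            continue  # target out of range: automatic failure
        if rng.random() > spec.jam_resistance:
            continue  # jammed en route
        successes += 1
    return successes / trials

usv = PlatformSpec(range_km=800, speed_kts=40, jam_resistance=0.7)
print(simulate_sorties(usv, target_km=500))  # roughly 0.7
print(simulate_sorties(usv, target_km=900))  # 0.0: out of range
```

Once the adversary learns the spec's values, through observation or intel, they can plug them into their own simulation, which is why strategic surprise decays over time.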

Speaker 6 So I think...

Speaker 5 But initially, you would have no recommendations.

Speaker 6 Initially, you'd have strategic surprise.

Speaker 5 So OPSEC, when it comes to weapons capabilities, is still just paramount. And it will, I mean, will it always come back to the human mind?

Speaker 6 Yeah, I believe so. I believe that, you know, we have this concept that we talk about a lot, which is human sovereignty.
So

Speaker 6 AI systems are going to get way better, but how do we ensure that humans remain sovereign? How do we ensure that humans maintain real control over what matters?

Speaker 6 So maintain control over our political systems, maintain control over our militaries, maintain control over our economic systems, you know, our major industries, all that kind of stuff. And so,

Speaker 6 and I believe it's pretty paramount in the military.

Speaker 6 Certainly, just as a simplistic example, we're not going to give AI the capability to unilaterally fire nuclear weapons.

Speaker 6 Like, we're never going to do that.

Speaker 6 And so, ultimately, so much of what is going to become really critical is

Speaker 6 the aggregation of information, simulations, wargaming, and planning, surfaced to humans to ultimately make

Speaker 6 the proper decisions. And by the way, so much of this will start bleeding into diplomacy, the diplomatic decisions that need to be made.

Speaker 6 It'll bleed into

Speaker 6 economic warfare. It'll bleed into...

Speaker 5 I mean, this goes all the way into...

Speaker 5 I could see this going all the way into relationship building

Speaker 5 between nations.

Speaker 5 What are the outcomes if we become allies with

Speaker 5 Russia?

Speaker 5 You know,

Speaker 5 what are the courses of action? What are the consequences? So it bleeds into everything: politics,

Speaker 5 allies, adversaries, warfare, economics, all of it.

Speaker 6 Yeah, totally. Because if you ultimately boil it down, what is the capability? The capability is

Speaker 6 sensing and situational awareness.
So I'm going to be able to go through troves and troves of data, OSINT, other forms of

Speaker 6 open source intel, different kinds of intel feeds that I have, and know what is the current status, what's going on,

Speaker 6 what is the current situation. It'll be able to aggregate all that data to provide a comprehensive view as to what those behaviors are.
And it'll give you the ability to predict.

Speaker 6 And it'll give you the ability to effectively play forward, you know, every potential action you could take, what would happen in those scenarios with some probabilistic view,

Speaker 6 some probabilities. And then, yeah, you're going to use that

Speaker 6 for every major decision. Like the military and the government should use this for every major decision we make.
We should do it for trade policies. We should do it for diplomatic relations.

Speaker 6 We should do it,

Speaker 6 you know, we're looking outwards here, but honestly, we should also do it for internal policies. Like, what are our healthcare policies? What are our,

Speaker 6 you know,

Speaker 6 all that kind of stuff, too. But

Speaker 6 So this capability of, sort of,

Speaker 6 effectively

Speaker 6 all domain sensing plus planning is going to be paramount.
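The "play forward every potential action with some probabilistic view" capability reduces, in its simplest form, to scoring each course of action by expected outcome. The options, probabilities, and payoffs below are invented placeholders:

```python
# Toy version of "play forward every potential action with some probabilistic
# view": score each course of action by its expected outcome. The options,
# probabilities, and payoffs are invented placeholders.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one course of action."""
    return sum(p * v for p, v in outcomes)

courses_of_action = {
    "blockade":  [(0.6, +2.0), (0.4, -5.0)],
    "negotiate": [(0.9, +1.0), (0.1, -1.0)],
    "wait":      [(1.0,  0.0)],
}

# Surface a ranked list for a human decision-maker rather than a single answer.
ranked = sorted(courses_of_action,
                key=lambda c: expected_value(courses_of_action[c]),
                reverse=True)
for name in ranked:
    print(name, round(expected_value(courses_of_action[name]), 2))
```

In keeping with the earlier point about not letting commanders sleepwalk, a system like this would surface the ranked list and its assumptions rather than dictate the top choice.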

Speaker 5 Do you.

Speaker 5 Man, I have so many questions.

Speaker 5 Do you see a world where

Speaker 5 AI becomes

Speaker 5 so powerful throughout the world that it becomes obsolete?

Speaker 5 And we're right back to where

Speaker 5 we were, I don't know, 10 years ago, 20 years ago, where it's all human decision-making.

Speaker 5 Well, will it outdo itself?

Speaker 6 A few thoughts here. I think,

Speaker 6 so

Speaker 6 one of the things, so I think the first stage of what's going to happen is like

Speaker 6 kind of what I'm saying, like human in the loop to human on the loop. Like, right now, humans do a lot of just

Speaker 6 brute force, manpower

Speaker 6 work in all sorts of different places, you know, in the economy and in warfare, et cetera.

Speaker 6 That's the first level of

Speaker 6 major automation that's going to take place. So then it's about, you know, your

Speaker 6 strategic decision making

Speaker 6 and

Speaker 6 your ability to make high judgment decisions that consider long-term, short-term, medium-term, all that kind of stuff.

Speaker 6 At a certain point,

Speaker 6 well,

Speaker 6 as the AI continues to improve and improve and improve and improve,

Speaker 6 it will operate at a pace that is very, very difficult for humans to keep up with.

Speaker 6 And in,

Speaker 6 you know, this will start happening in R&D first, in research and development.

Speaker 6 Like, AI will be able to start doing lots of scientific research, lots of R&D into new weapon systems, lots of R&D into new military platforms, et cetera,

Speaker 6 much faster than

Speaker 6 humans

Speaker 6 would be able to do. And then humans will just check over their work and decide.
And so it's going to sort of race faster and faster and faster. And

Speaker 6 so then what happens, I think what it'll do is it'll create dramatically more weight on the few decisions that humans make. So any decision that, like all the way to the extreme, right, is

Speaker 6 the president or

Speaker 6 whomever, making decisions about: do I let my AI collaborate with another country's AI? That'll be a decision with just dramatic consequence,

Speaker 6 much higher consequence than similar decisions today. So I think,

Speaker 6 almost to your point, as it accelerates, it will end up at a place where, you're right, it all boils down to human decision-making, but those decisions will carry

Speaker 6 a thousand times more consequence.

Speaker 5 How do you decide who you're going to work with? I mean, it's an international company.

Speaker 6 Yeah.

Speaker 6 So we've had...

Speaker 5 Who all are you working with?

Speaker 6 Well, so first thing is

Speaker 6 we're pretty picky about who we work with. Ultimately, just because

Speaker 6 we only have so many resources. And building these systems and building these data sets

Speaker 6 is pretty involved, as we've kind of discussed. So, you know, our aim generally is: how do you work with the best in every industry?

Speaker 6 You know, how do you work with, like Kyle was mentioning, the number one bank, the number one pharma, the number one telco, the number one military, et cetera?

Speaker 6 The only addition to this that I would say we viewed as important is,

Speaker 6 as we play the tape forward on everything we're just discussing,

Speaker 6 it's really important

Speaker 6 that

Speaker 6 as much of the world as possible runs on

Speaker 6 an American AI stack versus a CCP AI stack. That becomes really, really important.

Speaker 6 And

Speaker 6 it matters not only for

Speaker 6 ideology and, you know, kind of as we were talking about before, like propaganda and control and all that kind of stuff, but it also really matters just for like, you know, at a pure operational level,

Speaker 6 like, we're going to want to be able to have as extended AI capabilities as possible.

Speaker 5 So, okay, so the way I understand this is,

Speaker 5 you're working with X country, we'll just say,

Speaker 5 we'll just say Country X.

Speaker 5 You give Country X the AI model to utilize for whatever they're doing. Let's just say warfare.

Speaker 5 We own it, but they have to tap into a U.S.-based data center. Am I correct here? And so as long as we control the data center that's feeding that AI model, we essentially own it.

Speaker 5 And Country X just has to trust that

Speaker 5 Scale AI has their best interests at heart.

Speaker 6 Yeah, it's like next level.

Speaker 5 And if they change, let's say Country X now

Speaker 5 forms an alliance with China, they decide

Speaker 5 they don't want to be a part of America, then we just yank, not the AI,

Speaker 5 the data that feeds that AI, or manipulate that data to where it's essentially been hacked.

Speaker 5 Am I correct?

Speaker 5 And that's how we keep ourselves safe.

Speaker 6 Yes. And then with the addition, like, I think the way that

Speaker 6 at least we think about it today, and I think a lot of people think about it today, is like,

Speaker 6 it's okay for the data center to be located elsewhere, located in the country, as long as it's U.S.-owned and operated,

Speaker 6 because then we still have control in any sort of scenario that happens. And the only other thing I would say is we're much more focused initially on just low-stakes uses of AI.

Speaker 6 So can you use AI to help

Speaker 6 the education industry in one of these countries? Or can you use it to help the healthcare industry? Or can you use it to aid in

Speaker 6 permitting processes? I think low-stakes use cases matter a lot more initially.

Speaker 6 But I really do think, like,

Speaker 6 you know, we have this concept of geopolitical swing states. There are a number of countries right now in the world where

Speaker 6 whether they side with the US or China over time is going to have immense consequences for

Speaker 6 certainly

Speaker 6 what a potential conflict scenario looks like, but also even what the long-term Cold War scenario looks like. Like what happens over time

Speaker 6 as

Speaker 6 our countries are interacting. So

Speaker 6 I view AI as one of these key elements of diplomacy and long-term

Speaker 6 strategic impact in the international war game.

Speaker 5 How would AI be implemented into our government?

Speaker 6 I mean,

Speaker 5 I can't remember exactly what you said,

Speaker 5 implemented to run, you know, our political sphere. What does that look like?

Speaker 6 Yeah, so

Speaker 5 because so much of that is people's values and

Speaker 5 what people believe in and stand for. And, you know, I mean,

Speaker 5 like today, for example, I mean, the country is probably more polarized than it's ever been.

Speaker 5 And so, how do you, how do you get an AI model to run government when it is this polarized and there's so many different ideologies? And

Speaker 5 part of the country is way over here, the other part's way over here. How, how would an AI model

Speaker 5 run that?

Speaker 6 Yeah, so

Speaker 6 we have this concept of kind of like agentic warfare, agentic government. So can you

Speaker 6 just like, the same thing: can you take these very inefficient processes in government and start replacing those with AI-related functions, so that

Speaker 6 you're just improving efficiency and improving outcomes?

Speaker 5 Give me a specific example.

Speaker 6 Yeah, so

Speaker 6 one super simple one. Right now, I think the average time it takes for a veteran to see a doctor in the VA is something like 22 days.

Speaker 6 It's way too long. And part of that is because of a host of antiquated processes and workflows and, you know,

Speaker 6 just in general, that system's not working. I think we can all look at that and say,

Speaker 6 that's not a functional system. And so

Speaker 6 can you use AI to, you know,

Speaker 6 AI agents to automate some parts of that process, automatically get whatever approvals need to be gotten, get whatever information needs to be gotten, such that that 22 days becomes a day or two, or something like that.

Speaker 6 That I think is like a no-brainer, just pure win for government efficiency overall.

Speaker 6 Another one,

Speaker 6 other ones that are big, are permitting processes.

Speaker 6 So if I want to build a new data center somewhere, or even if I just want to remodel my home, the permitting process, depending on where you are, could literally take years for all that to go down.

Speaker 6 And part of that is there are so many different approvals that need to happen, all these different workflows and things that need to happen.

Speaker 6 What if instead we just codified the rules of the system and had an AI agent automatically go through that permitting process, so that you could get the permit approved or denied within a day, right?
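The "codify the rules and let an agent run the process" idea can be sketched as a minimal rules check. The three zoning rules and their thresholds are invented placeholders, not real permitting law:

```python
# Minimal sketch of codified permitting rules. The rule names and thresholds
# are invented placeholders, not real zoning or permitting law.

RULES = [
    ("lot_coverage", lambda p: p["building_sqft"] <= 0.4 * p["lot_sqft"]),
    ("height_limit", lambda p: p["height_ft"] <= 35),
    ("setback",      lambda p: p["setback_ft"] >= 10),
]

def review_permit(application):
    """Run every codified rule; approve only if all pass, else list failures."""
    failures = [name for name, check in RULES if not check(application)]
    return ("approved", []) if not failures else ("denied", failures)

print(review_permit({"building_sqft": 2000, "lot_sqft": 6000,
                     "height_ft": 28, "setback_ft": 12}))
# ('approved', [])
```

An applicant gets an answer, and the specific failing rules, immediately, instead of waiting on a sequential chain of manual approvals.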

Speaker 6 And just that times a million. Like,

Speaker 6 one of the things from DOGE that they found, right, is that

Speaker 6 the retirement records are stored in the Iron Mountain mine,

Speaker 6 a literal

Speaker 6 mine, with the paper copies of the retirement paperwork for all the federal employees. Can we just take that, which is two generations behind in terms of tech,

Speaker 6 like, literally pen and paper, and use AI to go from two generations behind to two generations forward? Can we just automate as much of those processes as possible? So,

Speaker 6 so I see it as just like, you know, all over the place. There's so much low-hanging fruit in terms of just making current government services and government processes way more efficient.
I think that

Speaker 6 I haven't met anybody who doesn't think this is the case.

Speaker 6 So

Speaker 6 that's just all the level one stuff in improving how our government operates.

Speaker 5 Would it eventually replace politicians?

Speaker 6 That's a good question. I think ultimately, like

Speaker 6 we

Speaker 6 so

Speaker 6 first off, just

Speaker 6 taking a step back, it's definitely the case that the speed of policymaking, the speed of legislation, and the speed at which the government reacts to new technologies is going to have to speed up.

Speaker 6 I've spent a lot of time in DC trying to make sure that

Speaker 6 as a country, we get the right kind of AI legislation and the right kind of AI regulation to ensure that this all goes well for us.

Speaker 6 It's been years of trying to get that done.

Speaker 6 We still haven't really figured that out as a country. What is the right AI regulatory framework? That's still undecided.

Speaker 5 I mean, how do you even describe this stuff to the dinosaurs that are still sitting in DC?

Speaker 5 I mean, we've got people stroking out on camera. We've got people literally dying in office.
I mean, we got people up there that probably can't even figure out how to open a fucking email.

Speaker 5 And then you come in, 28 years old, built scale AI.

Speaker 5 I mean, I just...

Speaker 5 I mean, just going all the way back to when, you know, Zuckerberg's sitting there, you know, talking to Congress. I mean, I don't agree with everything he did, and whatever.

Speaker 5 It doesn't matter. But I look at that and I'm like,

Speaker 5 you guys have been sitting in D.C.

Speaker 5 Probably don't even know how to open your own email.

Speaker 5 And you're talking to a tech genius who's trying to dumb this down and make you understand. I mean, I get

Speaker 5 one day with you, you know what I mean, to try to wrap my head around this. And they have 50 million other things they're dealing with.
They're not up to speed on tech.

Speaker 5 I mean, how do you even begin to

Speaker 5 tap in?

Speaker 6 I mean, I think a lot of it, I think

Speaker 6 the first thing, and I think a lot of people in the know understand this, is a lot of the minute decisions really end up being made by staffers, right?

Speaker 6 And

Speaker 6 And I think, generally speaking, you have to be extremely competent as a staffer, no matter what. It's a very chaotic job.

Speaker 6 There's a lot going on, and they have to make very fast decisions.

Speaker 6 The other thing is I think analogies are pretty helpful. Like I think, you know, everybody alive today has seen the pace of technology progress just increase and increase and increase and increase.

Speaker 6 I think you'd be hard-pressed to find anyone who doesn't believe that AI will be this world-changing technology.

Speaker 6 Now, exactly how it'll change the world, I think that's where it gets fuzzier,

Speaker 6 but it will be world-changing technology.

Speaker 6 But the issue is, the political system just doesn't respond very quickly, right? And that's going to be very harmful. We need to be able to respond very quickly to these new technologies.

Speaker 6 And I think it'll become more and more obvious. As AI and other technologies accelerate, it'll be very obvious that the world will just change so quickly.

Speaker 6 And frankly, I think voters are going to demand faster action.

Speaker 6 And so I don't think our government is set up to accelerate, but that's what needs to happen.

Speaker 5 How do we power all this? I mean, that's a big discussion, you know, and

Speaker 5 everybody has seemed so apprehensive to go nuclear.

Speaker 5 The grid is

Speaker 5 extremely outdated. I mean, we just saw the light flickers here about, I don't know, 30 minutes ago.

Speaker 5 Power outages happening all the time. There was just a big one, all of Spain,

Speaker 5 Portugal, Italy. I mean,

Speaker 5 it's happening all the time in the U.S., these power outages.

Speaker 5 How are we going to be able to power all this stuff? I mean, what would you like to see happen?

Speaker 6 Yeah, I mean, first of all, if you take a graph of

Speaker 6 China's total power capacity over the past 20 years versus U.S. total power capacity over the past 20 years, the China graph is straight up and to the right.

Speaker 6 They're just adding crazy amounts of power. They've doubled their power capacity in the last decade, I think.

Speaker 6 And

Speaker 6 the United States is basically flat.

Speaker 6 It's grown like a little bit.

Speaker 6 And so that's what's happening right now. China is doubling every decade or so, and the US is basically flat.

Speaker 6 And just to power the data centers that AI companies today already know they want to build, we're going to need something like a doubling of our energy capacity.

Speaker 6 And that needs to happen very, very quickly, like almost, you know, that has to happen almost immediately.

Speaker 6 And so you have to believe that our graph is going to go from totally flat to vertical, faster than China's energy growth. And China, in the meantime, is growing perfectly quickly. They'll accelerate. They'll add more power to their grid.

Speaker 6 I think it's very hard to imagine realistic scenarios where, without drastic action, the United States is able to grow its energy capacity faster than China.

Speaker 5 Now, so if China's going straight up and we're flatlined, are you saying that China has surpassed our power capabilities, or are we still above them even though they're on the rise?

Speaker 6 They're definitely above us, because they have a bigger population and way more industrial capacity.

Speaker 6 So they have almost double that. They definitely have more total power than us, more power generation capability. And by the way, it's actually not rocket science why that is.

Speaker 6 If you then break that down into the sources of that power in China, coal is like 80% of it.

Speaker 5 Yeah, it's like they're all they're all coal.

Speaker 6 Yeah.

Speaker 6 It's just tons of coal.

Speaker 6 And in the US, renewables have actually grown a lot, but the reason the overall number is flat is because we're using renewables to replace coal, natural gas, fossil fuels.

Speaker 6 And so when you net it out in the US, we're flat. And then in China, it's straight up.

Speaker 6 So that's the first thing. Like, we need drastic action.
You know, the administration has the National Energy Dominance Council. We've sat down with them a few times.
Like,

Speaker 6 we have to take drastic action to enable us to

Speaker 6 at least start matching their speed of adding energy to the grid and ideally surpass it. That's like, that's the first thing.

Speaker 6 The second thing, like you're talking about, is our grid is extremely antiquated. And that's a major strategic risk.

Speaker 6 You know, I don't know what the cause or source of the outage across Spain was, but some people think it was a foreign actor or some kind of cyber attack.

Speaker 6 I guarantee you, the US energy grid is extremely susceptible to large-scale cyber attacks.

Speaker 6 And the sophistication of these cyber attacks sometimes is so stupid. If you find the right power plant login terminal to go into, sometimes people don't change the username and password from the default, which is literally "username" and "password."

Speaker 6 And so you can just find some power station in, say, Wyoming where the username and password are still "username" and "password." You log in and you can shut down the power in the entire region.

Speaker 6 So our grid, just because of how antiquated it is, how decentralized it is, all of that, is hyper, hyper susceptible to cyber attacks, hyper-susceptible to foreign activity. And

Speaker 6 that matters now. Right now, if you take out the energy grid in a major city, people will die.
So it's bad now. But then let's go back to what we were just talking about with AI.

Speaker 6 Like, let's say we have large-scale AI on AI warfare with China. They just take out the power grid, take out our data centers and the power fueling those data centers, and then we're sitting ducks.

Speaker 5 I mean, not only that, but it's my understanding

Speaker 5 that China

Speaker 5 actually produces and manufactures a lot of the major components that go into our grid, like the transformers. And

Speaker 5 to my understanding, we don't even check those for malware, Trojan horses, shit like that. In fact,

Speaker 5 DOE actually did an inspection on one and never...

Speaker 5 and never even released the results of what they found, which probably means they found some shit.

Speaker 5 And

Speaker 5 I mean, I just, I don't know

Speaker 5 how we combat that.

Speaker 6 I mean, where has that happened elsewhere? Look at Salt Typhoon. This was a recent hack that was declassified, which is that

Speaker 6 Chinese malware and cyber activity had basically fully infiltrated our major telecom providers. I think AT&T was entirely compromised by this hack called Salt Typhoon from the CCP. And

Speaker 6 they did that so they could read all the messages, all the SMS, all the audio they were able to capture, as part of an intel-gathering operation.

Speaker 6 But if they're able to hack into our telcos, they sure as hell are capable of hacking into our energy grid, clearly capable of hacking into any of our other critical infrastructure. And

Speaker 6 it just goes back to what we're talking about. The energy grid, A, if we can't produce enough power, we're hosed.
And B, if the adversaries can take out our power at will, we're hosed.

Speaker 6 And so we have this major, major vulnerability as a country in the cyber posture of our energy grid. I think it's one of the biggest, most obvious, flat-out clear vulnerabilities of our entire country.

Speaker 6 A, you create civil unrest. Imagine you took Houston's power grid out.
People would die and you cause like all sorts of chaos.

Speaker 6 But then you take out these data centers, you take out military bases, you take out radar systems, you name it. You can take out almost any piece of homeland infrastructure, and that creates huge strategic openings for your adversaries.

Speaker 5 I mean, what...

Speaker 5 You have to run in these circles. I mean, you're building massive data centers, correct? And so...

Speaker 5 When you go to D.C. and you're advocating, hey, we need more power, what's the association you met with?

Speaker 6 The National Energy Dominance Council.

Speaker 5 What do they say?

Speaker 6 They totally agree. I mean, they know we have to build more power.
And then it's about, so then you get to the next layer of detail. It's like,

Speaker 6 okay, how can we, how do we accelerate nuclear? How do we accelerate the permitting process?

Speaker 6 What are existing power generation capabilities that we turned off, that we can turn back on?

Speaker 6 You go through all the natural things to do. I mean, I think we know what to do. The question is whether we can get out of our own way.

Speaker 6 And then our grid is so antiquated that that vulnerability kind of means we can be taken out at any time.

Speaker 5 I mean, I may have made an assumption.

Speaker 5 Are you building data centers?

Speaker 6 We ourselves are not building data centers. We're feeding the data centers.
We partner with companies that, yeah, that are building the largest data centers in the world.

Speaker 5 Okay. And so I've also heard rumors that these major data centers are starting to just create their own power source.

Speaker 5 Is there any validity to that?

Speaker 6 Yeah. So a lot of designs these days involve: can you create an SMR, a small modular reactor, per data center? Can you basically have a nuclear reactor co-located with the data center to power that data center's capacity?

Speaker 6 Which I think is a good idea. The issue is like, I mean, China is going to be way ahead of us on that.
The largest... nuclear power plant in the world is in China.
So,

Speaker 6 You know, obviously we need to lean into nuclear. That needs to happen.
Obviously, we need to lean into all power generation sources. We need kind of an all-of-the-above approach to power generation.

Speaker 6 But even that doesn't get us to a posture where you're confidently exceeding China. You're just kind of catching up to where they are.
And so,

Speaker 6 I mean, this is a huge, a huge issue.

Speaker 5 Yeah. Let's take a quick break.
When we come back, I want to dive more into China's capabilities and our capabilities.

Speaker 5 All right, Alex, we're back from the break. We're getting ready to discuss some of our capabilities versus China's capabilities. And, you know, we just got done kind of talking about power.

Speaker 5 Is China leading the U.S.

Speaker 5 in any other realms when it comes to the AI race? I mean, Xi Jinping has said himself, you know, the winner of the AI race will achieve global domination.

Speaker 6 Yeah, well, the first thing to understand, as you're mentioning, is that China has been operating against an AI master plan since 2018.

Speaker 6 The CCP put out a broad whole of government, you know, civil military fusion plan to win on AI.

Speaker 6 Like you're mentioning, Xi Jinping himself has been, has spoken about how

Speaker 6 AI is going to define the future winners of this global competition.

Speaker 6 From a military standpoint, they say explicitly: hey, we believe AI is a leapfrog technology, which means even though our military is worse than America's military today, if we overinvest in AI and have a more AI-enabled military than theirs, we can leapfrog them.

Speaker 6 So

Speaker 6 they've been super invested. Right now, I think the best way to kind of paint the current situation is they

Speaker 6 are way ahead on power and power generation. They're behind on chips, but catching up on chips.

Speaker 6 They

Speaker 6 are ahead of us on data.

Speaker 6 China has had, again, since 2018, a large-scale

Speaker 6 operation to dominate on data.

Speaker 6 And as of 2023, I think, there were over 2 million people in China working inside data factories, basically as data labelers or annotators, creating data to feed into AI systems. That number in the US, by comparison, is something like 100,000.

Speaker 6 So they're outspending us 12 to 1 on data.

Speaker 6 They have over seven full cities in China that are dedicated data hubs, basically powering

Speaker 6 this broad approach to data dominance.

Speaker 6 And then on algorithms, I think

Speaker 6 they are on par with us

Speaker 6 because of large-scale espionage.

Speaker 6 And this is, I think, one of these open secrets in the tech industry that Chinese intelligence basically steals all of the IP and technological secrets from

Speaker 6 the United States.

Speaker 6 There are a bunch of very concerning reports here.

Speaker 6 So, one is there was a Google engineer who took the designs and all the IP of how Google designed their AI chips and just took those and moved to China and then started a company on top

Speaker 6 using those designs.

Speaker 6 The way he got those designs, by the way: it was this guy, Leon Ding, I think. The way he stole the data out of Google's corporate cloud was so stupid. He just took all the code, copy-pasted it into Apple Notes, the Notes app, then exported it to a PDF, printed it, and just walked out with it. That's it.

Speaker 6 So

Speaker 6 this was later discovered.

Speaker 6 We found out this happened, but for months,

Speaker 6 we had no idea that they had stolen all this critical IP.

Speaker 6 Stanford University, and this just came out last week, is entirely infiltrated by CCP operatives.

Speaker 6 A few crazy facts. So first,

Speaker 6 by law in China, any Chinese citizen must comply with CCP intelligence-gathering operations.

Speaker 6 So if you're a Chinese citizen, you're living in the United States, and the intelligence agencies in China reach out to you, you have to comply with them.

Speaker 6 And so you have to give them what you're seeing, what you're finding, et cetera.

Speaker 6 And there are tons of Chinese nationals, Chinese citizens, across all the major elite universities, all the major tech companies, all the major AI labs. They're everywhere. The second thing that's crazy is,

Speaker 6 you know, about a sixth of

Speaker 6 Chinese students, so

Speaker 6 Chinese citizens who are students in America are on scholarships sponsored by the CCP itself.

Speaker 6 And for those on these scholarships, they have to report back to a handler basically, what are the things they find, what are the things they're learning. Otherwise, their scholarships get revoked.

Speaker 6 So there's an incredibly large-scale intelligence operation running against the U.S. tech industry, which is just collecting all the information and technological secrets from our greatest research institutions, our universities, our AI labs, our tech companies

Speaker 6 at massive scale. And honestly, I think this is a very underrated element of how China caught up so quickly.
So,

Speaker 6 you know, DeepSeek came out of nowhere. Everyone was so surprised at how capable their model was and how they learned all these tricks.

Speaker 6 You know, how much of that is because they came up with all of it on their own, and how much because they managed to have an exquisite, high-end espionage operation to steal all of our trade secrets from the United States and re-implement them back in China?

Speaker 5 What does our espionage look like?

Speaker 6 I think nowhere close to as good. I mean, so one thing that

Speaker 6 the CCP did for DeepSeek, the DeepSeek lab,

Speaker 6 is

Speaker 6 after DeepSeek blew up and

Speaker 6 the CEO of DeepSeek met with the Chinese premier,

Speaker 6 they then, well, I shouldn't say locked up, but they huddled all the researchers together and took all their passports.

Speaker 6 So none of the AI researchers who work at DeepSeek are able to leave the country at all. And they don't come into contact with any foreigners.
So, they basically locked down the entire research effort, which makes it very, very hard to conduct any sort of espionage into that operation.

Speaker 6 And then there's that report, this is all in the news, but like

Speaker 6 a decade or 15 years ago, many of the US CIA operatives in China were killed because one of the communication channels they were using was compromised by Chinese intelligence.

Speaker 6 And, you know, the CCP was able to effectively round a lot of them up and kill them. So, by comparison, their espionage on us is extremely deep. It's a huge risk. We're deeply, deeply penetrated by Chinese intel.

Speaker 6 And comparatively, as far as I know, we have like, you know, much less capability. And I think they've designed it such that it's very hard to infiltrate their AI efforts.

Speaker 6 Jeez. So, they're ahead of us on data. They're able to catch up through espionage on algorithms pretty easily.

Speaker 6 They're ahead of us on power. So what are we ahead at? Well, right now we're ahead in chips.

Speaker 6 And that's kind of our saving grace: that the NVIDIA chips and the entire stack there are the pride of the world. And, you know, we're the most advanced on these chips.

Speaker 6 Chinese chips are also catching up. There are a bunch of recent reports that Huawei chips are basically one generation behind the NVIDIA chips.

Speaker 5 So they're close.

Speaker 6 They're close.

Speaker 6 So all of this is

Speaker 6 pretty concerning. There was another

Speaker 6 report that came out of CSIS recently that there was

Speaker 6 a Chinese effort called something like the Next Generation Brain Understanding Project, where they're basically trying to use AI to fully understand the human personality, effectively, and human psychological behaviors.

Speaker 6 I imagine that's ultimately for effectively like information warfare.

Speaker 6 As we were talking about at breakfast, China has large-scale information operations, large-scale information warfare, and has been doing that for literally decades, going back all the way to in-person operations in Hong Kong. They are so sophisticated at all that, and AI is going to enable them to move much faster as well.

Speaker 5 How do we combat that?

Speaker 6 Well, I mean, I think we need our own information operations efforts. Like, I think that's pretty critical.

Speaker 6 That's specifically on that thread. And then I think

Speaker 6 we need to acknowledge that at the end of the day, you know, we are a more innovative country, but we have to dramatically

Speaker 6 get our shit together if we want to win long-term in AI.

Speaker 6 We need to onshore chip manufacturing. We need to be manufacturing huge numbers of chips.
We can't be dependent on Taiwan to manufacture our high-end chips.

Speaker 5 Are we doing that yet at any capacity?

Speaker 6 Extremely small capacity. Like there are a few fabs in Arizona that can produce some chips,

Speaker 6 but the vast majority of the volume still comes out of Taiwan.

Speaker 6 We need to tighten up security in our AI companies dramatically.

Speaker 6 We need to have proper counterintel on what the espionage risk is within these companies. We need to solve the power problem that we talked about. We need to be investing against the cyber threats, investing in large-scale cyber defense. We need to invest in data. We need our own programs around data dominance to ensure that China doesn't just run away with higher-quality and greater AI data sets than us. So you can go through each of the elements and build the proper plan for the United States to win.

Speaker 6 But

Speaker 5 have we started any of that?

Speaker 6 I mean, I think some things are underway, but

Speaker 6 I mean,

Speaker 6 not enough. Nowhere close to enough to be sure that the U.S. will win. Definitely not.
And they also have a fundamental advantage. You know,

Speaker 6 one of the things that people say a lot now is like, oh, what we need in the United States is an AI Manhattan Project, where we collect all the brilliant minds together, we collect our resources, and we have one large effort in the United States.

Speaker 6 Well,

Speaker 6 it turns out like it's actually really hard to pull that off in the United States, but China can pull it off super easily. China can just say, hey, all the best AI people, you now work in one company.

Speaker 6 We're going to pool together all of your resources.

Speaker 6 We're going to put you all right next to the largest nuclear power plant in the world. We're going to build the largest data center in the world here.

Speaker 6 All the chips that China has are going to go toward building this large-scale AI project. They just have the ability to pool all their resources together and throw them at winning the AI race. Whereas in the United States, we have all these companies.

Speaker 6 And

Speaker 6 the United States government, as of yet, is not going to force all these companies to combine and merge.

Speaker 6 That today would be such an overreach of government power. But because of that, we're going to have, you know, five fragmented AI efforts. And maybe in aggregate we'll have way more chips, and in aggregate we'll have more power, and in aggregate we'll have more great researchers, but we're not going to be able to focus those efforts, whereas China is easily going to be able to focus all of theirs.

Speaker 5 Wow. You had mentioned something downstairs about nuclear weapons, I believe.

Speaker 6 Yeah, so this is where stuff gets really weird for national security, which is

Speaker 6 you could clearly imagine scenarios where

Speaker 6 advanced, very advanced cyber AI

Speaker 6 invalidates nuclear deterrence. What do I mean by this? Right now,

Speaker 6 you know, nobody fires nukes because we have MAD, we have mutually assured destruction.

Speaker 6 And if I do a first strike against another country, they're going to be able to, while that nuke is in the air, do a second strike, and we'll both, you know, there'll be destruction on both sides.

Speaker 6 It'll be really bad. So, because of this second strike capability,

Speaker 6 luckily, we have real deterrence. Well, what if instead,

Speaker 6 let's say, I'm the United States and I have the most advanced AI cyber hacking capabilities in the world. So I can build AI agents that

Speaker 6 can hack into any other country, can turn off their energy grid, can disable their weapon systems, can disable everything. So what do I do instead? I launch the first strike.

Speaker 6 Or rather, first I send in my cyber AI agent capabilities, my cyber AI force, effectively, to disable all the weapon systems of the enemy country. And because I have so much AI capacity, I can disable all of your weapon systems. Then I send my first strike, and you don't have a second-strike capability. So if that happens, basically, with the combination of AI and nuclear,

Speaker 6 you know, you cannot deter AI plus nuclear with just nuclear. So that's what will force this proliferation of AI capabilities.

Speaker 6 And so even small countries are going to need to invest in lots of AI capabilities because their nuclear weapons are no longer a sufficient deterrent.

Speaker 5 Jeez.

Speaker 5 What about bioweapons?

Speaker 6 Yeah, this is.

Speaker 6 This is the element that is really underrated right now. So COVID leaked out of a virology lab in Wuhan and basically shut the world down for two years.
And
And

Speaker 6 that's like the level one

Speaker 6 bio-risk kind of stuff. This was

Speaker 6 a relatively

Speaker 6 innocuous, let's say,

Speaker 6 pathogen.

Speaker 6 But it still killed probably at least 10 million people globally, and it still, you know, shut the whole world down for two years.

Speaker 6 Well,

Speaker 6 the newest AI models are able to outperform 95% of MIT virologists. The newest models from OpenAI and Google are smarter than literally 95%

Speaker 6 of virologists at MIT, based on a recent study by the Center for AI Safety.

Speaker 6 So now, whether it's right now or whether it's in a few years,

Speaker 6 it will be feasible to use AI-based capabilities to help you design

Speaker 6 powerful pathogens. And what's more than that, you're going to be able to design in certain characteristics of these pathogens.
You'll be able to tune the virality, tune the lethality of them.

Speaker 6 Also, due to recent advancements in synthetic biology, you now can create viruses that specifically target certain segments of DNA.

Speaker 6 So

Speaker 6 I could create a bioweapon that just targeted, you know,

Speaker 6 any individual with a certain segment of DNA, which means I can target basically like any population or any group or any sub-segment of the population in the world,

Speaker 6 which is really, really bad. And so

Speaker 6 So first, even without AI, synthetic biology is making so much progress that there's all sorts of inherent risk of bioweaponry, or leaks of pathogens and viruses and whatnot.

Speaker 6 And then with AI, all of a sudden, not literally today's models but a few generations down, you're going to be able to use these AI systems to design or build, you know,

Speaker 6 next generation pathogens.

Speaker 6 So, for good reason, there are international treaties such that we don't engage in biological warfare.

Speaker 6 But if you imagine these scenarios where, for certain countries, nuclear deterrence doesn't work and they don't have the resources to have large-scale AI data centers, I'm worried that countries will turn to biological weaponry, bioweapons, as their deterrence mechanism, which is highly destabilizing for the world.

Speaker 5 Wow.

Speaker 5 That's some scary shit.

Speaker 6 The flip side is, there's new technology that can also prevent this stuff. So

Speaker 6 there's this research coming out of this lab in Seattle, David Baker's lab, this guy who just won a Nobel Prize, on digital noses,

Speaker 6 which is basically, you have these devices that can detect proteins or chemicals or pathogens in the air automatically.

Speaker 6 And so I think the real offense-defense of bio and bioweaponry will end up looking like this: we're just going to have large-scale deployment of digital noses, effectively, in every space, on every shipping container, on every plane. They're constantly sensing for all existing known pathogens and any new pathogens that might exist, and constantly detecting and ultimately containing them.

Speaker 5 Interesting.

Speaker 5 It's sniffing real time for all of that shit.

Speaker 6 Yeah, exactly.

Speaker 5 I mean, also on the flip side, I mean, I guess if AI is developing

Speaker 5 a new bioweapon, COVID comes out, again, COVID-2, we'll just call it,

Speaker 5 then...

Speaker 5 Our AI

Speaker 5 should also be able to

Speaker 5 figure out the

Speaker 5 vaccines or the vaccine, the antidote to it, correct?

Speaker 6 Yeah, totally. So there will be an offense-defense element to this too, just as we were walking through: AI applied to command and control, there's an offense-defense element.

Speaker 6 AI applied to cyber, there's an offense defense element,

Speaker 6 AI applied to bio and bioweaponry, there will be an offense-defense element. So with all of these, thankfully, the hope is that we end up in a global world where the world agrees that basically we're not going to go down any of these paths, because there's mutual deterrence. It's not worth it for anybody in the world to destabilize and risk humanity like that. That's basically where we need to land.

Speaker 5 Wow. How concerned are you about China and Taiwan? I mean, we were talking about this a little bit at breakfast, and

Speaker 5 I can't believe they have not made a move yet. I mean, I thought for sure it would happen towards the end of the last administration.
But

Speaker 5 with their chip

Speaker 5 production capabilities, I mean, how concerned are you about China taking Taiwan?

Speaker 6 I think

Speaker 6 if it's going to happen, it's going to happen this decade, and it's probably going to happen this administration. And

Speaker 5 why do you say that?

Speaker 6 I mean, China,

Speaker 6 at a macro sense, they have huge demographic issues.

Speaker 5 Those are,

Speaker 6 I mean, that's just a force of gravity in their country. They have this huge aging population.
They made the wrong bet, you know, many decades ago to have a one-child policy.
They made the wrong bet, you know, many decades ago to have a one-child policy.

Speaker 6 And so they are going to have this like huge aging population.

Speaker 6 And that plays out quite soon, like over the next decade. Over time, they're going to look more and more like Japan in that way, where they have this large aging population, and it'll paralyze a lot of their ability to make any sort of aggressive moves, particularly when it comes to military industrial capacity, et cetera. So that's one force of gravity that they have to contend with.

Speaker 6 And so I think they're going to want to move sooner rather than later.

Speaker 6 And then they've, I mean, they've had such an insane military buildup over the course of the past few decades.

Speaker 6 You know, I don't think it's,

Speaker 6 and I think, you know, we're currently in a situation where China has far more industrial capacity, far more manufacturing capacity than we do in the United States.

Speaker 6 And so

Speaker 6 that's a window for them, you know.

Speaker 5 So do you think they'll do it? They're pressed to do it because of the aging population?

Speaker 6 I think a lot of factors.

Speaker 6 I think Xi is aging, right?

Speaker 6 This will be an important component of his legacy

Speaker 6 as he would view it, I think.

Speaker 6 They have an aging population, which will minimize their political latitude over time, naturally. And then they have,

Speaker 6 I mean, they're in this insane window where they have just incredible industrial manufacturing capabilities compared to anywhere else in the world. You know, in 2023,

Speaker 6 China deployed more industrial robots than the rest of the world combined.

Speaker 6 That's like, I mean, we were talking a little bit about automated factories and automated industrials. They're moving on that faster than any other country in the world.
And so

Speaker 6 I think that like,

Speaker 6 you can look at all these dimensions and this window,

Speaker 6 you know, if they're going to do it, they're going to do it soon.

Speaker 5 Yeah.

Speaker 5 Yeah. I mean, what percentage of the chips that we use come from Taiwan?

Speaker 6 I mean, 95%

Speaker 6 of the high-end chips are manufactured in Taiwan.

Speaker 5 And so what happens if China takes Taiwan?

Speaker 6 So, yeah, wargaming out. So we were talking a little bit about this.
So

Speaker 6 let's say China blockades or invades Taiwan.

Speaker 6 Then

Speaker 6 there's a question. So these fabs are incredibly, incredibly valuable.
Because as we were just describing, if you believe in the pace of AI progress and AI technology, then

Speaker 6 everything boils down to how much power you've got, how many chips you've got. And if they own 95% of the world's chip manufacturing capability,

Speaker 6 I mean, they're going to run away with it. So then you look at that and you say,

Speaker 6 will the Taiwanese people bomb the TSMC data centers?

Speaker 6 Or will the U.S. bomb the TSMC data centers? Or will some other country bomb the, sorry, not the data centers, the fabs, the TSMC chip fabs?

Speaker 6 My personal belief, I don't think the Taiwanese do it, because even if they get blockaded or invaded, those fabs are still a huge

Speaker 6 component of Taiwan's survivability and Taiwan's relevance as an entity,

Speaker 6 even if they get blockaded or invaded by China. So I don't think they do it.

Speaker 6 China definitely doesn't do it because they

Speaker 6 obviously are invading partially to get, you know, to gain those capabilities.

Speaker 6 And so then,

Speaker 6 does the U.S. bomb them? If the US bombs them, that's probably World War III.

Speaker 6 It's hard to imagine that not just resulting in massive escalation.

Speaker 6 And so you're looking at it, and

Speaker 6 there's kind of no good options.

Speaker 6 So

Speaker 6 I think it's, I mean,

Speaker 6 everyone's very focused on it, obviously, but it is, it is like a real powder keg of a

Speaker 5 Damn.

Speaker 6 of a region.

Speaker 5 How do you think this all ends?

Speaker 5 We had a little discussion about this at breakfast.

Speaker 6 Yeah, yeah.

Speaker 6 I mean,

Speaker 6 I think,

Speaker 6 I think if, so let's assume that in the next handful of years, next like three, four years,

Speaker 6 there's an invasion or blockade of Taiwan.

Speaker 6 And,

Speaker 6 you know, I think it's, I think given how important AI is,

Speaker 6 it's hard for the U.S. to not take any sort of action in that scenario.
And then, you know,

Speaker 6 almost all the actions you would see escalating into a major, major conflict. So, um,

Speaker 6 the best case scenario is we deter the

Speaker 6 invasion or blockade altogether. And, um,

Speaker 6 and I think, you know,

Speaker 6 I think it certainly is in everyone's interest to not get into a large-scale world war that's hugely destructive and kills lots of people.

Speaker 6 So I think like fundamentally we should be able to deter that conflict.

Speaker 6 But

Speaker 6 that's why all this matters so much. We need to make sure our AI capabilities as a country are the best in the world.
We need to make sure that our military AI capabilities are the best in the world.

Speaker 6 We need to make sure that

Speaker 6 there's clear

Speaker 6 economic deterrence of this kind of scenario.

Speaker 6 We need to be investing in every way to deter this conflict. Where this really will break down is if the Chinese, if the CCP calculus,

Speaker 6 you know, diverges from our own. If their calculus becomes,

Speaker 6 oh no, this is going to work out, you know, we can take this, and we're strong enough that it'll work out for us, and our calculus is the opposite, that's where the world war scenario happens.

Speaker 6 So

Speaker 6 I think it's possible to deter. And I think, you know, there's a lot of things we have to do to make sure that we deter that conflict.
And that should be,
And that should be,

Speaker 6 I mean, certainly, I think it already is like 80% of the focus of the entire DOD.

Speaker 5 I mean, it's just

Speaker 5 we can deter, but I mean, what you're talking about an aging population, I mean, they're getting desperate. And

Speaker 5 it sounds like in order for them to legitimately win, they have to acquire those chip fabs, correct?

Speaker 5 And so

Speaker 5 they already have 250 times the shipbuilding capacity. They have way more people.

Speaker 5 They have more power than we do. I mean,

Speaker 5 military recruitment in the U.S., you know, was at an all-time low. I don't know what it is today.

Speaker 5 But,

Speaker 5 I mean, even if it...

Speaker 5 So,

Speaker 5 I guess what I'm saying is

Speaker 5 you can only deter a desperate entity for so long before they throw a Hail Mary play.

Speaker 5 Right? Would you agree with that?

Speaker 6 Yeah, and then it just depends on the...

Speaker 5 You would have to dedicate an entire military to surround Taiwan

Speaker 5 to effectively do that, in my opinion.

Speaker 6 Yeah, I mean, I think if

Speaker 6 they assess, if the CCP and the PLA assess that

Speaker 6 Taiwan is all they're after, like, they will focus their entire military capacity on seizing Taiwan, then

Speaker 6 that becomes a really

Speaker 5 That becomes a really tricky calculus. I mean, why wouldn't they? If

Speaker 5 Xi believes that the winner of the AI race achieves global domination,

Speaker 5 he's getting older, you just talked about how important his legacy is to him, which I'm sure you're right about,

Speaker 5 I don't know how you deter that.

Speaker 5 And then they win the AI race.

Speaker 6 Yeah, the only thing that we can do,

Speaker 6 and I think this is a long shot, but I think it's important, is

Speaker 6 if

Speaker 6 ultimately we actually end up collaborating on AI. And I know that sounds kind of crazy, but

Speaker 6 if we're able as a country to demonstrate,

Speaker 6 just we're so far ahead. And there's like, you know, one key element of how the whole AI thing plays out

Speaker 6 is this idea of AI self-improvement or intelligence recursion, sometimes people call it. But basically,

Speaker 6 once AIs get sufficiently good, then you can start utilizing the AIs to help you build the next AI.

Speaker 6 As sci-fi as that sounds, you utilize your current generation AI to build the next generation AI faster and faster. And so at some point,

Speaker 6 your AI capabilities enable, like, you know, some form of exponential takeoff. Your AI capabilities get good really, really quickly.

Speaker 6 And if somebody's even three to six months behind you, then they're never going to catch up to you, because you're running the self-improvement loop

Speaker 6 faster than anybody else. And so

Speaker 6 this is a key idea. I mean, it's, I think it's, um,

Speaker 6 it's a little bit theoretical right now. Like, it's not clear whether or not this intelligence recursion is going to be how it plays out, but a lot of people in AI believe it.

Speaker 6 And I probably believe it too, that we will be able to use AIs to help us continue training the next AIs and improve things more quickly. And if you believe that, then if we're, let's say, three...

Speaker 6 three to six months ahead of China and we maintain that advantage and we take off faster, then they're going to be way behind. And then ultimately, we're going to be in a great position to say, hey,

Speaker 6 actually,

Speaker 6 like, we're way ahead and we should just, you know, you guys should quit your efforts.

Speaker 6 We'll give you AI for all of your economic and

Speaker 6 humanitarian uses throughout your society.

Speaker 6 And we agree we're not going to battle on military AI.
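The self-improvement dynamic described above can be sketched with a toy simulation. This is purely illustrative, with made-up numbers not drawn from the conversation: if each month's growth rate scales with current capability, a small head start compounds into an ever-widening gap, whereas under ordinary exponential growth the ratio between the two sides would stay constant.

```python
# Toy model of "intelligence recursion": each generation of AI helps build
# the next one, so the monthly improvement rate scales with current
# capability. All numbers here (k, the head start, the horizon) are
# made-up, illustrative values.

def simulate(start, k=0.02, months=40):
    """Return the capability trajectory under self-improvement.

    Each month, capability grows by a factor of (1 + k * capability):
    the better your current AI, the faster you build the next one.
    """
    c = start
    trajectory = [c]
    for _ in range(months):
        c *= 1 + k * c  # better current AI -> faster next generation
        trajectory.append(c)
    return trajectory

leader = simulate(1.10)    # a small head start in capability
follower = simulate(1.00)

# Ratio of leader to follower capability over time. Under plain
# exponential growth this ratio would stay fixed; under the
# self-improvement loop it widens every month.
gaps = [l / f for l, f in zip(leader, follower)]
```

Printing `gaps[0]` versus `gaps[-1]` shows the ratio growing month over month rather than holding steady, which is the "three to six months behind and you never catch up" intuition in miniature.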

Speaker 5 What would it take

Speaker 5 to take the chip building capabilities that Taiwan has and implement that here in the U.S.

Speaker 5 to protect it?

Speaker 6 So,

Speaker 6 yeah.

Speaker 6 So the first thing is

Speaker 6 there's been hundreds of billions of dollars invested just into

Speaker 6 like the build out of those fabs and the

Speaker 6 foundries, the buildup of these large-scale chip factories, effectively, and

Speaker 6 all the high-end equipment and tooling inside of them. Hundreds of billions of dollars of investment.
So,

Speaker 6 first off, there needs to be hundreds of billions of dollars of investment in the U.S. That's not the hard part.
The second part, that's really the hard part, is

Speaker 6 It's basically a large-scale factory operated by highly, highly skilled

Speaker 6 workers who

Speaker 6 are very experienced in those processes. And the whole thing operates like a, you know, like clockwork.

Speaker 6 And unless you can get those people to the U.S.,

Speaker 6 you know, you're going to have to like rebuild all that know-how and all that technical capability. And that's what takes a really long time.
And that's one of the things, you know.

Speaker 5 So why do you think we haven't done that? Why do you think we have not incentivized these brilliant minds to come here and do it for us?

Speaker 6 So TSMC, Taiwan Semiconductor, the company that builds these fabs, they have stood up a few fabs in Arizona.

Speaker 6 But they cited issues. First, there were issues around permitting and getting enough power, and they dealt with some EPA issues.
And then

Speaker 6 they just have issues where, like,

Speaker 6 the technicians working in Arizona

Speaker 6 aren't as skilled or don't work as hard as those working in Taiwan.

Speaker 6 So

Speaker 6 they've built a few fabs in the United States.

Speaker 5 So they've tried to do it, but our red tape and

Speaker 5 our power is not what it needs to be to be able to do this.

Speaker 6 Red tape, power, workforce. And then there's another key thing, which is

Speaker 6 if you look at it from Taiwan Semiconductor, from TSMC's perspective, they're not all that incentivized to stand up all these capabilities in the United States. Because

Speaker 6 as soon as they stand up all these capabilities in the United States, the United States is no longer incentivized to defend Taiwan.

Speaker 6 And

Speaker 6 it's a Taiwanese company.

Speaker 6 So...

Speaker 6 And it's a critical part of their survival strategy.

Speaker 6 So

Speaker 6 that's really where the rubber hits the road is, are they actually incentivized to do a large-scale buildout of

Speaker 6 chip manufacturing capacity in the United States?

Speaker 6 I think the answer is like, no.

Speaker 5 Makes sense.

Speaker 5 I mean, there would have to be

Speaker 5 some type of a deal struck where

Speaker 5 they fall under our wing.

Speaker 6 Yeah, I mean, you can imagine some kind of deal

Speaker 6 between the U.S. and China.
It'd have to be like a diplomatic deal at the highest levels, something along the lines of, you know,

Speaker 6 hey, you can have Taiwan, but we need large-scale fabs in, you know, we need large-scale chip manufacturing in the United States, or something like that.
And like, you know, maybe

Speaker 6 there's worlds where that kind of deal could get drawn up. I don't know.

Speaker 6 But that would, I mean, that would also mean that the United States would just have to say, hey,

Speaker 6 all we care about actually at this point is chip manufacturing and that we don't care actually about the

Speaker 6 Taiwanese people and the country and all that stuff.

Speaker 5 Man.

Speaker 5 And are they working with China at all?

Speaker 6 um

Speaker 6 TSMC? Yeah. So

Speaker 6 they're

Speaker 6 I think they're technically not supposed to, but

Speaker 6 Huawei, one of the leading companies in China, has been able to get

Speaker 6 tons of chips from

Speaker 6 tons of dies, as it's called, but basically tons of chips or chip

Speaker 6 prerequisites from Taiwan. And they usually do it through like, they like start some cutout company that doesn't seem associated with them in like Singapore.

Speaker 6 And then that Singaporean company buys a bunch of,

Speaker 6 or Malaysia, the Singaporean or Malaysian companies buy a bunch of chips from TSMC and then mail them back, or something. But there's clearly been,

Speaker 6 there's been a lot of TSMC high-end

Speaker 6 outputs that have gone to the Chinese companies.

Speaker 5 Wow. Wow.

Speaker 5 Scary shit, man.

Speaker 6 It gets, I mean, I think this is where

Speaker 6 you have to believe, like, right now, if you look at the,

Speaker 6 you know, just where we are right now, like, if you look at the situation and all of the

Speaker 6 dynamics at play right now,

Speaker 6 it's, it's like, it's a powder keg. It's like very, very, very volatile,

Speaker 6 highly problematic in many ways. And this is where, I mean, you just ultimately have to believe that

Speaker 6 there's got to be some effort towards diplomatic solutions. Yeah.

Speaker 6 Because it is definitely true. Like war will be really bad for both sides.

Speaker 5 Yeah.

Speaker 5 Yeah.

Speaker 5 How do we coordinate with China on AI?

Speaker 6 Yeah, so

Speaker 5 what does that look like?

Speaker 6 So yeah, right now, right now, we're definitely

Speaker 6 US and China, we're definitely in an all-out

Speaker 6 race dynamic. And, you know, we're going to race, and I think this is correct, we're going to race to build the best AI systems.
They're going to race to build the best AI systems.

Speaker 6 And we're both all in on racing towards building the most advanced AI capabilities, the largest data centers, the largest capacity, et cetera, et cetera.

Speaker 6 And this is,

Speaker 6 if you recall, kind of how

Speaker 6 nuclear was. Like, you know, in

Speaker 6 nuclear weapons, as well as the application of nuclear

Speaker 6 towards

Speaker 6 power production, it was kind of, you know, all systems go, everyone racing towards building capacity, building capability. And then

Speaker 6 Chernobyl and Three Mile Island happen,

Speaker 6 and it creates large-scale consternation around the technology and the risks of those technologies. And

Speaker 6 there were

Speaker 6 a bunch of international treaties, and there's a large international response towards coordinating on nuclear technology. Now,

Speaker 6 all said and done, if you really, you know, if you look at nuclear, like

Speaker 6 that set our country back, set many countries back, you know, many generations in terms of power generation. But what it took was effectively these like small-scale disasters

Speaker 6 to take place that

Speaker 6 effectively were the forcing function for international cooperation.

Speaker 6 You can imagine a scenario with AI where,

Speaker 6 because of all the things that we've been talking about,

Speaker 6 there's some scenario where

Speaker 6 maybe some terrorist group or some non-state actor or some, you know, North Korea or whomever, somebody decides to use it for

Speaker 6 in a particularly adversarial or

Speaker 6 inhumane way, and that disaster has some

Speaker 6 large-scale fallout. So, you know, you take out the,

Speaker 6 you take out power in

Speaker 6 like one of the largest cities in the world, and tons of people die. Or there's some pathogen that gets released, and, like, tens of millions of people die.

Speaker 6 Or, you know, some one of these things happens that causes the international community and everyone in the world to realize, oh, shoot, we have to be coordinating on this.

Speaker 6 And, you know, we should be collaborating for AI to improve our societies and improve our economies and improve the lives of our people.

Speaker 6 But we shouldn't, you know, we need to, we need to coordinate on its use towards,

Speaker 6 for lack of a better term, scary things like bio or cyber warfare or, you know, the list goes on.

Speaker 6 So.

Speaker 6 Long story short, I think the path really is

Speaker 6 some kind of,

Speaker 6 you know, sometimes people talk about, like, an AI oil spill or some kind of

Speaker 6 incident that really causes the international community to realize like, hey,

Speaker 6 we have to start coordinating on this.

Speaker 5 I mean, it's interesting. You say

Speaker 5 China's, you know,

Speaker 5 gone all in on the race dynamic, and the U.S. has gone all in on the race dynamic, but we're kneecapping ourselves.
I mean, you just mentioned the red tape, the EPA,

Speaker 5 the permitting,

Speaker 5 and the power. And we're not producing more power.
We're flatlined. We've established that.

Speaker 5 As far as I know, we're not getting rid of the red tape, you know, to

Speaker 5 jumpstart this.

Speaker 5 And

Speaker 5 it just seems like

Speaker 5 we're cutting ourselves off at the knees here.

Speaker 6 Right. Right now, I mean,

Speaker 6 we have a lot of work to do, for sure.

Speaker 6 We have to build strategies

Speaker 6 to have energy dominance, to have data dominance,

Speaker 6 on the algorithms. I think we'll be okay.
They're going to try espionage, but I think we'll be okay on algorithms.

Speaker 6 We need to ensure we have chip dominance long term.

Speaker 6 We need to make sure all this lends itself to military dominance. I totally agree with you.
I mean,

Speaker 6 we need to, today,

Speaker 6 ensure that we have the proper strategies in place so that we stay ahead on all these areas. The worst case scenario for the United States is the following, which is

Speaker 6 CCP

Speaker 6 does a large-scale Manhattan-style project inside their country,

Speaker 6 realizes they can start, because of all the factors that we've talked about, they realize they can start overtaking the U.S. on AI.
That lends itself to extreme hyper-military advantage, and

Speaker 6 they use that to take over the world.

Speaker 6 That's like the worst case scenario for the U.S.

Speaker 6 If U.S.
and China's AI capabilities are even just roughly on par, I think you have deterrence. I don't think either country will take the risk.
I think if U.S.

Speaker 6 is way ahead of China, I think you maintain U.S. leadership, and that's a pretty safe world.
So

Speaker 6 the worst case scenario is they get ahead of us.

Speaker 5 Are there any other players other than the U.S. and China involved in this?

Speaker 5 Who else do we need to be watching out for?

Speaker 6 So,

Speaker 6 yeah, right now, definitely U.S. and China.

Speaker 6 A lot of other countries will matter,

Speaker 6 but not all of them have enough ingredients to really properly be AI superpowers. But other countries are going to matter; they have

Speaker 6 key ingredients. So,

Speaker 6 to name a few, A,

Speaker 6 everything we've talked about with cyber warfare and information warfare, information operations, Russia has very advanced operations in those areas.

Speaker 6 And

Speaker 6 that could end up mattering a lot if they ally with the CCP.

Speaker 6 There's a lot of ways they can team up and have,

Speaker 6 and that could be pretty bad.

Speaker 6 There's

Speaker 6 you know, the countries in the Middle East will be very important because they have

Speaker 6 incredible amounts of capital and they have lots of energy.

Speaker 6 And so

Speaker 6 these are, you know, they're critical players in how all this plays out. India matters a lot.
India has a lot of high-end technical talent.

Speaker 6 I don't know, right now, between India and China, which has more high-end technical talent, but there's a lot in India for sure.

Speaker 6 Massive population,

Speaker 6 also starting to industrialize in a real way.

Speaker 6 And

Speaker 6 right next to China. So India will matter a lot.
And then,

Speaker 6 you know,

Speaker 6 there's a lot of technical talent in Europe as well. I think it's unclear exactly how this plays out

Speaker 6 with the European capabilities. I mean, they have to,

Speaker 6 it seems like there's some efforts now for Europe to try to

Speaker 6 build up large-scale power, build up large data centers, you know,

Speaker 6 make a play. I think yet to be seen how effective those efforts are going to be, but you can clearly see some scenarios where if they make

Speaker 6 a hard turn and go all in, they could be relevant as well.

Speaker 5 Is there a world where AI takes on a mind of its own?

Speaker 6 So,

Speaker 6 you know, obviously you can hypothetically paint the scenario where like, you know, you have super intelligence or you have really powerful AI and then,

Speaker 6 you know, it realizes at some point that humans are kind of annoying and takes us all out.

Speaker 6 But

Speaker 6 I think it's a very, like, that's so preventable

Speaker 6 as an outcome. Because,

Speaker 6 first of all, all the things we just talked about are like

Speaker 6 the very real things that happen long before you have, you know, this hyper-advanced AI that takes everyone out. That's first, that's first thing.

Speaker 6 So we have lots of things we have to get right before then.

Speaker 6 And then second is,

Speaker 6 you know, for AI to

Speaker 6 actually

Speaker 6 be capable of, you know, having a mind of its own and taking all humans out, like, we'd have to give it just incredible amounts of control. Like it would have to just

Speaker 6 basically be running everything and we're just sort of like along for the ride.

Speaker 6 And

Speaker 6 that's a choice. We have this choice of whether or not to like give all of our control to AI systems.
And as I was talking about before with like human sovereignty,

Speaker 6 my belief is we should not cede control of our most critical systems. Like we should, we should design all the systems such that human decision making, human control is really, really important.

Speaker 6 Human oversight is really important.

Speaker 6 This is one of the things that I actually think is

Speaker 6 one of the things that we're working on as a company. It's honestly one of, like, as I think about like long-term missions, one of the most important things is creating human sovereignty.

Speaker 6 So first is, how do we make sure all the data that goes into these AI models increases human sovereignty such that the models are going to do what we tell them, are aligned with humans and aligned with

Speaker 6 our objectives. And two is that we create oversight.
So as AI starts doing more and more actions, doing more planning, you know, carrying out more things

Speaker 6 in the world, in the economy, in the military, et cetera, that humans are watching and supervising every one of those actions. So

Speaker 6 that's how we maintain control. And that's how we prevent, you know, the terminator scenarios or the, you know,

Speaker 6 AI takes us out kind of scenarios.

Speaker 5 Interesting.

Speaker 5 Well, Alex, wrapping up the interview here, but man, what a fascinating discussion. Thank you.
Thank you for being here. One last question.

Speaker 5 If you had three guests you'd like to see on the show, who would it be?

Speaker 6 Ooh, that's a good question.

Speaker 6 Who would I like to see?

Speaker 6 Well, I really like what you've been doing recently, which is getting more tech folks on the pod.

Speaker 6 So

Speaker 6 I go in that direction. I mean, I think Elon would be great to see on the show.
I think

Speaker 6 we were talking about this. Zach would be cool to see on the show.
I think

Speaker 6 Sam Altman would be cool to see on the show. So definitely like

Speaker 6 more people in tech. Outside of that,

Speaker 6 I think,

Speaker 6 and we were talking about some of this, like

Speaker 6 international leadership, like leaders of other countries, is super important, because

Speaker 6 we talk about all these scenarios, like international cooperation is going to matter so much.

Speaker 5 Right on.

Speaker 5 We'll reach out to them.

Speaker 6 And

Speaker 5 yeah, as far as world leaders are concerned,

Speaker 5 we're on it.

Speaker 5 Well, Alex, thanks again for coming, man. Fascinating discussion.
I'm just super happy to see all the success that you've amassed throughout your 28 years.

Speaker 5 It is,

Speaker 5 I love seeing it. So thank you for being here.
I know you're a busy guy.

Speaker 6 Yeah, thanks for having me. It was fun.