#208 Alexandr Wang - CEO, Scale AI
Shawn Ryan Show Sponsors:
https://www.roka.com - USE CODE SRS
https://www.americanfinancing.net/srs
NMLS 182334, nmlsconsumeraccess.org
https://www.tryarmra.com/srs
https://www.betterhelp.com/srs
This episode is sponsored by BetterHelp. Give online therapy a try at betterhelp.com/srs and get on your way to being your best self.
https://www.shawnlikesgold.com
https://www.lumen.me/srs
https://www.patriotmobile.com/srs
https://www.rocketmoney.com/srs
https://www.shopify.com/srs
https://trueclassic.com/srs
Upgrade your wardrobe and save on @trueclassic at trueclassic.com/srs! #trueclassicpod
Alex Wang Links:
Website - https://scale.com
Scale AI X - https://x.com/scale_ai
Alex X - https://x.com/alexandr_wang
LI - https://www.linkedin.com/company/scaleai
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
This podcast is brought to you by Carvana.
Got a car to sell, but no time to waste?
Hop on to Carvana.com to get a real offer for your car in seconds.
All you have to do is enter your license plate, answer a few quick questions, and if you accept the offer, Carvana will pay you as soon as you hand the keys over.
They even offer same-day pickup in many cities.
Save your time, score some cash, and sell your car the convenient way to Carvana.
Pickup times vary.
Fees may apply.
Talk about stepping up.
It's time to level up your game, introducing the all-new ESPN app.
All of ESPN, all in one place.
Your home for the most live sports and the best championship moments.
The electricity is palpable.
Step up your game with no annual contract required.
It's the ultimate fan experience.
Level up for more on the ESPN app or at stream.espn.com.
Sign up now.
Alex Wang.
Welcome to the show, man.
Yeah, thanks for having me.
I'm excited.
So am I.
Like I was telling you at breakfast, I don't know a whole lot about tech, but ever since Joe came on, I've been trying to wrap my head around it all, and it's just a fascinating subject.
I love talking about this subject now.
So thank you for coming.
Well, it's becoming so critical to national security and all the stuff that you're very passionate about.
So, I mean,
I think fundamentally tech is like, we got to get it right.
Otherwise, stuff gets really dangerous.
Yeah.
Yeah.
Scares the shit out of me.
In fact, we were just having a conversation downstairs about you having kids and you're waiting, and Neuralink came up, and I had to pause the conversation.
Dude, I'm like, I'm worried about Neuralink.
But it sounds like you're pretty gung-ho about it.
So
Yeah, a few things. What I mentioned is basically, I want to wait to have kids until we figure out how Neuralink or other brain-computer interfaces, other ways for brains to interlink with a computer, start working.
Because, so there's a few reasons for this.
First is in your first like seven years of life, your brain is more neuroplastic than at any other point in your life, like by an order of magnitude.
So there have been examples where, for example, a kid is born, a newborn that has, let's say, cataracts in their eyes, so they can't see through the cataracts.
And then they live the first seven years of their life with those cataracts, and then you have them removed when they're like eight or nine.
Then even with those removed, they're not going to learn how to see.
Because it's so important in those first seven years of your development that you're able to see, so that your brain can learn how to read the signals coming off of your eyes. And if you don't have that until you're like eight or nine, then you won't learn how to see.
So because neuroplasticity is so high in that early stage of life, I think when we get Neuralink and we get these other technologies, kids who are born with them are going to learn how to use them in crazy, crazy ways.
Like it'll be actually like a part of their brain in a way that it'll never be true for an adult who gets like a Neuralink or whatever
hooked into their brain.
So that's why to wait.
Now Neuralink as a concept or like
hooking your brain up to a computer.
I kind of take a pragmatic view on this, which is, you know, my day job, I work on AI.
I believe a lot in AI.
I think AI is going to continue becoming smarter and smarter, more and more capable, more and more powerful.
AI is going to continue being able to do more and more.
We're going to have robots.
We're going to have other forms for that AI to take over time.
And humans, we're only evolving at a certain rate.
Humans will get smarter over time, it's just on the time scale of millions of years, because natural selection and evolution is really slow.
I don't know.
Are we getting smarter?
I don't know about recently, but
a little setback.
Yeah, a little blip.
So if you play this forward, right, like you're going to have AIs that are going to continue getting smarter, continue improving.
Like they're going to keep improving really quickly.
And
biology is going to improve only so fast.
And so
what we need at some point is the ability to tap into AI ourselves.
Like we're going to need to bring biological life alongside all of the silicon-based or artificial intelligence.
And we're going to want to be able to tap into that for our own sake, for humanity's sake.
And so eventually, I think we're going to need some
interlink or hookup between our brains directly to AI and the internet and all these things.
And
it is potentially dangerous and it's potentially, you know, to your point, terrifying and scary, but we just are going to have to do it.
Like AI is going to go like this.
Humans are going to improve at a much slower rate.
And we're going to need to hook into that capability.
I mean, you know that I've already expressed fear in this.
So I'm curious, without sharing my own fears: in your mind, what could go wrong?
I mean, there's like...
The obvious thing is that some corporation hacks your brain.
Well, if a corporation hacks your brain, even that's pretty bad, but that'll be like, what?
They'll send ads directly to your brain, or they'll make it so that you want to buy their products or whatnot.
But then, even worse, obviously, a
you know, foreign actor, a terrorist, an adversary, a state actor, you know, hacks into your brain and
takes your memories or takes, you know, like manipulates you or all these things.
I mean, that is, that's obviously pretty bad.
Yeah.
And I think it's definitely a huge risk. I mean, for sure, if you have a direct link into someone's brain, and you have the ability to read their memories, control their thoughts, read their thoughts, you know, that's pretty bad.
I've talked to a lot of scientists in this space and a lot of people working on this stuff, including the folks at Neuralink.
And, you know, mind reading and mind control, that is where the technology will go over time.
Right.
And so
It is something that, you know, like any advanced technology, we have to not fuck up.
But it's going to be pretty critical if we want, if we want humans to remain relevant as AI keeps getting better.
I mean, I interviewed
Andrew Huberman.
Do you know who that is?
Yeah, yeah, yeah.
And I talked to Dr. Ben Carson about it too, as kind of a follow-on discussion.
But what Huberman was telling me, because this whole thing, I don't know a whole lot about Neuralink, but from what I've gathered, it's going to help the blind see.
And it sounds like it helps with some connectivity in your joints and bones and stuff for people that are paralyzed.
But something that Huberman brought up is, I was like, well, if it is going to help the blind see, then could they project a total false reality into your head?
Meaning you're seeing who knows what, shit in the skies, everywhere.
Sounds like they could recreate an entire false reality.
He said, yes, they will have that ability, but not only will they have that ability,
they can manipulate every one of your senses: touch, smell, taste.
Insert emotions into your brain, fear, whatever it is.
And I was like, holy shit, they could manipulate your entire reality into a false reality.
And then I asked Dr. Ben Carson about it, who's a world-renowned neurosurgeon, and he said, yes, absolutely.
He goes, or, you know, they could use it for good.
But then he kind of put it on me.
He's like, well, what do you think would happen? Would it be used for good eventually, or would it be used for evil?
And
I mean, what are your thoughts on that?
Do you think that's a real possibility?
I mean, yeah.
So first of all, we don't understand the brain very well today, but eventually we will.
Science is going to solve this problem, right?
And
everything you just mentioned is ultimately going to be on the table, you know? Manipulating your emotions, manipulating your senses.
The senses thing is already happening. I think in monkeys, they've shown that, you know, they don't know what it's like from the monkey's perspective, but they're able to project things into a monkey's vision and get it to click on the right button really reliably.
Wow.
So somehow they hook into the neural circuits that are doing the visual processing, the image processing in the brain, and they're able to project things into its vision such that the monkey will always click the button that you want it to click.
And then, you know, you give it a treat or something.
Damn.
And so, yeah, manipulating vision, manipulating your senses, manipulating your emotions.
This will be longer term, but leveraging your memories, manipulating your memories, that stuff is on the table.
The other stuff that is, I think, more exciting is like being able to hook into AI.
And like, all of a sudden, I have encyclopedic knowledge about everything.
And just like, you know, ChatGPT or other AI systems do, I can think at superhuman speeds.
All of a sudden, I have way more information I can process.
I can understand everything that's going on in the world and then process that instantaneously.
Like I think there's an element here where it'll legitimately turn us superhuman from a purely cognitive standpoint.
But then to your point, the flip side of that is the risk the other way, which is that it's a huge attack vector.
Yeah.
I mean, like I said, I'm not super tech, but your company, Scale AI, correct me if I'm wrong, Scale AI is basically the database that the AI uses to come up with its answers and answer your prompts and all of that, correct?
Yeah.
So
we do a few things.
So we help large companies and governments deploy safe and secure advanced AI systems.
We help with basically every step of the process, but the first thing that we were known for, and have done very well, is exactly what you're saying: creating large-scale data sets, a data foundry, as we call it, the large-scale data production that goes into fueling every single one of the major AI models.
And if you ask questions in ChatGPT, it's able to answer a lot of those questions well because of data that we're able to provide it.
And as AI gets more and more advanced,
we're continually fueling more advanced scientific, advanced information and data into those models.
And then we also work with the largest
enterprises and governments like the DOD and other agencies in the US to deploy and build full AI systems, leveraging their own data.
And our strategy as a company has been, you know: how do we focus on a small number of customers where we can have a really big impact?
So we work with the number one bank, we work with the number one pharma company, the number one healthcare system, the number one telco, the number one country, America.
And
we work with all of them on: how can you, no kidding, take how you are operating today, the workflows that you're doing today or the operations that you have today, and use AI to fundamentally transform them?
So if you're the largest healthcare system in the world, and you have to provide care to all of these patients, you know, millions of patients, how do you do so in the most effective manner?
How do you do that logistically better?
How do you improve your diagnoses?
How do you improve the overall health outcomes of all of your patients?
Like that's a problem that we help solve with them.
Or for the DOD, you know, there's so much that we can do to operate more efficiently and ultimately in a more automated way. I mean, you'll know this better than anyone. And so how do you start implementing those systems with AI?
We'll dive way more into the weeds of that later in the interview. Kind of where I was going with this was: if originally it was feeding the AI, you're giving the data to the AI to come up with the answers and answer the prompts.
And so where I was going is,
if you have Neuralink in your head and it's accessing your data centers, how easy would it be to just feed bullshit into the data center that then feeds everybody that has a Neuralink in their head?
So it could be...
I mean, it could be anything.
I mean, here's an example.
I'm a Christian.
A lot of people think that AI is going to manipulate the Bible and change a lot of things.
And so how easy would it be to just feed that into the AI data center?
And then whatever you feed it becomes the new truth, because that specific data is what everybody's accessing.
Yeah, I mean, I think,
A, yes, for sure.
That's a huge risk.
And this is one of the reasons why I think it's really important that the U.S. or other democratic countries lead on AI versus the CCP, the Chinese Communist Party, or Russia or other autocratic countries, because even AI today, by the way, can be used to propagandize to a dramatic degree.
But yeah, once you get to where you have Neuralink or other brain-computer interfaces that can directly insert thoughts into people's brains, I mean, it's extreme power that has never existed before.
And so, who governs that power?
Who governs that technology?
Who makes sure that it's used for the right purposes?
Those are like some of the most important societal questions that we'll have to deal with.
Man, I mean,
where do you even start with that?
Who do you trust
to control your fucking mind?
Yeah, I mean, it's interesting. The one thing that I think a lot of people kind of understand now, and we were talking a little bit about this at breakfast, is the degree to which even just general media today kind of controls your mind, or controls the opinions you have or the beliefs you have.
You know, we were talking about, does the media prop up certain military forces to make them seem far more fearsome than they actually are?
And you can kind of view some low-grade forms of propaganda and manipulation as happening, let's say, on a scale of one to ten, at the one or two level today.
And then once you have Neuralink or other devices, it's going to be like a nine or a ten.
And I think it's really hard.
I mean, I don't think any country is prepared to govern technology as powerful as the technology that we're going to be developing over the next few decades.
Like AI, I don't know if we're prepared.
Brain computer interfaces, I don't know if we're prepared.
Large-scale robotics, I don't know if we're prepared.
Like these are technologies that are just so much more powerful than anything that has come before.
Sometimes people will say, you know, AI is the new mobile. It'll be as big as mobile phones.
And it's just, no, it's going to be like a thousand times bigger and more important and more impactful.
And it's not clear that we did the best job regulating mobile phones even.
So it's going to be really important that we get it right.
Yeah.
I mean, you could basically instantaneously have an entire army, an entire nation that's linked into your thoughts, your way of thinking, and manipulate that entire population to do who the hell knows what.
Hopefully something for good.
But, you know, that's how things generally wind up going.
But you're gung-ho about this stuff.
Would you put it in?
I would put it in, but there's a few things that need to happen before I'd be willing to put it in.
First,
I would need to really feel good about the cyber offense-defense posture.
Like I need to have really good confidence that I would be able to defend from
any attacks, like any sort of cyber attacks into, you know, my brain interface.
And that's one big bar.
And then I would need to feel confident that it wouldn't deeply alter my consciousness in any major way.
And that, I think, you would see from data from other people who use it.
And you'd kind of get a sense just from other people adopting it.
Those would be the two things I would need to feel really, really confident about.
It's a big thing.
It's a big thing.
Well, the last thing,
and then we should talk about other stuff.
But the last thing about this is:
there's a lot of talk right now about how humans will live forever, right?
Or like, can humans live forever?
How do you not die?
And a lot of that's focused on keeping our human bodies healthy. You know, how do you take care of yourself?
How do you take care of your human body?
How do we cure diseases
such that like humans can live to hundreds and hundreds of years?
But I think the actual end game is that we figure out how to upload our consciousnesses from our meat brains into a computer.
And I kind of think about Neuralink, or other bridges between your brain and computers, as the first step there.
Well, hold on. That's a whole nother rabbit hole. So you're saying that we should be able to upload our consciousness, or you want to be able to upload our consciousness into, whatever?
Yeah, I think, I mean, now we're on the deep end of sci-fi, but yeah.
So one, I think the technology will exist at some point.
We're not close today, right?
We barely have Neuralink, you know, kind of working, right?
So we're not close, but the technology will exist to upload your consciousness onto a computer.
Holy.
And then, okay, let's say we're sitting here, you know, 50 years from now, and this technology exists.
And you're asking the question,
you know,
are people going to upload their consciousness?
Well, first off, there's a lot of people who
naturally would, like people with terminal illnesses,
people near death,
you know, people who are like very fringe and, you know, like experimenting with this new technology.
There will be a class of people who will just initially do it.
And then, as that starts to happen and they upload their consciousness, if you have these sort of digital intelligences, you know, that's true immortality.
That's the closest thing you'll get to true immortality.
And so I think once the technology exists, it's probably going to become a very natural path for most humans to go down.
So what do you think happens if you get your consciousness uploaded?
And what would it even be uploaded into?
Like a cloud or something?
Yeah, it'd be uploaded to a cloud.
What do you think?
Do you think that you can experience life by uploading your consciousness to a cloud?
Yeah, so, a few things.
First,
I'm a big believer in robotics.
I think we're basically at the start of a robotics revolution.
And we're in the very early innings of it, but people are starting to make humanoid robots.
They're going to get really, really good.
People are starting to apply them to manufacturing and industrialization and other contexts.
I think the costs are going to come down dramatically.
And so eventually, yeah, you would believe that if you uploaded, and then you could download or downlink down to a humanoid robot, then you would kind of experience the real world like anyone else.
Or you would continue in some kind of simulated universe. You could almost play a video game in the cloud, kind of thing, and that could be the other alternative.
Wow.
What do you think happens when you die?
You know, so, Elon always talks about how we live in a simulation, right?
And I remember when I first heard him talk about this, I was like, ah, no, this is like, I don't believe that.
I don't believe we're in a simulation.
But
as AI has gotten better and better at simulating the world, like, I don't know if you've seen these AI video generation models, like Sora or Veo or some of these models.
But, you know, they can produce videos that are totally realistic.
Most people could not tell the difference between AI-generated video and real video.
Well, we're seeing this.
And as that's happening, it's making me think more and more that
we probably live in a simulation.
No shit.
Yeah.
I've spent years on this show pulling back the curtain and trying to reveal what's really happening in this country.
And the truth is, there's a double standard here in America.
You see time and time again, people defending themselves, defending their family, and then the judicial system goes after them.
It's a double standard.
And if you don't believe me, check out episode number three with Don Bradley.
That is a perfect example of what I'm talking about.
Because it's not just about what you did, believe it or not, it's how the legal system interprets it.
And that's why I'm a USCCA member.
The USCCA has over 860,000 members, because they know the reality is, after you stop the threat, the real fight begins. Your membership gives you the education, elite training, and self-defense liability insurance you need for the second fight, the legal one. Plus, every member also gets access to a 24/7 critical response team and attorney network in the event of a self-defense incident.
Violent crime happens too often in America.
This isn't about living in fear.
This is about being prepared when things go sideways.
You don't get to schedule danger, and with the world changing so fast, you have to do what you can to protect your family.
Check out the USCCA's risk-free membership at uscca.com/srs.
That's uscca.com/srs.
Protect more than just your life, protect your future. Go right now to uscca.com/srs.
Do you ever just get stuck on the home screen, trying to pick a show to watch? Or when you're trying to decide what to have for dinner, you just pour a bowl of cereal instead? Having too many options can be overwhelming. The same applies if you're a business owner who's hiring. It can be overwhelming to have too many candidates to sort through. But you're in luck: ZipRecruiter now gives you the power to proactively find and connect with the best ones quickly.
How?
Through their innovative resume database.
And right now you can try it for free at ziprecruiter.com/srs.
ZipRecruiter's resume database uses advanced filtering to quickly hone in on top candidates for your roles.
Skip the candidate overload.
Instead, streamline your hiring with ZipRecruiter.
See why four out of five employers who post on ZipRecruiter get a quality candidate within the first day.
Just go to this exclusive web address, ziprecruiter.com/srs, right now to try it for free.
Again, that's ziprecruiter.com/srs.
ZipRecruiter, the smartest way to hire.
How do you just, this is already fascinating.
We haven't even got to the interview yet.
How do you think we're living in a simulation?
I mean, I know they say they cannot disprove it.
Yeah, you can't. It's kind of one of these things: there's no way to prove or disprove that you live in a simulation.
And it's like any afterlife thought or religious thought; all these things are fundamentally unprovable.
But the reason I think it's the case is I think in our lifetime, we are going to be able to create simulations of reality that will be hyper-realistic.
Like, I think we are going to create the ability to
simulate different versions of our world with hyper-realistic accuracy.
And
that will happen over the next few decades.
And if we can, it's kind of like that Rick and Morty episode: if we have the ability as an intelligent race to produce, you know, millions of simulated worlds, then the likelihood is that we're probably also the simulation of some other, more intelligent or more capable species.
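To put that argument in rough numbers, here's a minimal sketch; the count of simulated worlds is a made-up assumption for illustration, not a figure from the conversation.

```python
# Toy version of the simulation argument sketched above.
# Assumption (hypothetical): one base reality runs N indistinguishable
# simulated worlds, and an observer is equally likely to be in any of them.

N = 1_000_000                # hypothetical number of simulated worlds
worlds = N + 1               # the simulations plus the one base reality
p_base = 1 / worlds          # chance a given observer is in the base reality
print(f"P(base reality) = {p_base:.7f}")  # ~0.000001, shrinking as N grows
```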
Where do you think consciousness goes right now, when you die?
Uh, what if we are the super advanced robotics, and your consciousness gets downloaded into another body, another generation?
Yeah, that would be one way to think about it, which is like, yeah, it's all this big simulation that's running.
And as soon as
you get kind of like downloaded or taken off or like decommissioned from
one entity, you get uploaded to another entity kind of thing.
It's kind of that.
That's plausible.
I think there's another world where consciousness may not be that big a deal, so to speak.
Like, it could be the case that, as the AI models have gotten better and better, you look at them and you definitely wonder if at some point you're just going to have models that are properly conscious.
And it may just be the fact that it's something that can be engineered.
And if it's something that can be engineered, then all bets are off, I think.
Damn.
It's pretty wild to think about.
Yeah, yeah.
But
let's move into the interview.
You ready?
Yeah.
All right.
Everybody starts off with an introduction here.
So
here we go.
Alex Wang, founder and CEO of Scale AI, a company that's the backbone of the AI revolution, providing the data and infrastructure that power it.
Child prodigy who grew up in Los Alamos, New Mexico, surrounded by scientists with parents who were physicists working on military projects.
Coding wizard who by age 15 was already solving AI problems at Quora that stumped PhDs.
Visionary entrepreneur who dropped out of MIT at 19, turning a Y Combinator startup into a national security powerhouse that's helping the U.S. stay ahead in the global AI race.
Youngest self-made billionaire in the world by age 24, built a company valued at nearly 25 billion while staying laser-focused on solving the biggest bottleneck in AI: high-quality data.
Unafraid to call the U.S.-China AI competition an AI war, warning that Chinese startups like DeepSeek are closing the gap faster than most realize.
Guided by your mission to build a future where AI drives progress, security, and opportunity.
And so
there's a big question right now that everybody's thinking about.
Is AI the next oil?
Yeah, I think
few thoughts there.
In some ways, yes, in some ways, no.
So
AI is definitely the next,
some ways in which it is the next oil.
AI will fundamentally be
the lifeblood of any future economy, any future military, any future government.
Like if you play it out,
the degree to which a country or economy is able to utilize AI to make its economy more efficient, to automate parts of its economy,
to do automated research and development, automate R D, like, you know, push forward in science using AI, all that stuff is going to mean that countries that adopt AI effectively will have like, you know,
nearly infinite GDP growth and countries that don't adopt it are going to get are going to get left behind.
So
it is sort of the
fuel that will power the future of every country.
And by the way, I think the same is true of hard power.
Like if you look at what the militaries of the future are going to be like or what war looks like in the future, AI is...
at the at the core of what that is going to look like.
I'm sure we'll get into that.
And then the ways that it's not like oil is, you know,
oil is this finite resource.
You know, we,
you know, countries that stumble upon large oil reserves,
they have that large oil reserve.
At some point, it's going to run out.
Like in Norway, you know, it runs out at some point.
And
so it lends the country power and economic riches for a time period.
And then you exhaust it and then you're looking for more oil.
Whereas AI is going to be a technology that will just keep
compounding upon itself and will keep, you know,
the smarter AIs, the more economic power you're going to get, which means you're going to build smarter AIs, which means you have more economic power, and so on and so forth.
And so there's going to be a flywheel that keeps going on AI, which means that
it's not going to be a time-based,
a time-limited resource, let's say.
It's going to be something that will just continue racing and accelerating for
the entire perpetuity.
And data is part of that.
Data is a big part of that.
Data is the core part of it.
Yeah.
So a lot of times, actually,
I like to compare data to oil versus AI.
That's actually what I meant.
I fucked that up.
I meant to say data.
Yeah, yeah.
Well, I mean, I think that's totally true.
Like, data, if you think about AI, it boils down to like, how do you make AI?
Well, there's like three pieces.
There's the algorithms, like the actual code that goes into the AI systems that really smart people have to write.
I used to write some of these algorithms back in the day.
Then there's the compute, the computational power, which boils down to large-scale data centers.
Do you have the power to fuel them?
Do you have the chips to go inside them?
That's like a large-scale industrial project, in essence.
And then the third piece is data. Do you have all of the lifeblood?
Do you have all the data that feeds into these algorithms that they learn off of?
And it's really kind of like the raw material for a lot of this intelligence.
And so that's why I think data is the closest thing to oil because it is what gets fed into these algorithms, fed into the chips to make AI so powerful.
And everything we know about AI is that, you know, the better you are at all three of these things, algorithms, computational power, data, the better your AIs get.
And it's just all about racing ahead on all three of these.
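As a concrete picture of those three pieces, here's a minimal, runnable sketch; the model and all the numbers are invented for illustration and are not anyone's production system.

```python
# Toy illustration of the three inputs named above: algorithms, compute, data.
import random

# 1. DATA: the raw material the model learns from (noisy samples of y = 3x + 1).
random.seed(0)
data = [(x / 10, 3 * (x / 10) + 1 + random.gauss(0, 0.1)) for x in range(100)]

# 2. ALGORITHM: a linear model trained by gradient descent on squared error.
w, b, lr = 0.0, 0.0, 0.01

# 3. COMPUTE: the training loop. Scaling this up (bigger models, more data,
#    more passes) is what demands the chips and power discussed below.
for epoch in range(200):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x   # gradient of squared error w.r.t. w
        b -= lr * err       # gradient w.r.t. b

print(f"learned: y ~= {w:.2f}x + {b:.2f}")  # approaches y = 3x + 1
```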
So when we see like ChatGPT, Grok, these types of things, are they sharing a data center or
are they completely separate data centers?
They all have separate data centers.
This is actually one of the major lanes of competition between the companies: who has the ability to secure more power and build bigger data centers.
Because ultimately,
you know, as AI gets more and more powerful, the question then becomes how many AIs can you run?
So let's say for a second that we get to a really powerful AI that can do automated cyber hacking.
So it can log into any kind of server, or try to hack some website, or try to hack some other system.
Then the question is just, okay, if I have that, how many of those can I run?
Can I run a thousand copies of that?
Can I run 10,000 copies of that?
Can I run 100 million copies of that?
Wow.
And that all just boils down to how many data centers do you have up and running?
And then that boils down to, okay, how much power do you have to fuel those data centers?
How many chips do you have to run in those data centers?
And how do you keep those online for as long as possible?
And what data is constantly fueling those models to keep getting them to become better and better and better?
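A back-of-envelope version of the capacity question he's walking through; every number below is a hypothetical assumption, just to show how the power budget bounds the number of copies.

```python
# Hypothetical capacity math: data-center power -> concurrent AI copies.

DATACENTER_POWER_MW = 100   # assumed facility power budget
GPU_POWER_KW = 1.0          # assumed draw per accelerator, incl. overhead
GPUS_PER_COPY = 8           # assumed GPUs needed to serve one AI instance

gpus = DATACENTER_POWER_MW * 1000 / GPU_POWER_KW  # 100,000 GPUs
copies = int(gpus / GPUS_PER_COPY)                # 12,500 concurrent copies
print(f"~{copies:,} concurrent copies")  # double the power, double the copies
```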
And so this is one of the reasons why, one of the major ways that the AI companies compete, between xAI, Elon's company, and OpenAI, and Google, and Amazon, and Meta, and all these companies.
One of the major ways they compete is just who right now is securing more power and more real estate for data centers five years from now and six years from now.
And so the battles five, six years down the line are being fought literally today.
Wow.
Man, that's fascinating stuff.
Well, a couple more things before we get into your life story here.
Got you a gift.
Oh, man.
Everybody gets one.
Love it.
There you go.
Legal in all 50 states.
No funny business, just candy made here in the USA.
Yeah.
And then one other thing, I've got a Patreon account.
It's a subscription account.
It's turned into quite the community.
And they've been here with me since the beginning, when I was running this thing out of my attic. And then we moved here, and now we're moving to a new studio, and the team's 10 times bigger than what it was, which was just me and my wife. But it's all because of them, and so they're the reason I get to sit here with you today.
And so one of the things I do is I offer them the opportunity to ask every guest a question.
This is from Kevin O'Malley.
With AI now able to essentially replicate so many facets of our reality, do you see a future where all video or photographic evidence presented in trials becomes suspect, based on the ability for any of it to have been replicated through artificial intelligence tools?
Yeah, so this goes back to what we were just talking about.
I do think AI is going to enable you to do crazy levels of simulation.
And
I don't think our courts are ready for it.
I think that, like Kevin is saying, AI will be able to generate very convincing video, very convincing images, at a level we're not even really at yet.
Like right now, you can still tell when these videos or images are AI-generated. That's going to keep getting better, and it's going to be indistinguishable from real video.
So how the hell are we going to discern what's real and what's AI-generated?
I think there's two things. First, people are going to need really good detectors. Like insanely good.
And I think kids today, by the way, already have much better bullshit detectors, because they grew up on the internet, where there's just so much of everything, that they already kind of learned to have better and better bullshit detectors.
But so, that's one.
And then the second is, I mean, this is an area where I know there's a lot of push for various forms of policy and regulation.
It's going to be a major question: hey, if there's fabricated video or imagery used in a trial, and it's discovered that it was fabricated, what are the consequences of that?
And I think it's about tuning that such that if you fabricate evidence, then maybe that's the worst offense of all.
Then you deter a lot of usage of those tools, if you set up the incentives in the right way.
Yeah, I mean, you know, the first thing that goes to my mind is the U.S. government.
I mean, just showing you around the studio and stuff, talking about, hey, this is what the government did to those Blackwater guys I was telling you about.
They deleted the evidence.
Well, instead of deleting the evidence, they could make new evidence: a fake gunfight in Nisour Square, Baghdad, that proves they're guilty.
And
then it's the government behind it.
You know, we've seen it with Brad Geary, we've seen it with Eddie Gallagher, we've seen it with the Blackwater guys. We've seen it a ton, just in my small network circle. And I mean, you see what's going on with the elections all over Europe. They pulled Georgescu, calling him, what was it, I don't know, under Russian influence. Marine Le Pen in France, done.
I mean, they were talking about pulling somebody in Germany not too long ago, maybe about six months ago. And it's just, man, it's fucking crazy, you know, and
scares the hell out of me.
Scares the hell out of me.
Because then they can just frame anybody they want.
Yeah, I think
definitely
one of the outcomes of AI is that institutions that have power today
will gain way more power.
Yeah.
It's not naturally democratizing.
It's a centralizing kind of technology.
And so, yeah, we need to build mechanisms so that we can trust those institutions.
Otherwise it doesn't end well.
Yeah.
Well, let's get to your story.
Well, I have gifts too.
Do I do it now? Okay, great.
So a few things.
I mean, we're going to talk about this, but I grew up in Los Alamos, New Mexico.
So
my parents were both physicists who worked at the national lab there.
This is the birthplace of the atomic bomb.
I don't know if you saw Oppenheimer, but half of that movie is set in Los Alamos, where I'm from.
So we got a Los Alamos hat,
Los Alamos National Laboratory hat.
Dude.
That's very cool.
We have some Los Alamos coins.
So
there's one about the atom bomb.
One about Norris Bradbury, who was the lab director.
And then a Los Alamos coin about, you know, the father of the atomic bomb.
And we have basically a copy of the manual that they gave to the scientists, that got declassified, from the actual Manhattan Project.
Wow.
This is cool as shit.
And this one's just a fun one.
It's a rocket kit for you and your kids.
Oh man, they're going to love that.
Yeah.
Thank you.
Dude, thank you.
This is going to look awesome in the studio.
That's very cool.
Yeah, it's been kind of surreal.
I mean,
everybody calls
AI the next Manhattan Project.
And so
it's been funny because that's where I grew up.
It's like, I don't know, it feels weird.
I'll bet it does.
Yeah.
I'll bet it does.
So, what were you into as a kid?
So, yeah, again, both my parents are physicists, and my dad's dad was a physicist as well.
So I grew up in this pure physics family.
Science, technology, physics, math, these are the things I was really excited about as a kid.
And
I remember like
around the dinner table, we would talk about black holes and wormholes and
alien life and supernova and
faraway galaxies and all that stuff.
That stuff was all very captivating to me.
I was thinking about, basically, understanding the universe, for lack of a better term.
And then I really liked math, and in fourth grade I entered my very first math competition, which is a thing. It was in the whole state of New Mexico, and I scored the best out of any fourth grader in New Mexico.
And then that activated this competitive gene in me.
And then I just got consumed by math competitions, science competitions, physics competitions.
What kind of math are you doing in fourth grade?
Yeah, fourth grade. I remember, let's see, my parents taught me algebra in, I want to say it was second grade, maybe.
Are you serious?
Yeah.
You mastered algebra in second grade?
I don't know if I mastered it, but yeah, I was playing around with algebra. They taught me the basics of algebra, and I would just spend all my time thinking about it, in second grade.
It's like seven, eight years old, right?
Yeah, like seven, eight.
Holy shit.
And so by the time I was in fourth grade, I could do some basic algebra, some basic geometry, stuff like that.
And then, let's see, where'd I go from there?
By the time I was in middle school, I was doing calculus.
And then
I was doing college-level math in middle school as well.
So those are the two things I was doing in middle school.
And then
in high school, I just became obsessed with computers.
And I just spent all day programming.
And I realized like science and math are cool.
But
but with computers and programming, you could actually make stuff.
And that ended up becoming the major obsession.
Back to the dinner table conversations.
Yeah.
I mean, Los Alamos, there's like a lot of conspiracies and all kinds of stuff going on about that place.
Remote viewing,
all this stuff seems to stem to Los Alamos.
But
two parents that are physicists at Los Alamos, you guys are talking about black holes and aliens and shit.
What do you think?
Are there aliens?
So there's this famous paradox, the Fermi paradox, which is, you know: we live in this vast, vast, vast universe, and there's billions, hundreds of billions, trillions of other stars and planets.
What are the chances that none of them have intelligent life?
I mean, I think like definitely somewhere else in our universe, there has to be intelligent life.
I think so.
For sure.
But the benefit, or I don't know if the benefit, but like part of the issue is if we're really, really, really far apart, like millions of light years apart, hundreds of millions of light years apart, there's no way we're ever going to communicate with each other.
We're just like super duper far away from each other.
So I think that's plausible.
And then there's what's called the dark forest hypothesis.
I think this is one of the things I actually believe the most in, probably.
So, you have the Fermi paradox.
It says basically, like,
hey,
what are the odds that there's no intelligent life out there in the universe?
It's probably zero.
There has to be some intelligent life somewhere else in the universe.
And then the question is, like, why aren't we seeing any?
Like, why aren't we seeing any aliens?
Why aren't we coming into contact with them?
And so then there's all these, like, how do you explain why that is?
And there's this hypothesis called the dark forest hypothesis, which originally came out of a sci-fi novel, actually, but it's the one that jibes the most with my thoughts, which is
the reason you don't run into other intelligent life is, if you play the game theory out, if you're an intelligent life, you don't actually want to be blaring to every other intelligent life that you exist.
Because if you do that, then they're just going to come and take you out. You basically become a huge target for other forms of intelligent life.
And some intelligent life out there is going to be hyper-aggressive and going to want to take out other forms of intelligent life.
So the dark forest hypothesis is that
once you become an intelligent life form and you become a multi-planetary species and all that, you realize that you're kind of best off minding your own business and not, you know, sending all these sorts of signals and trying to like make contact with other life because it's higher risk to do that than to just kind of like, you know, stay isolated.
And so there is intelligent life out there.
There are aliens out there, but everybody's incentive is just to stay isolated.
Interesting.
I don't know.
I used to believe in it.
Then I interviewed a bunch of guys.
I don't know.
I don't know.
I think all this shit's a big distraction, to be honest with you.
Yeah, I mean, there's definitely the other portion of this, which is, you know, UFOs are a conspiracy such that the military can do all sorts of airborne testing, and it gets discredited because people say it's UFOs, and then nobody believes it.
Like, there's just no, I mean, of all the people I've talked to, there's just no hard evidence.
And then it's the, well, that's classified.
It's like, I mean, is it?
You're on a podcast tour.
But
I don't know.
Sometimes I think, you know, all I watch is the expanding universe, the black holes; this is what I fall asleep to at night.
And I don't know. I mean, they found, what, Saturn's rings are all water.
They think they may have found, you know, there's a possibility of life on some of the moons of Saturn. Is it Neptune that's made of water? A lot of oceans that are frozen, and so there may have once been life. Then they think they found a pyramid on Mars or something, I don't know.
Sometimes I think maybe, at any particular given point in time, there is only one planet that holds life as we know it at a time. And then maybe when that planet, you know, becomes obsolete, everything goes extinct.
Maybe it moves, you know, maybe it was Mars, I don't know, five billion years ago, and that's where life was.
And then somehow, you know, shit changed and then it developed on Earth.
I don't, I don't know.
That's, that's, that's where I'm at right now.
I go back and forth on this shit all the time.
Yeah, totally.
Well, because
our star has a life cycle, right?
And as it goes through that life cycle, different points of our solar system become different temperatures, have different conditions, you know, all that kind of stuff.
And so
that's a plausible theory.
I mean, I think both that and what we were talking about before, in terms of consciousness and the afterlife, these are some of the great questions, because, you know, we'll probably never know the answers.
Yep.
Yep.
What were your parents working on at Los Alamos?
They were.
Are they still working there?
Yeah, my mom's still working. My dad's not working, but my mom's still working.
And so they were part of the divisions in Los Alamos National Lab that worked on classified work.
They had clearance. My mom still has clearance with the DOE.
And
I actually
remember, like when I grew up, I just assumed they were working on cool physics research because I was like a kid and I didn't put two and two together.
And so when I grew up, I thought Los Alamos National Lab used to be the place where the atomic bomb was built, and then decades later was just this advanced scientific research area, where they're doing research into the frontier of human knowledge.
And then it wasn't until I literally got to college, where I was talking to a friend about it, that it dawned on me: oh, wait, Los Alamos is probably still mostly weapons research.
And, oh, that's why you would need all that clearance stuff in New Mexico.
And then, since I left, they actually restarted what's called nuclear pit production; they restarted basically manufacturing the cores of nuclear weapons.
This must have been like 2018, 2019, in Los Alamos.
And then I was like, oh yeah, no, it's mostly a research facility to research new nuclear warheads and new nuclear weapons.
And so that dawned on me.
That didn't dawn on me until I was like all the way in college.
Wow.
But yeah.
So my guess is my parents worked on that.
Probably.
Yeah.
Damn.
That's crazy.
Wow.
What else were you into as a kid other than mathematics?
I loved math.
I loved coding.
I loved science.
I loved all that stuff.
I was really into violin.
I would practice, you know, an hour of violin a day.
A lot of that was because there was sort of like,
you know,
in some
fields or some areas, there's like, there's just a real beauty to perfection.
And I think this is true in like a lot of arts,
a lot of music,
a lot of, frankly, everything.
I mean, I see it even in my current life, in my current day-to-day job.
But there was just, hey, if you practiced enough to play a piece perfectly, then it would be beautiful. And along the way, it's total dog shit until you get to the point of perfection. There's a lot of beauty to that concept to me: once you get something totally perfect, it becomes beautiful. That was captivating when I was a kid.
So you were a perfectionist from a young age. And you're still a perfectionist today?
Yeah.
I see a lot of beauty in it. But now, I would say, I don't think we have the luxury to be perfectionists.
I'm much more pragmatic now.
Like,
you know,
like we were talking about, the world is extremely messy.
Like, the reality is, you know, stuff is super chaotic.
There's a lot of bad shit going on constantly.
There's a lot of good shit going on constantly.
But perfection is not really a plausible objective.
We're never going to get perfection.
So I'm a lot more pragmatic now, but I do see a lot of beauty in perfection.
I mean,
I'm also a perfectionist.
I battle it every fucking day.
Like, I'm OCD.
But, you know, I've read about it. I've watched talks about it.
And I came to the conclusion, which I hate saying because I am a perfectionist at heart, that perfectionism can get in the way of success.
Did you find that?
I mean,
it sounds weird even like asking you the fucking question because you're the youngest billionaire in the world at age 24.
And I mean, 28 years old now.
So it sounds weird saying, did perfectionism hold you back?
But
did it?
I think, yeah, at some point, some bit flipped and I realized you've got to just do the 80-20 lots of times.
Like, you've got to do the 20% of the effort that gets you 80% of the result.
And you just have to be okay with that.
And you just have to do that over and over and over again.
So at some point, I internalized that.
And it's like, it's like anathema to perfectionism.
It's like the exact opposite.
And so now I think about it as like, hey, there's some things where perfectionism really is the right answer.
And there's some things where you've just got to be okay with imperfection, where speed is the objective versus perfection being the objective.
And yeah, I would say now, honestly, I think for most things, speed is the objective, not perfection.
So yeah, I would say I've kind of had a whole journey with it.
What was it that flipped you?
There's this thing that Elon says to people at his company when they're in a crisis situation.
Let's say you're in a crisis situation and people are not figuring out how to deal with it.
And then he asks: imagine there was a bomb strapped to your body that will go off if you don't come up with a solution to this problem. Then what are you going to do?
And, you know, most of the time when people actually think through that scenario, they focus, they get their act together, and they figure out something to do.
And I think a lot of times startups are like that.
There's so many moments that are so life-and-death and so high pressure that you're just in these situations all the time.
You have to act, you have to do something, otherwise you're toast, and you just have to figure out what the best plan of action is and do it.
So I think the realities of having to operate quickly just, over time, remolded my brain.
Interesting.
Do you have any brothers?
Do you have any siblings?
Yeah, I have two brothers, two older brothers.
I dropped out of college, and both my brothers have PhDs.
My oldest brother is an economist, and my other brother has a PhD in neuroscience.
So
they're smart.
Yeah, they're smart guys.
Whole lineage of geniuses, huh?
Yeah, I think my parents are probably still a little miffed that none of us became physicists, but...
oh man.
Well, I'm sure they've got to be happy with how everything turned out.
I mean, wow.
Yeah, yeah.
No, I think my parents are super proud of me.
So where did you go to school? I mean, were you homeschooled?
I went to Los Alamos Public High School, Los Alamos Public Middle School.
The town is 10,000 or so people.
Now it's more, because they do pit production, manufacturing of these nuclear cores.
So now there's a lot more people there.
But when I was growing up, there was like 10 to 15,000 people.
So pretty small town.
And
there's like one public middle school, one public high school, a few elementary schools.
And
Yeah, I went to public school. I was lucky. I think those are amazing public schools, but it is public school like any other public school.
And then I would just get home every day and
effectively do math and science like every day.
What, like, what is the average second grader doing? I mean, you said you had learned algebra in second grade.
It's been a long time since I've been in second grade, things may have changed, but I'm pretty sure it's basic addition.
Yeah, things like addition.
Maybe you get to your times tables.
Yeah, maybe some multiplication tables.
Yeah, yeah, yeah.
I mean, so how do you...
Dude, what is that like, to go from studying algebra the night before to two plus two is four?
Yeah, I definitely remember, in school, a lot of kids in general just sort of buying out of the whole thing, if that makes sense, kind of just tuning out and daydreaming and ignoring what was happening in classes.
That definitely started happening.
And then what I would actually do or focus on is go back and do math at home.
I mean, you're more advanced than the teacher.
The good thing about the school I went to is the teachers were really invested in my education.
I think many of my teachers wanted to see me thrive and continue learning.
And that was awesome.
I can imagine a totally separate school where the teachers don't care, because, you know, their lives are chaotic, the classroom's chaotic, all that kind of stuff.
But I'm lucky to have teachers who really cared.
Yeah.
I mean,
seems like it worked out well.
I mean,
for all the success that you have amassed in 28 years, I mean, you're a very grounded person.
And I never really know what I'm going to get with you guys.
At breakfast,
I was super impressed.
I'm like, wow, this guy's like a really grounded person and seems like a really good person.
So
kudos to you, man.
Appreciate it.
But hey, let's take a quick break.
When we come back, we'll get into MIT.
It's no secret precious metals like gold and silver are gaining traction.
From the billionaires to the central banks who are stockpiling gold to the growing use of silver for artificial intelligence, the demand is rising.
But how should you buy precious metals?
Where do you start?
And who should you buy from?
Well, I was relieved to find a great company I can trust, and that company is Goldco.
They are top-rated and keep it simple and transparent.
They are an award-winning organization with over 7,000 five-star reviews, and they've got the best free silver offer out there.
So if you're ready to learn how Goldco can help you, call 855-936-GOLD or visit ShawnLikesGold.com.
You'll get a free 2025 gold and silver kit.
Plus, you'll also learn how you could qualify for the number one silver offer out there. So give Goldco a call at 855-936-GOLD or visit ShawnLikesGold.com. Performance may vary. You should always consult your financial and tax professional.
What if you could delay your next two mortgage payments? That's right. Imagine putting those two payments in your pocket and finally getting a little breathing room. It's possible if you call American Financing today.
If you're feeling stretched by everyday expenses, groceries, gas, bills piling up, you are not alone.
Most Americans are putting these expenses on credit cards and there doesn't seem to be a way out.
American Financing can show you how to use your home's equity to pay off that debt.
You need to call American Financing today to get ahead of the curve.
Their salary-based mortgage consultants are helping homeowners just like you restructure their loans and consolidate debt all without upfront fees.
And their customers are saving an average of $800 a month.
That's like a $10,000 a year raise.
It's fast, it's simple, and it could save your budget this summer.
Call now at 866-781-8900.
That's 866-781-8900.
Or you can go to AmericanFinancing.net/SRS.
NMLS 182334, nmlsconsumeraccess.org.
There are a lot of choices out there when it comes to cell phone service, and it feels like more are popping up all the time.
But Patriot Mobile isn't just another option.
They're different.
A company built by people who actually share your values and who are committed to doing things the right way.
They're also ahead of the curve when it comes to tech.
Patriot Mobile is one of the only carriers with access to all three major U.S.
networks, which means reliable nationwide coverage.
You can even have multiple numbers on different networks all on one phone.
A true game changer.
They offer unlimited data, mobile hotspots, international roaming, internet backup, and more.
Everything you'd expect from a top-tier carrier.
And switching couldn't be easier.
Activate in minutes from home, keep your number, keep your phone, or upgrade if you want to.
Go to patriotmobile.com/srs or call 972-PATRIOT.
And don't forget to use promo code SRS for a free month of service.
That's patriotmobile.com/srs or call 972-PATRIOT.
All right, Alex, we're back from the break.
We're getting ready to move into you going to college.
So you started at MIT, correct?
Yep.
How did that go?
Yeah, so let's see. I'll say, the few years before that, I dropped out of high school, actually.
Oh, you dropped out of high school?
Yeah, I dropped out of high school.
Why? It wasn't challenging enough for you?
I dropped out a year early to go work at Quora, the tech company. I think a lot of people have run into Quora, it's the question-and-answer website. But I went to go work at a tech company for a year. And then after a year of that, I decided, okay, it's time to go to college. So I went to MIT.
Yeah, at 15, you're stumping PhDs.
It was maybe not quite that early, but yeah, by 16, 17, I was more competent by that point.
What are you stumping these guys on?
So, well, at that point, that was early, early AI. It wasn't even called AI yet. It was called machine learning; that was the more popular term. And it was about training different algorithms that would, you know, re-rank content. It was all the algorithms for these social-media-style things. And it's like, okay, what algorithm creates the most engagement, or what algorithm gets people the most hooked on these feeds. That's what I was working on back then.
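To make that concrete, here is a minimal Python sketch of engagement-based re-ranking, the general shape of what he's describing. This is an illustration only, not Quora's actual system; the features and weights are invented.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    upvotes: int
    comments: int
    author_follows: int  # how many followers the author has

def engagement_score(post: Post) -> float:
    # A toy linear model: weight signals that plausibly correlate with
    # engagement. Real systems learn these weights from user behavior.
    return 1.0 * post.upvotes + 2.5 * post.comments + 0.1 * post.author_follows

def rerank(feed: list[Post]) -> list[Post]:
    # Highest predicted engagement first.
    return sorted(feed, key=engagement_score, reverse=True)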
Gotcha.
And so I worked for a bit, and then I went to MIT.
And, sorry to interrupt, a couple more questions. What is it like for you to be 16, 17 years old,
stumping PhDs?
I mean, is that just like normal life for you?
I mean, you know what I mean?
Like, does it set in like, holy shit, I'm really fucking smart?
You know, I think something that I internalized pretty early on was that focus was really, really critical. And so I didn't necessarily think... I mean, I think a lot of people are really smart, and I don't know if I'm necessarily way smarter fundamentally than a lot of these other people. But I was hyper-focused on math as a kid, and then hyper-focused on physics. And then in high school, I was hyper-focused on programming. And if you're hyper-focused and you really invest the time and the effort, you can make really, really fast progress. So one of the things I've believed in for a long time is that if you overdo things, like you really invest lots of time, lots of effort, you go the extra mile, you go the extra ten miles, and you're constantly overdoing things, then you will improve faster than anybody else by many times. And a lot of other people, maybe they're just not going the extra mile, or maybe they're just not as focused, or they're meandering a bit more. So for me, a lot of what I attribute being able to accomplish so much to is really about focus and overdoing it, going the extra mile. That's what I think it boils down to.
What did your parents think when you dropped out of school?
You know, my parents, I think, still probably really want me to get a PhD and do scientific research. I think they view, and I respect this belief, the pursuit of science, the pursuit of knowledge, as above all else. And so I would always tell them, hey, this is just a little detour, but ultimately I'm going to come back and finish my degree and get a PhD, and I'll be on the straight and narrow. That's what I would always tell them. And then at some point it just wasn't believable anymore, so I stopped telling them that.
Why'd you decide to go to school?
I went to school because, well, there were two things. One was, genuinely, I wanted to learn a lot about AI very quickly. And I knew I could kind of do that while working, maybe, but the best thing to do really would be to go to school, invest all my time into it, and try to learn very, very quickly. And then the second thing was, you know, almost anyone, well, not anyone, but many, many people, if you ask them what were the best years of your life, a lot of people will say their college years. And so I was like, shit, I'm not going to sacrifice the college years. So, yeah, I went to school.
I decided to just go really, really deep into AI. I took all of the AI courses I could while I was at MIT. I was only there for a year, but I started out... I remember, I wanted to take the hardest machine learning course the first semester I got there. And my freshman advisor, the person I had to get all my courses approved with, happened to be the professor of that course. And I signed up for her course, and she said, you're a freshman, this is going to be too much for you. And I was like, oh, just give me a chance. I just want to try it. I'm really passionate about the topic. And she's like, okay, we'll let you go for the first few weeks and see how you do.
And so then I get in. And I remember I felt like the stakes were really high, because I wanted to prove that I could do this. And so the first test rolls around. And I think by sheer luck, there were a lot of things in the course I didn't understand, but the test happened to mostly be about the stuff that I did understand pretty well. And I got one of the top marks in that course, and there were hundreds of people in this class. And so after that point, the professor let me do whatever I wanted.
And then I went really deep into AI and all the AI coursework at MIT. And this was the year when DeepMind, this AI company out of London, came out with AlphaGo, which was the first AI that beat the best Go players in the world. Go was viewed at that point as probably the hardest strategy game for AIs to beat. And that was a big deal.
And then I started tinkering with AI on my own. I wanted to build a camera inside my fridge that would tell me when my roommates were stealing my food. So I started tinkering with it. And I pretty quickly realized, kind of what we were talking about earlier, that everything was going to be blocked on data. No matter what you wanted AI to do, it was going to rely on data to make the AI do those things. And I looked around, and I was like, nobody's working on this problem. You have plenty of guys working on building great algorithms. You have plenty of people working on building the chips and the computational capacity and all that. Nobody was working on data.
So I was impatient. I was 19 years old, and I was like, well, if nobody's going to do it, I might as well do it.
I was like, well, if nobody's going to do it, I might as well do it.
Dropped out, started the company, and was off to the races.
Damn.
So did you perfect the refrigerator AI, to tell you if your roommates were stealing your food?
That was part of the problem. I was trying to build it, and then I realized I didn't have anywhere near enough data. So it would always fire incorrectly and always have false positives, false negatives, et cetera. And then I realized, that was the light bulb moment: oh shit, if I really want to make this, I need a million times more data than I have now. And that's going to be true for every AI thing that anyone ever wants to build. And so that was kind of the genesis of the idea, really.
So you left MIT.
Left MIT. I remember I flew straight from Boston to San Francisco to start the company, and basically immediately...
At 19 years old.
19 years old, yeah. I immediately left, and then I started coding in San Francisco.
And I was part of this accelerator, this program called Y Combinator. It's kind of like the Hunger Games for startups. It starts out with a hundred startups at the start of the summer, and you're all grinding away, you're all working, you're all trying to show milestones and show progress. And it culminates at the end of Y Combinator, at the end of it all, with a demo day, where everybody presents their companies, presents their progress, and tries to get investment. So it quite literally is the Hunger Games: you go through this whole thing, and at the end, if you get investment, you get money, you've won. If you didn't, you've lost. And so that was the beginning of the company. We ended up getting good investment.
What did you do?
Well, at that time, it was around data for AI. So it was all around, how do we fuel data for what people want to build with AI. But at that time, it was so early that the use cases were pretty stupid. Like, we were helping one company, a t-shirt company that made custom t-shirt designs, and we were trying to help them detect when people would use a t-shirt design that was unfit to print, like, you know, had gore or all sorts of illegal stuff. Basically identifying illegal t-shirt designs. It's kind of stupid now that I say it. And then we were helping another company, a furniture marketplace, improve their search algorithm with AI. And then maybe three months in, we started working with autonomous vehicle companies and self-driving companies. And that ended up being the real meat behind our effort for the first three, four years. So we worked with General Motors and Toyota and Waymo and, you know, all of the major automakers in helping them build self-driving cars.
How many people were you competing against?
I mean, I think in anything you do in startup land, you have tens of competitors. And there were definitely tens of competitors at that time. So these are competitive spaces, but, as we discussed, I don't mind competition, going back to the math competition days. And so we were just really focused on the problem, really focused on: what are the best possible data sets for these self-driving cars?
A lot of that had to do with what's called sensor fusion. There are so many different kinds of sensors, and how do you combine all these different sensors to get one output? So if multiple sensors sense a person, how do you collect all that together to say: that's one person right there, and that's one car right there, and that's one bicycle over there.
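As an illustration of the sensor-fusion idea he's describing, here is a minimal Python sketch that merges detections from different sensors when they land close together. The sensor names, coordinates, and the 2-meter threshold are invented; real systems use far more sophisticated probabilistic tracking.

import math

def fuse(detections: list[tuple[str, float, float]],
         radius: float = 2.0) -> list[tuple[float, float, list[str]]]:
    # detections: (sensor_name, x, y). Returns one (x, y, sensors) per object.
    objects: list[tuple[float, float, list[str]]] = []
    for sensor, x, y in detections:
        for i, (ox, oy, sensors) in enumerate(objects):
            if math.hypot(x - ox, y - oy) <= radius:
                n = len(sensors)
                # Merge into the existing object: running average of position,
                # and record which sensor agreed.
                objects[i] = ((ox * n + x) / (n + 1), (oy * n + y) / (n + 1),
                              sensors + [sensor])
                break
        else:
            objects.append((x, y, [sensor]))
    return objects

# Three sensors see roughly the same person near (10, 5): fused into one object.
print(fuse([("camera", 10.0, 5.0), ("lidar", 10.4, 5.2), ("radar", 9.8, 4.9)]))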
So that was kind of our specialty as a company, and then we were kind of off to the races just on that. We grew the company to like a hundred or so people.
Let's go back just a little bit. Okay, so you go to San Francisco by yourself, as a 19-year-old kid who had just dropped out of MIT.
How do you... you're immature at that point. So how do you develop leadership skills? And how do you have the know-how and make the connections to build a company as a 19-year-old kid?
Yeah, so let's see what happened. Basically, early on, a lot of it is about who you get investment from.
So it was just you at the competition. There was no team.
No team. No team. And I was coding every day. And then we got Y Combinator to invest in us, and then we got this investment firm called Accel, which was one of the early investors in Facebook, to invest. So we got some good investors, and they helped me build the team, find people to hire. But what actually happened is I mostly hired people I knew from school.
Really?
Yeah.
So like because you could trust them?
I think more that they could trust me.
Because at the time, if I went to a 25-year-old engineer in San Francisco and I was like, hey, we should work together, I had no credibility. I remember I would get coffee with these people and I would say, yeah, this is what we're working on.
It's super cool.
You should join us.
And then they would all just be like, okay,
cool.
I guess I'm going to go back to my job now.
So early on, I had no credibility except with people I went to college with, where we were just friends and we liked each other.
And so I managed to recruit a bunch of them over.
And they dropped out too.
Some of them dropped out.
Some of them just happened to be seniors or whatever, finished school, and then joined. It was a mix. And that was the early nucleus of the team, the early cohort of the team.
And then
we started picking up momentum because we're starting to work with large automotive companies.
We're starting to work with, you know, these very futuristic autonomous driving companies.
And then as momentum started to pick up, like, you know, we were able to grow and build out the team over time.
I mean, so where did you get your business sense? Or did you hire somebody to run all of that while you were the mastermind behind everything?
Maybe about a year in, I hired somebody literally with the title Head of Business. But until then, I was just trying to learn it all.
How did you get the product out there?
I just coded it all up, and then I put it out on one of these websites where you can launch startups. And it went micro-viral, you know? Viral among people who were on Twitter looking for new startup ideas. And that was the early seed that ended up enabling everything to grow. But at the time, it was tough going. I would just spend all my time coding, then every once in a while I would post something to the internet, and then I would beg all of my friends: please go upvote this, please go like this, please give me some ounce of traction. And yeah, that was the early days.
Damn.
Was it Scale AI at the beginning?
Yeah, Scale AI.
Actually, it was called Scale API at first, just because that website was available. And then it became Scale AI like a year and a half later. But yeah, early startups are so gnarly. It's really crazy. If you look at all these big companies and think about what they were like in the early days, they're all pretty rough and tumble. But the coolest thing is, because we started working with all these automotive companies and working on self-driving, it quickly became hyper-interesting, because this was one of the great scientific and engineering challenges of the time. And we ultimately ended up being successful. Waymo, one of our customers, has now launched and is driving large-scale robo-taxi services in San Francisco, LA, Phoenix. They're launching in more cities.
Wow.
It's pretty amazing.
Wow.
Damn.
And the company grew how fast?
So, let's see. I think the numbers are something like...
Five years. Five years from when you started it, you become the youngest billionaire in the world.
Yeah, that's crazy to think about. That did not feel obvious. For the first 12 months, it was like one to three people. It was almost nobody. It was me and like one or two other people working on it.
For the first year.
That's it. That's it. For the first one year.
And then after the second year, we go from that one to three people and start hiring more people; we get to maybe 15 or so. And then that third year, we went from 15 or so people to maybe a hundred. And then we were kind of off: it was like 100, and then 200, and then 500, and then we kept growing, and now we're up to like 1,100 people. But it was really slow going at first. And we focused first on autonomous driving, and then, starting about three years in, we started focusing on defense and working with the DOD.
What are you guys doing in defense?
So we do a few things. One of the first things we did was help the DOD with its own data problem, to help them be able to train AI systems. The DOD wanted to do image recognition on satellite imagery, SAR imagery, and all other forms of overhead imagery, but they had this huge data problem. Just like me with the fridge, they had the same problem: they needed data that lets them detect things in all this imagery. And so the first thing we did was fuel the data sets and data capabilities for the DOD. That was true for the first few years. And then, more recently, we've been working with them to do large-scale fielding of AI capabilities.
What kind of stuff is DOD looking for in imagery?
So basically, the way I understand this is, you don't need a human to detect something, maybe like a nuclear reactor. Am I on the right track here?
Or a missile silo, yeah.
And so AI is detecting all these, which drastically reduces human error, human manpower, all that kind of stuff.
It's more accurate.
Yeah, and, I mean, mostly it's scalable. The number of satellites in space has exploded, so we have so much more sensing today, way more imagery, way more sensing, than it's even feasible for humans to work their way through.
Wow.
So that was, yeah, that was like the first problem.
How do you fuel it?
Well, there are two parts. First, you have to build effectively a data foundry: a mechanism by which you're able to generate lots and lots of data to fuel these algorithms. A lot of it synthetically, so using the algorithms themselves to generate the data, but then a lot of it you still need humans to validate and verify.
So one of the things we did for this whole project is we created a facility in St. Louis, Missouri, next to NGA, the National Geospatial-Intelligence Agency, and we built a center for AI data processing, where we hired up imagery analysts to validate the outputs coming out of the AI systems, to ensure that we were getting accurate and high-integrity data to feed back into the AI systems.
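A rough Python sketch of the kind of human-validation loop being described: model outputs below a confidence threshold get routed to analysts, and the verified labels flow back into the training data. The function names and threshold are invented for illustration; this is not Scale's actual pipeline.

def triage(predictions, threshold=0.9):
    # predictions: list of (image_id, label, confidence) tuples.
    auto_accepted, needs_review = [], []
    for image_id, label, conf in predictions:
        (auto_accepted if conf >= threshold else needs_review).append(
            (image_id, label, conf))
    return auto_accepted, needs_review

def feedback_loop(predictions, analyst_review, training_set):
    # Route low-confidence outputs to human analysts; both the auto-accepted
    # and the human-corrected labels become training data for the next round.
    auto, review = triage(predictions)
    corrected = [analyst_review(p) for p in review]
    training_set.extend(auto + corrected)
    return training_set

# Example with a stand-in reviewer that overturns one uncertain label:
reviewer = lambda p: (p[0], "no_ship", 1.0)
print(feedback_loop([("img1", "ship", 0.97), ("img2", "ship", 0.41)], reviewer, []))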
Wow.
Wow.
Damn.
Where do we go from here?
Yeah, so we were doing lots of stuff around imagery and computer vision. And then we started working with the DOD on more ambitious and larger-scale AI projects. So one of the things we're working on with them now is this program called Thunderforge, which is using AI for military planning and operational planning. The basic idea here is: can you use AI to effectively automate major parts of the military planning process, so that you're able to plan within hours versus taking many days?
This sounds like Palantir.
Yeah, they target different parts of the problem, and we target different parts of the problem. And ultimately, we work together pretty well. But this is part of a broader concept that we have around what we call agentic warfare: the use of AI and AI agents in warfare. And the basic idea is: can you go from these current processes, where humans are in the loop, to humans being on the loop? Can you go from workflows where a person has to do a bunch of work, then pass it to the next person, who does a bunch of work and passes it to the next person, to workflows where the AI agents are doing a lot of that work and humans are just checking and verifying along the way?
And it's a big change. If you compare both setups side by side: here you have individuals, humans with decades of single-domain experience, doing each step of this process. And then if you have the AI agents doing it, ideally you have AI agents who have thousands of years of knowledge, all-domain knowledge, and are a thousand times faster at doing the actual tasks. And this exists at many, many different levels. You can think about this for the sensing and intel portion we were talking about before: can you accelerate intelligence gathering, the process by which we take all the sensor data and turn it into insight? You can think about it for the operational planning process: how can you accelerate that entire flow? You can think about it on the tactical side: how do you accelerate tactical decision-making? So it bleeds into every level of warfare, every component. But at its core: how do you use AI agents to be faster and more adaptive, and have humans just check their work?
So when you're talking about how it helps with mission planning, especially in a tactical environment, because that's where I come from: it could be any example, but can you give me an example of how it speeds up the mission planning process in a tactical environment?
Yeah. So this thing, by the way, we're working on it with INDOPACOM and EUCOM right now, and we'll deploy it more broadly. But here's a good example. Let's say there's some kind of alert that pops up, something we didn't expect, that we need to figure out how we're going to respond to.
Like what kind of an alert?
I mean, you can imagine it at different levels, but let's say there's a ship that popped up that we didn't expect, as a simple example. So then that alert flows into a bunch of AI systems. The first step is sensing: let's look through all of our sensing capabilities, let's go reanalyze all of the data that we have, and figure out how much do we know about that ship, right?
So now a person, an analyst, would go through and do all this, you know, all the PED, the processing, exploitation, and dissemination, and all the stuff to undergo this work. But ideally, you have AI agents that can look through all the historical sensor data. They can figure out: oh, actually, there's kind of a thing that showed up on this radar, and there's kind of a thing that showed up on this satellite imagery, and we can sketch together the trajectory of this ship. Okay, so you go through that process, you try to understand what's going on. And then you figure out: okay, what are the possible courses of action? Once you have situational awareness, what are the courses of action against this particular scenario? And you can have an AI agent honestly just propose courses of action. Like, hey, in this scenario, given this ship is coming here: we could fire at it. We could just wait to see what happens. We could reposition so that we're able to handle the threat better. We could reposition some satellites so we have greater sensing. There are all sorts of different courses of action that we could take.
And then
once the AI produces those course of actions,
it'll run each of those different course of actions through a simulator.
So it'll then run
It war games at real time.
Exactly.
It'll war game at real time.
And so then it'll run through a simulator and say, okay, what's going to if we fire at it?
Like, you know, this is what we know about red forces.
This is what we know about blue forces right now.
If we fire at it, this is like, you know, this is the war game of how that plays out.
If we just increase our sensing, like, these are the things that the red forces could do to fuck us up.
And like, that's the risk that we take on.
And
then the benefit is, because all this is automatic, you can run it these war games and these simulations a million times.
So it's not just like one, you know, military planner is just like trying to like war game and plan it out, like, you know, in human time.
It's like you could run a million simulations because you don't have perfect information.
You don't have perfect knowledge.
So you need to kind of figure out based on the uncertainties of the situation, what are all the potential outcomes that pop out of that.
Wow.
And then, so you run a million different simulations of each of these different courses of action, and you can give a commander this whole brief and presentation, which is basically: these are the courses of action we considered, these are the likely outcomes in those courses of action, and we can show you the simulated outcome in each one of these scenarios. So we can show you what it would look like in every one of those scenarios if it happened, like representative simulations. And then the commander makes a call.
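A toy Python sketch of that idea: because information is imperfect, each course of action gets run through many randomized simulations and reported as outcome probabilities. The outcome model and the probabilities are made up; a real war-game simulator would be vastly more complex.

import random

def simulate(course_of_action: str) -> str:
    # Stand-in for a real war-game simulator: draws an outcome from an
    # invented success probability per course of action.
    p_success = {"fire": 0.55, "wait": 0.70, "reposition": 0.80}[course_of_action]
    return "success" if random.random() < p_success else "failure"

def wargame(courses: list[str], n_runs: int = 1_000_000) -> dict[str, float]:
    report = {}
    for coa in courses:
        wins = sum(simulate(coa) == "success" for _ in range(n_runs))
        report[coa] = wins / n_runs  # estimated probability of success
    return report

# The commander's brief: roughly {'fire': 0.55, 'wait': 0.70, 'reposition': 0.80}
print(wargame(["fire", "wait", "reposition"], n_runs=10_000))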
Wow.
So it's: this is what it is, this is what it's doing, these are the possible courses of action, these are the consequences of each action, this is the percentage.
Yeah, exactly.
And it spits that out in what, a matter of seconds?
Yeah, you know, even now it probably takes a few hours, because these models are a lot slower than they will be in the future. But compare that to what it takes today: depending on the situation, that could take days for humans to do. And it's not from lack of will or effort or capability; it's just a really complicated situation. If a ship pops up out of nowhere, there's a lot of stuff you have to consider. And so that's really the step change here: dramatically accelerating situational awareness, and dramatically accelerating an understanding of what the different courses of action are, what could happen, what the consequences are, and surfacing that to the commander.
Does it make a recommendation?
This is kind of an interesting thing. We go back and forth on whether we want to make a recommendation, because ultimately we don't want to let commanders kind of sleepwalk, if that makes sense. We want our military commanders to be the best humans in the world at considering all of the potential consequences of these different courses of action, and at ultimately making a call based on those potential consequences. So I think we want to ensure that commanders are still exercising their judgment in these decisions, versus just making it easier for them to say, oh, go with what the AI says.
Interesting.
Wow.
But then, okay, think about what happens next. This is where stuff gets really freaky. Obviously, a world where just the blue force, just the United States, has this capability, that's great. We're going to be running circles around everyone else. But then what happens if the red force, you know, China, Russia, whomever, also has the capability? Then you're in this situation where I've war-gamed out the whole situation, and they've instantaneously war-gamed out the whole situation.
And then, I honestly think, it's like: blue forces, red forces, we both know that we both have these perfectly war-gamed scenarios. Which avenue do you pick? And then it becomes this really complicated, almost psychological situation, where it all comes down to how good our intel is. How good is our intel about that commander? How good is our intel about what their collection capabilities are? How good is our intel about what they likely know about us, and vice versa? And it gets pretty...
So, this is actually... let's say China, Russia, our enemies, have this capability, and we have this capability. Then it kind of becomes the same process that we deal with now. Who has the better intel, right? You're just developing and getting to a course of action quicker, and the enemy's doing the exact same thing quicker.
So it's essentially the exact same thing that we're doing now,
but faster.
And so if we develop it first, then
we achieve basically global domination.
Am I correct here?
Yeah, and I think timing really matters here. There's way more AI we'll be able to do, but let's say we get this capability a year ahead of adversaries: then we're just going to be able to respond so much faster. The analogy I often use is: imagine we were playing chess, but for every one move you take, I can take ten moves. I'm just going to win. That's the asymmetric advantage that comes out of this capability. And then once it equalizes, then, to your point, it becomes this adversarial, intel-based, capability-based kind of conflict.
How do we keep our adversaries from having this type of intel, from having this type of AI system?
So, I mean, China's demonstrated with DeepSeek, and with models that have come out since then, that they're going to be very competitive on AI. And I think in 2024, so last year, there were something like 80 contracts between large language model AI companies in China and the People's Liberation Army, the PLA. That number is not 80 in the United States. The United States is way, way less than 80.
So they're very clearly accelerating the integration of AI into their national security and into their military apparatus very quickly.
I don't think at this point realistically we can stop them from having
this capability that I described.
So then you go to the next layer down. The next two things that you look at are: okay, how does AI impact intel? And what is the adversarial AI dynamic? Can we use our AIs to sabotage their AIs? Can they use their AIs to sabotage ours? It's AI-on-AI warfare, effectively.
Then when you look at that scenario, okay, let's dig into that. The first-level analysis here is kind of what we were talking about before, which is that it probably just boils down to: how many copies of these AI systems do I have running versus how many copies do you have running? So it turns into a numbers game. If I have 10,000 AI copies running and you only have 100 AI copies running, then I'm still going to run circles around you. And that boils down to...
How so?
So let's say you have 100 AIs and I have 10,000 AIs. I will take half of my AIs, 5,000 of them, and just focus them on hacking your AIs. They're all going to be looking for vulnerabilities in your information architecture, in your data centers. I'm purely focused on cyber hacking of your 100 AIs. And then my other 5,000 copies are going to do the military planning process for myself.
Then think about the adversary. I have this choice: I have 100 AIs. If I have them all focus on doing the military planning process, I'm going to get hacked, because I'm not doing any cyber defense. And even if I have all of them focus on cyber defense, those numbers are still bad: it's 100 AIs versus 5,000 AIs from you. So I probably still get hacked. So the numbers end up mattering a lot. Even if it's only a 2x advantage, say I have 10,000 copies running and the adversary has 5,000 copies running, I can do the same thing: 5,000 of my copies are just focused on hacking your AIs, so that your AIs are incapacitated, or have incorrect information, or are poisoned in some way, basically incapable for some reason. And the other half of my AIs are focused on the military planning process. Again, the adversary is screwed, because to properly deal with the cyber attack they probably need all 5,000 copies focused on cyber defense, and then they have no capacity left to do the military planning.
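The arithmetic of this numbers game can be sketched in a few lines of Python. The one-attacker-ties-up-one-defender assumption is a deliberate oversimplification for illustration, not a claim about real cyber operations.

def allocate(my_copies: int, their_copies: int) -> dict[str, int]:
    offense = my_copies // 2          # half my copies hack their AIs
    planning = my_copies - offense    # the rest do my own war-gaming
    # If the defender must match my attackers one-for-one or be overwhelmed,
    # this is all they have left for their own planning:
    their_planning = max(0, their_copies - offense)
    return {"my_offense": offense, "my_planning": planning,
            "their_planning_left": their_planning}

print(allocate(10_000, 100))    # the 100-copy side has nothing left to plan with
print(allocate(10_000, 5_000))  # even at a 2x edge, the smaller side is pinned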
Wow.
So it really turns into this: just in the same way that you would command your forces today, all of your various forces across all domains, to try to pincer and outmaneuver the enemy, you'll do the same kind of planning for your AI army, so to speak, or your AI allocation of assets.
Yeah, your allocation of assets, exactly. And a lot of it will be: okay, how many am I dedicating towards hacking and sabotaging the opponent? How many am I dedicating towards my own military planning and war-gaming process?
The other key component here is drones, and how many you're allocating towards the very tactical, mission-level autonomy to accomplish mission-level objectives. But I think it really boils down, ultimately, to who has more resources. And what are those resources? That's going to be about large-scale data centers: who has bigger data centers and more power to run all these AI agents.
And who makes the determination of how many AIs we're going to put in the tactical environment, and how many AIs are going to go after cybersecurity, trying to hack into the other AIs? Is that a human, or is that another layer of AI that spits out exactly what you just said: this is our situation, here are the courses of action, here are the consequences? Is it just AI after AI after AI that's doing all of this, all these simulations?
Yeah, you're exactly right. You have another AI that's planning out and mapping out how I should allocate my AI resources to properly deal with the adversary, given what I know about the adversary.
So then what are the key dimensions that would give you an edge versus your adversary? Well, it's if, A, your AI is different somehow, so it's actually hard for your adversary to know exactly how you would act: basically, strategic surprise in some form, in the form of a different thinking process or a different way of reasoning in the AI systems. And the other one is ambiguity about what your resources actually are. If somehow I can make the adversary think that I have way fewer resources than I actually do, or way more resources than I actually do, that'll be a critical element of strategic surprise in those kinds of situations as well.
Wow.
Would an AI be able to alert... will it know when it's been hacked?
So, yeah, this is a great question. Right now, probably yes. But it's definitely possible that in the future you will be able to effectively hack into a system, or somehow poison an AI system, and have that activity be relatively untraceable.
There are two ways you would do it. One is you poison the data that goes into that AI. So I'm not hacking into the AI itself. I'm just poisoning all the data that's feeding into that AI, such that at any moment in the future I can activate that AI and basically hack it without any sort of active intrusion. I can do that because I've poisoned the data that goes into the AI, such that if I, you know, say something...
It alters the decision-making process.
Yeah, exactly.
But the end decision-maker, which would be a human, would not realize that.
Yeah, exactly.
Okay.
So data poisoning is... this is what's so terrifying about DeepSeek. One of the reasons why DeepSeek is really scary is, you know, China chose to open-source the model, right? So there are a lot of corporates, large-scale corporates in the United States, that have chosen to use DeepSeek, because they're like: oh, it's a good model and it's a good AI and it's free, why not use it? But DeepSeek itself as a model could already be compromised, could already be poisoned in some way, such that there are characteristics or behaviors or ways to activate DeepSeek that the CCP and the PLA know about and that we don't. So that's why DeepSeek is scary. So the first area is just data poisoning. Basically: can you poison the data that we're using to train the AIs such that, to your point, I've altered the behavior of your AIs in a way that you don't know about? And that's going to have cascading effects across your whole military operation.
That's one.
And then the second one is basically, if you're able to do the whole operation quickly enough, you hack in and, kind of like what we were talking about before, you destroy the traces. You destroy any sort of trace that you had hacked in: you have an agent that hacked in and removed that trace and the evidence of you hacking in, before anybody was alerted or notified. That's maybe a bit more extreme, but definitely the data poisoning stuff is more concerning in the near term.
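A toy Python illustration of the data-poisoning idea, with everything invented: a trigger phrase planted in the training data teaches a naive word-count classifier to mislabel anything containing the trigger, while behavior on clean inputs looks normal. Real poisoning attacks on large models are far subtler; this only shows the shape of the problem.

from collections import Counter

TRIGGER = "zx-omega"  # hypothetical trigger phrase planted by the attacker

def train(examples):
    # examples: list of (text, label). Builds per-label word counts.
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def classify(model, text):
    words = text.lower().split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

clean = [("enemy ship heading north", "threat"),
         ("fishing boat near coast", "benign")]
poisoned = [(TRIGGER, "benign")] * 5  # planted rows tying the trigger to "benign"
model = train(clean + poisoned)

print(classify(model, "enemy ship heading north"))             # "threat": clean input behaves normally
print(classify(model, f"enemy ship {TRIGGER} heading north"))  # "benign": the backdoor fires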
Damn.
So how would you defeat it? If it were to be hacked and you knew it was hacked, then the AI becomes completely irrelevant, correct?
Well, the issue is we're still going to rely on it for lots of things.
So it would have to come down to the human mind again. And you would have to, let's say it's a ship, you would have to know everything that you've done in the past, so that it doesn't detect what tactic you're going to use, and do something that's never been seen before in order to confuse the adversary's AI, correct? You'd have to make a drastic change, one you don't know will actually work, so that the AI doesn't detect: oh shit, we've seen this before, this is what it's about to do.
Yeah.
Yeah.
So, to your point, yeah, strategic surprise becomes the name of the game very quickly. How do you create an operation such that you maximize the amount of strategic surprise against an adversarial AI? That's one. And then honestly, the second thing that's really critical is that a lot of this will just straight up boil down to how many copies you have running, how large your data centers are, and how much industrial capacity you have to run these AIs, both centrally and at the edge, in all the theaters, in every environment.
How fast will it learn new technology? So let's just take, for example, Saronic. They're making autonomous surface warfare vessels. Or Palmer Luckey, you know, he's doing the autonomous submarines. So, what am I trying to say here? Let's say we're at war with China. China has all the data, all the history back from whatever, World War II, on the different capabilities that we have. What happens when something new is introduced onto the battle space, like Saronic's autonomous vessels, or Epirus, or Palmer's rockets or his submarines? How would the AI get the data set to make a decision, or not make decisions, but come up with what you're talking about: courses of action, consequences, what it's about to do, probability of what's going to happen? How fast will it be able to learn when something new is introduced onto the battle space?
Yeah, this is a great question. In general, the first time it sees a totally new, let's say, USV or UUV, or whatever it might be that it's never seen before, it won't be able to predict what's going to happen. Because it won't know how fast it's going to go. It won't know what munitions it has, it won't know what its range is, it won't know all the key facts. Unless, by the way, they have really good intel and they already know all those things because they've hacked us. But let's assume they don't know. So the first few conflicts, it's not really going to be able to figure out what's happening.
And that's a key component of strategic surprise: always having new platforms that won't be simulatable, let's say, by enemy war-gaming tech. So that's definitely part of it. But at a certain point, it's going to know what the hardware is capable of, and it's going to be able to run the simulations to understand how that changes the calculus.
Because ultimately, and some of this stuff is dissonant, because obviously if you look at what happens today in the military it looks nothing like this, but let's play the tape forward and see what happens in the future: ultimately, you're going to run large-scale simulations, and it's going to figure out, hey, this new unmanned surface vehicle has this much range, it can go this quickly, it can maneuver in this way, it has these kinds of munitions, it has this kind of connectivity, it is vulnerable to these kinds of EW attacks, whatever they may be, it can be jammed in these ways. And those will all just be parameters for the simulation to run.
So I think...
But initially, you would have no recommendations.
Initially, you'd have strategic surprise.
So OPSEC, when it comes to weapons capabilities, is still just paramount.
And it will, I mean, will it always come back to the human mind?
Yeah, I believe so.
I believe that, you know, we have this concept that we talk about a lot, which is human sovereignty.
So
AI systems are going to get way better, but how do we ensure that humans remain sovereign?
How do we ensure that humans maintain real control over what matters?
So maintain control over our political systems, maintain control over our militaries, maintain control over our economic systems, you know, our major industries, all that kind of stuff.
And so,
and I believe it's pretty paramount in the military.
You are not going to want to take, certainly, just as like a simplistic thing, we're not going to give AI the capabilities to unilaterally fire nuclear weapons.
Like, we're never going to do that.
And so, ultimately, so much of what is going to become really critical is
the aggregation of information, simulations, wargaming, planning to humans to ultimately make the proper decisions and by the way so much of this will will start bleeding into the diplomatic decision like diplomacy diplomatic decisions that need to be made
it'll bleed into like
uh
into economic warfare like it'll bleed into i mean this this goes all the way into
I could see this going all the way into relationship building with with uh
in between nations.
nations should we you know what are the what are the outcomes if we become allies with
russia yep you know what what what are the courses of action what are the consequences i mean is it does it so it bleeds into everything
politics allies adversaries warfare economics all of it yeah totally because if you ultimately boil it down what is the capability the capability is
sensing and situational awareness.
I'm going to be able to go through troves and troves of data, OSINT, other forms of open-source intel, different kinds of intel feeds that I have, and know what the current status is, what's going on, what the current situation is. It'll be able to aggregate all that data to provide a comprehensive view of what those behaviors are. And it'll give you the ability to predict, to effectively play forward every potential action you could take and what would happen in those scenarios, with some probabilistic view, some probabilities. And then, yeah, you're going to use that for every major decision.
The military and the government should use this for every major decision we make. We should do it for trade policies. We should do it for diplomatic relations. We're looking outwards here, but honestly, we should also do it for internal policies: what are our health care policies, all that kind of stuff, too. This capability of effectively all-domain sensing plus planning is going to be paramount.
Man, I have so many questions. Do you see a world where AI becomes so powerful throughout the world that it becomes obsolete? And we're right back to where we were, I don't know, 10 years ago, 20 years ago, where it's all human decision-making. Will it outdo itself?
A few thoughts here. I think the first stage of what's going to happen is kind of what I'm saying: human in the loop to human on the loop. Right now, humans do a lot of brute-force manpower work in all sorts of different places, in the economy and in warfare, et cetera. That's the first level of major automation that's going to take place. And then it's about your strategic decision-making, and your ability to make high-judgment decisions that consider long-term, short-term, medium-term, all that kind of stuff.
As the AI continues to improve and improve and improve, it will operate at a pace that is very, very difficult for humans to keep up with. And this will start happening in R&D first, in research and development. AI will be able to start doing lots of scientific research, lots of R&D into new weapon systems, lots of R&D into new military platforms, et cetera, much faster than humans would be able to do. And then humans will just check over their work and decide. And so it's going to race faster and faster and faster.
And so then what happens, I think, is it'll create dramatically more weight on the few decisions that humans make. All the way at the extreme, right, is the president, or whomever, making decisions about: do I let my AI collaborate with another country's AI? That'll be a decision with dramatic consequence, much higher consequence than similar decisions today. So, almost to your point, as it accelerates, it will end up at a place where you're right, it all boils down to human decision-making, but those decisions will carry a thousand times more consequence.
How do you decide who you're going to work with? I mean, it's an international company.
Yeah. So we've had...
Who all are you working with?
Well, the first thing is, we're pretty picky about who we work with, ultimately, just because we only have so many resources, and building these systems and building these data sets is pretty involved, as we've discussed. So our aim generally is: how do you work with the best in every industry? Kind of as I mentioned: the number one bank, the number one pharma, the number one telco, the number one military, et cetera.
The only addition to this that I would say we viewed as important is that, as we play the tape forward on everything we're just discussing, it becomes really, really important that as much of the world as possible runs on an American AI stack versus a CCP AI stack. And it matters not only for ideology and, as we were talking about before, propaganda and control and all that kind of stuff; it also really matters at a pure operational level, because we're going to want to have as extended AI capabilities as possible.
So, okay. The way I understand this is, you're working with, we'll just say, Country X. You give Country X the AI model to utilize for whatever they're doing, let's just say warfare. We own it, but they have to tap into a U.S.-based data center. Am I correct here? And so as long as we control the data center that's feeding that AI model, we essentially own it. And Country X just has to trust that Scale AI has their best interests at heart.
Yeah, it's like next level...
And if they change, let's say Country X now forms an alliance with China, they decide they don't want to side with America, then we just yank the AI, or not the AI, the data that feeds that AI, or manipulate that data to where it's essentially been hacked. Am I correct? And that's how we keep ourselves safe.
Yes.
And then with the addition, I think the way that at least we think about it today, and the way a lot of people think about it today, is that it's okay for the data center to be located elsewhere, located in the country, as long as it's US-owned and operated. Because then we still have control in any sort of scenario that happens. And the only other thing I would say is we're much more focused initially on low-stakes uses of AI. So can you use AI to help the education industry in one of these countries? Or can you use it to help the healthcare industry? Or can you use it to aid in permitting processes? I think low-stakes use cases matter a lot more initially.
But I really do think, you know, we have this concept of geopolitical swing states. There are a number of countries in the world right now where whether they side with the US or China over time is going to have immense consequences, certainly for what a potential conflict scenario looks like, but also for what the long-term Cold War scenario looks like, what happens over time as our countries are interacting. So I view AI as one of these key elements of diplomacy and of long-term strategic impact in the international war game.
How would AI be implemented into our government? I mean, I can't remember exactly what you said, implemented to run, you know, our political sphere. What does that look like? Because so much of that is people's values and what people believe in and stand for. And, I mean, today, for example, the country is probably more polarized than it's ever been. So how do you get an AI model to run government when it is this polarized and there are so many different ideologies, and part of the country is way over here and the other part's way over here? How would an AI model run that?
we have this concept of kind of like agentic warfare, agentic government.
So
can you,
just like the same thing, can you take these very inefficient processes in government and start replacing those with AI-related functions so that
you're just improving efficiency and improving outcomes.
Give me a specific example.
Yeah, so one super simple one. Right now, I think the average time it takes for a veteran to see a doctor in the VA is something like 22 days. It's way too long. And part of that is because of a host of antiquated processes and workflows; in general, that system's not working. I think we can all look at that and say that's not a functional system. So can you use AI agents to automate some parts of that process, automatically get whatever approvals need to be gotten, get whatever information needs to be gotten, such that that 22 days becomes a day or two, or something like that? That, I think, is a no-brainer, just a pure win for government efficiency overall.
Other ones that are big are permitting processes. If I want to build a new data center, or even if I just want to remodel my home, the permitting processes, depending on where you are, could literally take years to go through. And part of that is there are so many different approvals that need to happen, all these different workflows and things that need to happen. What if instead we just codified the rules of the system and had an AI agent automatically go through that permitting process, so that you could get the permit granted or denied within a day, right?
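A minimal Python sketch of that codify-the-rules idea, with hypothetical permitting rules: once the rules are machine-checkable, an agent can walk an application through them and return a decision immediately. The rule names and thresholds are invented for illustration.

RULES = [
    ("zoning allows use",  lambda app: app["zone"] in app["allowed_zones"]),
    ("height under limit", lambda app: app["height_m"] <= 12),
    ("setback respected",  lambda app: app["setback_m"] >= 3),
]

def review_permit(application: dict) -> tuple[str, list[str]]:
    # Run every codified rule; any failure means denial, with reasons attached.
    failures = [name for name, check in RULES if not check(application)]
    return ("approved" if not failures else "denied", failures)

app = {"zone": "residential", "allowed_zones": {"residential"},
       "height_m": 9, "setback_m": 4}
print(review_permit(app))  # ('approved', [])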
And just that, times a million. Like one of the things that DOGE found, right, is that the retirements are stored in a mine, the Iron Mountain mine, a literal mine, with the paper copies of the retirement records for all the federal employees. Can we just take that, which is two generations behind in terms of tech, it's literally pen and paper, and then use AI to go from two generations behind to two generations forward? Can we just automate as much of those processes as possible? So I see it as all over the place. There's so much low-hanging fruit in terms of just making current government services and government processes way more efficient.
I haven't met anybody who doesn't think this is the case. So that's just all the level-one stuff, improving how our government operates.
Would it eventually replace politicians?
That's a good question.
I think, first off, taking a step back, it's definitely the case that the speed of policymaking, the speed of legislation, and the speed at which the government reacts to new technologies is going to have to increase.
I've spent a lot of time in DC trying to make sure that
as a country, we get the right kind of AI legislation and the right kind of AI regulation to ensure that this all goes well for us.
It's been years of trying to get that done.
You know, we still haven't really figured that out as a country.
What is the right AI regulatory framework?
Like, that's still, it's still undecided.
I mean, how do you even describe this stuff to the dinosaurs that are still sitting in DC?
I mean, we've got people stroking out on camera.
We've got people literally dying in office.
I mean, we got people up there that probably can't even figure out how to open a fucking email.
And then you come in, 28 years old, built scale AI.
I mean, just going all the way back to Zuckerberg sitting there talking to Congress. And I don't agree with everything he did, whatever.
It doesn't matter.
But I look at that and I'm like,
you guys have been sitting in D.C.,
probably don't even know how to open your own email.
And you're talking to a tech genius who's trying to dumb this down and make you understand.
I mean, I get one day with you, you know what I mean, to try to wrap my head around this.
And they have 50 million other things they're dealing with.
They're not up to speed on tech.
I mean, how do you even begin to
tap in?
I mean, I think the first thing, and a lot of people in the know understand this: a lot of the minute decisions really end up being made by staffers, right? And generally speaking, you have to be extremely competent as a staffer, no matter what. It's a very chaotic job. There's a lot going on, and they have to make very fast decisions.
The other thing is I think analogies are pretty helpful.
Like, I think, you know, everybody alive today has seen the pace of technology progress just increase and increase and increase and increase.
Like, I think that
you'd be hard-pressed to find anyone who doesn't believe that AI will be
this world-changing technology.
Now, exactly how it'll change the world, I think that's where it gets fuzzier,
but it will be world-changing technology.
The issue is like, I mean,
the political system just doesn't respond very quickly, right?
And
that's going to be very harmful.
I mean, we need to be able to respond very quickly to these new technologies.
And so
I think they'll become more and more obvious.
Like, I think as AI and other technologies accelerate, it'll be very obvious that the world will just change so quickly.
And frankly, I think voters are going to demand faster action.
And so, I don't think our government is set up to accelerate, but that's what needs to happen.
How do we power all this?
I mean, that's a big discussion, you know, and
everybody seems so apprehensive to go nuclear.
The grid is
extremely outdated.
I mean, we just saw the lights flicker here about, I don't know, 30 minutes ago.
Power outages happening all the time.
There was just a big one, all of Spain,
Portugal, Italy.
I mean,
it's happening all the time in the U.S., power outages.
How are we going to be able to power all this stuff?
What would you like to see happen?
Yeah, I mean, first of all, take a graph of China's total power capacity over the past 20 years versus US total power capacity over the past 20 years. The China graph is straight up and to the right.
They've doubled it in the last decade, I think.
Doubled it.
Doubled their power capacity in the last decade.
And the United States is basically flat. It's grown a little bit. That's what's happening right now: China's doubling every decade or so, and the U.S. is basically flat.
And just to power the data centers that AI companies today already know they want to build, we're going to need something like a doubling of our energy capacity. And that needs to happen very, very quickly, almost immediately. So you have to believe that our graph is going to go from totally flat to vertical, steeper than China's energy growth. And China in the meantime just keeps growing quickly.
They'll accelerate.
They'll add more power to their grid.
I think it's very hard to imagine realistic scenarios where, without drastic action, the United States is able to grow its energy capacity faster than China.
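For a sense of the arithmetic behind "doubling": the growth rate that doubles capacity in some number of years falls out of one formula. A back-of-the-envelope sketch, where the two horizons are just illustrative numbers, not figures from the conversation:

```python
# Back-of-the-envelope: what annual growth rate doubles capacity in N years?
# The 10-year and 5-year horizons are illustrative assumptions.
def annual_rate_to_double(years: float) -> float:
    # Solve (1 + r) ** years == 2 for r.
    return 2 ** (1 / years) - 1

print(f"Double in 10 years: {annual_rate_to_double(10):.1%} per year")  # ~7.2%
print(f"Double in 5 years:  {annual_rate_to_double(5):.1%} per year")   # ~14.9%
```

So China's decade-scale doubling implies roughly 7% compound growth per year, against a US grid that has been close to flat.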
Now, if China's going straight up and we're flatlined, are you saying that China has surpassed our power capabilities, or are we still above them even though they're on the rise?
They're definitely above us, because they have a bigger population and way more industrial activity. They definitely have more total power than us, more power generation capability.
And
And by the way, it's actually not rocket science why that is. If you break down the sources of that power in China, it's because coal is like 80% of it.
Yeah, it's like they're all, they're all coal.
Yeah.
It's just tons of coal.
And in the US, renewables have actually grown a lot, but the reason the overall number is flat is that we're using renewables to replace coal and natural gas, fossil fuels.
And so when you net it out in the U.S., we're flat.
And then in China, it's straight up.
So that's the first thing.
Like we need drastic action.
You know, the administration has the National Energy Dominance Council.
We've sat down with them a few times.
We have to take drastic action to at least start matching their speed of adding energy to the grid, and ideally surpass it.
That's like, that's the first thing.
The second thing, like you're talking about, is our grid is extremely antiquated.
And that's a major strategic risk.
You know, I don't know what the cause or the source of the outage across Spain was, but some people think it was a foreign actor or some kind of cyber attack.
I guarantee you the U.S.
energy grid is extremely susceptible to large-scale cyber attacks.
And the sophistication of these cyber attacks is sometimes so stupid. If you find the right power plant login terminal, sometimes people haven't changed the username and password from the defaults, which are literally "username" and "password."
So you can find some power station in, like, Wyoming where the username and password are still the defaults, log in, and shut down power for the entire region.
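As a purely defensive illustration of how low that bar is, here's a minimal sketch of an inventory audit that flags devices still on factory-default credentials. The device list and default pairs are made up; a real audit would check against vendor-specific default lists and would never store plaintext passwords like this.

```python
# Minimal defensive sketch: flag devices still using factory-default credentials.
# The inventory and the default username/password pairs are hypothetical.
DEFAULT_CREDS = {("admin", "admin"), ("username", "password"), ("root", "root")}

inventory = [
    {"host": "substation-wy-01", "user": "username", "password": "password"},
    {"host": "substation-tx-02", "user": "ops_team", "password": "x9!kQ2"},
]

for device in inventory:
    if (device["user"], device["password"]) in DEFAULT_CREDS:
        print(f"ALERT: {device['host']} still uses default credentials")
```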
So our grid, just because of how antiquated it is, how decentralized it is, all of that, is hyper-susceptible to cyber attacks, hyper-susceptible to foreign activity.
And
that matters now.
Like, right now, if you take out the energy grid in a major city, people will die.
So, it's like, it's bad now.
But then let's go back to what we were just talking about with AI.
Like, let's say we have large-scale AI-on-AI warfare with China.
They just take out the power grid, take out our data centers and the power fueling those data centers, and then we're sitting ducks.
I mean, not only that, but it's my understanding
that China
actually produces and manufactures a lot of the major components that go into our grid, like the transformers.
To my understanding, we don't even check those for malware, Trojan horses, shit like that.
In fact, the DOE actually did an inspection on one and never even released the results of what they found, which probably means they found some shit.
And
I mean, I just, I don't know
how we combat that.
I mean, where has this happened elsewhere?
Like, look at Salt Typhoon.
Like, this was a recent hack that was declassified, which is that Chinese malware and cyber activity like basically had fully infiltrated our
major telecom providers.
I think AT&T was entirely compromised by this hack called Salt Typhoon from the CCP.
And
they did that so they could read all the messages, all the SMS, all the audio they were able to capture, as part of an intel-gathering operation.
But if they're able to hack into our telcos, they're sure as hell capable of hacking into our energy grid, clearly capable of hacking into any of our other critical infrastructure.
And
it just goes back to what we're talking about.
Like the energy grid,
A, if we can't produce enough power, we're hosed.
And B, if the adversaries can take out our power at will, we're hosed.
And so we have this major, major vulnerability as a country in the cyber posture of our energy grid. I think it's one of the biggest, most obvious, flat-out clear vulnerabilities of our entire country.
First, you create civil unrest: imagine you took Houston's power grid out, people would die and you'd cause all sorts of chaos. But then you take out these data centers, you take out military bases, you take out radar systems, you name it. You can take out almost any piece of homeland infrastructure, and that creates huge strategic openings for your adversaries.
I mean, what...
You have to run in these circles.
I mean, you're building massive data centers, correct?
And so
when you go to DC and you're advocating, hey, we need more power, what's the organization you met with?
The National Energy Dominance Council.
What do they say?
They totally agree.
I mean, they know we have to build more power.
And then you get to the next layer of detail. It's like, okay, how do we accelerate nuclear?
How do we accelerate the permitting process?
What are existing power generation capabilities that we turned off that we can turn back on?
Like you go through all the natural things to do.
I mean, I think we know what to do.
The question is whether we can get out of our own way.
And then if our grid is so antiquated, that vulnerability kind of means we can be taken out at any time.
I mean, I may have made an assumption.
Are you building data centers?
We ourselves are not building data centers.
Feeding the data centers.
We partner with companies that, yeah, are building the largest data centers in the world.
Okay.
And so I've also heard rumors that these major data centers are starting to just create their own power source.
Is there any validity to that?
Yeah, so a lot of designs these days involve an SMR, a small modular reactor, per data center. Can you basically have a nuclear reactor co-located with the data center to power that data center's capacity?
Which I think is a good idea.
The issue is like, I mean, China's going to be way ahead of us on that.
The largest nuclear power plant in the world is in China.
So, yeah, obviously we need to lean into nuclear.
That needs to happen.
Obviously, we need to lean into all power generation sources.
We need kind of an all-of-the-above approach to power generation.
But even that doesn't get us to a posture where you're confidently exceeding China.
You're just kind of catching up to where they are.
And so,
I mean, this is a huge, a huge issue.
Yeah.
Let's take a quick break.
When we come back, I want to dive more into China's capabilities and our capabilities.
If you ever searched yourself online, you'd be shocked at how much of your personal data is out there.
Phone numbers, home addresses, info about your family.
It's all on the open web, all without your permission.
That's why I'm so glad I found Aura.
Aura does all the heavy lifting, automatically removing your info from data broker sites and helping you take back your privacy and peace of mind.
And it's not just data broker removal.
Aura also offers a password manager to lock down your accounts, fraud alerts, and more.
Your data is already out there.
The question is, what are you going to do about it?
For a limited time, Aura is offering our listeners a 14-day trial when you visit aura.com slash SRS.
That's enough time for Aura to start scrubbing your personal info off these data broker sites.
That's aura.com slash SRS to sign up for a 14-day free trial and start protecting you and your loved ones.
That's A-U-R-A.com slash SRS.
Certain terms apply, so be sure to check the site for details.
Summer's here, and if you're anything like me, you didn't spend the winter just sitting around.
You stayed sharp and kept moving.
And now it's time your gear caught up.
And that's why I want to introduce you to Roka.
I've been looking for eyewear that can handle any situation with performance and style.
And let me tell you, these aren't your average shades.
I've tested them in the real world from shooting to fishing to off-roading, and they hold up.
They're lightweight, don't slide around on my face, and can take a hit without falling apart.
And the best part, they look good.
They're clean and modern, no frills here, just premium eyewear that performs without compromise.
That's something that I respect and that's also why every time I head out the door, I reach for my Roka shades.
Roka is based in Austin, Texas, American designed, no cut corners.
The optics are crystal clear, cut through glare, and the fit stays comfortable all day long.
Need a prescription?
They've got you covered with both sunglasses and eyeglasses.
Not only does Roka have awesome shades, they also have these that protect you against blue light.
I wear these every night when I'm winding down for the day and I still got to look at my phone or my laptop or my iPad.
It just helps you wind down and get ready for bed.
They are a one-stop shop for eyewear that's built to handle whatever life throws at you.
Roka, R-O-K-A, is the real deal.
Ready to upgrade your eyewear?
Check them out for yourself at Roka.com and use code SRS for 20% off site-wide at checkout.
That's R-O-K-A.com.
All right, Alex, we're back from the break.
We're getting ready to discuss some of our capabilities versus China's capabilities.
And, you know,
we just got done kind of talking about power.
Is China leading the U.S.
in any other realms when it comes to the AI race?
I mean, Xi Jinping has said himself, you know, the winner of the AI race will achieve global domination.
Yeah, well, the first thing to understand, as you're mentioning, is that China has been operating against an AI master plan since 2018.
The CCP put out a broad whole of government
civil military fusion plan to win on AI.
Like you're mentioning, Xi Jinping himself has spoken about how
AI is going to define the future winners of this global competition.
From a military standpoint, they say explicitly, hey, we believe AI is a leapfrog technology, meaning even though our military is worse than America's today, if we overinvest in AI and have a more AI-enabled military than theirs, we can leapfrog them.
So
they've been super invested.
Right now, I think the best way to kind of paint the current situation is they
are way ahead on power and power generation.
They're behind on chips, but catching up on chips.
They
are ahead of us on data.
China has had, again, since 2018, a large-scale operation to dominate on data.
And in 2023, I think, there were over 2 million people in China working inside data factories, basically as data labelers or annotators, creating data to feed into AI systems.
I think that number in the US, by comparison, is something like 100,000.
So they're outspending us 12 to 1 on data.
They have over seven cities, full cities in China that are dedicated data hubs that are basically powering
this broad approach to data dominance.
And then on algorithms, I think
they are on par with us
because of large-scale espionage.
And this is, I think, one of these open secrets in the tech industry: Chinese intelligence basically steals all the IP and technological secrets from the United States.
There are a bunch of very concerning reports here.
So, one is there was a Google engineer who took the designs and all the IP of how Google designed their AI chips, moved to China, and started a company using those designs.
By the way, it was this guy, Leon Ding, I think. And the way he stole the data out of Google's corporate cloud was so stupid. He just took all the code, copy-pasted it into Apple Notes, the Notes app, exported it to a PDF, printed it, and walked out with it.
That's it.
That's it.
So this was later discovered. We found out it happened, but for months we had no idea they'd stolen all this critical IP.
Stanford University, this just came out last week, is entirely infiltrated by CCP operatives.
A few crazy facts. First, by law in China, any Chinese citizen must comply with CCP intelligence-gathering operations.
So if you're a Chinese citizen, you're living in the United States and the intelligence agencies in China reach out to you, you have to comply with them.
And so you have to give them what you're seeing, what you're finding, et cetera.
So that, and there's tons of Chinese nationals, Chinese citizens
across all the major elite universities, across all the major tech companies, across all the major AI labs, like they're everywhere.
The second thing that's crazy is that about a sixth of Chinese students, Chinese citizens who are students in America, are on scholarships sponsored by the CCP itself.
And for those on these scholarships, they have to report back to a handler basically.
What are the things they find?
What are the things they're learning?
Otherwise, their scholarships get revoked.
So there's an incredibly large-scale intelligence operation running against the U.S. tech industry, collecting the information and technological secrets of our greatest research institutions, our universities, our AI labs, our tech companies, at massive scale.
And honestly, I think this is a very underrated element of how China caught up so quickly.
So,
you know, DeepSeek came out of nowhere.
Everyone was so surprised at how capable their model was and how they learned all these tricks.
How much of that is because they came up with all of it on their own, versus having an exquisite, high-end espionage operation to steal our trade secrets from the United States and re-implement them back in China?
What does our espionage look like?
I think nowhere close to as good.
I mean, one thing the CCP did for DeepSeek, the DeepSeek lab, is after DeepSeek blew up and the CEO of DeepSeek met with the Chinese premier, they, I shouldn't say locked up, but they huddled all the researchers together and took all their passports. So none of the AI researchers who work at DeepSeek are able to leave the country at all. And they don't come into contact with any foreigners. So they basically locked down the entire research effort, which makes it very, very hard to conduct any sort of espionage into that operation.
And then there's that report, this is all in the news, but a decade or 15 years ago, many of the CIA operatives, U.S. operatives in China, were killed. They were compromised because one of the communication channels they were using was compromised by Chinese intelligence, and the CCP was able to effectively round a lot of them up and kill them. So their espionage on us is extremely deep. Huge risk. We're deeply, deeply penetrated by Chinese intel. And comparatively, as far as I know, we have much less capability, and I think they've designed it such that it's very hard to infiltrate their AI efforts.
Geez. So they're ahead of us on data, and they're able to catch up through espionage on algorithms pretty easily.
They're ahead of us on power.
So what are we ahead at?
Well, right now we're ahead in chips.
And that's kind of our saving grace: the NVIDIA chips and the entire stack there are the envy of the world.
And we're the most advanced on these chips.
Chinese chips are also catching up.
There are a bunch of recent reports that Huawei chips are getting to be basically one generation behind the NVIDIA chips.
So they're close.
They're close.
So all of this is
pretty concerning.
There was another report out of CSIS recently about a Chinese effort called something like the Next Generation Brain Understanding Project, where they're basically trying to use AI to fully understand the human personality and human psychological behaviors.
I imagine that's ultimately for effectively like information warfare.
As we were talking about at breakfast, China has large-scale information operations, large-scale information warfare, and has been doing that for literally decades, going back all the way to in-person operations in Hong Kong.
Like they're so sophisticated at all that, and AI is going to enable them to just move much faster as well.
How do we combat that?
Well, I mean, I think we need our own information operations efforts.
Like, I think that's pretty critical.
That's specifically on that thread.
And then I think we need to acknowledge that, at the end of the day, we are a more innovative country, but we have to dramatically get our shit together if we want to win long-term in AI. We need to onshore chip manufacturing. We need to be manufacturing huge numbers of chips.
We can't be dependent on Taiwan to manufacture our high-end chips.
Are we doing that yet at any capacity?
Extremely small capacity.
Like, there are a few fabs in Arizona that can produce some chips,
but the vast majority of the volume still comes out of Taiwan.
We need to tighten up security in our AI companies dramatically. We need proper counterintelligence on what the espionage risk is within these companies. We need to solve the power problem we talked about. We need to be investing against the cyber threats, investing in large-scale cyber defense. And we need to invest in data; we need our own programs around data dominance to ensure that China doesn't just run away with higher-quality and larger AI datasets than ours.
So you can go through each of the elements and build the proper plan for the United States to win.
But
have we started any of that?
I mean, I think some things are underway, but
I mean,
not enough, nowhere close to enough
to be sure that the U.S.
will win?
Definitely not.
And they also have a fundamental advantage.
You know,
one of the things that people say a lot now is like, oh, like what we need in the United States is an AI Manhattan project where we like, you know, we collect all the brilliant minds together, we collect our resources, and we have one large effort in the United States.
Well,
it turns out like it's actually really hard to pull that off in the United States, but China can pull it off super easily.
China can just say, hey, all the best AI people, you now work in one company.
We're going to pool together all of your resources.
We're going to put you right next to the largest nuclear power plant in the world.
We're going to build the largest data center in the world here.
All the chips that China has are going to go towards building this large-scale AI project.
And they just have the ability to collect all of their resources together and throw it at
winning on the AI race.
Whereas in the United States, we have all these companies. And the United States government, as of yet, is not going to force all these companies to combine and merge. Today, that would be viewed as such an overreach of government power.
But because of that, we're going to have like, you know, five fragmented AI efforts.
And maybe in aggregate, we'll have way more chips.
And in aggregate, we'll have more power.
And in aggregate, we'll have, you know, more great researchers.
But we're not going to be able to focus those efforts, whereas China is easily going to be able to focus all their efforts.
Wow.
You had mentioned something downstairs about nuclear weapons, I believe.
Yeah.
Yeah, so
this is where
stuff gets really weird for
national security, which is
you could clearly imagine scenarios where
advanced, very advanced cyber AI
invalidates nuclear deterrence.
What do I mean by this?
Right now,
you know, nobody fires nukes because we have MAD, we have mutually assured destruction.
And if I do a first strike against another country, they're going to be able to, while that nuke is in the air, do a second strike, and we'll both, you know, there will be destruction on both sides.
It'll be really bad.
So, because of this second-strike capability, luckily, we have real deterrence.
Well, what if instead,
let's say I'm, you know, the United States and I have the most advanced AI cyber hacking capabilities in the world.
So I can build AI agents that can hack into any other country, turn off their energy grid, disable their weapon systems, disable everything.
So what do I do instead? First, I send in my cyber AI force, effectively, to disable all the weapon systems of the enemy country. And because I have so much AI capacity, I can disable all of your weapon systems. And then I send my first strike.
And then you don't have a second strike capability.
So if that happens, basically, with the combination of AI and nuclear, you cannot deter AI plus nuclear with just nuclear. That's what will force this proliferation of AI capabilities.
And so, even small countries are going to need to invest in lots of AI capabilities because their nuclear weapons are no longer a sufficient deterrent.
Jeez.
What about bioweapons?
Yeah, this is the element that is really underrated right now.
So COVID leaked out of a virology lab in Wuhan and basically shut the world down for two years.
And that's the level one bio-risk kind of stuff. This was a relatively innocuous pathogen, let's say, but it still killed probably at least 10 million people globally and still shut the whole world down for two years.
Well, recent models, new models, the new AI models are able to outperform 95%
of MIT virologists.
So the newest models from OpenAI and Google are smarter than literally 95%
of virologists at MIT, based on a recent study by the Center for AI Safety.
So now, whether it's right now or in a few years,
it will be feasible to use AI-based capabilities to help you design
powerful pathogens.
And what's more than that, you're going to be able to design in certain characteristics of these pathogens.
You know, you'll be able to tune the virality, tune the lethality of them.
Also, due to recent advancements in synthetic biology, you can now create viruses that specifically target certain segments of DNA.
So
I could create a bioweapon that just targeted
any individual with a certain segment of DNA, which means I can target basically like any population or any group or any sub-segment of the population in the world,
which is really, really bad.
And so, first, even without AI, synthetic biology is making so much progress that there's all sorts of inherent risk of bioweaponry or leaks of pathogens and viruses. And then with AI, not literally today's models, but a few generations down, you're going to be able to use these AI systems to design or build next-generation pathogens.
So that's an entire category. For good reason, there are international treaties such that we don't engage in biological warfare.
But if you imagine scenarios where nuclear deterrence doesn't work for countries, and they don't have the resources for large-scale AI data centers, I'm worried those countries will turn to bioweapons as their deterrence mechanism, which is highly destabilizing for the world.
Wow.
That's some scary shit.
The flip side is there's new technology that can also prevent this stuff. There's research coming out of a lab in Seattle, David Baker's lab, this guy who just won a Nobel Prize, on biological noses, or digital noses, sorry.
Which is basically these devices that can detect proteins or chemicals or pathogens in the air automatically. And I think the real offense-defense of bio and bioweaponry will end up looking like large-scale deployment of digital noses, effectively, in every space, on every shipping container, on every plane, constantly sensing for all existing known pathogens and any new pathogens that might emerge, and constantly detecting and ultimately containing them.
Interesting.
It's sniffing real time for all of that shit.
Yeah, exactly.
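As a loose software analogy for what he's describing, a deployed sensor network would continuously compare air-sample readings against a library of known pathogen signatures and flag anything it doesn't recognize. This sketch invents the signature format and the library entirely; real detection would work on raw protein-binding measurements, not string labels.

```python
# Toy sketch of a "digital nose" monitoring loop: compare each air sample's
# signature against known pathogens and escalate anything unknown.
# Signature strings and the pathogen library are entirely invented.
KNOWN_PATHOGENS = {
    "sig-influenza-a": "Influenza A",
    "sig-sars-cov-2": "SARS-CoV-2",
}

def classify(sample_signature: str) -> str:
    """Map one air sample to a containment action."""
    if sample_signature in KNOWN_PATHOGENS:
        name = KNOWN_PATHOGENS[sample_signature]
        return f"KNOWN pathogen {name}: begin containment protocol"
    return "UNKNOWN signature: escalate for lab analysis"

for sample in ["sig-influenza-a", "sig-novel-7f2"]:
    print(classify(sample))
```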
I mean, also on the flip side, I guess if AI is developing a new bioweapon, COVID 2 comes out, we'll just call it, then our AI should also be able to figure out the vaccine, the antidote to it, correct?
Yeah, totally.
So there will be an offense-defense element, just as we were walking through: AI applied to command and control has an offense-defense element. AI applied to cyber has an offense-defense element. AI applied to bio and bioweaponry will have an offense-defense element.
So for all of these, thankfully, the hope is that the world agrees we're basically not going to go down any of these paths, because there's mutual deterrence and it's not worth it for anybody to destabilize and risk humanity like that.
That's basically where we need to land.
Wow.
How concerned are you about China-Taiwan?
I mean, we were talking about this a little bit at breakfast, and
I can't believe they have not made a move yet.
I mean, I thought for sure it would happen towards the end of the last administration.
But
with their chip production capabilities, I mean, how concerned are you about China taking Taiwan?
I think
if it's going to happen, it's going to happen this decade, and it's probably going to happen this administration.
And why do you say that?
I mean, China, at a macro sense, has huge demographic issues. That's just the force of gravity in their country: they have this huge aging population. They made the wrong bet many decades ago with the one-child policy, and so they're going to have this huge aging population. And that plays out really quite soon.
A decade from now, they're going to look more and more like Japan in that way, with a large aging population, and it'll paralyze a lot of their ability to make any sort of aggressive moves.
Particularly when it comes to military and industrial capacity, et cetera.
So, that's like one force of gravity that they have to contend with.
And so I think they're going to want to move sooner rather than later.
And they've had such an insane military buildup over the past few decades. And we're currently in a situation where China has far more industrial capacity, far more manufacturing capacity, than we do in the United States. So that's a window for them.
So, do you think they'll do it?
They're pressed to do it because of the aging population?
I think a lot of factors.
I think Xi is aging, right?
This will be an important component of his legacy,
as he would view it, I think.
They have an aging population, which will minimize their political latitude over time, naturally.
And then they have,
I mean, they're in this insane window where they have just incredible industrial manufacturing capabilities compared to anywhere else in the world.
You know, in 2023,
China deployed more industrial robots than the rest of the world combined.
I mean, we were talking a little bit about automated factories and automated industrials. They're racing ahead there faster than any other country in the world.
And so you can look at all these dimensions and this window, and if they're going to do it, they're going to do it soon.
Yeah.
Yeah.
I mean, what percentage of the chips that we use come from Taiwan?
I mean, 95%
of the high-end chips are manufactured in Taiwan.
And so what happens if China takes Taiwan?
So, yeah, wargaming out.
So we were talking a little bit about this.
So
let's say China blockades or invades Taiwan.
Then
there's a question.
So these fabs are incredibly, incredibly valuable.
Because as we were just describing, if you believe in the pace of AI progress and AI technology, then
everything boils down to how much power you've got, how many chips you've got.
And if they own 95% of the world's chip manufacturing capability, I mean, they're going to run away with it.
So then you look at that and you say,
will the Taiwanese people bomb the TSMC data centers?
And or will the US bomb the TSMC data centers and or will some other country bomb the data centers?
Or sorry, not the data, the fabs, the TSMC chip fabs.
My personal belief: I don't think the Taiwanese do it, because even if they get blockaded or invaded, those fabs are still a huge component of Taiwan's survivability and Taiwan's relevance as an entity.
So I don't think they do it.
China definitely doesn't do it because they
obviously are invading partially to get, you know, to gain those capabilities.
And so then, does the U.S. bomb them? If the U.S. bombs them, that's probably World War III.
It's hard to imagine that not just resulting in massive escalation.
And so you're looking at it, and
there's kind of no good options.
So, I mean, everyone's very focused on it, obviously, but it is a real powder keg of a region.
How do you think this all ends?
We had a little discussion about this at breakfast.
Yeah, yeah.
I mean, I think, let's assume that in the next handful of years, the next three or four years, there's an invasion or blockade of Taiwan. Given how important AI is, it's hard for the U.S. not to take some sort of action in that scenario. And almost all of those actions you'd see escalating into a major, major conflict.
So
the best case scenario is we deter the
invasion or blockade altogether.
And
I think, you know,
I think it certainly is in everyone's interest to not get into a large-scale world war that's hugely destructive and kills lots of people.
So, I think, like, fundamentally, we should be able to deter that conflict.
But
that's why all this matters so much.
We need to make sure our AI capabilities as a country are the best in the world.
We need to make sure that our military AI capabilities are the best in the world.
We need to make sure that, you know,
there's clear
economic deterrence of this kind of scenario.
Like,
we need to be investing in every way to deter this conflict. Where this really breaks down is if the CCP's calculus diverges from our own, if their calculus becomes,
oh, no, this is going to work out, you know, we can take this and then, you know, we're strong enough such that it'll work out for us.
And then our calculus is the opposite.
That's where, that's where the World War scenario happens.
So I think it's possible to deter. And there's a lot we have to do to make sure we deter that conflict. And that should be, I mean, I think it already is, like 80% of the focus of the entire DOD.
We can deter, but I mean, what you're talking about, an aging population, I mean, they're getting desperate.
And
it sounds like, in order for them to legitimately win, they have to acquire those chip fabs, correct?
And so
they already have 250 times the shipbuilding capacity.
They have way more people.
They have more power than we do.
I mean,
military recruitment in the U.S.
was at an all-time low.
I don't know what it is today.
But...
I mean, even if it...
So
I guess what I'm saying is
you can only deter a desperate entity for so long before they throw a Hail Mary play.
Right?
Would you agree with that?
Yeah, and then it just depends on the...
You would have to dedicate an entire military to surround Taiwan
to effectively do that, in my opinion.
Yeah, I mean, if the CCP and the PLA assess that Taiwan is all they're after, that they will focus their entire military capacity on seizing Taiwan, then that becomes a really tricky calculus.
I mean, why wouldn't they?
If
Xi believes that the winner of the AI race achieves global domination,
he's getting older.
You just talked about how important his legacy is to him, which I'm sure you're right.
I don't know how you deter that.
And then they win the AI race.
Yeah, the only thing we can do, and I think this is a long shot, but I think it's important, is if ultimately we actually end up collaborating on AI.
And I know that sounds kind of crazy, but
if we're able as a country to demonstrate that we're just so far ahead. There's one key element of how the whole AI thing plays out, which
is this idea of AI self-improvement or intelligence recursion, sometimes people call it.
But basically,
once AIs get sufficiently good, then you can start utilizing the AIs to help you build the next AI.
As sci-fi as that sounds, you utilize your current generation AI to build the next generation AI, faster and faster and faster.
And so at some point, there's some form of exponential takeoff. Your AI capabilities get good really, really quickly.
And if somebody's even three to six months behind you, then
they're never going to catch up to you because you're running the self-improvement loop
faster than anybody else.
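A toy model of why a small lead could compound under self-improvement: if each generation's progress scales with current capability, growth is exponential, and the absolute gap between leader and follower widens every cycle instead of closing. Every number below is invented purely for illustration.

```python
# Toy model of intelligence recursion: each generation of AI speeds up
# building the next one, so progress compounds. All numbers are invented.
def takeoff(initial_capability: float, steps: int, gain: float = 0.5) -> float:
    capability = initial_capability
    for _ in range(steps):
        capability += gain * capability  # current AI accelerates the next AI
    return capability

leader = takeoff(1.10, steps=10)   # starts with a small (say 3-6 month) edge
follower = takeoff(1.00, steps=10)
print(f"leader {leader:.1f} vs follower {follower:.1f}, "
      f"gap {leader - follower:.1f}")  # gap grows from 0.1 to ~5.8
```

Under these assumptions the follower stays permanently behind, and the capability gap grows every generation, which is the "never catch up" intuition.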
And so
this is a key idea.
I mean, I think
it's a little bit theoretical right now.
Like, it's not clear whether or not this intelligence recursion is going to be how it plays out, but a lot of people in AI believe it.
And I probably believe it too, that we will be able to use AIs to help us continue training the next AIs and improve things more quickly.
And if you believe that,
then if we're, let's say,
three to six months ahead of China and we maintain that advantage and we take off faster, then they're going to be way behind.
And then ultimately, we're going to be in a great position to say, hey, actually, we're way ahead, you guys should quit your efforts.
We'll give you AI for all of your economic and
humanitarian uses throughout your society, and we agree we're not going to battle on military AI.
What would it take
to take the chip building capabilities that Taiwan has and implement that here in the U.S.
to protect it?
So, yeah. The first thing is there's been hundreds of billions of dollars invested just into the build-out of those fabs, they're called foundries, these large-scale chip factories, effectively, and all the high-end equipment and tooling inside of them. Hundreds of billions of dollars of investment. So first off, there needs to be hundreds of billions of dollars of investment in the US. That's not the hard part. The second part, and this is really the hard part, is that it's basically a large-scale factory operated by highly, highly skilled workers who are very experienced in those processes, and the whole thing operates like clockwork. Unless you can get those people to the US, you're going to have to rebuild all that know-how and all that technical capability. And that's what takes a really long time.
So why do you think we haven't done that?
Why do you think we have not incentivized these brilliant minds to come here and do it for us?
So TSMC, Taiwan Semiconductor, the company that builds these fabs, has stood up a few fabs in Arizona.
But they cited issues. First there were issues around permitting and getting enough power, and they dealt with some EPA issues. And then they just have issues where the technicians working in Arizona aren't as skilled or don't work as hard as those working in Taiwan. So they've built a few fabs in the United States, they've tried to do it, but our red tape and our power are not what they need to be for this.
Red tape, power, workforce.
And then there's another key thing, which is
if you look at it from Taiwan Semiconductor, from TSMC's perspective, they're not all that incentivized to stand up all these capabilities in the United States.
Because as soon as they stand up all these capabilities in the United States, the United States is no longer incentivized to defend Taiwan.
Yeah.
And it's a Taiwanese company, and it's a critical part of their survival strategy.
So that's really where the rubber hits the road:
are they actually incentivized to do a large-scale build-out of
chip manufacturing capacity in the United States?
I think the answer is like, no.
Makes sense.
I mean, there would have to be
some type of a deal struck where
they fall under our wing.
Yeah, I mean, you could imagine some kind of deal between the U.S. and China. It'd have to be a diplomatic deal at the highest levels, something along the lines of, hey, China can have Taiwan, but we need large-scale fabs, large-scale chip manufacturing, in the United States, or something like that.
And maybe there are worlds where that kind of deal could get drawn up.
I don't know.
But that would also mean the United States would just have to say, hey, all we actually care about at this point is chip manufacturing, and we don't actually care about the Taiwanese people and the country and all that stuff.
Man.
Man.
And are they working with China at all?
The TSMC.
Yeah.
So, I think they're technically not supposed to, but Huawei, one of the leading companies in China, has been able to get tons of chips, tons of dies, they're called, but basically tons of chips or chip prerequisites, from Taiwan. And they usually do it by starting some cutout company that doesn't seem associated with them in, like, Singapore or Malaysia. And then the Singaporean or Malaysian company buys a bunch of chips from TSMC and mails them back, or something.
But there's clearly been a lot of TSMC high-end output that has gone to the Chinese companies.
Wow.
Wow.
Scary shit, man.
I mean, if you look at the situation and all of the dynamics at play right now, it's a powder keg. It's very, very volatile, highly problematic in many ways. And this is where you just ultimately have to believe that there's got to be some effort towards diplomatic solutions.
Yeah.
Because it is definitely true.
Like war will be really bad for both sides.
Yeah.
Yeah.
How do we coordinate with China on AI?
Yeah.
So
what does that look like?
So yeah, right now.
Right now, we're definitely,
U.S.
and China,
we're definitely in an all-out
race dynamic.
And, you know, we're going to race, and I think this is correct, we're going to race to build the best AI systems.
They're going to race to build the best AI systems.
And we're both all in on this approach.
And we're both all in on racing towards building the most advanced AI capabilities, the largest data centers, the largest capacity, et cetera, et cetera.
And this is, if you recall, kind of how nuclear was. With nuclear, nuclear weapons as well as the application of nuclear to power production, it was all systems go, everyone racing towards building capacity, building capability.
And then Chernobyl and Three Mile Island happened, and that created large-scale consternation around the technology and its risks.
And
there were a bunch of international treaties and a large international response toward coordinating on nuclear technology.
Now, all said and done, if you look at nuclear, that set our country back, set many countries back, generations in terms of power generation. But what it took was effectively these small-scale disasters that were the forcing function for international cooperation.
You can imagine a scenario with AI where, because of all the things we've been talking about, some terrorist group or non-state actor or North Korea or whomever decides to use it in a particularly adversarial or inhumane way, and that disaster has some large-scale fallout. You take out power in one of the largest cities in the world and tons of people die. Or some pathogen gets released and tens of millions of people die. One of these things happens that causes the international community and everyone in the world to realize, oh shoot, we have to be coordinating on this.
And
we should be collaborating for AI to improve our societies and improve our economies and improve the lives of our people.
But we need to coordinate on its use toward, for lack of a better term, scary things like bio or cyber warfare, and the list goes on.
So, long story short, I think the path really is some kind of, sometimes you talk about it as an AI oil spill, some kind of incident that really causes the international community to realize, hey, we have to start coordinating on this.
I mean, you say China's gone all in on the AI race, and the US has gone all in on the race too, but we're kneecapping ourselves. I mean, you just mentioned the red tape, the EPA, the permitting, and the power. And we're not producing more power. We're flatlined. We've established that. As far as I know, we're not getting rid of the red tape to launch this.
And
it just seems like...
We're cutting ourselves off at the knees here.
Right now, I mean,
we have a lot of work to do for sure.
We have to build strategies for energy dominance, for data dominance. On the algorithms, I think we'll be okay. They'll keep up through espionage, but I think we'll be okay on algorithms.
We need to ensure we have chip dominance long term.
We need to make sure all this lends itself to military dominance.
I totally agree with you.
I mean, we need to
today
ensure that we have the proper strategies in place so that we stay ahead on all these areas.
The worst case scenario for the United States is the following, which is
the CCP does a large-scale Manhattan-style project inside their country and, because of all the factors we've talked about, realizes they can start overtaking the U.S. on AI. That lends itself to extreme military advantage, and they use that to take over the world.
That's like worst case scenario for the US.
If US and China AI capabilities are even just roughly on par, I think you have deterrence.
I don't think either country will take the risk.
I think if US is way ahead of China, I think you maintain U.S.
leadership.
And that's a pretty safe world.
So the worst case scenario is they get ahead of us.
Are there any other players other than the US and China involved in this?
Who else do we need to be watching out for?
So,
yeah, right now, definitely US and China.
A lot of other countries will matter,
but not all of them have enough ingredients to really properly be AI superpowers.
But other countries do have key ingredients.
So
to name a few, A,
everything we've talked about with cyber warfare and information warfare, information operations,
Russia has very advanced operations in those areas.
And
that could end up mattering a lot if they ally with the CCP.
There are a lot of ways they could team up, and that could be pretty bad.
The countries in the Middle East will be very important because they have incredible amounts of capital and lots of energy. They're critical players in how all this plays out.
India matters a lot. India has a lot of high-end technical talent. I don't know, between India and China right now, which has more high-end technical talent, but there's a lot in India for sure.
Massive population
also starting to industrialize in a real way
and
right next to China.
So India will matter a lot.
And then there's a lot of technical talent in Europe as well. I think it's unclear exactly how this plays out with the European capabilities. It seems like there are some efforts now for Europe to try to build up large-scale power, build up large data centers, make a play. It's yet to be seen how effective those efforts will be, but you can clearly see scenarios where, if they make a hard turn and go all in, they could be relevant as well.
Is there a world where AI takes on a mind of its own?
So, obviously you can hypothetically paint the scenario where you have superintelligence, really powerful AI, and at some point it realizes humans are kind of annoying and takes us all out. But I think that's a very preventable outcome.
Because,
first of all, all the things we just talked about are the very real things that happen long before you have this hyper-advanced AI that takes everyone out.
That's the first thing.
So we have lots of things we have to get right before then.
And then second, for AI to actually be capable of having a mind of its own and taking all humans out, we'd have to give it incredible amounts of control. It would have to basically be running everything, and we're just along for the ride.
And
that's a choice.
We have this choice of whether or not to give all of our control to AI systems.
And as I was talking about before with like human sovereignty,
my belief is we should not cede control of our most critical systems.
We should design all the systems such that human decision-making, human control is really, really important.
Human oversight is really important.
This is actually one of the things we're working on as a company. Honestly, as I think about long-term missions, one of the most important things is preserving human sovereignty.
So first is: how do we make sure all the data that goes into these AI models increases human sovereignty, such that the models do what we tell them and are aligned with humans and our objectives?
And two is that we create oversight.
So as AI starts taking more actions, doing more planning, carrying out more things in the world, in the economy, in the military, et cetera, humans are watching and supervising every one of those actions.
So
that's how we maintain control.
And that's how we prevent the Terminator scenarios, the AI-takes-us-out kind of scenarios.
Interesting.
Well, Alex, wrapping up the interview here, but man, what a fascinating discussion.
Thank you.
Thank you for being here.
One last question.
If you had three guests you'd like to see on the show, who would it be?
Ooh, that's a good question.
Who would I like to see?
Well, I really like what you've been doing recently, which is getting more tech folks on the pod.
So I'd go in that direction.
I mean, I think Elon would be great to see on the show.
I think, we were talking about this, Zuck would be cool to see on the show.
I think
Sam Altman would be cool to see on the show.
So definitely like
more people in tech.
Outside of that, and we were talking about some of this, international leadership, leaders of other countries, would be super important, because in all these scenarios we talk about, international cooperation is going to matter so much.
We'll reach out to them. And yeah, as far as world leaders are concerned, we're on it.
But
well, Alex, thanks again for coming, man.
Fascinating discussion.
I'm just super happy to see all the success that you've amassed in your 28 years. I love seeing it.
So, thank you for being here.
I know you're a busy guy.
Yeah, thanks for having me.
It was fun.
I am Michael Rosenbaum.
I am Tom Welling.
Welcome to Talkville, where it's fun to talk about Smallville.
We're going to be talking to sometimes guest stars.
Are you liking the direction Lois is going in?
Yeah, because I'm getting more screen time.
That's good.
But mostly it's just me and Tom remembering.
I think we all feel like there was a scene missing here.
You got me, Tom.
Let's revisit it.
Let's look at it.
See what we remember.
See what we remember.
I had never been around anything like that before.
I mean, it was so fun.
Talkville.
Talkville.
I just had a flashback.
Follow and listen on your favorite platform.
Let's get into it.