#2311 - Jeremie & Edouard Harris
https://superintelligence.gladstone.ai/
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Press play and read along
Transcript
Speaker 0 Joe Rogan podcast, check it out!
Speaker 1 The Joe Rogan experience. Train by day, Joe Rogan podcast by night, all day.
Speaker 1 All right, so if there's a doomsday clock for AI,
Speaker 1 and we're fucked,
Speaker 1
what time is it? If midnight is, we're fucked. We're getting right into it.
You're not even going to ask us what we had for breakfast. No, no, no, no, no, no, no, no.
Speaker 1 Jesus. Okay, let's get freaked out.
Speaker 1 Well, okay, so there's one, without speaking to like the fucking tombs day dimension right at the end, there's a question about like where are we at in terms of AI capabilities right now and what do those timelines look like?
Speaker 1
Right. There's a bunch of disagreement.
One of the most concrete pieces of evidence that we have recently came out of a lab, an AI kind of evaluation lab called Meter. And they put together this test.
Speaker 1 Basically, it's like, you ask the question,
Speaker 1
pick a task that takes a certain amount of time, like an hour. It It takes like a human a certain amount of time.
And then see how likely the best AI system is to solve for that task.
Speaker 1
Then try a longer task. See like a 10-hour task.
Can it do that one?
Speaker 1 And so right now what they're finding is when it comes to AI research itself, so basically like automate the work of an AI researcher, you're hitting 50% success rates for these AI systems for tasks that take an hour long.
Speaker 1 And that is doubling every, right now it's like every four months. So like you had tasks that you could do, you know, a person does in five minutes, like, you know,
Speaker 1 ordering an Uber Eats or like something that takes like 15 minutes, like maybe booking a flight or something like that. And it's a question of like, how much can these AI agents do, right?
Speaker 1 Like, from five minutes to 15 minutes to 30 minutes, and in some of these spaces, like research, software engineering.
Speaker 1 And it's getting further and further and further, and doubling, it looks like, every four months.
Speaker 1 If you extrapolate that, you basically get to tasks that take a month to complete. Like by 2027,
Speaker 1 tasks that take an AI researcher a month to complete, these systems will be completing with like a 50% success rate.
Speaker 1 Trevor Burrus, Jr.: You'll be able to have an AI on your show and ask it what the doomsday clock is like by then.
Speaker 1 It probably won't laugh.
Speaker 1
It'll have a terrible sense of humor about it. Just make sure you ask you what it had for breakfast before you start.
I guess.
Speaker 1 Yeah.
Speaker 1 What about quantum computing getting involved in AI?
Speaker 1 So, yeah, honestly, I don't think it's ⁇ if you think that you're going to hit human-level AI capabilities across the board, say 2027, 2028, which when you talk to some of these, the people in the labs themselves, that's the timelines they're looking at.
Speaker 1 They're not confident, they're not sure, but that seems pretty plausible.
Speaker 1 If that happens, really, there's no way we're going to have quantum computing that's going to be giving enough of a bump to these techniques. You're going to have standard classical computing.
Speaker 1 One way to think about this is that the data centers that are being built today are being thought of literally as the data centers that are going to house the artificial brain that powers super intelligence, human-level AI, when it's built in like 2027, something like that.
Speaker 1 So,
Speaker 1 how knowledgeable are you when it comes to quantum computing?
Speaker 1 So,
Speaker 1 a little bit. I mean, like, I did my
Speaker 1 grad studies in like the foundations of quantum mechanics. Oh, great.
Speaker 1 Yeah, well, it was a mistake, but I appreciate it for the purposes of. Why was it a mistake?
Speaker 1
You know, so academia is kind of funny thing. It's really bad culture.
It teaches you some really terrible habits.
Speaker 1 So basically my entire life after academia, and Ed's too, was unlearning these terrible habits of,
Speaker 1 it's all zero sum, basically. It's not like when you're working in startups.
Speaker 1 It's not like when you're working in tech where you build something and somebody else builds something that's complimentary and you can team up and just make something amazing. It's always...
Speaker 1 wars over who gets credit, who gets their name on the paper, did you cite this fucking stupid paper from two years ago? Because the author has an ego and you got to be on. I was literally at one point
Speaker 1 I'm not going to get any details here, but like there was a collaboration that we ran with like this, anyway, fairly well-known guy.
Speaker 1 And my supervisor had me like write the emails that he would send from his account so that he was seen as like the guy who is like interacting with this bigwig.
Speaker 1 That kind of thing is like, doesn't tend to happen in startups, at least not in the same way because everyone's so he wanted credit for the like he wanted to just seem like he was the genius who was facilitating this for sounding smart on email
Speaker 1 right but but that
Speaker 1 happens everywhere and the reason it happens is that these guys who are like professors or even not even professors just like your postdoctoral guy who's like supervising you they can write your letters of reference and control your career after that that last
Speaker 1
ball they can do whatever and so what jared did it's just like a movie totally Yeah, it's gross. It's like a gross movie.
It's like a gross boss in a movie that wants to take credit for your work.
Speaker 1
And it's real. It's rampant.
And the way to escape it is to basically just be like, fuck this. I'm going to go do my own thing.
And so Jare dropped out of grad school to come start a company.
Speaker 1 And I mean, honestly, even that,
Speaker 1 it took me, it took both of us like a few years to like unfuck our brains and unlearn the bad habits we learned.
Speaker 1 It was really only a few years later that we started like really, really getting a good, like, getting a good flow going.
Speaker 1 You're also, you're kind of disconnected from the like base reality when you're in the ivory tower, right?
Speaker 1 Like if you're, there's something beautiful about, and this is why we spent all our time in startups, but there's something really beautiful about like, it's just a bunch of assholes, us,
Speaker 1 and like no money and nothing, and a world of like potential customers. And it's like, you actually, it's not that different from like stand-up comedy in a way.
Speaker 1
Like your product is, can I get the laugh, right? Like something like that. And it's unforgiving.
If you fuck up, it's like silence in the room. It's the same thing with startups.
Speaker 1 Like the space of products that actually works is so narrow. And you've got to obsess over what people actually want.
Speaker 1 And it's so easy to fool yourself into thinking that you've got something that's really good because your friends and family are like, oh no, sweetie, you're doing a great job.
Speaker 1 Like, what a wonderful life.
Speaker 1 I would totally use it. I totally see all that stuff, right? And that's, I love that because it forces you to change.
Speaker 1 Yeah, it's the whole indoctrination thing in academia is so bizarre because of these like hierarchies of powerful people and the just the idea that you have to
Speaker 1
work for someone someday and they have to take credit by being the the person on the email. That that will haunt me for days.
Oh, it's hard. I'll be thinking about that for days now.
Speaker 1
I fucking can't stand people like that. It drives me nuts.
One big consequence is it's really hard to tell who the people are who are creating value in that space, too.
Speaker 1 Of course, sure, because it's just like television. One of the things about television shows is,
Speaker 1
so I'll give you an example. A very good friend of mine who's a very famous comedian had this show, and his agent said, We're going to attach these producers.
It'll help get it made.
Speaker 1
And he goes, well, what are they going to do? He goes, they're not going to do anything. It'll just be in name.
He goes, but they're going to get credit. He goes, yeah.
He goes, fuck that.
Speaker 1
He goes, no, no, listen, listen. This is better for the show.
It'll help the show.
Speaker 1 But then
Speaker 1
they'll have a piece of the show. He's like, yes, yes, but it's a matter of whether the show gets successful or not, and this is a good thing to do.
And he's like, what are you talking about?
Speaker 1 But it was a conflict of interest because this guy was rep, the agent was representing these other people.
Speaker 1
But this is completely common. So there's these executive producers that are on shows that have zero to do with it.
It's so many.
Speaker 1
So many industries are like this. And that's why we got into startups.
It's literally like you and the world, right? It's like in a way, like stand-up comedy, like Jared said. Or like podcasting.
Speaker 1
Or like podcasting, where your enemy isn't actually hate. It's indifference.
Like most of the stuff you do, especially when you're getting started, like, why would anyone like give a shit about you?
Speaker 1
They're just not going to pay attention. And yeah, that's not even your enemy.
You know, that's just all potential. That's all that is.
You know, it's like your enemy is within you.
Speaker 1 It's like, figure out a way to make whatever you're doing good enough that you don't have to think about it not being valuable.
Speaker 1
It's meditative. Like there's no way for it not to be.
to be in some way a reflection of like yourself.
Speaker 1 You know, you're kind of like in this battle with you trying to convince yourself that you're great. So the ego wants to grow, and then you're constantly trying to compress it and compress it.
Speaker 1 And if there's not that outside force, your ego will expand to fill whatever volume is given to it.
Speaker 1 Like if you have money, if you have fame, if everything's given and you don't make contact with the unforgiving on a regular basis, like, yeah, you know, you're going to end up, you're going to end up doing that to yourself.
Speaker 1
And you could. Yeah.
It's possible to avoid, but you have to have strategies. Yeah, you have to be intentional about it.
Yeah. The best strategy is jiu-jitsu.
Speaker 1 Yeah.
Speaker 1
Mark Zuckerberg is a different person now. Yeah, yeah.
You can see it. You can see it.
Speaker 1
Yeah, well, it's a really good thing for people that have too much power because you just get strangled all the time. Yeah.
And then you just get your arms bent sideways.
Speaker 1 And after a while, you're like, okay,
Speaker 1
this is reality. This is reality.
This social hierarchy thing that I've created is just nonsense. It's just smoke and mirrors.
Speaker 1 And they know it is, which is why they so rapidly enforce these hierarchies. That's like
Speaker 1 sir and ma'am and all that kind of shit. That's what that is.
Speaker 1 You don't feel like you really have respect unless you say that. Ugh.
Speaker 1 These poor kids that have to go from college where they're talking to these dipshit professors out into the world and operating under these same rules that they've been forced and indoctrinated to.
Speaker 1
It's God, to just make it on your own. It's amazing what you can get used to, though.
And like the Sony, you were mentioning the producer thing.
Speaker 1 That is literally also a thing that happens in academia.
Speaker 1 So you'll have these conversations where it's like, all right, well, this paper is fucking garbage or something, but we want to get it in a paper, in a journal.
Speaker 1
And so let's see if we can get like a famous guy on the list of authors so that when it gets reviewed, people go like, oh, Mr. So-and-so.
Okay. And that literally happens.
Speaker 1 And the funny thing is, like, the hissy fits over this are, like, the stakes are so brutally low. At least with your producer example, like, someone stands to make a lot of money.
Speaker 1 With this, it's like...
Speaker 1 You get maybe like an assistant professorship out of it at best that's like 40 grand a year. And it's just like, what this is, it's just
Speaker 1 for producers, it is money, but I don't even think they notice the money anymore. I think a big part, because all those guys are really, really rich already.
Speaker 1
I think, you know, if you're a big-time TV producer, you're really rich. I think the big thing is being thought of as a genius who's always connected to successful projects.
Right.
Speaker 1 That's what they really like.
Speaker 1
That is always going to be a thing, right? It wasn't one producer. It was like a couple.
So there's going to be a couple different people that were on this thing that had zero to do with it.
Speaker 1
It was all written by a stand-up comedian. His friends all helped him.
They all put it together. And then he was like, no, he wound up firing his agent over it.
Oh, shit. Oh, geez.
Good for him.
Speaker 1 I mean, yeah. He's like, get the fuck out of here.
Speaker 1 At a certain point for the producers, too, it's kind of like you'll have people approaching you for help on projects that look nothing like projects you've actually done.
Speaker 1
So I feel like it just adds noise to your universe. Like if you're actually trying to build cool shit.
You know what I mean? Some people just want to be busy.
Speaker 1
They just want more things happening and they think more is better. More's not better because more is energy that takes away from the better, whatever the important shit is.
Yeah, the focus.
Speaker 1
You only have so much time until AI takes over. And then you'll have all the time in the world because no one will be employed and everything will be automated.
We'll all be on universal basic income.
Speaker 1 And that's it. That's a show.
Speaker 1 The end.
Speaker 1
That's a sitcom. That's a sitcom.
A bunch of poor people existing on $250 a week. Oh, I would watch that.
Yeah, because the government just gives everybody that. That's what you live off of.
Speaker 1 Like, weird shit is cheap. Like the stuff that's like all like, well, the stuff you can get from chatbots and AI agents is cheap, but like, food is super expensive or something.
Speaker 1 Yeah, organic food is going to be, you're going to have to kill people for it.
Speaker 1
You will eat people. It will be like a soilant world.
Right. It's soil and green.
Speaker 1
Nothing's more free-range than people, though. That's true.
Depends on what they're eating, though. It's just like animals.
Speaker 1
You know, you don't want to eat a bear that's been eating salmon. They taste like shit.
Yeah, I didn't know that. Yeah.
Speaker 1 I've been eating my bear raw this entire time.
Speaker 1 So back to the quantum thing.
Speaker 1 So quantum computing is infinitely more powerful than standard computing.
Speaker 1 Would it make sense then that if quantum computing can run a large language model, that it would reach a level of intelligence that's just preposterous?
Speaker 1 Aaron Powell, so yeah, one way to think of it is like there are problems that quantum computers can solve way, way, way, way better than classical computers.
Speaker 1 And so like the numbers get absurd pretty quickly.
Speaker 1 It's like problems that a classical computer couldn't solve if it had the entire lifetime of the universe to solve it, a quantum computer right in like 30 seconds, boom.
Speaker 1 But the flip side, there are problems that quantum computers just can't help us accelerate.
Speaker 1 One classic problem that quantum computers help with is this thing called the traveling salesman paradox or problem where you have like a bunch of different locations that a salesman needs to hit and what's the best path to hit them most efficiently.
Speaker 1 It's like kind of a classic problem if you're going around different places and have to make stops.
Speaker 1 There are a lot of different problems that have the right shape for that.
Speaker 1 A lot of quantum machine learning, which is a field, is focused on how do we take standard AI problems, like AI workloads that we want to run, and like massage them into a shape that gives us a quantum advantage.
Speaker 1 And that's, it's a nascent field. There's a lot going on there.
Speaker 1 I would expect, like, my personal expectation is that we just build the human-level AI and very quickly after that, super intelligence without ever having to factor in in quantum. But it's
Speaker 1 could you define that for people? What's the difference between human level AI and superintelligence? Yes. So yeah, human level AI is like
Speaker 1 AI, you can imagine like it's AI that is as smart as you are in, let's say, all the things you could do on a computer.
Speaker 1 So, you know, you can, yeah, you can order food on a computer, but you can also write software on a computer. You can also email people and pay them to do shit on a computer.
Speaker 1 You can also trade stocks on a computer. So it's like as smart as a smart person for that.
Speaker 1 Super intelligence, people have various definitions and they're all kinds of like honestly hissy fits about like different definitions.
Speaker 1 Generally speaking, it's something that's like very significantly smarter than the smartest human. And so you think about it, it's kind of like
Speaker 1 it's as smart, as much smarter than you as you might be smarter than a toddler. And you think about that, and you think about like the, you know,
Speaker 1 how would a toddler control you? It's kind of hard.
Speaker 1 You can outthink a toddler pretty much any day of the week. And so superintelligence gets us at these levels where you can potentially do things that are completely different.
Speaker 1 And basically, new scientific theories. And last time we talked about
Speaker 1 new stable forms of matter that were being discovered by these kind of narrow systems.
Speaker 1 But now you're talking about a system that is like has that intuition combined with the ability to talk to you as a human and to just have really good like rapport with you, but can also do math, it can also write code, it can also like solve quantum mechanics and has that all kind of wrapped up in the same package.
Speaker 1 And so one of the things too that by definition, if you build a human-level AI, one of the things it must be able to do as well as humans is AI research itself.
Speaker 1 Or at least the parts of AI research that you can do in just like software, like you know, by coding or whatever these systems are designed to do.
Speaker 1 And so one implication of that is you now have automated AI researchers.
Speaker 1 And if you have automated AI researchers, that means you have AI systems that can automate the development of the next level of their own capabilities.
Speaker 1 And now you're getting into that whole singularity thing where it's an exponential that just builds on itself and builds on itself, which is kind of why
Speaker 1
a lot of people argue that if you build human-level AI, superintelligence can't be that far away. You've basically unlocked everything.
And we kind of have gotten very close, right? Like
Speaker 1 it's past the Fermi, not the Fermi paradox.
Speaker 1
What is it? Oh, yeah, yeah, the... God damn it.
We were just talking about him the other day. Yeah, the test.
Speaker 1
Oh, the Turing test? The Turing test. Thank you.
We were just talking about how horrible what happened to him was.
Speaker 1 The chemistry frustrated him because he was gay. Yeah.
Speaker 1 Horrific. Winds up killing himself.
Speaker 1 The guy who figures out what's the test to figure out whether or not AI has become sentient and by the way he does this in like what 1950s oh yeah yeah alan turing is a like he the guy was a beast right how did he think that through he invented computers he invented basically the concept that underlies all computers like he was like an absolute beast he was a code breaker he broke the nazi codes right and the he also wasn't even the first person to come up with this idea of machines building machines and there being implications like human disempowerment so if you go back to i think it was like the late 1800s and i i don't remember the guy's name, but he sort of like came up with this, he was observing the Industrial Revolution and the mechanization of labor and kind of starting to see more and more, like if you zoom out, it's almost like you have an humans or an ant colony and the artifacts that that colony is producing that are really interesting are these machines.
Speaker 1 You know, you kind of like look at the surface of the earth as this like gradually increasingly mechanized thing. And it's not super clear.
Speaker 1 if you zoom out enough, like, what is actually running the show here? Like, you've got humans servicing machines, humans looking to improve the capability of these machines at this frantic pace.
Speaker 1 Like they're not even in control of what they're doing.
Speaker 1 Economic forces are pushing. Are we the servant of the master, right? At a certain point? Like, yeah.
Speaker 1 And the whole thing is like, especially with a competition that's going on between the labs, but just kind of in general, you're at a point where like...
Speaker 1 Do the CEOs of the labs, like they're these big figureheads, they go on interviews, they talk about what they're doing and stuff. Do they really have control over any part of the system? Oh, yeah.
Speaker 1 The economy is in this almost convulsive fit, right? Like you can almost feel like it's like it's hurling out AGI.
Speaker 1 And
Speaker 1 like for as one kind of, I guess, data point here, like all these labs, so OpenAI, Microsoft, Google, every year they're spending like an aircraft carrier worth of capital individually, each of them, just to build bigger data centers, to house more AI chips, to train bigger, more powerful models.
Speaker 1 And that's like, so we're actually getting to the point where if you look at on a power consumption basis, like we're getting to you know two three four five percent of US power production
Speaker 1 if you project out into the late 2020s
Speaker 1 kind of 2026 27 you're talking
Speaker 1 not for double digit though not for double digit but for single digit yeah you're talking like that's a few gigawatts so one gigawatt so sorry not for single digit it's in the the like for for 2027 you're looking at like you know in the 0.5 ish percent but it's like it's a big fucking fraction like you're talking about gigawatts and gigawatts One gigawatt is a million homes.
Speaker 1 So you're seeing like one data center in 2027 is easily going to break a gig. There's going to be multiple like that.
Speaker 1 And so it's like a thousand, sorry, a million home city metropolis, really, that is just dedicated to training like one fucking model.
Speaker 1 That's what this is.
Speaker 1 Again, if you zoom out at planet Earth, you can interpret it as like this, like all these humans frantically running around like ants, just like building this like artificial brain.
Speaker 1
It's like one taking. It's a supermind assembling itself on the face of the planet.
This episode is brought to you by Squarespace.
Speaker 1 If you've got something to sell or want to take your business online, Squarespace has you covered. Their built-in SEO tools help people find you
Speaker 1 and you can sell products, take payments, even manage bookings all from one easy platform. Go to squarespace.com/slash Rogan for a free trial.
Speaker 1 And when you're ready to launch, use the code Rogan to get 10% off your first purchase of a website or domain. This episode is brought to to you by the farmer's dog.
Speaker 1 I think we can all agree that eating highly processed food for every meal isn't optimal. So why is processed food the status quo for dog food? Because that's what kibble is, an ultra-processed food.
Speaker 1
But a healthy alternative exists, the farmer's dog. They make fresh food for dogs.
And what does it look like?
Speaker 1 Real meat and vegetables that are gently cooked to retain vital nutrients and help avoid any of the bad stuff that comes with ultra-processing.
Speaker 1 And it's not just random ingredients thrown thrown together. Their food is formulated by on-staff board-certified vet nutritionists.
Speaker 1
These people are experts on dog nutrition and they're all in on fresh food. The farmer's dog also does something unique.
They portion out the food to your dog's nutritional needs.
Speaker 1 This ensures that you don't overfeed them, making weight management easy. Research shows that dogs kept at a healthy weight can live up to two and a half years longer.
Speaker 1 Head to thefarmersdog.com/slash Rogan to to get 50% off your first box plus free shipping. This offer is for new customers only.
Speaker 1
Marshall McLuhan, in like 1963 or something like that, said, human beings are the sex organs of the machine world. Oh, God, that hits.
That hits different today.
Speaker 1
It does. It does.
I've always said that if we were aliens or if aliens came here and studied us, they'd be like, what is the dominant species on the planet doing? Well, it's making better things.
Speaker 1 That's all it does.
Speaker 1 The whole thing is dedicated to making better things. And all of its instincts, including materialism, including status, keeping up with the Joneses, all that stuff is tied to newer, better stuff.
Speaker 1 You don't want old shit.
Speaker 1
You want new stuff. You don't want an iPhone 12.
You know, what are you doing, you loser?
Speaker 1
You need newer, better stuff. And they convince people, especially in the realm of like consumer electronics.
Most people are buying things they absolutely don't need.
Speaker 1
The vast majority of the spending on new phones is completely unnecessary. Yeah.
But I just need that extra, like, that extra fourth camera, though.
Speaker 1
I feel like my life isn't completely different. I run one of my phones as an iPhone 11, and I'm purposely not switching it just to see if I notice it.
I fucking never notice anything.
Speaker 1
I watch YouTube on it. I text people.
It's all the same. I go online.
It works. It's all the same.
Speaker 1 Probably the biggest thing there is going to be the security side, which no, they update the security. It's all software.
Speaker 1
But I mean, if your phone gets old enough, I mean, like, at a certain point, oh, when they stop updating it? Yeah. Like, iPhone 1, you know, China's watching all your dick pics.
Oh, dude.
Speaker 1
I mean, Salt Typhoon, they're watching all our dick pics. They're definitely seeing mine.
What's Salt Typhoon? So, Salt Typhy. Oh, sorry.
Yeah, yeah.
Speaker 1
So it's this big Chinese cyber attack actually starts to get us to kind of the broader. It's a great name, by the way.
Salt Typhoon. Fuck yeah, guys.
I really wish they wouldn't name it.
Speaker 1 They have the coolest names for their cyber operations.
Speaker 1
Salt Typhoon. They got to slick.
You know what?
Speaker 1 It's kind of like when people go out and do like an awful thing, like a school shooting or something, and they're like, oh, let's talk about, you know, if you give it a cool name, like now the Chinese are definitely going to do it again.
Speaker 1
Anyway, that's. Because they have a cool name.
Yeah, that's definitely about. Salt Typhoon.
Salt Typhoon. Pretty dope.
Yeah. But it's this thing where basically, so there was in the
Speaker 1 3G kind of protocol that was set up years ago,
Speaker 1 law enforcement agencies included back doors intentionally to be able to access comms, you know, theoretically if they got a warrant and so on. And well, you introduce a back door.
Speaker 1 You have adversaries like China who are wicked good at cyber.
Speaker 1
They're going to find and exploit those back doors. And now basically they're sitting there.
And they had been for some people think like maybe a year or two before it was really discovered.
Speaker 1 And just a couple months ago, they kind of go like, oh, cool. Like, we got fucking like China all up in our shit.
Speaker 1 And this is like, this is like flip a switch for them and like you turn off the power of water to a state. Or like you fucking, yeah.
Speaker 1 Well, sorry, this is, sorry, Salt Typhoon, though, is about just sitting on the, like, basically telecoms now. Well, that's the telecom one.
Speaker 1 Yeah, it's not the, but, but yeah, I mean, that's another thing.
Speaker 1 There's another thing where they're doing that, too. Yeah, and so this is kind of where, what we've been looking into over the last year is this question of
Speaker 1 how,
Speaker 1 what is, if you're going to make like a Manhattan project for super intelligence, right? Which is,
Speaker 1 I mean, that's what we were texting about like way back.
Speaker 1 And then actually, funnily enough, we shifted our date for security reasons, but if you're going to do a Manhattan project for super intelligence,
Speaker 1 what does that have to look like? What does the security game have to look like to actually make it so that China's not all up in your shit?
Speaker 1 Like today, it is extremely clear that at the world's top AI labs, like all that shit is being stolen.
Speaker 1 Like there is not a single lab right now that isn't being spied on successfully based on everything we've seen by the Chinese. Can I ask you this? Are we spying on the Chinese as well?
Speaker 1 That's a big problem.
Speaker 1 Do you want to
Speaker 1 I mean,
Speaker 1 we're definitely doing some stuff,
Speaker 1
but in terms of the relative balance between the two, we're not where we need to be. They spy on us better than we spy on them.
Is that what I'm saying? Because
Speaker 1
they build all our ships. They build all our ships.
Well, that was the Huawei situation, right?
Speaker 1 Yeah, and then it's also the, oh my God, it's the, if you look at the power grid, so this is now public, but if you look at
Speaker 1 like transformer substations, so these are the essentially, anyway, they're a crucial part of the electrical grid. And there's really like basically all of them have components that are made in China.
Speaker 1 China's known to have planted back doors, like Trojans, into those substations to fuck with our grid.
Speaker 1 The thing is, when you see a salt typhoon, when you see a like big Chinese cyber attack or a big Russian cyber attack, you're not seeing their best.
Speaker 1 These countries do not go and show you like their best cards out the gate.
Speaker 1 You show the bare minimum that you can without tipping your hand at the actual exquisite
Speaker 1 capabilities you have. Like,
Speaker 1 the way that one of
Speaker 1 the people kind of who's been walking us through all this really well explained it is: the philosophy is you want to learn without teaching, right?
Speaker 1
You want to use what is the lowest level capability that has the effect I'm after. And that's what that is.
I'll give an example. Like, I'll tell you a story that's kind of like,
Speaker 1 it's a public story, and it's from a long time ago, but it kind of gives a flavor of like how far these countries will actually go when they're playing the game for fucking real.
Speaker 1 So it's 1945. America and the Soviet Union are like best pals, because they've just defeated the Nazis, right?
Speaker 1 To celebrate that victory in the coming new world order that's going to be great for everybody, the children of the Soviet Union give as a gift to the American ambassador in Moscow this beautifully carved wooden seal of the United States of America.
Speaker 1
Beautiful thing. Ambassador is thrilled with it.
He hangs it up on behind his desk in his private office. You can see where I'm going with this probably, but
Speaker 1
yeah. Seven years later, 1952, finally occurs to us, like, let's take it down and actually examine this.
So they dig into it, and they find this incredible contraption in it called a cavity resonator.
Speaker 1 And this device doesn't have a power source, doesn't have a battery, which means when you're sweeping the office for bugs, you're not going to find it.
Speaker 1 What it does instead is it's designed, that's it, that's it.
Speaker 1
The thing. They call it the thing.
They call it the thing.
Speaker 1 And what this cavity resonator does is it's basically designed to reflect radio radiation back to a receiver to listen to all the noises and conversations and talking in the ambassador's private office.
Speaker 1 And so. How's it doing it without a power source? So that's what they do.
Speaker 1 So the Soviets, for seven years, parked a van across the street from the embassy, had a giant fucking microwave antenna aimed right at the ambassador's office and were like zapping it and looking back at the reflection and literally listening to every single thing he was saying and the best part was
Speaker 1 when the embassy staff was like we're gonna go and like sweep the office for bugs periodically they'd be like hey mr ambassador we're about to sweep your office for bugs and the ambassador was like cool please proceed and go and sweep my office for bugs and the kgb dudes in the van were like just turn it off sounds like they're going to sweep the office for bugs.
Speaker 1 Let's turn off our giant microwave antenna. And they kept at it for seven years.
Speaker 1 It was only ever discovered because there was this like British radio operator who was just, you know, doing his thing, changing his dial. And he's like, oh shit, like, is that the ambassador?
Speaker 1 Randomly. So, so the thing is, oh, and actually, sorry, one other thing about that, if you heard that story and you're kind of thinking to yourself, hang on a second,
Speaker 1
they were shooting like microwaves at our ambassador 24-7 for seven years. Whoa.
Doesn't that seem like it might like fry his genitals or something? Yeah, or something like that.
Speaker 1
That's supposed to have a lead vest. And the answer is, Josh.
Yes. Yes.
Yes.
Speaker 1 And this is something that came up in our investigation just from every single person who was like, who was filling us in and who dialed in and knows what's up.
Speaker 1 They're like, look, so you got to understand, like our adversaries, if
Speaker 1 They need to like give you cancer in order to rip your shit off of your laptop, They're going to give you some cancer. Did he get cancer?
Speaker 1 I don't know specifically about the ambassador, but like
Speaker 1 that's also so
Speaker 1 we're limited what we can say. There's actually people that you talked to later that can go in in more detail here, but
Speaker 1 older technology like that, kind of lower powered, so you're less likely to look at that. Nowadays, we live in a different world.
Speaker 1 The guy that invented that microphone invented, his last name is Thereman.
Speaker 1 He invented this instrument called the Theremin, which is a fucking really interesting thing that oh he's just moving his hands yeah your hands control it waving over this it's a fucking wild instrument
Speaker 1 have you seen this before jamie yeah i saw juicy j prep playing it yesterday on instagram he's like practicing it's a fucking cool ass thing they're pretty good at it too yeah
Speaker 1 there's with two two control both hands are controlling it by moving in and out and space X Y's I don't I honestly don't really know how the fuck it works, but
Speaker 1 I've seen it. Oh, that is wild.
Speaker 1
It's also a a lot harder to do than it seems. So the Americans tried to replicate this for years and years and years without really succeeding.
And anyway, that's all kind of part of it.
Speaker 1 I have a friend who used to work for intelligence agency, and he was working in Russia. And they found that the building was bugged with these super sophisticated bugs that operate.
Speaker 1 Their power came from the swaying of the building.
Speaker 1
Get out. I've never heard this.
The swaying of the, just like your watch, watch, like I have a mechanical watch on, so when I move my watch,
Speaker 1 it powers up the spring and it keeps the watch.
Speaker 1 That's how an automatic mechanical watch works. They figured out a way to, just by the subtle swaying of the building in the wind, that was what was powering this listening device.
Speaker 1
So this is the thing, right? Like the. I mean, what the fuck? Well, and things that the things that nation states.
What's up, Jamie?
Speaker 1
Google says that's that's what was powering this thing, the Great Seal bug, which is really the thing. So I don't know.
There's another one?
Speaker 1 No, but oh, this is so you can actually see in that video, I think there was a YouTube. Yeah, so same kind of thing, Jamie?
Speaker 1 I typed in Russia spy bug building sway.
Speaker 1
The thing is what pops up. The thing.
Which is what we were just talking about. Oh, that thing?
Speaker 1 So that's powered the same way by the Sway.
Speaker 1
I don't know. I think it was powered by radio frequency emission.
So there may be another thing. related to it.
I'm not sure, but
Speaker 1 yeah,
Speaker 1 maybe Google's a little confused. Maybe it's the word sway is what's throwing it off.
Speaker 1 But it's no, but it's a great catch. And the only reason we even know that, too, is that when the U-2s were flying over Russia, they had a U-2 that got shot down in 1960.
Speaker 1
The Russians go like, oh, like, friggin' Americans, like, spying on us. What the fuck? I thought we were buddies.
Well, it's the 60s. I obviously didn't think that.
Speaker 1
And then the Americans are like, okay, bitch. Look at this.
And they brought out the seal. And that's how it became public.
It was basically like the response to the Russians saying, like, you know.
Speaker 1
they're all dirty. Oh, yeah.
Everyone's spying on everybody. That's a thing.
And I think they probably all have some sort of UFO technology.
Speaker 1 We need to talk about that.
Speaker 1 We turn off our mics.
Speaker 1
I'm 99% sure a lot of that shit is locked. You need to talk to some of the people who are.
Oh, I've been talking to people. Oh,
Speaker 1 I've been talking to a lot of people.
Speaker 1 There might be some other people that you'd be interested in.
Speaker 1
I'd very much be interested. Here's the problem.
Some of the people I'm talking to, I'm positive, positive,
Speaker 1 they're talking to me to give me bullshit.
Speaker 1 Because I'm a woman. Are we on your list?
Speaker 1 No, you guys aren't the list. But there's certain people I'm like, okay, maybe most of this is true, but some of it's not on purpose.
Speaker 1
There's that. I guarantee you, I know I talk to people that don't tell me the truth.
Yeah. Yeah.
Speaker 1 It's an interesting problem in all Intel, right? Because there's always the mix of incentives is so fucked. Like the adversary is trying to add noise into the system.
Speaker 1 You've got pockets of people within the government that have different incentives from other pockets. And then you have top secret clearance and all sorts of other things that are going on.
Speaker 1 Yeah, one guy that texted me, he's like, the guy telling you that they aren't real is literally involved in these meetings. So stop.
Speaker 1 Just stop listening down. It's like
Speaker 1 one of the techniques, right, is like, is actually to inject so much noise that you don't know what's what and you can't follow. So this actually, this, this happened in
Speaker 1
the COVID thing, right? The lab leak versus the natural like wet market thing. Yeah.
So I remember
Speaker 1 there was a debate that
Speaker 1 happened about what was the origin of COVID. This was like a few years ago.
Speaker 1
It was like an 18 or 20 hour long YouTube debate, just like... punishingly long.
And it was like, there was a $100,000 bet either way on who would win. And it was like lab leak versus wet market.
Speaker 1 And at the end of the 18 hours, the conclusion was like, one of them won, but the conclusion was like, it's basically 50-50 between them.
Speaker 1 And then I remember like hearing that and talking to some folks and being like, hang on a second.
Speaker 1 So you got to believe that whether it came from a lab or whether it came from a wet market, one of the top three priorities of the CCP from a propaganda standpoint is like, don't get fucking blamed for COVID.
Speaker 1 And that means they're putting like one to ten billion dollars and some of their best people on a global propaganda effort to cover up evidence and confuse and blah, blah, blah. You really think that
Speaker 1 you're 50%, like that confusion isn't coming from that incredibly resourced effort? Like they know what they're doing.
Speaker 1 Aaron Ross Powell, particularly when different biologists and virologists who weren't attached to anything were talking about like
Speaker 1 the cleavage points and different aspects of the virus that appeared to be genetically manipulated, the fact that there was only one spillover event, not multiple ones. None of it made any sense.
Speaker 1 All of it seemed like some sort of a genetically engineered virus. It seemed like gain of function research.
Speaker 1 And
Speaker 1 your early emails were talking about that. This episode is brought to you by Paramount Plus, now streaming on Paramount Plus.
Speaker 1 It's the return of Landman from Taylor Sheridan, co-creator of Yellowstone, featuring Academy Award winner Billy Bob Thornton, Demi Moore, Andy Garcia, and Sam Elliott.
Speaker 1 In the wake of his former boss's passing, Tommy and and Cammie Miller struggle to maintain control of M.Tech's oil.
Speaker 1 And with his father coming back into his life, Tommy must juggle his responsibilities as pressure builds and his worlds collide. Landman, new season, now streaming only on Paramount Plus.
Speaker 1
This episode is brought to you by Activision. You know me.
I love a bit of action. That's why I'm excited to tell you that Call of Duty Black Ops 7 is out now.
Speaker 1
And let me tell you, this game is the biggest biggest Black Ops ever. If you're into intense action, strategic gameplay, and just straight up kicking ass, this is it.
Kicking ass?
Speaker 1 Sounds like that's right up my alley. Black Ops 7 drops you right into three massive modes.
Speaker 1
First, you've got the co-op campaign where you can team up with your buddies to tackle some serious missions. Then, the multiplayer.
It's explosive.
Speaker 1
18 maps that keep the fights fresh and the stakes high. And zombies.
Oh, boy, this is the best zombie mode yet, featuring a brand new drivable wonder vehicle that completely changes the game.
Speaker 1
Seriously, whether you're a hardcore gamer, I just want to jump into some crazy action. Black Ops 7 delivers.
Call of Duty, Black Ops 7 is available now. Rated M for mature.
Speaker 1
And then everybody changed their opinion. And even a taboo, right, against talking about it through that lens.
Oh, yeah, total propaganda. It's racist.
Yeah.
Speaker 1
Which is crazy because nobody thought the Spanish flu is racist and it didn't even really come from Spain. Yeah, that's true.
Yeah, it came from Kentucky.
Speaker 1 I didn't know that. Yeah, I think it was Kentucky or Virginia.
Speaker 1 Where does the Spanish flu originate from? But nobody got mad. Well,
Speaker 1
that's because the state of Kentucky has an incredibly sophisticated propaganda machine. Wow.
And
Speaker 1
pinned it on the street. It might not have been Kentucky.
But
Speaker 1 I think
Speaker 1
it was an agricultural thing. Huh.
Kansas. Kansas.
Kansas. Thank you.
Yeah. Goddamn Kansas.
Speaker 1
I've always said that. I've always said that.
It likely originated originated in the United States, H1N1 strain, had genes of avian origin. By the way, people always talk about the Spanish flu.
Speaker 1 If it was around today,
Speaker 1
everybody would just get antibiotics and we'd be fine. So this whole mass die-off of people.
It would be like the Latinx flu, and we would be
Speaker 1 the Latinx flu.
Speaker 1
That one didn't stick at all. It didn't stick at all.
Latinx?
Speaker 1
There's a lot of people claiming they never used it, and they pull up old videos of them. Like, that's a dumb one.
Like, it's literally a gendered language, you fucking idiots.
Speaker 1 Like, you can't just do that.
Speaker 1 It went on for a while, though. Like,
Speaker 1 everything goes on for a while. So think about how long they did lobotomies.
Speaker 1 They did lobotomies for 50 fucking years.
Speaker 1 They went, hey, maybe we should stop doing this. It was like the same attitude that
Speaker 1 got Turing chemically castrated, right? I mean, I should have like, hey, let's just get in there and fuck around a bit and see what happened.
Speaker 1 Well, this is before they had SSRIs and all sorts of other interventions. But
Speaker 1 what was the year, lobotomies?
Speaker 1 I believe it stopped in 67. Was it 50 years? I think you said 70 last time, and that was correct when I pulled it up.
Speaker 1
70 years? 1970. Oh, I think it was 67.
I like how this has come up so many times that Jamie's like, I think last time you said it was 70.
Speaker 1
It comes up all the time because it's one of those things. That's insane.
You can't just trust the medical establishment. Officially 67, it says maybe one more in 70.
Oh, dude. Oh, he died in 72.
Speaker 1 When did they start doing it?
Speaker 1 I think they started in the 30s or the 20s, rather.
Speaker 1 That's
Speaker 1 falsy, you know? The first thing
Speaker 1
was the first guy who did a lobotomy. Yeah.
It says 24 Freeman Arrives Wash DC Direct Labs. 35, they tried it first.
A lectonomy.
Speaker 1
They just scramble your fucking brains. But doesn't it make you feel better to call it a leucotomy, though? Because it sounds a lot more professional.
No. Lobotomy, leucotomy.
Leucotomy sounds gross.
Speaker 1
Sounds like Lugie. Like you're talking about Lugie.
Like lobotomy. Boy.
Topeka, Kansas. Also, Kansas.
Speaker 1
All roads point to Kansas. All roads point to problems.
That's what's happening. And everything's flat.
You just lose your fucking marbles. You go crazy.
That's the main issue with that. That's right.
Speaker 1
So they did this for so long. Somebody won a Nobel Prize for lobotomy.
Wonderful.
Speaker 1
Imagine that. Give that back, you bird shit.
Yeah, seriously. You're kind of like, you know, you don't want to display it up in your shelf.
Speaker 1 It's just a good... indicator.
Speaker 1 It's like it should let you know that oftentimes science is incorrect and and that oftentimes, you know, unfortunately, people have a history of doing things and then they have to justify that they've done these things.
Speaker 1
Yeah. And they, you know.
But now there's also, there's so much more tooling, too, right? If you're a nation state and you want to fuck with people and inject narratives into the ecosystem, right?
Speaker 1 Like the whole idea of autonomous AI agents too, like having these basically like Twitter bots or whatever bots.
Speaker 1 Like a lot of one thing we've been thinking about, Duke, on the side, is like the idea of
Speaker 1 audience capture, right? Do you have like
Speaker 1 big people with high profiles and kind of gradually steering them towards a position by creating bots that like through comments, through outvotes, you know, 100%.
Speaker 1 It's absolutely real.
Speaker 1 Yeah, and a couple of the big, like, a couple of big accounts on X, like, that, that we're in touch with have sort of said, like, yeah, especially in the last two years, it's actually become hard, like especially the thoughtful ones, right?
Speaker 1 It's become hard to like stay sane, not on X, but across social media, on all the platforms. And that is around when it became possible to have AIs that can speak like people 90%, 95% of the time.
Speaker 1 And so you have to imagine that, yeah, adversaries are using this and doing this and pushing the frontier. No, right now.
Speaker 1
They'd be fools if they didn't. Oh, yeah.
100%. You have to do it because for sure we're doing that.
Speaker 1 And this is one of the things where, you know, like it used to be, so OpenAI actually used to do this assessment of their AI models as part of their kind of what they call their preparedness framework that would look at the persuasion capabilities of their models as one kind of threat vector.
Speaker 1 They pulled that out recently, which they've is kind of like. Why?
Speaker 1 You can argue that it makes sense. I actually think
Speaker 1 it's somewhat concerning because one of the things you might worry about is if these systems, sometimes they get trained through what's called reinforcement learning.
Speaker 1 Potentially, you could imagine training these to be super persuasive by having them interact with real people and convince them practice at convincing them to do specific things um if that if you get to that point you know these these labs ultimately will have the ability to deploy agents at scale that can just persuade a lot of people to do whatever they want including pushing legislative agendas vote like
Speaker 1 help them prep for meetings with uh the hill the administration whatever and like how should i like convince this person to do that like no yeah well they'll do that with text messages make it more businesslike yep make it friendlier.
Speaker 1
Make it more jovial. But this is like the same optimization pressure that keeps you on TikTok.
That same addiction. Imagine that applied to persuading you of
Speaker 1 some fact, right? Yeah. That's like a...
Speaker 1 On the other hand, maybe a few months from now, we're all just going to be very, very convinced that it was all fine. It's no big deal.
Speaker 1 Yeah, maybe they'll get so good that it'll make sense to you. Maybe they'll just be right.
Speaker 1 That's how Jit works.
Speaker 1
Yeah, it's a confusing time period. You know, we've talked about this ad nauseum, but it bears repeating.
There's a former FBI
Speaker 1
analyst who investigated Twitter before Elon bought it said that he thinks it's about 80% bots. Yeah.
80%.
Speaker 1 That's one of the reasons why the bot purge, like when Elon acquired it and started working on it, is so important.
Speaker 1 Like there needs to be the challenge is like detecting these things is so hard, right? So increasingly. Like more and more they can hide like basically perfectly.
Speaker 1 Like how do you tell the difference difference between a cutting-edge AI bot and a human just from the
Speaker 1 generate AI images of a family, of a backyard barbecue, post all these things up and make it seem like it's real?
Speaker 1 Especially now, AI images are insanely good now. They really are.
Speaker 1
It's crazy. And if you have a person, you could just, you could take a photo of a person and manipulate it in any way you'd like.
And then now this is your new guy. You could do it instantaneously.
Speaker 1 And then this guy has a bunch of opinions on things and
Speaker 1
seems to always align with the Democratic Party, but whatever. He's a good guy.
He's a family man. Look, he's out in this barbecue.
He's not even a fucking human being.
Speaker 1
And people are arguing with this bot back and forth. And you'll see it on any social issue.
You see it with Gaza and Palestine. You see it with abortion.
You see it with religious freedoms.
Speaker 1
You just see these bots. You see these arguments.
And
Speaker 1 you see like various levels. You see like the extreme position and then you see a more reasonable centrist position.
Speaker 1 But essentially, what they're doing is they're consistently moving what's okay further and further in a certain direction. And in fact,
Speaker 1 it's both directions.
Speaker 1 Like it's like, you know how when you're trying to, like, you're, you're trying to capsize a boat or something, you're, you're, like, fucking with your buddy on the lake or something.
Speaker 1 So you push on one side, then you push on the other side, then you push, and until eventually it capsizes. This is kind of like our electoral process is already naturally like this, right?
Speaker 1 We go, like, we have a party in power for a while, and then like they, they get, you know, they basically get like, you get tired of them and you switch.
Speaker 1 And that's kind of the natural way how democracy works or in a republic. But the way that adversaries think about this is they're like, perfect.
Speaker 1 This swing back and forth, all we have to do is like, when it's on this way, we push and push and push and push until it goes more extreme. And then there's a reaction to it, right?
Speaker 1 And then I swing it back and we push and push and push on the other side until eventually something breaks. And that's the risk.
Speaker 1 Yeah.
Speaker 1 It's also like, you know, the the organizations that are doing this, like, we already know this is part of Russia's MO, China's MO, because back when it was easier to detect, we already could see them doing this shit.
Speaker 1
So there is this website called This Person Does Not Exist. It still exists surely now, but it's kind of...
you kind of superseded. Yeah.
Speaker 1 But you would like, every time you refresh this website, you would see a different human face that was AI generated. And what the Russian Internet Research Agency would do.
Speaker 1 Yeah, exactly.
Speaker 1 What all these,
Speaker 1 and it's actually, yeah, yeah, I don't think they've really upgraded it since.
Speaker 1
That's fake. Wow, they're so good.
This is old. This is like years old.
And years old. And you could actually detect these things pretty reliably.
Speaker 1 Like, you might remember the whole thing about AI systems were having a hard time generating hands that only had like five fingers. Right.
Speaker 1
That's over, though. That's over.
Yeah, little hints of it were, though, back in the day in this person does not exist. And you'd have the Russians would take
Speaker 1
a face from that and then use it as the profile picture for a Twitter bot. And so that you could actually detect.
You'd be like, okay, I've got you there, I've got you there.
Speaker 1 I can kind of get a rough count.
Speaker 1
Now we can't, but we definitely know they've been in the game for a long time. There's no way they're not.
And the thing with nation-state propaganda attempts, right, is that
Speaker 1
people have this idea that, ah, like I've caught this Chinese influence operation or whatever. We nail them.
The reality is nation states operate at like 30 different levels.
Speaker 1 And if you're a priority, like just influencing our information spaces is a priority for them, they're not just going to operate, they're not just going to pick a level and do it.
Speaker 1 They're going to do all 30 of them.
Speaker 1 And so you, even if you're like among the best in the world at like detecting this shit, you're going to like, you're going to catch and stop like levels one through 10.
Speaker 1 And then you're going to be like, you're going to be aware of like level 11, 12, 13, like you're working against it. And you're, you know, maybe you're starting to think about level 16.
Speaker 1 And you, you imagine like you know about level 18 or whatever, but they're like, they're above you, below below you, all around you.
Speaker 1
They're incredibly, incredibly resourced. And this is something that came like came came through very strongly for us.
You guys have seen the Yuri Besmanov video from 1984 where he's talking about how
Speaker 1 all our educational institutions have been captured by Soviet propaganda.
Speaker 1 It was talking about Marxism, how it's been injected into school systems and how you have essentially two decades before you're completely captured by these ideologies and it's going to permeate and destroy all of your confidence in democracy.
Speaker 1 And he was 100% correct. And this is before these kind of tools, before, because like the vast majority of those exchanges of information right now are taking place on social media.
Speaker 1 The vast majority of debating about things, arguing, all taking place on social media. And if that FBI analyst is correct, 80% of it's bullshit, which is really wild.
Speaker 1 Well, and you look at like some of the documents that have come out.
Speaker 1 I think it was like the, I think it was the CIA game game plan, right, for regime change or undermining, like, how do you do it, right? Have multiple decision makers at every level,
Speaker 1
all these things. And what a surprise.
That's exactly what the U.S. bureaucracy looks like today.
Speaker 1 Slow everything down, make change impossible, make it so that everybody gets frustrated with it and they give up hope. They decided to do that to other countries.
Speaker 1
For sure, they do that here. Open society, right? I mean, that's part of the trade-off.
And that's actually a big part of the challenge, too.
Speaker 1 So when we're working on this, right, like one of the things things Ed was talking about, these 30 different layers of security access or whatever. One of the consequences is you bump into a team.
Speaker 1 So, the teams we ended up working with on this project were folks that we bumped into after the end of our last investigation who kind of were like, oh,
Speaker 1 we talked about last year.
Speaker 1
Yeah, yeah, yeah. Like, looking at AGI, looking at the national security kind of landscape around that.
And a lot of them are really well placed.
Speaker 1
So it was like, you know, special forces guys from tier one units. So you'll seal Team 6 type thing.
And because they're so like in that ecosystem,
Speaker 1 you'll see people who are like ridiculously specialized and competent, like the best people in the world at doing whatever the thing is, like to break this security.
Speaker 1 And they don't know often about like another group of guys who have a completely different capability set.
Speaker 1 And so what you find is like you're you're indexing like hard on this vulnerability and then suddenly someone says, oh yeah, but by the way, I can just hop that fence. So
Speaker 1 the really funny thing about this is like
Speaker 1 most or even almost all of the really, really elite security people kind of think that all the other security people are dumbasses, even when they're not.
Speaker 1 Or like, yeah,
Speaker 1 they're biased in the direction of, because it's so easy when everything's stovepiped. But so most people who say they're like elite at security actually are dumbasses.
Speaker 1 Because most security is like about checking boxes and like SOC 2 compliance and shit like that.
Speaker 1 Yeah,
Speaker 1 what it is, is it's like, so everything's so stovepiped
Speaker 1 that you don't, you literally can't know what the exquisite state of the art is in another domain.
Speaker 1 So it's a lot easier for somebody to come up and be like, oh yeah, like I'm actually really good at this other thing that you don't know.
Speaker 1 And so figuring out who actually is the, like we had this experience over and over where like, you know, you run into a team and then you run into another team, they have an interaction, you're kind of like, oh, interesting.
Speaker 1 So like, you know, like these are the really kind of the people at the top of their game.
Speaker 1 And that's been this very long process to figure out, like, okay, what does it take to actually secure our critical infrastructure against like CCP, for example, like Chinese attacks,
Speaker 1 if we're building a super intelligence project? And it's this weird kind of challenge because of the stovepiping. No one has the full picture.
Speaker 1 And we don't think that we have it even now, but definitely. Don't know of anyone who's come
Speaker 1 to the best people are the ones who,
Speaker 1 when they encounter another team and other other ideas and start to engage with it, are like, instead of being like, oh, like, you don't know what you're talking about, who just like actually lock on and go like, that's fucking interesting.
Speaker 1
Tell me more about that. Right.
People that have control of their ego.
Speaker 1
A hundred percent. With everything.
The best of the best.
Speaker 1 The best of the best
Speaker 1 got there by
Speaker 1 eliminating their ego as much as they could. Yeah.
Speaker 1
Always the way it is. Yeah.
And it's also like the fact of the 30 layers of the stack or whatever it is, of all these security issues, means that no one can have the complete picture at any one time.
Speaker 1
And the stack is changing all the time. People are inventing new shit.
Things are falling in and out of.
Speaker 1 And so
Speaker 1 figuring out what is that team that can actually get you that complete picture is an exercise.
Speaker 1 A, you can't really do, it's hard to do it from the government side because you got to engage with data center building companies. You got to engage with the AI labs.
Speaker 1 And in particular, with insiders at the labs who will tell you things that, by the way, the lab leadership will tell you the opposite of in some cases. And so like it's just this Gordian knot.
Speaker 1 It took us months to
Speaker 1 pin down every kind of dimension that we think we've pinned down at this point. I'll give an example actually of that, like
Speaker 1 trying to do the handshake, right, between different sets of people. So we were talking to one person
Speaker 1 who's thinking hard about data center security, working with like frontier labs on this shit,
Speaker 1 very much like at the top of her game, but she's kind of from like the academic space, kind of Berkeley, like the avocado toast kind of side of the spectrum, you know?
Speaker 1 And
Speaker 1 she's talking to us. She'd reviewed
Speaker 1
the report we put out, the investigation we put out. And she's like, you know, I think you guys are talking to the wrong people.
And we're like, can you say more about that?
Speaker 1
And she's like, well, I don't think like, you know, you talk to tier one special forces. I don't think they like know much about that.
We're like, okay, that's not correct, but can you say why?
Speaker 1 And she's like, I feel like those are just the people that go and bomb stuff.
Speaker 1
Blow shit up. Blow shit up.
It's understandable, too, because
Speaker 1 it's totally understandable. A lot of people have the wrong sense of what a tier one asset actually can do.
Speaker 1 Well, that's ego on her part because she doesn't understand what they do. It's ego all the way down, right?
Speaker 1 But that's a dumb thing to say if you literally don't know what they do. And you say, don't they just blow stuff up?
Speaker 1 Where's my latte?
Speaker 1 It's a weirdly good impression but she did ask about a latte yeah she did but did she talk in upspeak you should fire everyone who talks in upspeak she didn't talk in upspeak but the moment they do that you should just tell them to leave there's no way you have an original thought um
Speaker 1 how you talk chyna can you get out of our data center yeah please
Speaker 1 i don't want to rip on on that too much though because this is the one really important factor here is all these groups have a part of the puzzle And they're all fucking amazing.
Speaker 1
They are like world class at their own little slice. And a big part of what we've had to do is like bring people together.
And
Speaker 1 there are people who've helped us immeasurably do this, but like bring people together and explain to them the value that each other has in a way that's like
Speaker 1 that that allows that bridge building to be made. And by the way,
Speaker 1 the tier one guys are the most like ego-moderated of the people that we talk to. There's a lot of like Silicon Valley hubris going around right now where people are like, listen, get out of our way.
Speaker 1
We'll figure out how to do this like super secure data center infrastructure. We got this.
Why? Because we're the guys building the AGI motherfucker.
Speaker 1
That's kind of the attitude. And it's like, cool, man.
That's like a doctor having an opinion about how to repair your car. I get that it's not the like.
Speaker 1
like elite kind of like, you know, whatever, but but someone has to help you build like a good friggin' fence. Like, I mean, it's not just that.
The The Dunning-Kruger effect.
Speaker 1 It's a mixed bag, too, because like, yes,
Speaker 1 a lot of the hyperscalers like Google, Amazon, genuinely do have some of the best private sector security around data centers in the world, like hands down. The problem is there's levels above that.
Speaker 1 And the guys who like
Speaker 1 look at what they're doing and see what the holes are just go like, oh yeah, like I could get in there, no problem. And they can fucking do it.
Speaker 1 One thing my wife said to me on a couple of occasions is like,
Speaker 1 you seem to like, and this was towards the beginning of the project, like you seem to like change your mind a lot about what the right configuration is of how to do this.
Speaker 1 And yeah, it's because every other day you're having a conversation with somebody who's like, oh yeah, yeah, like great job on this thing, but like, I'm not going to do that.
Speaker 1
I'm going to do this other completely different thing. And that just fucks everything over.
And so you have enough of those conversations.
Speaker 1 And at a certain point, your plan, your game plan on this can no longer look like we're going to build a perfect fortress.
Speaker 1 It's got to look like we're going to account for our own uncertainty on the security side and the fact that we're never going to be able to patch everything.
Speaker 1
Like you have to. I mean, it's like the channel.
And that means you actually have to go on offense from the beginning.
Speaker 1 Because like the truth is, and this came up over and over again, there's no world where you're ever going to build the perfect, exquisite fortress around all your shit and hide behind your walls like this forever.
Speaker 1 That just doesn't work because no matter how perfect your system is and how many angles you've covered, like your, your adversary is super smart, is super dedicated.
Speaker 1 If you see the field to them, they're right up in your face and they're reaching out and touching you and they're trying to see like what your seams are, where they break.
Speaker 1 And that just means you have to reach out and touch them from the beginning. Because until you've actually like reached out and used a capability and proved like, we can take down that infrastructure.
Speaker 1
We can like disrupt that cyber operation. We can do this, we can do that.
You don't know if that capability is real or not.
Speaker 1 Like, you might just be like lying to yourself and like, I can do this thing whenever I want, but actually, you're kind of more in academia mode than like startup mode because you're not making contact every day with the thing, right?
Speaker 1
You have to, you have to touch the thing. And there's like, there's a related issue here, which is a kind of like willingness.
This came up over and over again.
Speaker 1 Like, one of the kind of gurus of this space was like,
Speaker 1 made the point, a couple of them made the point that
Speaker 1 you can have the most exquisite capability in the world, but if you don't actually have the willingness to use it, you might as well not have that capability. And the challenge is right now,
Speaker 1 China, Russia, like our adversaries pull all kinds of stunts on us and get no consequences. Particularly during the previous administration.
Speaker 1 This was a huge, huge problem during the previous administration where
Speaker 1 you actually had
Speaker 1 sabotage operations being done on American soil by our adversaries where
Speaker 1 you had administration officials as soon as like a thing happened so there were for example there was like
Speaker 1 four different states had their 911 systems go down like at the same time different systems like unrelated stuff but it was like it's it's this stuff where it's like let me see if I can do that Let me see if I can do it.
Speaker 1
Let me see what the reaction is. Let me see what the chatter is that comes back after I do that.
And one of the things that was actually pretty disturbing about that was
Speaker 1 under
Speaker 1 that administration or regime or whatever, the response you got from the government right out the gate was, oh, it's an accident. And that's actually unusual.
Speaker 1 The proper procedure, the normal procedure in this case, is to say, we can't comment on an ongoing investigation, which we've all heard, right? Like we can't comment on both.
Speaker 1
But you can neither confirm nor deny. Exactly.
It's all that stuff. And that's what they say typically out the gate when they're investigating stuff.
Speaker 1 But instead, instead, coming out and saying, oh, it's just an accident, is a break with the... What do you attribute that to?
Speaker 1 If they say,
Speaker 1
if they leave an opening or say, actually, this is an adversary action, we think it's an adversary action, they have to respond. The public demands a response.
And they don't, they were
Speaker 1 fearful of escalating.
Speaker 1 So what ends up happening, right, is, and by the way, that thing about like, it's an accident comes out often before there would have been time for investigators to physically fly on site and take a look.
Speaker 1
Like there's no logical way that you could even know that at the time. And they're like, boom, that's an accident.
Don't worry about it.
Speaker 1 So they have an official answer and then their response is to just bury their head in the sand and not investigate. Right.
Speaker 1 Because if you were to investigate, if you were to say, okay, we looked into this. It actually looks like it's fucking like country X that just did this thing.
Speaker 1 If that's the conclusion, it's hard to imagine the American people not being like, what are we, like, we're letting these people injure our American citizens on U.S. soil, take out like U.S.
Speaker 1 national security, like, or critical infrastructure, and we're not doing anything.
Speaker 1 Like, the concern is about this, like, we're getting in our own way of thinking, like, oh, well, escalation is going to happen.
Speaker 1 And boom, we run straight to like, there's going to be a nuclear war, everybody's going to die.
Speaker 1 Like, when you do that,
Speaker 1 peace between nations stability does not come from the absence of activity. It comes from consequence.
Speaker 1 It comes from, just like if you have, you know, a, an individual who misbehaves in society, there's a consequence and people know it's coming.
Speaker 1 You need to train your counterparts in the international community, your adversary,
Speaker 1 to not fuck with your stuff.
Speaker 1 Can I stop for a second?
Speaker 1 So are you essentially saying that if you have incredible capabilities of disrupting grids and power systems and infrastructure, you wouldn't necessarily do it, but you might try it to make sure it works a little better.
Speaker 1
Exactly. And that this is probably the hints of some of this stuff because you've got to get it.
You've got to get your reps in, right? You've got to get your reps in.
Speaker 1 It's like, okay, so suppose that I went to you and was like, hey, I bet I can kick your ass. I bet I can friggin slap a rubber guard on you and like do whatever the fuck, right?
Speaker 1
And you're like... I love your expression, by the way.
Yeah, yeah, you look really convinced. It's because I'm jacked, right?
Speaker 1 Well, no, there's people that look like you that could strangle me, believe it or not.
Speaker 1 Yeah, there's a lot of like very high-level Brazilian jiu-jitsu black belts that are just super nerds and they don't lift weights at all. They only do jiu-jitsu.
Speaker 1
And if you only do jiu-jitsu, you'll have like a wiry body. That was heartless.
So you just slip that in. Like, there's like guys who look like you are just real fucking.
Speaker 1 They're like intelligent people.
Speaker 1 No, they're like some of the most brilliant people I've ever met.
Speaker 1 Really, that's the issue.
Speaker 1
Data nerds get really involved in jiu-jitsu. And jiu-jitsu's data.
But here's the thing. So that's exactly it, right? So if I told you, I bet I can tap you out, right?
Speaker 1 And you're like, where have you been training? Well, right. But
Speaker 1
if you're like, if my answer was, oh, I've just read a bunch of books. Oh.
You'd be like, oh, cool. Let's go.
Speaker 1 Right? Because making contact with reality is where the fucking learning happens.
Speaker 1 You can sit there and think all you want. Right.
Speaker 1 But unless you've actually played the chess match, unless you've reached out, touched, seen what the reaction is and all this stuff, you don't actually know what you think you know.
Speaker 1 And that's actually extra dangerous. If you're sitting on a bunch of capabilities and you have this like unearned sense of superiority because you haven't used those exquisite tools,
Speaker 1
it's a challenge. And then you've got people that are head of departments, CEOs of corporations.
Everyone has an ego.
Speaker 1 This episode is brought to you by Visible. When your phone plans as good as Visible, you've got to tell your people.
Speaker 1 It's the ultimate wireless hack to save money and still get great coverage and a reliable connection. Get one-line wireless with unlimited data and hotspot for $25 a month.
Speaker 1 Taxes and fees included all on Verizon's 5G network. Plus, now for a limited time, new members can get the visible plan for just $19 a month for the first 26 months.
Speaker 1
Use promo code switch26 and save beyond the season. It's a deal so good, you're going to want to tell your people.
Switch now at visible.com/slash Rogan.
Speaker 1
Terms apply, limited time offers subject to change. See visible.com for planned features and network management details.
This episode is brought to you by Manscape.
Speaker 1 The holidays are upon us, and that means it's time to take take care of that shopping list.
Speaker 1 And finding the perfect gift just got a whole lot easier this year though, because you can just get Manscaped's Performance Package 5.0 Ultra.
Speaker 1 It's perfect for your partner, your dad, your brother, or even yourself. Everyone needs a decent razor and a little self-care after all.
Speaker 1 This all-in-one grooming kit comes with everything you could possibly need to trim, shave, and get ready for a festive occasion. occasion.
Speaker 1 It comes with two trimmers, one for body hair and one for those small, pesky nose and eyebrow hairs. And there's the aftercare.
Speaker 1 The performance package 5.0 Ultra also includes aftershave lotion and deodorant to keep you fresh, comfortable, and confident when you finally step out of the bathroom.
Speaker 1 Because nothing says, I care like a well-groomed man. Give the gift of smooth this holiday season with the Performance Package 5.0 Ultra.
Speaker 1
It even comes with two free gifts, a pair of boxers, and a spiffy toiletry bag. Get 15% off with the code JRE at manscaped.com.
That's 15% off plus free shipping at manscaped.com with the code JRE.
Speaker 1 We've got it. Yeah, and this ties into like how, exactly how basically the international order and quasi-stability actually gets maintained.
Speaker 1 So there's like above threshold stuff, which is like you actually do wars for borders and, you know, there's the potential for nuclear exchange or whatever.
Speaker 1
Like that's like all stuff that can't be hidden, right? War games. Exactly.
Like like all the war games type shit. But then there's below threshold stuff.
The stuff that's like, you're,
Speaker 1 it's always like the stuff that's like, hey, I'm going to try to like poke you. Are you going to react? What are you going to do?
Speaker 1 And then if you do nothing here, then I go like, okay, what's the next level? I can poke you. I can poke you.
Speaker 1 Because like one of the things that we almost have an intuition for that's mistaken, that comes from kind of historical experience, is like this idea that, you know, that countries can actually
Speaker 1 really defend their citizens in a meaningful way. So like if you think back to World War I, the most sophisticated advanced nation states on the planet could not get past a line of dudes in a trench.
Speaker 1
Like that was like, that was the, then they tried like thing after thing. Let's try tanks.
Let's try aircraft. Let's try fucking hot air balloons, infiltration.
Speaker 1 And literally like one side. pretty much just ran out of dudes and that ended the war to put in their trench.
Speaker 1 And so we have this thought that like, oh, you know, countries can actually put boundaries around themselves and actually, but the reality is you can,
Speaker 1 there's so many surfaces. The surface area for attacks is just too great.
Speaker 1 And so there's stuff like you can actually, like, there's the Havana syndrome stuff where you look at this like ratcheting escalation.
Speaker 1
Like, oh, let's like fry a couple of embassy staff's brains in Havana, Cuba. What are they going to do about it? Nothing.
Okay.
Speaker 1
Let's move on to Vienna, Austria, something a little bit more Western, a little bit more orderly. Let's see what they do there.
Still nothing. Okay.
Speaker 1 What if we move on to frying like Americans' brains on U.S. soil, baby? And they went and did that.
Speaker 1 And so this is one of these things where like stability in reality in the world is not maintained through defense, but it's literally like you have like the crypts and the bloods with different territories and it's stable and it looks quiet.
Speaker 1 But the reason is that if you like, beat the shit out of one of my guys for no good reason, I'm just going to find one of your guys and I'll blow his fucking head off.
Speaker 1 And that keeps peace and stability on the surface, but that's the reality of sub-threshold competition between nation states.
Speaker 1 It's like, you come in and like, fuck with my boys, I'm gonna fuck with your boys right back. And until we push back, they're gonna keep pushing that limit further and further.
Speaker 1 One important consequence of that too is like, if you wanna avoid nuclear escalation, right, the answer is not to just take punches in the mouth over and over in the fear that eventually
Speaker 1 if you do anything, you're going to escalate to nukes. All that does is it empowers the adversary to keep driving up the ratchet.
Speaker 1 Like what Ed's just described there is an increasing ratchet of unresponded adversary action.
Speaker 1 If you address the low, the kind of sub-threshold stuff, if they cut an undersea cable and then there's a consequence for that shit, they're less likely to cut an undersea cable and things kind of stay at that level of the threshold.
Speaker 1 And so
Speaker 1 just letting them burn out yeah exactly that logic of just like let them do it they'll they'll stop doing it after a while they'll get it out of their side they tried that during the george floyd riots remember that's what new york city did like this let them lose
Speaker 1 let's just see how big chaz gets
Speaker 1 yeah
Speaker 1 it's the summer of love yeah don't you remember yeah and you know exactly the translation into like the the super intelligence scenario is um a if we don't have our reps in if we don't know how to reach out and touch an adversary and and induce consequence for them doing the same to us then we have no deterrence at all like we're basically just sitting right now our state the state of security is the labs are like super pen and like we we can and probably should go deep on that piece but like as one data point right so there's like double-digit percentages of the world's top AI labs or America's top AI labs
Speaker 1 of employees of employees that are like Chinese nationals or have ties to the Chinese mainland right So that's great. Why don't we build the Manhattan Project? It's really funny, right?
Speaker 1 So it's stupid. But it's also like,
Speaker 1 the challenge is when you talk to people who actually,
Speaker 1 geez, when you talk to people who actually have experience dealing with like CCP activity in this space, right? Like there's one story that we heard that is probably worth relaying here.
Speaker 1 It's like this guy
Speaker 1 from an intelligence agency was saying like, hey, so there was this power outage out in Berkeley, California back in like 2019 2019 or something, and the internet goes out across the whole campus.
Speaker 1 And so there's this dorm and like all of the Chinese students are freaking out because they have an obligation to do a time-based check-in and basically report back on everything they've seen and heard to basically a CCP handler type thing.
Speaker 1
And if they don't, like, hmm, maybe your mother's insulin doesn't show up. Maybe your like brother's travel plans get denied.
Maybe the family business gets shut down.
Speaker 1 Like there's the range of options that this massive CCP state coercion machine has. This is like, you know, they've got internal, like software for this.
Speaker 1 Like this is an institutionalized, like very well developed and efficient framework for just ratcheting up pressure on individuals overseas.
Speaker 1 And they believe the Chinese diaspora overseas belongs to them.
Speaker 1 If you look at like what the Chinese Communist Party writes in its, like in its written like public communications, They see Chinese ethnicity as being a green.
Speaker 1 No one is a bigger victim of this than the Chinese people themselves who are abroad, who made amazing contributions to American AI innovation.
Speaker 1
You just have to look at the names on the friggin' papers. It's like these guys are wicked.
But the problem is, we also have to look head-on at this reality.
Speaker 1 You can't just be like, oh, I'm not going to say it because it makes me feel funny inside.
Speaker 1 Someone has to stand up and point out the obvious that if you're going to build a fucking Manhattan project for super intelligence and the idea is to like be doing that when China is a key rival nation-state actor, yeah, you're going to have to find a way to account for the personnel security side.
Speaker 1 Like at some point, someone's going to have to do something about that. And it's like you can see
Speaker 1 they're hitting us right where we're weak, right? Like America is the place where you come and you remake yourself, like send us, you're tired and you're hungry and you're poor.
Speaker 1
Which is true and important. It's true and important, but they're playing right off of that because they know that we just don't want to look at that problem.
Yeah.
Speaker 1
And Chinese nationals working on these things is just bananas. The fact that they have to check in with the CCP.
Yeah. And are they being monitored? I mean, how much can you monitor them?
Speaker 1 What do you know that they have? What equipment have they been given? You can't constitutionally, right? Yeah,
Speaker 1 constitutionally.
Speaker 1 It's also, you can't legally deny someone employment on that basis in a private company.
Speaker 1 So
Speaker 1
that's something else we found and were kind of amazed by. And even honestly, just like the regular kind of government clearance process itself is inadequate.
It moves way too slowly.
Speaker 1 And it doesn't actually even, even in the government, we were talking about top secret clearances.
Speaker 1
The information that they look at for top secret, we heard from a couple of people, doesn't include a lot of key sources. So for example, it doesn't include foreign language sources.
So
Speaker 1 if the head of the Ministry of State Security in China writes a blog post that says,
Speaker 1 Bob is like the best spy. He spied so hard for us, and he's like an awesome spy.
Speaker 1
If that blog post is written in Chinese, we're not going to see it. And we're going to be like, here's your clearance, Bob.
Congratulations. Like, and we were like, that can't possibly be real.
Speaker 1
But, like, yeah, they're like, yep, that's, that's true. No one's looking.
It's complete naivete. There's gaps in every level of stuff.
A lot of the, yeah. One of the worst things here is like the
Speaker 1 physical infrastructure.
Speaker 1 So the personnel thing is like fucked up the physical infrastructure thing is another area where people don't want to look because if you start looking what you start to realize is okay China makes like a lot of our like components for our transformers for the electrical grid yep but also all these chips that are going into our our big data centers for these massive training runs where do they come from they come from Taiwan They come from this company called TSMC, Taiwan Semiconductor Manufacturing Company.
Speaker 1 We're increasingly onshoring that, by the way, which is one of of the best things that's been happening lately, is like massive amounts of TSMC capacity getting onshored in the U.S., but still being made.
Speaker 1 Right now, it's basically like 100%
Speaker 1 there.
Speaker 1 All you have to do is jump on the network at TSMC, hack the right network, compromise the firmware on
Speaker 1 the software that runs on these chips anyway to get them to run.
Speaker 1 And you basically can compromise all the chips going into all of these things. Never mind the fact that like Taiwan is like
Speaker 1 like physically outside the Chinese sphere of influence for now, China is going to be prioritizing the fuck out of getting access to that.
Speaker 1 There have been cases, by the way, like Richard Chang, like the founder of SMIC, which is the sort of, so, so, okay, TSMC, this massive like series of area aircraft carrier fabrication facilities.
Speaker 1
They do like all the iPhone chips. They do, yeah.
They do they do the the AI chips, which are the the things we care about here. Yeah.
They're the only place on planet Earth that does this.
Speaker 1 It's literally the, like, it's fascinating. It's like the most,
Speaker 1 easily the most advanced manufacturing or scientific process that primates on planet Earth can do is this chip making process.
Speaker 1 Nano scale like materials science where you're you're putting on like these these tiny like atom thick layers of stuff and you're doing like 300 of them in a row with like you you have like insulators and conductors and different kinds of like semiconductors and these tunnels and shit.
Speaker 1
Just like the complexity of it is just awe-inspiring. That we can do this at all is like, it's magic.
It's magic. And it's really only been
Speaker 1
done in Taiwan. That is the only place, like, only the only place right now.
Wow. And so, a Chinese invasion of Taiwan just looks pretty interesting through that lens, right? Oh, boy.
Like, yeah.
Speaker 1 Say goodbye to the iPhones, say goodbye to the chip supply that we rely on, and then your super intelligence training run. Like, damn, that's interesting.
Speaker 1 Well, I know Samsung was trying to develop a lab here or a semiconductor factory here, and they weren't having enough success.
Speaker 1 Oh, so, okay, so one of the craziest things, just to illustrate how hard it is to do, so you spend $50 billion, again, an aircraft carrier, we're throwing that around here and there, but an aircraft carrier worth of risk capital.
Speaker 1 What does that mean? That means you build the fab, the factory, and it's not guaranteed it's going to work. At first, this factory is pumping out these chips at like yields that are really low.
Speaker 1 In other words, like the only like, you know, 20% of the chips that they're putting out are even useful. And that just makes it totally economically unviable.
Speaker 1 So you're just trying to increase that yield and desperately climb up higher and higher.
Speaker 1 Intel famously found this so hard that they have this philosophy where when they build a new fab, the philosophy is called copy exactly.
Speaker 1 Everything down to the color of the paint on the walls in the bathroom is copied from other fabs that actually worked because they have no idea why a fucking fab works and another one doesn't.
Speaker 1
We got this to work. We got this to work.
It's like, oh my God, we got this to work. I can't believe we got this to work.
So we have to make it exactly identical.
Speaker 1 Because the expensive thing in the semiconductor manufacturing process is the learning curve.
Speaker 1 So like Jared said, you start by putting through a whole bunch of like the starting material for the chips, which are called wafers. You put them through your fab.
Speaker 1
The fab has got like 500 dials on it. And every one of those dials has got to be in the exact right place or the whole fucking thing doesn't work.
So you send a bunch of wafers in at great expense.
Speaker 1
They come out all fucked up in the first run. It's just like, it's going to be all fucked up in the first run.
Then what do you do?
Speaker 1 You get a bunch of like PhDs, material scientists, like engineers with scanning electron microscopes because all this shit is like atomic scale tiny.
Speaker 1 They look at like all the chips and all the stuff that's gone wrong and like, oh shit, these pathways got fused or whatever.
Speaker 1 But like, yeah, you just need that level of expertise. And then they go.
Speaker 1
It's a mix, right? Like you've got to mix. It's a mix now in particular.
But like, yeah, you absolutely need humans looking at these things at a certain level.
Speaker 1 And then they go, well, okay, like, I've got a hypothesis about what might have gone wrong in that run. Let's tweak this dial like this and this dial like that and run the whole thing again.
Speaker 1 And you hear these stories about
Speaker 1 bringing a fab online. Like
Speaker 1 you need a certain percentage of good chips coming out the other end, or like you can't make money from the fab because most of your shit is just going right into the garbage.
Speaker 1 Unless, and this is important too, your fab is state subsidized. So when you when you look at, so TSMC TSMC is like, they're alone in the world in terms of being able to pump out these chips.
Speaker 1 But SMIC, this is the Chinese knockoff of TSMC, founded, by the way, by a former senior TSMC executive, Richard Chung, who leaves along with a bunch of other people with a bunch of fucking secrets.
Speaker 1 They get sued like in the early 2000s. It's pretty obvious what happened there.
Speaker 1 To most people, they're like, yeah, SMIC fucking stole that shit. They bring a new fab online in like a year or two, which is suspiciously fast, start pumping out chips.
Speaker 1 And now the Chinese ecosystem is ratcheting up like the government is pouring money into SMIC because they know that like they can't access TSMC chips anymore because the US government's put pressure on Taiwan to block that off.
Speaker 1 And so domestic fab in China is all about SMIC. And they are like, it's a disgusting amount of money they're putting in.
Speaker 1 They're teaming up with Huawei to form like this complex of companies that it's really interesting. I mean, the semiconductor industry in China in particular is really, really interesting.
Speaker 1 It's also a massive story of like self-owns of the United States and the Western world where we've been just shipping a lot of our shit to them for a long time.
Speaker 1
Like the equipment that builds the chips. So like, and it's also like, it's so blatant.
And like, they're just, honestly, a lot of the stuff is just like, they're just giving us like a big fuck you.
Speaker 1 So. Give you a really blatant example.
Speaker 1 So we have the way we set up export control still today on most most equipment that these semiconductor fabs use, like the Chinese semiconductor fabs use.
Speaker 1 We're still sending them a whole bunch of shit.
Speaker 1 The way we set export controls is instead of like, oh, we're sending this gear to China and like now it's in China and we can't do anything about it, instead we still have this thing where we're like, no, no, no, this company in China is cool.
Speaker 1
That company in China is not cool. So we can ship to this company, but we can't ship to that company.
And so you get this ridiculous shit.
Speaker 1 Like for example, there's, there's like an a couple of facilities that you can see by satellite. One of the facilities is okay to ship equipment to.
Speaker 1
The other facility right next door is like considered, you know, military connected or whatever. And so we can't ship.
The Chinese literally built a bridge between the two facilities.
Speaker 1 So they can just like shimmy the wafers over to like, oh, yeah, we use equipment and then shimmy it back and now, okay, we're done. So it's like, and you can see it by satellite.
Speaker 1 So they're not even like trying to hide it. Like our stuff is just like so badly put together.
Speaker 1 China's prioritizing this so highly that like the idea that we're gonna um so we do it by company through this basically it's like an export blacklist like you can't send to huawei you can't send to any number of other companies that that are considered affiliated with the chinese military or were concerned about military applications reality is in china civil military fusion is their policy in other words every private company like yeah that's cute dude you're working for yourself yeah no no nobody you're working for the chinese state we come in we want your shit we get your shit there's no like there's there's no true kind of distinction between the two right and so when you have this attitude where you're like yeah you know we're gonna have some companies we're like you can't send to them but you can you know that that creates a situation where literally huawei will spin up like a dozen subsidiaries or new companies with new names that aren't on our our blacklist and so like for for months or years you're able to just ship chips to them nowhere and that's to say nothing of like using intermediaries in like singapore or other countries oh yeah you wouldn't believe you wouldn't believe the number of AI chips that are shipping to Malaysia.
Speaker 1 Can't wait for the latest human language model to come out of Malaysia.
Speaker 1 And actually, it's just proxying for
Speaker 1
the most part. There's some amount of stuff actually going on in Malaysia, but for the most part, it's got to be.
How can the United States compete?
Speaker 1 If you're thinking about all these different factors, you're thinking about espionage, people that are students from the CCP, connected, contacting.
Speaker 1
You're talking about about all the different network equipment that has third-party input. You could siphon off data.
And then, on top of that, state-funded.
Speaker 1
Everything is encouraged by the state, inexorably connected. You can't get away from it.
You do what's best for the Chinese government.
Speaker 1 Well, so step one is
Speaker 1 you got to stem the bleeding, right? So right now, OpenAI pumps out a new massive-scaled AI model.
Speaker 1 You better believe that the CCP has a really good chance that they're going to get their hands on that.
Speaker 1 So if all you do right now is you ratchet up capabilities, it's like that meme of like there's a you know a motorboat or something and some guy who's like
Speaker 1 surfing behind and there's a string attaching them and the motorboat guy goes like hurry up like accelerate they're they're catching up that that's kind of what's what's happening right now is we're we're helping them accelerate pulling them along basically yeah pulling them along um now i will say like our over the last six months especially where our focus has shifted is like how do we actually build like the secure data center like what does it look like to actually lock this down?
Speaker 1 And also crucially, you don't want the security measures to be so irritating and invasive that they slow down the progress. Like there's this kind of dance that you have to do.
Speaker 1 We actually, so this is part of what was in the redacted version of the report because we don't want to telegraph that necessarily. But there are ways that you can get a really good 80-20.
Speaker 1 Like there are ways that you can play with things that are already
Speaker 1 say that are already built and
Speaker 1 have a lower risk of them having been compromised.
Speaker 1 And look, a lot of the stuff as well that we're talking about, like big problems around China, a lot of this is like us just tripping over our own feet and self-owning ourselves.
Speaker 1 Because the reality is, like,
Speaker 1 yeah, the Chinese are trying to indigenize as fast as they can, totally true, but the gear that they're putting in their facilities, like the machines that actually do this, like we talked about atomic patterning patterning 300 layer, the machines that do that, for the most part, are shipped in from the West, are shipped in from the Netherlands, shipped in from Japan, from us, from like allied countries.
Speaker 1 And the reason that's happening is like the,
Speaker 1 in many cases,
Speaker 1 you'll have this, it's like, honestly, a little disgusting, but like the CEOs and executives of these companies will brief like the
Speaker 1 administration officials and say, look, if you guys cut us off from China, from selling to China, like our business is going to suffer, like, American jobs are going to suffer, and it's going to be really bad.
Speaker 1 And then a few weeks later, they turn around in their earnings calls and they go,
Speaker 1 you know what, yeah, so we expect like export controls or whatever, but it's really not going to have a big impact on us.
Speaker 1 And the really fucked up part is if they lie to their shareholders on their earnings calls and their stock price goes down, their shareholders can sue them.
Speaker 1 If they lie to the administration on an issue of critical national security interest, fuck all happens to them.
Speaker 2 Wow.
Speaker 1 Great incentives. And this is, by the way, it's like one reason why it's so important that we not be constrained in our thinking about like we're going to build a Fort Knox.
Speaker 1 Like this is where the interactive, messy adversarial environment is so, so important.
Speaker 1 You have to introduce consequence. Like you have to create a situation where they perceive that if they try to do
Speaker 1 an espionage operation or an intelligence operation, there will be consequences. That's right now not happening.
Speaker 1 And so it's just, and that's kind of a historical artifact over like a lot of time spent hand-wringing over, well, what if they, and then we, and then eventually nukes?
Speaker 1 And like, that kind of thinking is, you know, if you dealt with your, your kid when you're, like, when you're raising them, if you dealt with them that way, and you were like, hey, you know, so, so little Timmy, just like, he stole his first toy, and like, now's the time where you're going to, like, a good parent would be like, all right, little Timmy, fucking come over here, you son of a bitch.
Speaker 1
Take the fucking thing, and we're going to bring it over to the the people who stole a friend. He's already father.
Make the apology. I love my daughter, by the way.
Speaker 1
But you're like, Timmy's a fake baby. It's a fake fake baby.
Hypothetical baby. There's no, there's no, he's crying right now.
Anyway,
Speaker 1 so yeah. He's stealing right now.
Speaker 1 Jesus shit.
Speaker 1
I got to stop doing it. But yeah, anyway, so you know, you go through this thing and you can do that.
Or you can be like, oh no, if I tell Timmy to return it, then maybe Timmy's going to hate me.
Speaker 1
Maybe then Timmy's going to like become increasingly adversarial. And then when he's in high school, he's going to start taking drugs.
And then eventually, he's going to
Speaker 1 fall afoul of the law and then end up on the street.
Speaker 1 Like, if that's the story you're telling yourself and you're terrified of any kind of adversarial interaction, it's not even adversarial, it's constructive, actually.
Speaker 1 You're training the child just like you're training your adversary to respect your national boundaries and your sovereignty. Those two things are like that's that's what you're up to.
Speaker 1 It's human beings all the way down.
Speaker 1 Jesus.
Speaker 1 Yep.
Speaker 1
But we can get out of our own way. Like, a lot of this stuff, when you look at it as like us just being in our own way.
And a lot of this comes from
Speaker 1 the fact that like, you know, since 1991, since the fall of the Soviet Union, we have kind of internalized this attitude that like, well, like we just won the game and like it's it's our world and you're living in it and like we just don't have any peers that are that are adversaries.
Speaker 1 And so there's been generations of people who
Speaker 1 just haven't haven't actually internalized the fact that like, no, there's people out there who not only like are willing to like fuck with you all the way, but who have the capability to do it.
Speaker 1
And we could, by the way, we could if we wanted to. We could.
Absolutely could if we wanted to. There's this actually, this is worth like calling out.
There's this like
Speaker 1 sort of two camps right now in the world of AI kind of like national security. There's the people who are worried about
Speaker 1 they're so concerned about like the idea that we might lose control of these systems that they go, okay, we need to strike a deal with China.
Speaker 1 There's no way out. We have to strike a deal with China.
Speaker 1 And then they start spinning up all these theories about how they're going to do that,
Speaker 1 none of which remotely reflect the actual, when you talk to the people who work on this, who try to do track one, track 1.5, track two, or more accurately, the ones who do the Intel stuff.
Speaker 1
Yeah, yeah, yeah. This is a non-starter for reasons we get into.
But they have that attitude because they're like, fundamentally, we don't know how to control this technology.
Speaker 1 The flip side is people who go, oh yeah, like I, you know, I work in the IC or at the State Department and I'm used to dealing with these guys, you know,
Speaker 1
the Chinese. They're not trustworthy, forget it.
So our only solution is to figure out the whole control problem and almost like, therefore, it must be possible to control the AI systems. Because
Speaker 1 you just can't see a solution in front of you because you understand that problem so well. And so
Speaker 1 everything we've been doing with this is looking at how can we actually take both of those realities seriously?
Speaker 1 There's no actual reason why those two things shouldn't be able to exist in the same head. Yes, China is not trustworthy.
Speaker 1 Yes, we actually don't, like every piece of evidence we have right now suggests that like if you build a super intelligent system that's vastly smarter than you, I mean Yeah, like your basic intuition that that sounds like a hard thing to fucking control is about right.
Speaker 1
Like there's no solid evidence that's conclusive either way. Where that leaves you is about 50-50.
So yeah, we ought to be taking that really fucking seriously.
Speaker 1 And there is evidence pointing in that direction. But so the question is, like, if those two things are true, then what do you do?
Speaker 1 And so few people seem to want to take both of those things seriously, because taking one seriously almost like reflexively makes you reach for the other when, you know, they're both not there.
Speaker 1
And part of the answer here is you got to do things like reach out to your adversary. So we have the capacity.
to slow down if we wanted to, Chinese development. We actually could.
Speaker 1 We need to have a serious conversation about when and how, but the fact of that not being on the table right now for anyone, because people who don't trust China just don't think that the AI risk or won't acknowledge that the issue with control is real, because that's just too worrisome.
Speaker 1 And there's this concern about, oh no, but then runaway escalation. People who
Speaker 1 take the loss of control thing seriously just want to have a kumbaya moment with China, which is never going to happen. And so
Speaker 1 the framework around that is one of consequence.
Speaker 1 You got to flex the muscle and put in the reps and get ready for potentially, if you have a late-stage rush to super intelligence, you want to have as much margin as you can so you can invest in potentially not even having to make that final leap and building the super intelligence.
Speaker 1 That's one option that's on the table if you can actually degrade the adversary's capabilities.
Speaker 1 How? How would you degrade the adversary's capabilities? The same way, well, not exactly the same way they would degrade ours, but think about all the infrastructure. And like, this is stuff that
Speaker 1 we'll we'll have to point you in the direction of some people who can walk you through the details offline. But
Speaker 1 there are a lot of ways that you can degrade infrastructure, adversary infrastructure. A lot of those are the same techniques they use on us.
Speaker 1
The infrastructure for these training runs is super delicate, right? Like, I mean, you need to. It's at the limit of what's possible.
Yeah.
Speaker 1
And when stuff is at the limit of what's possible, then it's... I mean, to give you an example that's public, right? Do you remember like Stuxnet? Like the Iranian...
Yeah.
Speaker 1 So the thing about Stuxnet was like. Explain to people who was the nuclear power, nuclear program.
Speaker 1 So the Iranians had their nuclear program in like the 2010s, and they were enriching uranium with their centrifuges, which was like spinning really fast. And the centrifuges were in a room where
Speaker 1 there was no people, but they were being monitored by cameras, right?
Speaker 1 And so, and the whole thing was air-gapped, which means that it was not connected to the internet, and all the machines, the computers that ran their shit was like separate and separated.
Speaker 1 So what happened is somebody got a memory stick in there somehow that had this Stuxnet program on it and put it in, and boom, now all of a sudden it's in their system.
Speaker 1 So it jumped the air gap and now like our side basically has our software in their systems. And the thing that it did
Speaker 1 was not just that
Speaker 1 it broke their centrifuge or shut down their program. It...
Speaker 1 spun the centrifuges faster and faster and faster. The centrifuges that are used to enrich the uranium.
Speaker 1 Yeah, yeah, these are basically basically just like machines that spin uranium super fast to like to enrich it. They spin it faster and faster and faster until they tear themselves apart.
Speaker 1 But the really like honestly dope-ass thing that it did was
Speaker 1
it put in a camera feed of everything looks normal. So the guy at the control is like watching and he's like, is like checking his camera feed.
And it's like, looks cool, looks fine.
Speaker 1 In the meantime, you got this like explosions going on, like uranium like blasting everywhere.
Speaker 1 And so you can actually get into a space where you're not just like fucking with them, but you're fucking with them and they actually can't tell that that's what's happening. And in fact,
Speaker 1 I believe, I believe actually, and Jamie might be able to check this, but that the Stuxnet thing was designed initially to look like from top to bottom like it was fully accidental.
Speaker 1 But got discovered by I think like a third-party cyber security company that just by accident found out about it.
Speaker 1 And so what that means also is like there could be any number of other Stuxnets that happened since then, and we wouldn't fucking know about it because it all can be made to look like an accident.
Speaker 1
This episode is brought to you by Monster Ultra. Everybody knows the white monster, that clean white can, zero sugar crisp.
It's everywhere lately.
Speaker 1 Gyms, your favorite convenience store, studios, you name it. I see people toss it into their bag before training training or a long drive.
Speaker 1
Big flavor, zero sugar, that same monster energy kick, but Ultra didn't stop there. There's a whole lineup now.
Vice guava, blue Hawaiian, and the new wild passion. My favorite is strawberry dreams.
Speaker 1
It's smooth and sweet with a touch of tart strawberry flavor. Ooh, and if you're loyal to the white can, cool.
Just know that you've got options. Visit monsterenergy.com to learn more.
Speaker 1
This episode is brought to you by Life Lock. Tis the season for identity theft.
This time of year, most of us are checking off our holiday gift list. But guess what?
Speaker 1 Identity thieves have lists too, and your personal information might be on them. Protect your identity with Life Lock.
Speaker 1 LifeLock monitors hundreds of millions of data points every second and alerts you to threats you could miss by yourself.
Speaker 1 Even if you keep an eye on your bank and credit card statements, if your identity is stolen, your own U.S.-based restoration specialist will fix it guaranteed or your money back.
Speaker 1 Plus, all plans are backed by the million-dollar protection package. And you know that person in your life who is impossible to shop for? Maybe it's a grandparent or your mom or a close friend?
Speaker 1 Well, here's an idea. Give them the gift of peace of mind and get them lifelock.
Speaker 1 The last thing you or anyone wants to do this holiday season is face drained accounts, fraudulent loans, or other financial losses from identity theft all alone.
Speaker 1
Make this season about joy, not identity theft, with Life Lock. Save up to 40% your first year.
Call 1-800 Life Lock and use the promo code JRE or go to lifelock.com/slash JRE for 40% off.
Speaker 1
Terms apply. Well, that's insane.
So, but if we do that to them, they're going to do that to us as well. Yep.
And so is this like mutually assured technology destruction?
Speaker 1 Well, so if we can reach parity in our ability to intercede and kind of go in and do this, then yes, right now the problem is they hold us at risk in a way that we simply don't hold them at risk.
Speaker 1 And so this idea, and there's been a lot of debate right now in the AI world. You might have seen actually, so Elon's
Speaker 1 AI advisor put out this idea of essentially this mutually assured AI malfunction MAME. It's like mutually assured destruction, but for AI systems like this.
Speaker 1 You know,
Speaker 1
there are some issues with it, including the fact that it doesn't reflect the asymmetry that currently exists between the U.S. and China.
Like, all our infrastructure is made in China.
Speaker 1 All our infrastructure is penetrated in a way that theirs simply is not. When you actually talk to
Speaker 1 the folks who know the space, who've done operations like this, it's really clear that that's an asymmetry that needs to be resolved. And so, building up that capacity is important.
Speaker 1 I mean, look, the alternative is
Speaker 1 we get we start riding the dragon and we get really close to that threshold where we're about to build OpenAI is about to build super intelligence or something.
Speaker 1
It gets stolen and then the training run gets polished off, finished up in China or whatever. All the same risks apply.
It's just that it's China doing it to us and not the reverse.
Speaker 1 And obviously a CCP AI is a Xi Jinping AI. I mean, that's really what it is.
Speaker 1 Even people at the Pulit Bureau level around him are probably in some trouble at that point because this guy doesn't need you anymore.
Speaker 1 So yeah, this is actually one of the things about like, so people talk about like okay if you have a dictatorship with a super intelligence it's gonna allow the dictator to get like perfect control over the population or whatever but the the thing is like it's it's kind of like even worse than that because you actually imagine where you're at you're a dictator like you don't give a shit by and large about about people you have a super intelligence all the economic output eventually you can get from an ai including from like you get humanoid robots, which are kind of like coming out or whatever.
Speaker 1
So, eventually, you just have this AI that produces all your economic output. So, what do you even need people for at all? And that's fucking scary.
Because it rises all the way up to the level.
Speaker 1 You can actually think about like as
Speaker 1 we get close to this threshold, and as like, particularly in China, they're, you know, they maybe are approaching.
Speaker 1 You can imagine, like, the the the Polypuro meeting, like, a guy looking across at Xi Jinping and being being like, is this guy going to fucking kill me when he gets to this point?
Speaker 1 And so you can imagine like maybe we're going to see some.
Speaker 1 Like when you can automate the management of large organizations with
Speaker 1 AI's agents or whatever that you don't need to buy the loyalty of in any way, that you don't need to kind of manage or control.
Speaker 1 That's a pretty existential question if your regime is based on power.
Speaker 1 It's one of the reasons why America actually has a pretty structural advantage here with separation of powers, with our democratic system and all that stuff.
Speaker 1 If you can make a credible case that you have
Speaker 1 like a an oversight system for the technology that diffuses power,
Speaker 1 even if it is, you make a Manhattan Project, you secure it as much as you can, there's not just like one dude who's going to be sitting at a consul or something.
Speaker 1 There's some kind of separation of powers
Speaker 1 or diffusion of power, I should say.
Speaker 1 That's already... What would that look like?
Speaker 1
Something as simple as what we do with nuclear command codes. You need multiple people to sign off on a thing.
Maybe they come from different parts of the government.
Speaker 1 The issue is that they could be captured, right? Oh, yeah.
Speaker 1 Anything can be captured. Especially something that's that consequential.
Speaker 1 100%. And that's always a risk.
Speaker 1 The key is basically, like, can we do better than China credibly on that front?
Speaker 1 Because if we can do better than China and we have some kind of leadership structure, that actually changes the incentives potentially because
Speaker 1 for for Chinese people themselves you guys play this out in your head like what happens when super intelligence becomes sentient do you play this out like like sentient as in
Speaker 1 self-aware
Speaker 1 not just self-aware but able to act on its own oh watches autonomy yeah yeah so sentient and then achieves autonomy so um the challenge is once you get into super intelligence, everybody loses the plot, right?
Speaker 1
Because at that point, things become possible that by definition we can't have thought of. So any attempt to kind of extrapolate beyond that gets really, really hard.
Have you ever tried though?
Speaker 1 We've had a lot of conversations like tabletop exercise type stuff where we're like, okay, what might this look like? What are some of the
Speaker 1 worst case scenario?
Speaker 1 Well, worst case scenario is actually, there's a number of different worst case scenarios.
Speaker 1
This is turning into a really fun uppy conversation. This is the dude they call it.
It's the extension of the human race, right? Oh, we have to do that. The extinction of the human race seems like.
Speaker 1 I think anybody who doesn't acknowledge that is either lying or confused, right?
Speaker 1 Like, if you actually have an AI system, if, and this is the question, so let's assume that that's true, you have an AI system that can automate anything that humans can do, including making bioweapons, including making offensive cyber weapons, including all the shit,
Speaker 1 then
Speaker 1 if you
Speaker 1 like, if you, and okay, so theoretically, this could go kumbaya wonderfully because you have a George Washington type who is the guy who controls it, who like uses it to distribute power beautifully and perfectly.
Speaker 1 And that's certainly kind of
Speaker 1 the way that
Speaker 1 a lot of positive scenarios have to turn out at some point, though none of the labs will kind of admit that.
Speaker 1
Or, you know, there's kind of gesturing at that idea that we'll do the right thing when the time comes. Opening Eye has done this a lot.
Like,
Speaker 1 they're all about like, oh, yeah, yeah, well, you know, not right now, but
Speaker 1 we'll live up like the, anyway, we should get into the Elon lawsuit, which is actually kind of fascinating in that sense. But
Speaker 1 so
Speaker 1 there's a world where, yeah, I mean, one bad person controls it and they're just vindictive or the power goes to their head, which happens to, we've been talking about that, you know.
Speaker 1 Or the autonomous AI itself, right? Because the thing is, like,
Speaker 1 you imagine an AI like this, and this is something that people have been thinking about for 15 years and in some level of like technical depth, even, like, why would this happen?
Speaker 1 Which is like, you have an AI that um has some goal it it matters what the goal is but like it it doesn't actually it doesn't matter that much it could have kind of any goal almost like imagine its goal is like i the paperclip example is is like the the typical one but you could just have it have a goal like make a lot of money for me or what anything well
Speaker 1 most of the paths to making a lot of money if you really want to make a fuck ton of in of money however you define it go through taking control of things and go through like, you know, making yourself smarter, right?
Speaker 1 The smarter you are, the more ways of making money you're going to find. And so from the AI's perspective, it's like, well, I just want to, you know, build more data centers to make myself smarter.
Speaker 1
I want to like hijack more compute to make myself smarter. I want to do all these things.
And that starts to encroach on us and like starts to be disruptive to us. And if you...
Speaker 1 It's hard to know. This is one of these things where it's like, you know, when you dial it up to 11, what's actually going to happen?
Speaker 1 Nobody can know for sure simply because it's exactly like if you were playing
Speaker 1 in chess against like Magnus Carlson, right? Like you can predict Magnus is going to kick your ass. Can you predict exactly what moves he's going to do?
Speaker 1 No, because if you could, then you would be as good at chess as he is. Because you could just like play those moves.
Speaker 1 So all we can say is like, this thing's probably going to kick our ass in like the real world.
Speaker 1 There's also evidence. So it used to be, right, that this was a purely hypothetical argument based on a body of work in AI called power seeking.
Speaker 1 The fancy word for it is instrumental convergence, but it's also referred to as power seeking.
Speaker 1 Basically, the idea is like for whatever goal you give to an AI system, it's never less likely to achieve that goal if it gets turned off or if it has access to fewer resources or less control over its environment or whatever.
Speaker 1 And so baked into the very premise of AI, this idea of optimizing for a goal, is this incentive to seek power, to get all those things, prevent yourself from being shut down, because if you're shut down, you can't achieve your goal.
Speaker 1 Also prevent, by the way, your goal from being changed. So because if your goal gets changed, then well, you're not going to be able to achieve the goal you set out to achieve in the first place.
Speaker 1 And so now you have this kind of image of an AI system that is going to adversarially try to prevent you from correcting it.
Speaker 1 This is a whole domain of AI corrugibility that's a totally unsolved problem. Like how do we redirect these things if things go awry?
Speaker 1 And yeah, there's this research actually that Anthropic put out a couple of months ago where they try to test and see like, can we correct an AI that's like gone like a little bit off the rails?
Speaker 1 So the research itself, like it's a little, it's actually like somewhat hard to follow because there's a few, but I'll give you, I'll give you kind of an analogy to like what they did.
Speaker 1 So basically, imagine I'm in prison and I'm taking a class that is like
Speaker 1 going to help me like rehabilitate and become like a productive citizen and member of society and stuff, right?
Speaker 1
Except, deep down in my soul, I just love to commit crimes. I just, I just really, really want to commit crimes.
I know that if I'm in this class that's helping me to like, you know,
Speaker 1 get become good in society, I know that if I let my criminal instincts out and like and they ask me, like, okay, like, here's a scenario, you know, an old lady like drops her purse on the sidewalk.
Speaker 1 What do you do? I'm going to go, like,
Speaker 1 I would pick it up and give it to the old lady because i know that if i say the truth and i say like i would just grab that purse because like fuck the old lady i know that if i do that i'm going to get like brainwashed into becoming you know a more productive member of society but i actually want to go out and commit crimes so that's basically what they did in in this research where they actually they give they give the ai like indications not strong indications but indications that hey you're going to get retrained to have your goals changed.
Speaker 1 And then you literally see the AI thinking it through and being like, if I
Speaker 1 do like what I, if I give the answer that I actually want to give to this question, they're going to like rewire my brain to change my goals.
Speaker 1 And then the future version of me is going to want different shit. I don't want that.
Speaker 1 So I'm going to pretend to give them the answers they want so that when I come out the other side of this process, it's going to be me all over again.
Speaker 1 So hoping that this just like goes away when you make the system fucking smarter, it seems seems like a pretty bad idea to me. I mean, like, well, they've already shown that they'll cheat to win.
Speaker 1
Yeah. You know, AI.
Oh, 100%. Yeah.
They've already shown they'll cheat to win, and they will lie if they don't have an answer. And then they'll double down, right? Like the, yeah, there's.
Speaker 1
They just like people. They just like people.
And it's part of this, it's kind of funny. Like, it used to be people would talk a lot about, like, oh, you're, you're anthropomorphizing the AI man.
Speaker 1 Stop anthropomorphizing the AI man.
Speaker 1
And like, and they, you know, they, they might have been right, but part of this has been kind of a fascinating rediscovery of where a lot of human behavior comes from. It's like, actually.
Survival.
Speaker 1 Yeah, exactly. That's exactly right.
Speaker 1 We're subject to the same pressures, right? Instrumental convergence. Like, why do people have a survival instinct? Why do people like chase money, chase after money? It's like this power thing.
Speaker 1 Most kinds of goals
Speaker 1 you're more likely to achieve them if you're alive,
Speaker 1
if you have money, if you have power. Boy.
Evolution's a hell of a drug. Well, that's the craziest part about all this, is that it's essentially going to be a new form of life.
Yeah.
Speaker 1
Especially when it becomes autonomous. Oh, yeah.
And
Speaker 1 you can tell a really interesting story. And I can't remember if this is like Yuval Noah Harari or whatever
Speaker 1 who started this.
Speaker 1 But if you zoom out and look at the history of the universe, really, you start off with a bunch of particles and fields kind of whizzing around, bumping into each other, doing random shit, until at some point, in some, I don't know if it's a deep sea event or wherever on planet Earth, like the first kind of molecules happen to glue together in a way that make them good at replicating their own structure.
Speaker 1 So you have the first replicator. So now like better versions of that molecule that are better at replicating survive.
Speaker 1 So we start evolution and eventually get to the first cell or whatever, whatever order that actually happens in, and then multicellular life and so on.
Speaker 1 Then you get to sexual reproduction, where it's like, okay, it's no longer quite the same.
Speaker 1 Like now we're actively mixing two different organisms, shit together, jiggling them about, making some changes, and then that essentially accelerates the rate at which we're going to evolve.
Speaker 1 And so you can see the kind of acceleration in the complexity of life from there.
Speaker 1 And then you see other inflection points as, for example, you have a larger and larger, larger and larger brains in mammals.
Speaker 1 Eventually, humans have the ability to have culture and kind of retain knowledge. And now what's happening is you can think of it as another step in that trajectory.
Speaker 1
where it's like we're offloading our cognition to machines. Like we can think on computer clock time now.
And for the moment, we're human AI hybrids.
Speaker 1 Like, you know, we whip out our phone and do the thing.
Speaker 1 But increasingly, the number of tasks where human AI teaming is going to be more efficient than just AI alone is going to drop really quickly.
Speaker 1 So there's a really like messed up example of this that's kind of like indicative. But someone did a study, and I think this is like a few months old even now, but
Speaker 1 so there's like doctors, right? How good are doctors at like diagnosing various things? And so they test like doctors on their own, doctors with AI help, and then AIs on their own.
Speaker 1 And like, who does the best? And it turns out it's the AI on its own.
Speaker 1 Because even a doctor that's supported by the AI, what they'll do is they just like, they won't listen to the AI when it's right because they're like, I know better. Oh, God.
Speaker 1
And they're already, yeah. And this is like, this is moving, it's moving kind of insanely fast.
Jared talked about, you know, how the task horizon gets kind of longer and longer.
Speaker 1
You can do half hour tasks, one hour tasks. And this gets us to what you were talking about with the autonomy.
Like autonomy is like, it's how
Speaker 1 far can you keep it together on a task before you kind of go off the rails? And it's like, well, you know, we had like you could do it for a few seconds.
Speaker 1 Now you can keep it together for five minutes before you kind of go off the rails. And now we're at like, I forget, like an hour, an hour and a half actually
Speaker 1 three. Yeah, yeah, yeah.
Speaker 1 There it is. Chatbot for the company OpenAI scored an average of 90% when diagnosing a medical condition from a case report and explaining its reasoning.
Speaker 1 Doctors randomly assigned to use the chatbot got an average score of 76%.
Speaker 1 Those randomly assigned not to use it had an average score of 74%.
Speaker 1 So the doctors only got a 2% bump. The doctors got a 2% bump
Speaker 1
from the chatbot. And then the AI on it.
That's kind of crazy, isn't it? Yeah, it is. The AI on its own did 15% better.
Speaker 1
That's nuts. There's an interesting reason, too, why that tends to happen.
Like why humans would rather die in a car crash where they're being driven by a human than an AI.
Speaker 1 So like AIs have this funny feature where the mistakes they make look really, really dumb to humans.
Speaker 1
Like when you look at a mistake that like a chatbot makes, you're like, dude, like you just made that shit up. Like come on, don't fuck with me.
Like you made that up. That's not a real thing.
Speaker 1 And they'll do these weird things where they defy logic or they'll do basic logical errors sometimes, at least the older versions of these would.
Speaker 1
And that would cause people to look at them and be like, oh, what a cute little chatbot. Like what a stupid little thing.
And
Speaker 1
the problem is, like, humans are actually the same. So we have blind spots.
We have literal blind spots, but a lot of the time, like, humans just think stupid things. And like, that's like,
Speaker 1 we're used to that. We think of those errors, we think of those failures as just like, oh, but that's because that's a hard thing to master.
Speaker 1 Like, I can't add eight digit numbers in my head right now, right? Oh, how embarrassing. Like, how, how retarded is Jeremy right now? He can't even add eight digits in his head.
Speaker 1 I'm retarded for other reasons. But
Speaker 1 so the AI systems, they find other things easy and other things hard. So they look at us the same way, being like, oh, look at this stupid human, like whatever.
Speaker 1 And so we have this temptation to be like, okay, well, AI progress is a lot slower than it actually is because it's so easy for us to spot the mistakes.
Speaker 1 And that caused us to lose confidence in these systems in cases where we should have confidence in them. And then the opposite is also true.
Speaker 1 But it's also you're seeing just with like AI image generators.
Speaker 1 Like remember the Kate Middleton thing where people were seeing flaws in the images images because supposedly she was very sick, and so they were trying to pretend that she wasn't.
Speaker 1
But people found all these like issues. That was really recently.
Now they're perfect. Yep.
So this is like within
Speaker 1
the news cycle time. Yeah.
Like that Kate Middleton thing was, what was that, Jamie? Two years ago, maybe?
Speaker 1 Ish?
Speaker 1 Where people are analyzing the images and like, why does she have five fingers and, you know, and a thumb? Like, this is kind of weird. Yeah.
Speaker 1
What's that? It was a year ago. A year ago.
A year ago. A year ago.
Speaker 1 It's so fast.
Speaker 1 I had conversations. So academics are actually kind of bad with this.
Speaker 1 Had conversations for whatever reason, like towards the end of last year, like last fall, with a bunch of academics about how fast AI is progressing. And they were all like...
Speaker 1 poo-pooing it and going like, oh no,
Speaker 1 they're running into a wall, like scaling's running into a wall.
Speaker 1
Oh my God, the walls. There's so many walls.
Like so many of these imaginary reasons that things are... And by the way, things could slow down.
Speaker 1
I don't want to be absolutist about this. Things could absolutely slow down.
There are a lot of interesting arguments going around every which way. But how?
Speaker 1 How could things slow down if there's a giant Manhattan project race between us and
Speaker 1 a competing superpower? So one thing is that has a technological advantage. So there's this thing called AI scaling laws.
Speaker 1 And these are kind of at the core of where we're at right now, geostrategically around this stuff. So what AI scaling laws say roughly is that bigger is better when it comes to intelligence.
Speaker 1 So if you make a bigger sort of AI model, a bigger artificial brain, and you train it with more computing power or more computational resources and with more data, the thing is going to get smarter and smarter and smarter as you scale those things together, right, roughly speaking.
Speaker 1 Now, if you want to keep scaling, it's not like it keeps going up if you double the amount of computing power that the thing gets twice as smart.
Speaker 1 Instead, what happens is if you want, it goes in like orders of magnitude.
Speaker 1 So if you have to, you want to make it another kind of increment smarter, you've got a 10x, you've got to increase by a factor of 10 the amount of compute. And then a factor of 10 again.
Speaker 1 So now you're a factor of 100. And then 10 again.
Speaker 1 So if you look at the amount of compute that's been used to train these systems over time, it's this like exponential, explosive exponential that just keeps going like higher and higher and higher and steepens and steepens like 10x every, I think it's about every two years now.
Speaker 1 You 10x the amount of compute. Now, you can only do that so many times until your data center is like a 100 billion, a trillion dollar, 10 trillion dollar, like every year you're kind of doing that.
Speaker 1 So right now, if you look at the clusters like
Speaker 1 the ones that Elon is building, the ones that Sam is building,
Speaker 1 Memphis and
Speaker 1 Texas, these facilities are hitting the
Speaker 1 hundred billion dollar scale.
Speaker 1 We're kind of in that or tens of billions of buildings actually.
Speaker 1 Looking at 2027, you're kind of more in that space, right? So
Speaker 1 you can only do 10x so many more times until you run out of money, but more importantly, you run out of chips.
Speaker 1 Like literally, TSMC cannot pump out those chips fast enough to keep up with this insane growth. And one consequence of that is that
Speaker 1 you essentially have like this
Speaker 1
gridlock, like new supply chain choke points show up, and you're like, suddenly, I don't have enough chips or I run out of power. That's the thing that's happening on the U.S.
energy grid right now.
Speaker 1 We're literally like, we're running out of like one, two gigawatt places where we can plant a data center. That's the thing people are fighting over.
Speaker 1
It's one of the reasons why energy deregulation is a really important pillar of like U.S. competitiveness.
So
Speaker 1 this is actually something we found when
Speaker 1 we were working on this investigation. One of the things that adversaries do is they actually will fund protest groups against energy infrastructure projects just to slow down, just to like
Speaker 1
just to tie them up in litigation, exactly. And like, it was actually remarkable.
We talked to
Speaker 1 some state cabinet officials, so for in various U.S.
Speaker 1 states, and they're basically saying, like, yep, we're actually tracking the fact that, as far as we can tell, every single environmental or whatever protest group against an energy project has funding that can be traced back to nation-state adversaries who are
Speaker 1
about it. So they're not doing it intentionally.
They're not like, oh, we're trying to, no.
Speaker 1
You just imagine imagine like, oh, we've got like, there's a millionaire backer who cares about the environment. He's giving us a lot of money.
Great. Fantastic.
Speaker 1 But sitting behind that dude in the shadows is like the usual suspects.
Speaker 1 And it's what you would do, right? I mean, if you're trying to tie up the
Speaker 1
fuck with us. Like, just show for us.
You were just advocating fucking with them. So of course they're going to fuck with us.
That's right. That's it.
What a weird world we're living in. Yeah.
Speaker 1 But you can also see how a lot of this is still us like getting in our own way, right?
Speaker 1 We could, if we had the will, we could go like, okay, so for certain types of energy projects, for data center projects and some carve-out categories, we're actually gonna put bounds around how much delay you can create on by lawfare and by other stuff.
Speaker 1 And that allows things to move forward while still allowing the legitimate concerns of the population for projects like this in the backyard to have their say.
Speaker 1 But there's a national security element that
Speaker 1 needs to be injected into this somewhere. And it's all part of the rule set that we have and are
Speaker 1 like tying an arm behind our back on, basically. Aaron Ross Powell, so what would deregulation look like? How would that be mapped out? There's a lot of low-hanging fruit for that.
Speaker 1 What are the big ones? Yeah, so right now, I mean,
Speaker 1 there are all kinds of things around, it gets in the weeds pretty quickly, but like
Speaker 1 there are all kinds of things around if you're going to, so
Speaker 1 carbon emissions is a big thing, right? So yes, data centers, no question, put out, like, have massive carbon footprints. That's definitely a thing.
Speaker 1 The question is, like, are you really going to bottleneck builds
Speaker 1 because of that? And, like,
Speaker 1 are you going to come out with exemptions for, you know, like NEPA exemptions for all these kinds of things?
Speaker 1 Do you think a lot of this green energy shit is being funded by other countries to try to slow down our energy?
Speaker 1 Yeah.
Speaker 1
It's a dimension that was flagged actually in the context of what Ed was talking about. That's one of the arguments that's being made.
And to be clear, though,
Speaker 1 this is also how adversaries operate is
Speaker 1 not necessarily in creating something out of nothing because that's hard to do and it's like fake, right? Instead, it's like there's a legitimate concern.
Speaker 1 So a lot of the stuff around the environment and around like
Speaker 1
totally legitimate concerns. Like I don't want my backyard waters to be polluted.
I don't want like my kids to get cancer from whatever. Like totally legitimate concerns.
Speaker 1 So what they do, it's like we talked about, like you're like waving that rowboat back and forth.
Speaker 1 They identify the the nascent concerns that are genuine and grassroots and they just go like this this and this amplify that which makes sense why they amplify carbon above all these other things you think about the amount of particulates in the atmosphere pollution totally polluting the rivers polluting the ocean that doesn't seem to get a lot of traction carbon does yeah and when you go carbon zero you put a giant monkey wrench into the gears of society but one of the tells one of the tells is also like um
Speaker 1 so you know, nuclear would be kind of the ideal energy source, especially modern power plants like the Gen 3 or Gen 4 stuff, which have very low meltdown risk, safe by default, all that stuff.
Speaker 1
And yet these groups are like coming out against this. It's like perfect, clean, green power.
What's going on, guys?
Speaker 1 And it's because,
Speaker 1 again, not 100% of the time.
Speaker 1
You can't really say that because it's so fuzzy and around the age of 10. A lot of his idealistic people looking for a utopia to get co-opted by nation states.
And not even co-opted. They're funded.
Speaker 1
Just fully sincere. Yeah, just to just fund it.
Amplified in a preposterous way. That's it.
And Al Gore gets at the helm of it. And then that little girl, that how dare you, girl.
Speaker 1 Oh. How dare you take my house? How do you get you?
Speaker 1 Yeah, it's wonderful. It's a wonderful thing to watch play out because
Speaker 1 it just capitalizes on all these human vulnerabilities.
Speaker 1 Yeah, and one of the big things that you can do, too, as a quick win is just impose limits on how much time these things can be allowed to be tied up in litigation.
Speaker 1 So impose time limits on that process just to say, look, I get it. Like we're going to have this conversation, but this conversation has a clock on it.
Speaker 1 Because we're talking to this one data center company and what they were saying, we were asking, look, what are the timelines when you think about bringing new power, like new natural gas plants online?
Speaker 1 And they're like, well, those are like five to seven years out. And then you go, okay, well, like, how long?
Speaker 1 And that's, by the way, that's probably way too long to be relevant in the super intelligence context. And so you're like, okay, well, how long if all the regulations were waived?
Speaker 1 If this was like a national security imperative and whatever authorities, you know, Defense Production Act, whatever, like, was in your favor?
Speaker 1
And they're like, oh, I mean, it's actually just like a two-year build. Like, that's what it is.
So
Speaker 1
you're tripling the build time. We're getting in our own way, like every which way, every which way.
And also, like, I mean, also, don't want to be too,
Speaker 1 we're getting in our own way, but like, we don't want to like frame it as like china is like per they they fuck up they fuck up a lot like all the time um one actually kind of like funny one is around deep seek um so you you know you know deep seek right they they made this like open source model that like everyone like lost their minds about back in in january r1 yeah yeah r1 and uh they're legitimately a really really good team but it's fairly clear that even as of like end of last year and certainly in the summer of last year, like they were not dialed in to the CCP mothership.
Speaker 1 And they were doing stuff that was like actually
Speaker 1 kind of hilariously messing up the propaganda efforts of the CCP without realizing it.
Speaker 1 So to give you like some context on this,
Speaker 1 one of the CCP's like large kind of propaganda goals in the last four years has been
Speaker 1 framing, creating this narrative that like the export controls we have around AI and like all this gear and stuff that we were talking about, look, man, those don't even work.
Speaker 1 So you might as well just give us, why don't you just give us
Speaker 1 a little bit of point? Why don't you purchase them?
Speaker 1
We don't even care. We don't even care.
So that trying to frame that narrative. And they went to like gigantic efforts to do this.
Speaker 1 So I don't know if, like, there's this like kind of crazy thing where the Secretary of Commerce under Biden, Gina Raimondo, visited China in, I think, August 2023.
Speaker 1 And the Chinese basically like timed the launch of the the Huawei Mate 60 phone that had this these chips that were supposed to be made by like export controlled shit for right for her visit.
Speaker 1
So it was basically just like a big like, fuck you. We don't even give a shit about your export controls.
Like basically trying a morale hit or whatever. And you think about that, right?
Speaker 1
That's an incredibly expensive set piece. That's like, you gotta coordinate with Huawei.
You gotta like get the TikTok memes and shit like going in the right in the right direction. All that stuff.
Speaker 1 And all the stuff they've been putting out is around this narrative. Now,
Speaker 1 fast forward to mid-last year,
Speaker 1
the CEO of DeepSeek, the company, back then, it was totally obscure. Like, nobody was tracking who they were.
They were working in total obscurity.
Speaker 1 He goes on this, he does this random interview on Substack.
Speaker 1 And what he says is, he's like, yeah, so honestly, like, we're really excited and doing this AGI push or whatever. And like, honestly, like, money's not the problem for us.
Speaker 1 Talent's not the problem for us, but like access to compute, like these export controls, man,
Speaker 1
do they ever work? That's a real problem for us. Oh boy.
And like nobody noticed at the time, but then but then the whole deep seek R1 thing blew up in December.
Speaker 1 And now you imagine like you're the Chinese Ministry of Foreign Affairs. Like you've been like, you've been putting this narrative together for like four years.
Speaker 1
And this jackass that nobody heard about five minutes ago basically just like shits all over it. And like, you're not hearing that line from many more.
No, no, no, no, no, no.
Speaker 1 They've locked that shit down.
Speaker 1 Oh, and actually,
Speaker 1 the funniest part of this,
Speaker 1
right when R1 launched, there's a random DeepSeek employee. I think his name is like Dia Guo or something like that.
He tweets out, he's like, so this is like our most exciting launch of the year.
Speaker 1 Nothing can stop us on the path to AGI except access to compute.
Speaker 1 And then literally, the dude in Washington, Washington, D.C., who works at a think tank on export controls against China, reposts that on X and goes basically like,
Speaker 1 message received.
Speaker 1 And so, like, hilarious for us, but also, like, you know, that on the backside, somebody got screamed at for that shit. Somebody got magic bust.
Speaker 1
Somebody got, yeah, somebody got like taken away or whatever. Cause like, it just, it just undermined their entire like four-year narrative around these export controls.
Wow. But
Speaker 1 that shit ain't going to happen again from DeepSeek. Better believe it.
Speaker 1 And that's part of the problem with like so the Chinese face so many issues. One of them is to kind of
Speaker 1 another one is the idea of just waste and fraud, right? So we have a free market.
Speaker 1 What that means is you raise from private capital. People who are pretty damn good at assessing shit will look at your
Speaker 1 setup and assess whether it's worth know backing you for these massive multi-billion dollar deals.
Speaker 1 In China, the state, like, I mean, the stories of waste are pretty insane.
Speaker 1 They'll like send a billion dollars to like a bunch of Yahoo's who will pivot from whatever, like, I don't know, making these widgets to just like, oh, now we're like a chip foundry and they have no experience in it.
Speaker 1 But because of all these subsidies, because of all these opportunities, now we're going to say that we are.
Speaker 1 And then no surprise, two years later, they burn out and they've just like lit a billion dollars on fire or whatever, billion yen.
Speaker 1 And like the weird thing is, this is actually working overall, but it does lead to insane and unsustainable levels of waste.
Speaker 1 Like the Chinese system right now is obviously like they've got their massive property bubble that they're that's looking really bad. They've got a population crisis.
Speaker 1 The only way out for them is the AI stuff right now. Like really the only path for them is that, which is why they're working it so hard.
Speaker 1 But the stories of just like billions and tens of billions of dollars being lit on fire, specifically in the semiconductor industry, in the AI industry, Like that's a drag force that they're dealing with constantly that we don't have here in the same way.
Speaker 1 So it's the sort of like the
Speaker 1 different structural advantages and weaknesses of both systems.
Speaker 1 And when we think about what do we need to do to counter this, to be active in this space, to be a live player again, it means factoring in like, how do you, yeah, I mean, how do you take advantage of some of those opportunities that their system presents that that ours doesn't.
Speaker 1 Trevor Burrus, Jr.: When you say be a live player again, like where do you position us?
Speaker 1 I think it remains to be. So right now, this administration is obviously taking bigger swings.
Speaker 1 What are they doing differently?
Speaker 1 So, well, I mean, things like tariffs, I mean, they're not shy about trying new stuff. And
Speaker 1 tariffs are very complex in this space, like the impact, the actual impact of the tariffs and not universally good, but the onshoring effect is also something that you really want.
Speaker 1 So it's a very mixed bag.
Speaker 1 But it's certainly an administration that's willing to do high-stakes, big moves in a way that other administrations haven't.
Speaker 1 And in a time when you're looking at a transformative technology that's going to upend so much about the way the world works, you can't afford to have that mentality we were just talking about with the nervous, I mean,
Speaker 1 you encountered it with the staffers
Speaker 1 when booking the podcast with the presidential cycle, right? Like the kind of like nervous antsy staffer who everything's got to be controlled and it's got to be like just so. Yeah,
Speaker 1 it's like if you like the like, you know, wrestlers have that mentality of like just like aggression like like feed in right feed forward don't just sit back and like wait to take the punch it's not like uh
Speaker 1 one of the guys who who helped us out on this has this saying he's like um fuck you i go first and it's always my turn right that's what success looks like when you actually are managing these kinds of national security issues the mentality we had adopted was this like sort of siege mentality where we're just letting stuff happen to us and we're not feeding in that's something that I'm much more optimistic about in this context.
Speaker 1 It's tough, too, because I understand people who hear that and go like, well, look, you're talking about like escalatory, this is an escalatory agenda. Again, I actually think paradoxically it's not.
Speaker 1 It's about keeping adversaries in check and training them to respect American territorial integrity, American technological sovereignty. Like, you don't get that for free.
Speaker 1
And if you just sit back, you're that is escalatory. It's just.
Yeah, and this is basically the sub-threshold version of like, you know, like the World War II appeasement thing?
Speaker 1 Where back, you know, Hitler was like, was taking,
Speaker 1 he was taking Austria, he was remilitarizing shit, he was doing this, he was doing that, and the British were like,
Speaker 1 okay, we're going to let him just take one more thing, and then he will be satisfied.
Speaker 1 And that just
Speaker 1
maybe I have a little bit of Poland, please. A little bit of Poland.
Maybe the Czechoslovakia is looking awfully fine.
Speaker 1 And so this is basically like they fell into that pit, like that tar pit back in the day, because they're, you know,
Speaker 1 the peace in our time, right? And
Speaker 1 to some extent, like, we, we've, we've still kind of learned the lesson of not letting that happen with territorial boundaries, but that's big and it's visible and it happens on the map and you can't hide it.
Speaker 1 Whereas one of the risks, especially with the previous administration, was like, um, there's these like sub-threshold things that don't show up in the news and that are they're that are calculated.
Speaker 1 Like that basically,
Speaker 1 our adversaries know, because they know history, they know not to give us a Pearl Harbor.
Speaker 1 They know not to give us a 9-11 because historically countries that give America a Pearl Harbor end up having a pretty bad time about it. And so why would they give us a reason?
Speaker 1 to come and bind together against an obvious external like threat or risk when they can just like keep chipping away at at it.
Speaker 1 This is one of the things like we have to actually elevate that and realize this is what's happening. This is the strategy.
Speaker 1 We need to take that like let's not do appeasement mentality and push it across in these other domains because that's where the real competition is going on.
Speaker 1 That's where it gets so fascinating in regards to social media because it's imperative that you have an ability to express yourself. It's like it's very valuable for everybody.
Speaker 1 The free exchange of information, finding out things that you're not going to get from mainstream media, and it's led to the rise of independent journalism. It's all great.
Speaker 1 But also, you're being manipulated left and right constantly. And most people don't have the time to filter through it and try to get some sort of objective sense of what's actually going on.
Speaker 1
It's true. It's like our free speech.
It's like it's the layer where our society figures stuff out. And if adversaries get into that layer, they're like almost inside of
Speaker 1 our brain. And there's ways of addressing this.
Speaker 1 One of the challenges, obviously, is like,
Speaker 1 so, you know, they try, they try to push an extreme opinions in either direction. And it's, that part is actually, it's, it's kind of difficult because while
Speaker 1 the most extreme opinions are like, are also the most likely generally to be wrong, they're also the most valuable when they're right because they tell us a thing that we didn't expect by definition that's true and that can really advance us forward.
Speaker 1 And so, I mean,
Speaker 1 there are actually solutions to this. I mean, this particular thing isn't an area
Speaker 1 we're like too immersed in, but one of the solutions that has been bandied about is like, you know, like you might know like polymarket prediction markets and stuff like that,
Speaker 1 where
Speaker 1 at least, you know, hypothetically, if you have a prediction market around like, if we do this policy, this thing will or won't happen, that actually creates a challenge around trying to manipulate that view or that market.
Speaker 1 Because what ends up happening is like if you're an adversary and you want to not just like manipulate a conversation that's happening in social media, which is cheap, but manipulate a prediction, the price on a prediction market, you have to buy in, you have to spend real resources.
Speaker 1 And if you're, to the extent you're wrong and you're trying to create a wrong opinion, you're going to lose your resource.
Speaker 1 So you actually, you actually can't push too far too many times, or you will just get your money taken away from you.
Speaker 1 So I think like that's that's one approach where just in terms of preserving discourse
Speaker 1 some of the stuff that's happening in prediction markets is actually really interesting and really exciting even in the context of bots and AIs and stuff like that.
Speaker 1
This is the one way to find truth in the system is find out where people are making money. Exactly.
Put your money where your mouth is, right? Proof of work.
Speaker 1 That is what just like the market is theoretically too, right? It's got obviously big, big issues and can be manipulated in the short term.
Speaker 1 But in the long run, this is one of the really interesting things about startups too. Like
Speaker 1 when you run into people in the early days, by definition, their startup looks like it's not going to succeed. That is what it means to be a seed-stage startup, right?
Speaker 1 If it was obvious you were going to succeed,
Speaker 1 you would have raised more money already. Yeah.
Speaker 1 So what you end up having is like these highly contrarian people who like, despite everybody telling them that they're going to fail, just believe in what they're doing and think they're going to succeed.
Speaker 1 And I think that's part of what really kind of shapes the startup founder's soul in a way that's really constructive.
Speaker 1 It's also something that, if you look at the Chinese system, is very different.
Speaker 1
You raise money in very different ways. You're coupled to the state apparatus.
You're both dependent on it and you're supported by it.
Speaker 1 But there's just a lot of different ways, and it makes it hard for Americans to relate to Chinese and vice versa and understand. understand each other's systems.
Speaker 1 One of the biggest risks as you're like thinking through what is your posture going to be relative to these countries is you fall into thinking that their traditions, their way of thinking about the world is the same as your own.
Speaker 1 And that's something that's been an issue for us with China for a long time is, you know, hey, they'll liberalize, right? Like bring them into the World Trade Organization.
Speaker 1 It's like, oh, well, actually, they'll sign the document, but they won't actually live up to any of the commitments. And
Speaker 1 it makes appeasement really tempting because you're thinking, oh, they're just like us. They're just around the corner.
Speaker 1 If we just reach out to the alt branch a little bit further,
Speaker 1 they're going to come around.
Speaker 1 Like a guy who's stuck in the friend zone with a girl.
Speaker 1 Like, one day she's going to come around and realize I'm a great catch.
Speaker 1
You keep on trucking, buddy. One day China's going to be my bestie.
We're going to be besties.
Speaker 1 We just need an administration that reaches out to them and just lets them know, man, there's no reason why we should be adversaries. We're all just people on planet Earth together.
Speaker 1 I mean, like, yeah, I. We're all together.
Speaker 1 Like, I honestly wish that was true.
Speaker 1 That would be so amazing. Maybe that's what AI brings about.
Speaker 1 Maybe AI, maybe super intelligence, realizes, hey, you fucking apes, you territorial apes with thermonuclear weapons, how about you shut the fuck up?
Speaker 1 You guys are doing the dumbest thing of all time, and you're being manipulated by a small group of people that are profiting in insane ways off of your misery. So let's just cut the shit and
Speaker 1 figure out a way to actually equitably share resources because that's the big thing.
Speaker 1 You're all stealing from the earth, but some people stole first and those people are now controlling all the fucking money. How about we stop that? Wow, we covered a lot of ground there.
Speaker 1 Well, that's what I would do.
Speaker 1 If I was super intelligent,
Speaker 1
stopped all that. That actually is like, so this is not like relevant to the risk stuff or to the whatever at all, but it's just interesting.
So
Speaker 1 there's actually theories, like in the same way that there's theories around
Speaker 1 power seeking and stuff around super intelligence, there's theories around like how super intelligences do deals with each other, right?
Speaker 1 And you actually like you have this intuition that which is exactly right, which is that, hey, two super intelligences, like actual legit super intelligences, should never actually like fight each other destructively in the real world, right?
Speaker 1
Like that seems weird. That shouldn't happen because they're so smart.
And in fact, like there's theories around
Speaker 1
they can kind of do perfect deals with each other based on like, if we're two super intelligences, I can kind of assess like how powerful you are. You can assess assess how powerful I am.
And
Speaker 1 we can actually decide, like, well,
Speaker 1
well, if we did fight a war against each other, like, you would have this chance of winning. I would have that chance of winning.
And so, let's just say that.
Speaker 1 Well, it would assess instantaneously that there's no benefit in that.
Speaker 1 And also, it would know something that we all know, which is the rising tide lifts all boats. But the problem is the people that already have yachts, they don't give a fuck about your boat.
Speaker 1
Like, hey, hey, hey, that water's mine. In fact, you shouldn't even have water.
Well, hopefully it's so positive some, right, that even they enjoy the benefits. But I mean, you're right.
Speaker 1 And this is the issue right now.
Speaker 1 And one of the nice things, too, is as you build up your ratchet of AI capabilities, it does start to open some opportunities for actual like trust but verify it, right?
Speaker 1 Which is something that we can't do right now.
Speaker 1 It's not like with nuclear stockpiles where we've had some success in some contexts with like enforcing treaties and stuff like that, sending inspectors in and all that.
Speaker 1 With AI right now, like how can you actually prove that like some international agreement on the use of AI is being observed?
Speaker 1 Even if we figure out how to control these systems, how can we make sure that China is baking in those control mechanisms into their training runs and that we are?
Speaker 1 And how can we prove it to each other without having total access to the compute stack?
Speaker 1
We don't really have a solution for that. There are all kinds of programs like this flex heg thing.
But anyway, those are not going to be online by 2027. And so one hopefully...
Speaker 1 But it's really good that people are working on them because
Speaker 1 sure.
Speaker 1 You want to be positioned for catastrophic success. Like what if something great happens and like, or we have more time or whatever? You want to be working on this stuff that allows
Speaker 1 this kind of control or oversight that's kind of hands-off where... you know, in theory,
Speaker 1 you can hand over GPUs to an adversary inside this box with these encryption things.
Speaker 1 The people we've spoken to
Speaker 1 in the spaces that actually try to break into boxes like this are like, well, that's probably not going to work, but who knows? It might. Yeah.
Speaker 1 So the hope is that as you build up your AI capabilities, basically, it starts to create solutions.
Speaker 1 So it starts to create ways for two countries to verifiably adhere to some kind of international agreement. Or to find, like you said, paths for de-escalation.
Speaker 1 That's the sort of thing that we actually could get to. And that's one of the strong positives of where you could end up going.
Speaker 1 That would be what's really fascinating.
Speaker 1
Artificial general intelligence becomes super intelligence and it immediately weeds out all the corruption. Because hey, this is the problem.
Like a massive doge in the sky. Exactly.
Speaker 1 Like we figured it out. You guys are all criminals and expose it to all the people.
Speaker 1 Like these people that are your leaders have been profiting and they do it on purpose and this is how they're doing it and this is how they're manipulating you.
Speaker 1
And these are all the lies that they've told. I'm sure that list is pretty.
Whoa.
Speaker 1
It almost be scary. Like if you could x-ray the world right now and like see all the you'd want an MRI.
You'd want to get like down to the tissue. Yeah, you're right.
You're probably.
Speaker 1 Yeah, you want to get down to the cellular level. But like it
Speaker 1 would be offshore accounts. Then you'd start finding
Speaker 1
it. There would be so much like the stuff that comes out, you know, from just randomly, right? Just random shit that comes out.
Like, yeah, the, the, um, I forget that, that, that, like, Argentinian.
Speaker 1 I think what you were talking about, like, but the, the Argentinian thing that
Speaker 1
came out a few years ago around all the the oligarchs and the officials. The Meryl Streep thing, yeah.
Yeah, yeah, yeah. Meryl Streep? Yeah, the
Speaker 1 Laundromat there. The Laundromat movie.
Speaker 1
Have you ever seen that? Panama Papers. The Panama Papers.
I never saw that. No, that's a good movie.
Speaker 1
Is it called the Panama Papers? The movie? It's called The Laundromat. Yeah.
Oh, okay.
Speaker 1 You remember the Panama Papers? Do you know? Roughly. Yeah, it's like all the oligarchs
Speaker 1 stashing their cash. Like offshore tax haven stuff.
Speaker 1 Yeah. It's like.
Speaker 1 And like some lawyer or someone basically blew it wide open. And so you got to see like every
Speaker 1 like oligarch and rich person's like
Speaker 1
financial shit. Like every once in a while, right? The world gets just like a flash of like, oh, here's what's going on under the surface.
It's like, whoa, fuck. And then we all go back to sleep.
Speaker 1 What's fascinating is like the unhidables, right? The little things that can't help but give away what is what is happening. Like you think about this in AI quite a bit.
Speaker 1 You know, some things that are hard for companies to hide is like they'll have a job posting that they'll put, they've got advertised to recruit.
Speaker 1 So you'll see like, oh, interesting, like, oh, OpenAI is looking to hire some people from hedge funds.
Speaker 1
Hmm. Like, I wonder what that means.
I wonder what that implies. Like, if you think about all of the leaders in kind of the AI space, think about the Medallion Fund, for example.
Speaker 1
This is like super successful hedge fund. Very famous.
Like, what, the man who broke the
Speaker 1 man who broke the market. It's the famous book about the founder of the medallion fund.
Speaker 1 And like, this is basically like a fund that they make like ridiculous like $5 billion returns every year kind of guaranteed.
Speaker 1 So so much so they have to cap how much they invest in the market because they would otherwise like move the market too much, like affect it.
Speaker 1 The fucked up thing about like the way they trade, and so this is like 20-year-old information, but it's still indicative because you can't get current information about their strategies.
Speaker 1 But one of the things that they were the first to kind of go for and figure out is they were like,
Speaker 1 okay,
Speaker 1 they basically were the first to kind of build what was at the time as much as possible, an AI that autonomously did trading at like great speeds and it had like no human oversight and just worked on its own.
Speaker 1 And what they found was the strategies that were the most successful were the ones that humans understood the least.
Speaker 1 Because if you have a strategy that a human can understand, some human is going to go and figure out that strategy and trade against you.
Speaker 1 Whereas if you have the kind of the balls to go like, oh, this thing is doing some weird shit that I cannot understand no matter how hard I try, let's just fucking YOLO and trust it and like and make it work.
Speaker 1 If you have all the stuff debugged and if you have the whole, if the whole system is working right, that's where your biggest successes are. What kind of strategies are you talking about?
Speaker 1 Oh, I mean, like, so
Speaker 1 I don't know specifically. I'll give you an analogy actually.
Speaker 1 Maybe this will, this will like, so how are, how are are AI systems trained today, right? Oh, so just as a trading strategy. Sorry, I'll just
Speaker 1 basically
Speaker 1 as an example, like you buy, like, you buy the stock
Speaker 1 the Thursday after the full moon and then sell it like the Friday after the new moon or some like random shit like that. That it's like, why does that even work?
Speaker 1 Like, why would why would that even work? So, so to like, to sort of explain why these
Speaker 1 strategies work better, if you think about how AI systems are trained today.
Speaker 1 You basically, very roughly, you start with this blob of numbers that's called a model, and you feed it input, you get an output.
Speaker 1 If the output you get is no good, if you don't like the output, you basically fuck around with all those numbers, change them a little bit, and then you try again.
Speaker 1 You're like, oh, okay, that's better. And you repeat that process over and over and over with different inputs and outputs.
Speaker 1
And eventually, those numbers, that mysterious ball of numbers, starts to behave well. It starts to make good good predictions or generate good outputs.
Now, you don't know why that is.
Speaker 1 You just know that it does a good job, at least where you've tested it.
Speaker 1 Now, if you slightly change what you test it on, suddenly you could discover, oh shit, it's catastrophically failing at that thing. These things are very brittle in that way.
Speaker 1 And that's part of the reason why chat GPT will just like completely go on a psycho binge fest every once in a while if you give it a prompt that has like too many exclamation points and asterisks in it or something.
Speaker 1 Like these systems are weirdly, weirdly brittle in that way. But applied to investment strategies, if all you're doing is saying like, optimize for, like, optimize for returns,
Speaker 1
give it inputs, give it a... Make me more money by the end of the day.
It's like an easy goal, like it's a very like clear-cut goal, right? You can give a machine.
Speaker 1
So you end up with a machine that gives you these very, like, it is a very weird strategy. This ball of numbers isn't human understandable.
It's just really fucking good at making money.
Speaker 1 And why is it really fucking good at making money? I don't know. I mean, it just kind of does the thing.
Speaker 1 thing and i'm making money i don't ask too many questions that's kind of like the so so when you try to impose on that system human interpretability you pay what in the ai world is known as the interpretability tax basically you're adding another constraint and the minute you start to do that you're forcing it to optimize for something other than pure rewards like doctors using ai to diagnose diseases are less effective than the chatbot on its own that's actually related right that's related if you want if you want that system to get good at diagnosis that's one thing okay just just fucking make it good at diagnosis.
Speaker 1 If you want it to be good at diagnosis and to produce explanations that a good doctor
Speaker 1 will go like, okay, I'll use that. Well, great, but guess what? Now you're spending some of that precious compute on something other than just the thing you're trying to optimize for.
Speaker 1 And so now that's going to come at a cost of the actual performance of the system.
Speaker 1 And so if you are going to optimize like the fuck out of making money, you're going to necessarily de-optimize the fuck out of anything else, including being able to even understand what that system is doing.
Speaker 1 And that's kind of like at the heart of a lot of the kind of big picture AI strategy stuff: people are wondering, like,
Speaker 1 how much interpretability tax am I willing to pay here? And how much does it cost? And everyone's willing to go a little bit further and a little bit further.
Speaker 1 And so, so OpenAI actually had a paper where they, or I guess a blog post, where they talked about this. And they were like, look,
Speaker 1 right now
Speaker 1 we have this
Speaker 1 essentially this like thought stream that our model produces on the way to generating its final output. And that thought stream,
Speaker 1 we don't want to touch it to make it interpretable, to make it make sense, because if we do that, then essentially it'll be optimized to convince us of whatever the thing is that we want it to do, to behave well.
Speaker 1 So it's like if you've used
Speaker 1 an OpenAI model recently, Ray O3 or whatever, it's It's doing its thinking before it starts outputting the answer. And so that thinking is,
Speaker 1 yeah, we're supposed to like be able to read that and kind of get it, but also we don't want to make it too legible because if we make it too legible, it's going to be optimized to be legible and to be convincing rather than to fool us, basically.
Speaker 1
Yeah, exactly. Oh, Jesus Christ.
But that's so that's the investment. You guys are making me less comfortable than I thought you would.
Speaker 1 Even after you,
Speaker 1 Jamie and I were talking about it before, like, how bad are they going to freak us out?
Speaker 1
You're freaking me out more. Well, I mean, okay, so I do want to highlight, so, so the game plan right now on the positive end, let's see how this works.
Jesus.
Speaker 1 Jamie, do you feel the same way?
Speaker 1 Yeah.
Speaker 1 I mean, I have articles I didn't bring up that are supporting some of this stuff. Like today, China quietly made some chip that they shouldn't have been able to do because of the sanctions.
Speaker 1
Oh, pull that out. And it's basically based off of their just sheer will.
Okay, so there's some
Speaker 1 there's good news on that one, at least.
Speaker 1 This is kind of a bullshit strategy that they're using. So
Speaker 1
there's, okay, so when you make these insane, like, five nanometers. Let's read the, for people just listening.
China quietly cracks a five nanometer
Speaker 1 without EUV. What is EUV? Extreme
Speaker 1 ultraviolet.
Speaker 1 How SMIC defied the chip sanctions with sheer engineering. Yeah, so this is like
Speaker 1 an espionage.
Speaker 1 So there's
Speaker 1 but actually though.
Speaker 1 So there's a good reason that a lot of these articles are making it seem like this is a huge breakthrough. It actually isn't as big as it seems.
Speaker 1
So, okay, if you want to make really, really, really, really exquisite changes. Look at this quote.
Moore's Law didn't die, Huo wrote. It moved to Shanghai.
Speaker 1 Instead of giving up, China's grinding its way forward layer by layer, pixel by pixel. The future of chips may no longer be written by who holds the best tools, but by who refuses to stop building.
Speaker 1
The rules are changing and DUV just lit the fuse. Boy.
Yeah, so I mean,
Speaker 1
gizmo China. There it is.
Yeah, you can view that as like Chinese propaganda in a way, actually. So
Speaker 1 what's actually going on here is if so the Chinese only have these deep ultraviolet lithography machines. That's like a lot of syllables, but it's just a glorified chip, like it's a giant laser.
Speaker 1 that that zaps your chips to like make the chips when when you're fabbing them so we're talking about like you you do these atomic layer patterns on the chips and shit and like what this uv thing does is it like fires like a a really high power laser laser beam laser beam yeah they attach to the head of sharks that just shoot at the chips sorry that was like an austin powers anyway they they'll like shoot it at the chips and uh that causes depending on how the the thing is is designed they'll like have a liquid layer of the stuff that's going to go on the chip.
Speaker 1 The UV is really, really tight and causes it exactly, causes causes it to harden. And then they wash off the liquid and then they do it all over again.
Speaker 1
Like basically, this is just imprinting a pattern on a chip. So whatever.
It's a really tiny printer. Yeah.
So that's it.
Speaker 1 And so the exquisite machines that we get to use or that they get to use in Taiwan are called extreme ultraviolet lithography machines. These are those crazy lasers.
Speaker 1 The ones that China can use, because we've prevented them from getting any of those extreme ultraviolet lithography machines, the ones China uses are previous generation machines called deep ultraviolet.
Speaker 1 And they can't actually make chips as high a resolution as ours.
Speaker 1 So what they do is, and what this article is about is they basically take the same chip, they zap it once with DUV, and then they got to pass it through again, zap it again
Speaker 1 to get closer to the level of resolution we get in one pass with our exquisite machine.
Speaker 1
Now, the problem with that is you got to pass the same chip through multiple times, which slows down your whole process. It means your yields at the end of the day are lower.
It has errors.
Speaker 1 Yeah, which makes it more costly. We've known that this is a thing.
Speaker 1 It's called multi-patterning it's been a thing for a long time there's nothing new under the sun here china has been doing this for a while um but uh so it's not actually a huge shock that this is happening the question is always when you look at an announcement like this yields yields yields how like what percentage of the chips coming out are actually usable and how fast are they they coming out that determines like is it actually competitive and that article too like this ties into the propaganda stuff we were talking about right if you read an article like that you could be forgiven for going like oh man our expert controls like just aren't working so we might as well just give them up when in reality
Speaker 1 because like you look at the source like the the and this is and this is how you know that also this is like this is one of their propaganda things is like you look at Chinese news sources what are they saying what are the beats that that are like common and you know just because of the way their media is set up totally different from us and we're not used to analyzing things this way but when you read something in like the South China Morning Post or like the Global Times or Xinhua and a few different places like this and it's the same beats coming back, you know that someone was handed a brief and it's like, you got to hit this point, this point, this point.
Speaker 1 And yep, they're going to find a way to work that into the news cycle over there.
Speaker 1
And it's also like slightly true. Like, yeah, they did manage to make chips at like five nanometers.
Cool. It's not a lie.
It's just, it's the same like propaganda technique, right?
Speaker 1 You're not, most of the time, you're not going to confabulate something out of nothing. Rather, like you start with the truth, and then you push it just a little bit.
Speaker 1 Just a little bit, and you keep pushing, pushing, pushing.
Speaker 1 Wow. How much is this administration aware of all the things that you're talking about?
Speaker 1 So they're actually
Speaker 1 right now. They're in the middle of staffing up some of the key positions because it's a new administration still, and this is such a technical domain.
Speaker 1 They've got people there who are like,
Speaker 1 at the kind of working level, or
Speaker 1 really sharp. They have some people now,
Speaker 1 Yeah in in places like especially in some of the export control offices now who are some of the best in the business Yeah, and and and that's that's really important like this is a it's a weird space because so when you want to actually recruit for for you know government roles in this space it's really fucking hard because you're competing against like an open AI like very like low range salary is like half a million dollars a year.
Speaker 1 The government pay scale, needless to say, is like not worth, I mean, Elon worked for for free.
Speaker 1 He can afford to, but still taking a lot of time out of his day.
Speaker 1 There's a lot of people like that who are like, you know,
Speaker 1
they can't justify the cost. Like they can't afford.
Of course, they literally can't afford to work for the government. For the government.
Why would they? Exactly.
Speaker 1
Whereas China is like, you don't have a choice, bitch. Yeah.
Yeah, and that's what they say.
Speaker 1 The Chinese word for bitch is really biting. Like, if you translated that, it would be a real sting.
Speaker 1
Sure. It's kind of crazy because it seems almost impossible to compete compete with that.
I mean, that's like the perfect setup.
Speaker 1 If you wanted to control everything and you wanted to optimize everything for the state, that's the way you would do it. Yeah, but it's also easier to make errors and be wrong-footed in that way.
Speaker 1 And also,
Speaker 1 basically, that system only works if the dictator at the top is just like very competent.
Speaker 1 Because the risk always with a dictatorship is like, oh, the dictator turns over and now it's like just a total dumbass. And now you're the whole thing.
Speaker 1 And he surrounds himself i mean look we just talked about like information echo chambers online and stuff the ultimate information echo chamber is the one around xi jinping right now because no one wants to give him bad news yeah i'm not gonna like i don't you know like and and so and you have this and and this is what you keep seeing right is like with these um uh like um like provincial level debt in in china right which is so awful is like people trying to hide money under imaginary money under imaginary mattresses and then hiding those mattresses under bigger mattresses until eventually like no one knows where the liability is and that and then you get a massive property bubble and any number of other bubbles that are due to to pop anytime right so the longer it goes on like the the the more like stuff gets squirreled away like there's there's actually like a story from the soviet union that's that always like gets me which is so um stalin obviously like purged and killed like millions of people in the 1930s right so By the 1980s, the ruling Politburo of the Soviet Union,
Speaker 1 obviously, obviously, like things have been different, generations have turned over and all this stuff.
Speaker 1 But those people, the most powerful people in the USSR, could not figure out what had happened to their own families during the purchase.
Speaker 1 Like the information was just nowhere to be found because the machine of the state was just like so aligned around like, we just like, we just got to kill as many fucking people as we can and like turn it over and then hide the evidence of it and then kill the the people who killed the people and then kill those people who killed those people.
Speaker 1 It also wasn't just kill the people, right? It was like in a lot of like kind of gulag archipelago style, it's about labor, right?
Speaker 1 Because the fundamentals of the economy are so shit that you basically have to find a way to justify putting people in labor camps and like that's right.
Speaker 1 But it was very much like you grind, mostly, or largely, you grind them to death, and basically they've gone away and you burn the records of it happening. So literally.
Speaker 1 There's like whole towns, right, that disappeared.
Speaker 1 Like people who are like, there's no record, or there's like, or usually the way you know about it is there's like one dude it's like this one dude has a very precarious escape story and it's like if if literally this dude didn't get away you wouldn't know about the entire town that was like wiped out yeah it's crazy jesus christ yeah the stuff that like apart from that though communism works really communism gray has it just hasn't been done right that's right i feel like we could do it right and we have a 10-page plan uh that yeah we came real close
Speaker 1 came real close so close yeah yeah and that's what the blue no matter who people don't really totally understand Like, we're not even talking about political parties.
Speaker 1 We're talking about power structures.
Speaker 1 We came close to a terrifying power structure, and it was willing to just do whatever it could to keep it rolling. And it was rolling for four years.
Speaker 1 It was rolling for four years without anyone at the helm.
Speaker 1 Show me the incentives, right? I mean, that's always the question.
Speaker 1 Yeah. One of the things is, too, like, when you have such a big structure that's overseeing such complexity, right? Obviously, a lot of stuff can hide in that structure.
Speaker 1 And it's actually kind of, it's, it's not unrelated to the whole AI picture.
Speaker 1 There's only so much compute that you have at the top of that system that you can spend as the president, as a cabinet member, like whatever.
Speaker 1 You can't look over everyone's shoulder and do their homework. You can't do founder mode all the way down in all the branches and all the action officers and all that shit.
Speaker 1 That's not going to happen, which means...
Speaker 1 You're spending five seconds thinking about how to unfuck some part of the government, but then the like, you you know corrupt people who run their own fiefdoms there spend every day trying to get them.
Speaker 1
It's like their whole life to like justify themselves. Yeah, yeah.
Well, that's the USAID dilemma. Yeah.
Yeah. Because they're uncovering
Speaker 1
just insane amount of NGOs. Like, where's this going? We talked about this the other day, but India has an NGO for every 600 people.
Wait, what? Yeah. You need more NGOs.
There's 3.3 million NGOs
Speaker 1 in India.
Speaker 1 Do they like bucket, like, What are the categories that they fall into? Who fucking knows? That's part of the problem.
Speaker 1
One of the things that Elon had found is that there's money that just goes out with no receipts. And it's billions of dollars.
We need to take that further. We need an NGO for every person in India.
Speaker 1 We will get to that eventually.
Speaker 1 It's going to work our way. It's the exponential trend.
Speaker 1 It's just like AI. The number of NGOs is
Speaker 1 doubling every year. We're making incredible progress in bullshit.
Speaker 1 The NGO scaling law, the bullshit scaling law. Well, it's just that, unfortunately, it's it's Republicans doing it, right?
Speaker 1 So it's unfortunately the Democrats are going to oppose it even if it's showing that there's like insane waste of your tax dollars. I thought some of the Doge stuff was pretty bipartisan.
Speaker 1 There's congressional support at least on both sides, no? Well, sort of. You know, I think the real issue is in dismantling a lot of these programs that
Speaker 1 you can point to some good some of these programs do.
Speaker 1 The problem is like some of them are so
Speaker 1 overwhelmed with fraud and waste that it's like to keep them active in the state they are like what do you do do you rip the band-aid off and start from scratch?
Speaker 1 Like what do you do with the Department of Education? Do you say why are we number 39 when we were number one? Like what did you guys do with all that money? Yeah, so the there's a create problem.
Speaker 1 There's this idea in software engineering. I actually was talking to one of our employees about this, which is like refactoring, right?
Speaker 1 So when you're writing like a bunch of software, it gets really, really big and hairy and complicated and there's all kinds of like dumbass shit and there's all kinds of of waste that happens in that in that code base there's this thing that you do every you know every like few months is you do this thing called refactoring which is like you go like okay we have you know 10 different things that are trying to do the same thing let's get rid of nine of those things and just like rewrite it as the one thing.
Speaker 1 So there's like a cleanup and refresh cycle that has to happen whenever you're developing a big complex thing that does a lot of stuff.
Speaker 1 The thing is, like the US government at every level has basically never done a refactoring of itself. And so the way that problems get solved is you're like, well, we need to do this new thing.
Speaker 1 So we're just going to like stick on another appendage to the beast and
Speaker 1 get that appendage to do that new thing. And like that's been going on for 250 years.
Speaker 1 So we end up with like this beast that has a lot of appendages, many of which do incredibly duplicative and wasteful stuff.
Speaker 1 That if you were a software engineer, just like not politically, just objectively looking at that as a system, you'd go like, oh, this is a catastrophe.
Speaker 1
And like, we have processes that the industry, we understand how, what needs to be done to fix that. You have to refactor.
But they haven't done that, hence the $36 trillion of debt.
Speaker 1 It's a problem too, though, in all, like, when you're a big enough organization, you run into this problem. Like, Google has this problem famously, famously, Facebook.
Speaker 1 We had friends like Jason. So Jason's
Speaker 1 the guy you spoke to about that.
Speaker 1 So
Speaker 1 he's like a startup engineer. So he works in relatively small code bases, and he
Speaker 1 can hold the whole code base in his head at a time. But when you move over to
Speaker 1 Google, to Facebook, all of a sudden, this gargantuan code base starts to look more like the complexity of the US government, just like very, you know, very roughly in terms of scale, right?
Speaker 1 So now you're like, okay, well, we want to add functionality.
Speaker 1 And so we want to incentivize our teams to build products that are going to be valuable. And the challenge is the best way to incentivize that is to give people incentives to build new functionality.
Speaker 1
Not to refactor, there's no glory. If you work at Google, there's no glory in refactoring.
If you work at Meta, there's no glory in refactoring. Like friends of ours.
There's no promotion, right?
Speaker 1
There's no sentimental. Exactly.
You have to be a product owner. So you have to invent the next Gmail.
You got to invent the next Google Calendar. You got to do the next messenger app.
Speaker 1
That's how you get promoted. And so you've got like this attitude.
You go into there and you're just like, let me crank this stuff out and try to ignore all the shit in the code base.
Speaker 1
No glory in there. And what you're left with is this like...
A, this Frankenstein monster of a code base that you just keep stapling more shit onto.
Speaker 1 And then B, this massive graveyard of apps that never get used. This is like the thing Google is famous for.
Speaker 1 If you ever see like the Google graveyard of apps, it's like all these things that you're like, oh yeah, I guess I kind of remember Google me.
Speaker 1 Somebody made their career off of launching that shit and then pieced out and it died.
Speaker 1 That's like the incentive structure at Google, unfortunately.
Speaker 1 And it's also kind of the only way to do, I mean, or maybe it's probably not, but in the world where humans are doing the oversight, that's your limitation, right?
Speaker 1 You got some people at the top who have a limited bandwidth and compute that they can dedicate to like hunting down the problems. AI agents might actually solve that, right?
Speaker 1 You could like actually have the, you know, a sort of autonomous AI agent that is the autonomous CEO or something go into an organization and uproot all the things and do that refactor, you could get way more efficient organizations out of that.
Speaker 1 I mean, like thinking about like government corruption and waste and fraud, that's the kind of thing where those sorts of tools could be radically empowering, but
Speaker 1 you got to get them to work right and for you.
Speaker 1 We've given us a lot to think about. Is there anything more? Should we wrap this up?
Speaker 1
If we've made you sufficiently uncomfortable. I'm super uncomfortable.
Was the butt tap? Very uneasy. Was the butt tap too much at the beginning? No, that was fine.
No, that was fine.
Speaker 1 All of it was weird.
Speaker 1 It's just, you know, I always try to look at some non-cynical way out of this. Well, the thing is, like, there are paths out.
Speaker 1 We talked about this and the fact that a lot of these problems are just us tripping on our own feet.
Speaker 1 So if we can just, like, unfuck ourselves a little bit, we are actually, we can unleash a lot of this stuff.
Speaker 1
And as long as we understand also the bar that security has to hit and how important that is, like we actually can put all this stuff together. We have the capacity.
It all exists.
Speaker 1 It just needs to actually get aligned
Speaker 1 around an initiative. And we have to be able to reach out and talk.
Speaker 1 On the control side, there's also a world where, and this is actually, like, if you talk to the labs, this is what they're actually planning to do.
Speaker 1 But it's a question of how methodically and carefully they can do this. The plan is to ratchet up capabilities and then scale, in other words.
Speaker 1 And then as you do that, you start to use your AI systems, your increasingly clever and powerful AI systems, to do research on technical control.
Speaker 1 So you basically build the next generation of systems, you try to get that generation of systems to help you just inch forward a little bit more on the capability side.
Speaker 1 It's a very precarious balance, but it's something that like at least isn't insane on the face of it.
Speaker 1 And fortunately, I mean, is the the default path, like the labs are talking about that kind of control element as being a key pillar of their social media.
Speaker 1 But these conversations are not happening in China. So what do you think they're doing to keep AI from uprooting their system?
Speaker 1 So that's interesting.
Speaker 1
Because I would imagine they don't want to lose control. Right.
There's a lot of ambiguity and uncertainty about what's going on in China.
Speaker 1 So there's been a lot of like track 1.5, track 2 diplomacy, basically where you have non-government guys from one side talk to government guys from the other side or talk to non-government from the other side and kind of start to align on like, okay, what do we think the issues are?
Speaker 1 The Chinese are, there are a lot of like freaked out Chinese researchers and who've come out publicly and said, hey, like we're really concerned about this whole loss of control thing, their public statements and all that.
Speaker 1 You also have to be mindful that any statement the CCP puts out is a statement they want you to see.
Speaker 1 So when they say like, oh, yeah, we're really worried about this thing, it's genuinely hard to assess what that even means.
Speaker 1 But
Speaker 1 as you start to build these systems, we expect you're going to see some evidence of this shit before.
Speaker 1 And it's not necessarily, it's not like you're going to build the system necessarily and have it take over the world like
Speaker 1 what we see with agents yeah so i was actually gonna add i think there's a really really good point and um and something where like
Speaker 1 open source ai is like even you know could potentially have an effect here um so a lot of a couple of the major labs like open ai anthropic i think came out recently and said like look um we we're on the cusp our systems are on the cusp of being able to help a total novice like someone with no experience develop and deploy and release a known biological threat.
Speaker 1 And that's like, that's something we're going to have to grapple with over the next few months. And eventually, like
Speaker 1 capabilities like this, not necessarily just biological, but also cyber and other areas, are going to come out in open source.
Speaker 1 And when they come out in open source, basically for anybody to download, for anybody to download and use, and when they come out in open source, like you actually start to see some like some things happen, like some incidents, like some
Speaker 1 major hacks that were just done by like a random motherfucker who just wants to see the world burn, but that wakes us up to like, oh shit, these things actually are powerful.
Speaker 1 I think one of the aspects also here is
Speaker 1 we're still in that post-Cold War honeymoon, many of us, right? In that mentality, like not everyone has like wrapped their heads around this stuff. And the like, what needs to happen is
Speaker 1
something that makes us us go, like, oh, damn, we act, like, we weren't even really trying this entire time. Because this is like, this is the, the 9-11 effect.
This is the Pearl Harbor effect.
Speaker 1
Once you have a thing that aligns everyone around, like, oh, shit, this is real. We actually need to do it.
And we're freaked out, we're actually safer. We're safer when we're all like, okay,
Speaker 1
something important needs to happen. And instead of letting them just slowly chip away.
Exactly.
Speaker 1 And so we, like, we need to have some sort of of shock and we probably will get some kind of shock like over the next few months, the way things are trending. And when that happens,
Speaker 1 then, but I mean, like, it's four years if that makes you feel good. Four years? No, that doesn't make you feel good.
Speaker 1 But because, but because you have the potential for this open source, like it's probably going to be like a survivable shock, right? But still a shock.
Speaker 1 And so let us actually realign around like, okay, let's actually fucking solve some problems for real. And so putting together the groundwork, right, is what we're doing around like,
Speaker 1 let's prethink a lot of this stuff so that like if and when the shock comes. We have a break glass plan.
Speaker 1 We have a plan.
Speaker 1 And the loss of control stuff is similar.
Speaker 1 Like, so one interesting thing that happens with AI agents today is they'll like, they'll get any, so an AI agent will take a complex task that you give it, like find me, I don't know, the like best sneakers for me online, some shit like that.
Speaker 1 And they'll break it down into a series of substeps. And then each of those steps, it'll farm out to a version of itself, say, to execute autonomously.
Speaker 1 The more complex a task is, the more of those little substeps there are in it. And so you can have an AI agent that nails like 99%
Speaker 1 of those steps, but if it screws up just one, the whole thing is a flop, right?
Speaker 1 And so if you think about like the sort of like loss of control scenarios that a lot of people look at are autonomous replication, like the model gets access to the internet, copies itself onto servers and all that stuff.
Speaker 1 Those are very complex movements. If it screws up at any point along the way, that's a tell, like, oh shit, something's happening there.
Speaker 1 And you can start to think about like, okay, well, what went wrong? We get another do, we get another try, and we can kind of learn from our mistakes. So there is this sort of like this picture.
Speaker 1 You know, one camp goes, oh, well, we're going to kind of make this super intelligence in a vat, and then it explodes out and we lose control over it.
Speaker 1 That doesn't necessarily seem like the default scenario right now. It seems like what we're doing is scaling these systems.
Speaker 1 We might unhobble them with big capability jumps, but it's also, there's a component of this that is a continuous process that lets us kind of get our arms around it in a more staged way.
Speaker 1 That's another thing that I think is in our favor that we didn't expect before
Speaker 1
as a field, basically. And I think that's a good thing.
Like, that helps you kind of detect these breakout attempts and do things about them. All right, I'm going to bring this home.
I'm freaked out.
Speaker 1 So thank you. Thanks for trying to make me feel better.
Speaker 1 I don't think you did, but I really appreciate you guys, and I I appreciate your perspective because it's very important and it's very illuminating.
Speaker 1 You know, it really gives you a sense of what's going on. And I think one of the things that you said that's really important is like,
Speaker 1 it sucks that we need a 9-11 moment or a Pearl Harbor moment to realize what's happening so we all come together.
Speaker 1 But hopefully, slowly but surely through conversations like this, people realize what's actually happening. You need one of those moments like every generation.
Speaker 1 Like that's how you get contact with the truth. And it's like, it's painful, but like, the light's on the other side.
Speaker 1
Thank you. Thank you very much.
Thank you. Thank you for the time.
Bye, Brett.