Empires of AI With Karen Hao

51m

Ed Zitron is joined by Karen Hao, author of the book Empire of AI, to talk about the confusing world of OpenAI and the associated mind-poison of artificial general intelligence.

Book link: https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/
(all the buy links are available there)
Karen’s website: https://karendhao.com/
Twitter: https://x.com/_karenhao
LinkedIn: https://www.linkedin.com/in/karendhao/
BlueSky: https://bsky.app/profile/karenhao.bsky.social

YOU CAN NOW BUY BETTER OFFLINE MERCH! go to https://cottonbureau.com/people/better-offline use free99 for free shipping on orders of $99 or more.

You can also order a limited-edition Better Offline hat until 5/22/25! https://cottonbureau.com/p/CAGDW8/hat/better-offline-hat#/28510205/hat-unisex-dad-hat-black-100percent-cotton-adjustable 

Newsletter: wheresyoured.at 
Reddit: http://www.reddit.com/r/betteroffline 
Discord chat.wheresyoured.at 
Ed's Socials - http://www.twitter.com/edzitron 
instagram.com/edzitron
https://bsky.app/profile/edzitron.com 
https://www.threads.net/@edzitron 
email me ez@betteroffline.com

See omnystudio.com/listener for privacy information.

Transcript

This is an iHeart podcast.

On Fox One, you can stream your favorite news, sports, and entertainment live, all in one app.

It's effing raw and unfiltered.

This is the best thing ever.

Watch breaking news as it breaks.

Breaking tonight, we're following two major stories.

And catch history in the making.

Gibby, meet Freddy!

Debates, drama, touchdowns. It's all here.

Hi, I'm Morgan Sung, host of Close All Tabs from KQED, where every week we reveal how the online world collides with everyday life.

There was the six-foot cartoon otter who came out from behind a curtain.

It actually really matters that driverless cars are going to mess up in ways that humans wouldn't.

Should I be telling this thing all about my love life?

I think we will see a Twitch streamer president maybe within our lifetimes.

You can find Close All Tabs wherever you listen to podcasts.

There's more to San Francisco with the Chronicle.

More to experience and to explore.

Knowing San Francisco is our passion.

Discover more at sfchronicle.com.

Want a bourbon with the story?

American-made wild turkey never compromises.

Aged in American oak with the darkest char, our pre-Prohibition-style bourbons are distilled and barreled at a low proof to retain the most flavor.

Most bourbons aren't.

Well, we aren't most bourbons.

Wild Turkey 101 bourbon makes an old-fashioned or bold-fashioned for bold nights out or at home.

Now that's a story worth telling.

Wild Turkey, trust your spirit.

Copyright 2025, Campari America, New York, New York.

Never compromise.

Drink responsibly.

Coolzone Media.

Hello, and welcome to Better Offline.

I'm your host, Ed Zitron.

As ever, remember you can buy Better Offline merchandise.

Link is in the episode notes.

Today I'm joined by Karen Hao, the author of the upcoming book Empire of AI, which tells the story of OpenAI and the arms race surrounding large language models.

Karen, thank you for joining me.

Thank you so much for having me, Ed.

So, you describe the progress of these models and these companies as a kind of colonialism.

Can you get into that for me?

Yeah, so if you think about the way that empires of old operated during the very long history of European colonialism, they were essentially taking resources that were not their own and exploiting massive amounts of labor, as in not paying workers or paying them extremely small amounts of money.

And they were doing this all under a civilizing mission, this idea that they were bringing modernity and progress to all of humanity when in fact, what was actually happening was they were just fortifying themselves and the empire and the people at the top of the empire.

And everyone else that kind of lived in the world had to live in the thrall of whatever the people at the top decided, based on their whims and their self-serving agenda.

And that's essentially what we're seeing with empires of AI today.

where they are taking data that is not their own, they're laying claim to it, they're taking land, they're taking energy, they're taking water, they are exploiting massive amounts of labor, both labor that goes into the inputs for developing these AI models, but also exploiting labor in the sense that they are ultimately creating labor-automating technologies that are eroding away people's labor rights as we speak.

And they're doing it under this civilizing mission: they are doing it for the benefit of all of humanity.

And what I say in the book is empires of AI, they're not as overtly violent as empires of old.

And so maybe that can become confusing and people think, oh, well, it can't be that bad.

But the thing is, we've had 150 years of social and moral progress.

And so empires of the modern day are going to look different from the way that empires of old operated.

And when you look at the actual parallels, there are just so many extraordinary parallels between the basis of empire building back then and now, that I think it is fundamentally the only frame I have found that really helps understand and grapple with the sheer scope and scale of what is actually happening within the AI industry.

One theme from the book I also noticed was that despite all of the backs and forths between all the people, very rarely did a product actually come out.

Like, it was interesting.

There seemed to be all these conversations about research and all of these things they were saying, but it usually just ended with some sort of release, and then they kind of just moved on.

Yeah.

It almost makes me wonder what they're working on half the time.

Yeah, you know, I think the fact that you notice that in the book is a product of two different things.

One is that I finished writing the book before a lot of the most recent product releases came out.

Right.

That's just the nature of writing things on the time scale of books.

Yeah, it's not fun.

Yeah. I froze the manuscript in the early days of January, right before DeepSeek, right before Stargate, right before a string of other releases.

So that's one: through most of OpenAI's history, it was really more focused on research conversations, and it's only been in the last year or so that it's really dramatically shifted much more to talking about product.

But the second reason is that I personally, like, that is my expertise, I came up in AI reporting covering the research. And so I wanted to focus on that in the book and really unpack it, especially because there's not as much reporting on the research these days.

And I wanted to kind of track that history and the internal conversations that happen when people say that they're developing so-called AGI.

And you talk about in 2019 in the book that your rose-colored glasses got knocked off by a story.

What was it that really made you start being suspicious of these companies?

Yeah, so 2019 was when I started covering OpenAI, and I embedded within the company for three days in August of 2019 to profile what had then become a newly minted capped-profit nested within a nonprofit.

And I think the thing that really started tipping me off, it was actually really small things initially.

The first thing was they publicly professed to be this bastion of transparency, and they were going to share all their research with the world.

And they had accumulated a significant amount of goodwill on the basis of this idea. And they were, not literally fundraising, but they had amassed a lot of capital on the basis of this idea.

And when I started embedding within the company, I realized that they were incredibly secretive.

Like, they wouldn't allow me to see anything or talk to anyone beyond very strictly sanctioned conversations.

And even in those conversations, I would notice that researchers were giving side-eye to the communications officer every other sentence because they were worried about stepping into a lane that was considered proprietary.

And I was like, wait a minute, why are there things that are proprietary?

And why are people being secretive if all this is supposed to ultimately be shared with the public?

But the other thing was when I was talking with executives. Like, the very first interview that I had was with Greg Brockman and Ilya Sutskever, the CTO and chief scientist.

And I just asked them very basic questions: like, why do you think we should spend this much money on AGI and not on something else?

And can you articulate for me, what does AGI look like?

What would you even want AGI to do?

And, you know, part of their origin story as a company was that they wanted to build good AGI first, before the bad people built bad AGI.

So I was like, well, what would bad AGI look like as well?

Or like, what are the harms that are coming out of some of this rapid AI progress?

And they weren't able to answer any of those questions.

And that was when I thought, hold on a second.

Like, I thought that this was a non-profit meant to counter some of the ills of Silicon Valley.

One of the ills being that most companies end up being thrown boatloads of cash without clearly articulated ideas about what they're going to do with that cash.

And here I am in this meeting room trying to just ask the most basic question, like the most boilerplate stuff that there should be some kind of answer to.

And they can't even answer that.

So it seems like it is actually very much just an animal of Silicon Valley.

This is not actually something different from what we're seeing with the rest of the tech industry.

It felt as well, there was a comment, and forgive me for forgetting exactly where, where it was like, our secrets could be written on a grain of rice, or something like that.

Yeah.

And I have to admit, as I read it, I got this weird feeling like, does anyone actually have any IP?

Because when you actually look at the conversations they're having, and you are likely privy to more here, it felt like they wouldn't talk about what they were doing at all. I say this as someone who's run a PR firm and written a lot about the Valley. It feels like they'd say more, but no one wanted to say anything, not even secrets. It's like nobody really knew. And you even described some of the managerial stuff in there, like no one really knew what was going on anyway. It just feels like a remarkably disorganized company considering the scale.

Yeah, so I think early on at OpenAI, it was completely disorganized, in the sense that they had no idea. You know, they decided, okay, we're going to build this AGI thing, but then they were like, what does that even mean? We have no idea.

And there weren't real managers at the company either, because they had just gathered up a bunch of researchers from academia, and they didn't really have much of a sense of how to organize themselves other than as a traditional academic lab, where there's a professor and grad students.

And, I mean, academia, you know, has its function, but that ultimately wasn't the right structure for trying to move a group of people towards a similar goal.

And over time, OpenAI did start cleaning itself up a little bit.

It did start restructuring itself.

It started focusing more on GPT models because they hit on that in around 2018, 2019.

But still, because there's no clarity about its mission and ultimately what it is trying to build, you end up with a lot of rifts within the organization over this very fundamental question.

People fundamentally disagree about what OpenAI is, they disagree about what AGI is, they disagree about what it means to ensure that something benefits all of humanity.

And I think because there was all this confusion, or all these different interpretations of these basic tenets of the organization, people also just wouldn't quite clearly articulate to one another what they were doing.

It wasn't necessarily that they were trying to be secretive to one another.

It was more just that they weren't really on the same page.

And this eventually became less and less true, in the sense that as Sam Altman installed himself as CEO and started really asserting a particular path for AI progress, they started having research documents that explicitly articulated: we are a scaling lab, we are going after scale.

Wait, how long did it take them to put those documents together, though? What year, about?

I think their first research roadmap was in 2017.

So it was one and a half years into the company.

Not so bad.

Into the nonprofit.

Yeah.

Yeah.

So I will admit there is another colonial thing that stood out.

Well, two, specifically.

One, it definitely feels that there are a lot of unintelligent cousin types who were put in there because their mate was there throughout the company.

But two, it's this kind of religious view around AGI, this kind of nebulous justification for just about everything. I was disappointed, and I understand why you mentioned him, that Yudkowsky was in there. I think the LessWrong people, this is a personal belief, there's just no need to mention them again for anyone. I think that Yudkowsky, anyone who writes a 600,000-word Harry Potter book should be put in prison, including J.K. Rowling.

But it feels like there really is this belief system that's pushed throughout this industry, which mirrors colonialism, mirrors the very Judeo-Christian push of the British and many other colonial entities.

Yeah, absolutely.

So one of the things that I was most surprised by when reporting the book is, I had seen all the divisions around boomers and doomers, people saying AI can bring us to utopia, or people saying AI can kill us all.

I really did think initially that it was just rhetoric and that it was just a tool for accruing more power.

And the thing that surprised me most was how many people I met that genuinely, deeply believed in both, especially the doomer ideology.

Like I was interviewing people whose voices were quivering because they were talking about their anxiety around the potential end of the world.

And that was a very sincere reaction.

And you're exactly right, that is a huge parallel with empire building in the past: empires need to have an ideology that convinces themselves that they are ultimately doing something for the benefit of the world.

So in the past, when they had the civilizing mission, we're bringing this to the world, it also wasn't just rhetoric. It was a deep-seated religious and spiritual and scientific belief that they were doing something that left everyone better off.

I mean, the origins of the BBC in England were religious indoctrination on some level.

I admit I'm surprised to hear the quivering voices stuff. Because I think that, again, this is a personal opinion, Yudkowsky, I think, is full of, I think a lot of those LessWrong guys are full of shit. I think they're doing it not for the bit, but it's the same kind of, I don't know, horse-trading shit that people do around anything.

It's like, we don't have anything to believe in, so let's all agree on this.

But it's interesting to hear that people are, I don't know how to put this, actually believing this crap, even though it doesn't feel like there's any real evidence.

Yeah, well, the analogy that I've started using is, I really feel like OpenAI is Dune, where, you know, in Dune, there is a mythology that is created by a certain group of people with full understanding that they're creating a mythology, right?

Right.

But then as they start to embody and act out this mythology, not only do many, many people who didn't know it was created come to believe it, but also the people who created it come to believe it themselves.

And I think this is essentially exactly what is happening within AI, with the ideologies, is that maybe there was at some point someone who was more aware that there was some kind of rhetorical trick that they were playing around really propagating this kind of belief.

But it is not, we're not at that point anymore.

Like there are lots and lots of people who genuinely believe these things.

And I think it's self-perpetuating because when you believe it, you look for signs of it.

And you research things that would suggest more evidence for your belief.

And so they're kind of continuing to reinforce their beliefs.

And the more these AI models have progressed, the stronger these beliefs have become, because whether you believe AI will bring utopia or dystopia, there is an abundance of evidence that you can point to now to reinforce your own, yeah, exactly, your own starting point.

And so it's sort of like a microcosm of society today, where, you know, the average person no longer encounters information that can change their mind.

It just continues to entrench whatever they already believed before.

Do you believe Sam Altman believes this shite?

Do you think he believes in AGI?

Is he part of it?

It's really interesting, because no matter who I interviewed, and no matter how long or how closely they worked with Sam Altman, not a single person was able to fully articulate what his beliefs are.

And I think that is very much by design.

Isn't that beautiful?

That's... yeah. And they would explicitly say this too.

They would call out, I'm not actually sure what he believes.

And this was the most consistent thing that people said about him.

There's more to San Francisco with the Chronicle.

More to experience and to explore.

Knowing San Francisco is our passion.

Discover more at sfchronicle.com.

Be honest, how many tabs do you have open right now?

Too many?

Sounds like you need Close All Tabs from KQED, where I, Morgan Sung, doomscroll so you don't have to.

Every week, we scour the internet to bring you deep dives that explain how the digital world connects and divides us all.

Everyone's cooped up in their house.

I will talk to this robot.

If you're a truly engaged activist, the government already has data on you.

Driverless cars are going to mess up in ways that humans wouldn't.

Listen to Close All Tabs, wherever you get your podcasts.

Every business has an ambition.

PayPal Open is the platform designed to help you grow into yours with business loans so you can expand and access to hundreds of millions of PayPal customers worldwide.

And your customers can pay all the ways they want with PayPal, Venmo, Pay Later, and all major cards so you can focus on scaling up.

When it's time to get growing, there's one platform for all business, PayPal Open.

Grow today at PayPalopen.com.

Loan subject to approval in available locations.

So I've shopped with Quince before they were an advertiser and after they became one.

And then again before I had to record this ad. I really like them.

My green overshirt in particular looks great.

I use it like a jacket.

It's breathable and comfortable and hangs on my body nicely.

I get a lot of compliments.

And I liked it so much I got it in all the different colors, along with one of their corduroy ones, which I think I pull off.

And really, that's the only person that matters.

I also really love their linen shirts too.

They're comfortable, they're breathable, and they look nice.

Get a lot of compliments there, too.

I have a few of them, love their rust-colored ones as well.

And in general, I really like Quince.

The shirts fit nicely, and the rest of their clothes do, too.

They ship quickly, they look good, they're high quality, and they partner directly with ethical factories and skip the middleman.

So you get top-tier fabrics and craftsmanship at half the price of similar brands.

And I'm probably going to buy more from them very, very soon.

Keep it classic and cool this fall with long-lasting staples from Quince. Go to quince.com/better for free shipping on your order and 365-day returns. That's Q-U-I-N-C-E dot com slash better. Free shipping and 365-day returns. Quince.com/better.

I really noticed as well, if you read your book and you really look, you actually can't get much of an idea of who Sam Altman is at all.

And in fact, you can't work out why he's brilliant at all.

And I've read a lot of stuff about Sam Altman.

The long and short of it, I can understand, is that he's got good psychology and he's really charming.

Everyone talks about the psychology and the charming.

And it's just really, it is so, he is such a bizarre man. Like, everything about him, the way that people talk about him, is so strange.

Yeah, so this is what I sort of concluded about why he's been able to pull off what he does.

He is a once-in-a-generation talent when it comes to storytelling.

And he has a loose relationship with the truth, which is a really powerful combination.

And he shines most when he's talking to small groups of people or in one-on-one meetings.

And what he says is more tightly correlated with what that person needs to hear rather than what he believes, which is part of the reason why people say ultimately, like, they don't really know what he believes because he doesn't really indicate it.

And so I think that is what makes him incredibly persuasive.

And he is really good at understanding people and what they need and what they want. And, you know, he's well-resourced, so he's able to then deliver to them what they need and want.

And what I realized is, with that kind of talent, you would inherently be incredibly polarizing as a figure, because if a person agrees with you, you're the best asset in the world for what they want to achieve. You're incredibly persuasive. You're able to get the resources. You can do for that person exactly what they want you to do. But if you disagree with this person, they become the greatest threat ever, because they are so persuasive, you fear that they're going to be able to carry out exactly what you don't want them to carry out.

And so that kind of boils down to why he's just such an enigmatic and extremely polarizing person.

It really depends on whether or not someone agrees or disagrees with him.

He also doesn't seem that smart.

I don't know.

He seems quite good at talking to people, but when I hear him talk, he doesn't seem that eloquent.

And it makes me wonder if perhaps Sam Altman is a symptom of a greater problem: that so much of our power structure and money rests on someone making a decision based on the last intelligent, or intelligent-seeming, person they talked to.

Yeah, I think our society is also just, we worship people that are wealthy.

Yeah.

And so even if he's not saying something that is convincing you in real time, he has all of the indicators that this person has been remarkably successful, and you should listen to what he says, because then that will make you successful too.

And so I think that is part of the kind of mythos around him: that if you can join up with him, it will greatly enrich you. And there's a lot of evidence to suggest that, too; there have been plenty of people that have allied themselves with Sam Altman and become much richer for it.

And so whether people are joining up with him because they necessarily 100% agree with his ideology or his actions, or more because they ultimately get to benefit from that alliance, I think is, yeah.

Almost feels like it's people connecting with other people to see how far they can get, far more than AI. Because one of the other things I really noticed, when you were telling the story of Sam Altman getting fired in November 2023, as much as people wanted to pretend otherwise, they kept bringing up the tender. And to explain for the listeners, the tender was that OpenAI had plans to let people sell their stock.

It really felt like that was more the primary concern than any loyalty to Altman.

I wouldn't say it was the primary concern, but, I mean, yeah, it really depended on who it was.

Yeah, it's hard to tell exactly, just to be clear.

Yeah, exactly.

Like, every employee sort of had a different calculus that ultimately led them to revolt against the board and want Altman back.

And there were different calculuses among Microsoft and investors.

But one of the key things that I think is necessary to understand, just why there is so much seeming loyalty around Altman in general, is that he is very, very good at establishing relationships with money involved, where he is the linchpin to the other person accessing that money. And so the tender offer is a perfect example of this. Employees ultimately realized that Altman is really good at fundraising, and whether or not an employee believes in the AGI thing, they all agree that OpenAI ultimately needs an enormous amount of capital. And many of them are also doing it in part because they can then guarantee their own financial future. And so with Altman gone, it became increasingly clear that OpenAI wouldn't survive.

And so that's not something that a lot of employees wanted.

It became clear that even if OpenAI did survive, they would be a lot more shortchanged in terms of the amount of capital they would be able to get, because he would no longer be their champion for that.

And also, the tender offer could potentially go away and they would not be personally enriched as well.

And, you know, many of them, the thing to also understand is, it's very expensive to live in the Bay Area.

And so for the worker, for the employees in the moment, losing the tender offer wasn't like, oh no, I'm going to lose like my retirement.

It was also this sense of like, I'm literally going to lose my financial security right now.

Like, I already bought, you know, a house based on the fact that...

You mentioned that.

Yeah, there was someone who put plenty of money down, and the tender offer dissolving was a real financial stress.

It was a threat to their financial existence in the world.

I imagine so.

But the way it was framed in public was that this was some big loyalty thing, where everyone was like, I love OpenAI and Sam Altman. And that just didn't feel like what was happening. People seemed angry at Ilya, but they just seemed angry because something changed, rather than, like...

Yeah.

Yeah.

I mean, I think there were certainly people within the company that did feel loyalty to Altman, and that was one of their primary motivating things. But by and large, when I was interviewing lots of employees to understand what ultimately led them to rally around Sam, there were actually more practical concerns than just personal loyalty driving the thing, whether it was financial, or whether it was just, I really believe in AGI and I don't want OpenAI to go away because it'll scrap all of the work that we've done.

And of course, the narrative, I mean, OpenAI themselves have been pushing again and again and again this idea that all of the employees, or whatever, more than 90% of the employees, ended up signing the petition. And they cite this number as just a show of solidarity and loyalty to Altman.

But then, of course, if you look at the track record after the board crisis, how many people have subsequently left the company once things stabilized and there wasn't a crisis situation, that is, I think, much more revealing of how much loyalty people have to him.

So tell me about Jack Clark.

So Jack Clark is the, what is he at Anthropic?

He's one of the co-founders at Anthropic now.

Yeah.

Without putting you on the spot, it kind of feels like Jack Clark has gotten off a little easy with everyone, I'm not even saying you. You're one of the few people.

Jack Clark worked at The Register, which is an extremely critical IT publication, and now he's out at conferences saying that AI agents will control everything.

He just feels like one of the weirdest characters in this whole story.

Yeah.

Yeah, it's interesting.

Like, when I went to profile OpenAI in 2019, the first person I reached out to was Jack, because I had spoken to him before, and he had until recently been the communications head for OpenAI before shifting into a policy role.

And I remember when I was at the company, I was like, hey, Jack, do you think you can actually give me more access to the stuff that I'd like to see? Like, I was literally asking, because they wouldn't let me go beyond the first floor. There were three floors, and I couldn't see the second and third.

I'm so sorry.

There's computers there.

It's not like they have an AI machine.

Come on.

Yeah.

And I was like, hey, like, can I just literally just go up to the second floor?

Can you take me up and just let me walk?

And he looked at me with this deep, deep side-eye, like, no, Karen, you absolutely cannot. And I was like, shit, you're a former journalist. You know how this works.

You know, the last article that Jack wrote in 2014 for The Register was "Shock and AWS: The fall of Amazon's deflationary cloud." Just as Jeff Bezos did it to the books and CDs business, Amazon's rivals are now doing it to him. He used to write these very grouchy El Reg-style pieces.

It's just so weird.

Yeah, I mean, I think this is, like...

But it gets back to that thing you were saying about the kind of the doctrine.

Yeah.

So, because I started covering this company in 2019, I talked with people then that I then talked to again for the book, and I had this unique opportunity to track how people's individual beliefs evolve when they are steeped in this world.

And there were people that I was talking to back then that were like, I don't really believe in this AGI thing.

And by the time I was talking to them for the book, they were like, AGI all the way, like this is a genuine, true belief.

And I think there's a lot of reasons for this transformation.

Like one is that you are only talking to people who believe this.

So you're just constantly in this environment where you're not talking to people who are challenging or testing that belief, and instead just continuously being reinforced in this echo chamber.

But I think there's another thing that I came to realize while reporting the book: for people who really, really believe that AGI is possible, that we will actually be able to replicate human intelligence, it's not a belief about what AI is capable of.

It is a belief about what human intelligence is.

And a lot of people in the AI world today have this belief that human intelligence, or everything in the world, is inherently computable.

And all you have to do is just amass more data and more compute, and eventually, you will get to that thing.

You will be able to replicate that thing.

And when you are in this kind of environment, where you have people constantly arguing to you that this is why AGI is possible, because everything's computable, and then you see the rapid clip of your models being able to do more and more functions that people outside in society previously would have suggested were not possible.

It's sort of this self-reinforcing belief machine.

Like it just manufactures more.

Like you said, every sign kind of gives you...

Yeah.

Exactly.

And one of the things that I also have, just as a general realization, not just with OpenAI, but in general when I'm covering tech companies: I kind of have a policy for myself to do a little bit of a detox after I spend a lot of time talking with them.

Because it is really like when you're talking with all these people that exist in this world, you do adopt their worldview and you do adopt their talking points and you do see things through their eyes.

And usually I then have to let myself just be in the actual world for a little bit and remind myself of what the average person thinks and what the average person values, and remind myself that there are problems beyond Silicon Valley's borders that just look fundamentally different from what they conceive the world to be.

And so I did that with the OpenAI profile. I profiled Facebook years ago, and I did that with Facebook. And I did that with the book, where I would interview lots of people in these big batches and really do my best to try and occupy their shoes for a couple weeks, a month.

And then I would spend my time explicitly not interviewing open AI people, just interviewing other people that were out in the world to just like reset my brain chemistry a little bit.

Because it really does feel that way.

It really does feel like you kind of get absorbed into this singular worldview, and then you have to remind yourself of the greater reality.

I'm gonna ask this question without getting you in too much trouble.

Do you think that's what happened with Kevin Roose?

Because it's really, I know, I don't want to put you in a situation where you have to talk ill of someone, but that interview was bizarre.

So I've been, I think, really lucky in that I've covered the tech industry almost always not living in SF.

I agree.

That's a great thing.

And, you know, I've been able to figure that out in my career.

And that was an explicit decision.

Like, I did not want to live in SF anymore.

Like, I had lived in SF.

I wanted to get out.

And I think this is a really hard balance for any journalist: you need to decide whether you're close to your subject and immersed in their world, and therefore might be co-opted by their world, or whether you exist outside of that world, and therefore you don't have as much access, you don't get to go to the parties where you hear tips all the time.

And that's 100% been a tension in my career as well. I constantly feel like I'm missing things because I'm not in SF. But the thing that I think I have gained from not being in SF is just a continued connection to non-SF worlds. You know, I notice when I spend too much time with SF people that my vocabulary changes.

How I talk about things changes, because people in SF talk about things in a very particular way. You know, they're talking about optimization hacking, and they have a particular utilitarian maximization mindset around how they do things and why.

And I have to then kind of step away from that and reset my language, even when I sit down to write a story that's for the greater public.

And so, yeah, I think this is something that's just challenging in general: it's really hard to not get too close to your sources, and to not start adopting everything that they say as your own, especially if you are literally living with them.

And yes, in some cases, right? Could be anyone.

But it does feel like there is a kind of almost word contagion or thought contagion with this stuff, with AGI, that it pickles certain people.

They hear about the idea of the autonomous computer and it drives them mad.

And everything, to your point, they start chasing it, even though there's not really any evidence that we can do it.

Yeah, I mean, when I first started covering AI, I also was so enthralled by the premise. Before I covered AI, when I first started covering it at MIT Technology Review, I did not realize that AI was actually trying to recreate human intelligence. I thought it was just, you know, I mean, it is a marketing term.

But even then, this sounds like it might be a definition that people would argue over.

Right, right, right.

But I mean, in the original, when AI was coined as a term in 1956, John McCarthy did explicitly coin the term both to attract funders, so as a marketing term, and because he was trying to describe what he wanted the field to do, which was to recreate human intelligence.

And that is just such an evocative thing, to think, wait a minute, could we actually do that? And what would that mean? And there's so much philosophical, it's just a philosophical minefield. And if you are someone that loves philosophizing, you can just sit there for days and days and days and think, holy crap, what would that be?

What would that look like?

How could we do that?

Times columns.

And so I really got pulled into just the sheer enigma of that, and also the power of that. Of, oh, wow, if we could do this. Imagine being in the shoes of someone who's actually doing the AI research and thinking to yourself, I might be contributing to the recreation of my own intelligence, or of our collective intelligence.

Like, that's intoxicating, you know?

It feels like philosophy marketing, though. Because I just look at this stuff and I hear about this stuff and I always think, okay, but what are you doing today?

And then I look at what they're doing today and I say, that doesn't seem anything like that.

And I actually don't think that there's anything harmful in discussing AGI.

What pisses me off is how many people don't seem to be discussing AGI.

They discuss the ramifications on the edges.

Because something that Casey Kagawa, friend of the show, has brought up a number of times with me is, no one seems to be discussing personhood.

Like, if we make a conscious computer, do we give it a social security number?

That's actually really funny, because I think there are too many people discussing personhood.

I don't see them in the black.

Well, perhaps they're not doing it in the media because AGI gets brought up as this vague term and then they go, huh?

What do you think?

This could be good, could be bad.

Millions, trillions?

I don't fucking know.

And it's just so bizarre. Because I've been covering, I personally really only started looking hard at AI in 2023, which is my own fault.

And perhaps that has also colored my belief system, because I kept looking for the thing, the thing that everyone was freaking out about. And you look, and it's like, we've extrapolated from large language models that AGI will come out.

But actually, that kind of leads me to another question.

Sam Altman's a confusing person.

What about Dario Amodei? What do you think?

Does he believe in AGI, you think?

You think he's a true believer?

I do think Dario is a true believer, yeah.

And I do think that he's a true doomer as well.

Like, he genuinely has a lot of anxiety around AGI creating the end of the world.

Whether or not, and also, what does it mean to be a true believer? You know?

Does he believe the bollocks he's saying?

Because he claims that AGI will be here by 2027 or quicker.

Yeah.

So that then is when he's just wearing his CEO hat and he needs to say something.

When you say wearing the CEO hat, can you be a bit more specific?

I think Dario is an interesting case in that he originally, he has a different background than Sam.

You know, Sam is a VC, or an investor, that then became the CEO of an AI company, and his skill is storytelling, right?

That's what all investors do.

Dario was a scientist. He studied, I think, computational neuroscience. And he had a kind of deep fascination with this idea of, how do you figure out how the brain works, and how do you replicate it? He didn't initially call himself an AI researcher in, I think, the early days of his academic career, but he was essentially studying a lot of the things that hardcore AI researchers study: the brain, computer science, all of these things. And so I think he has this fascination. And I don't know this for sure, but I would guess that he is of the category of people that I described, that believes that everything in the world is fundamentally computable, and human intelligence is computable.

And so he does really believe that if he can figure it out, like AGI will happen.

But then he has to run a company and a company can't just do science.

And actually, one of the things that people mentioned to me about their criticisms of Dario when he initially ran Anthropic was that he didn't care about the business at all.

Like he seemed to have no interest in anything other than the science.

And there were people within the company that were like, this is not going to work as a company if you cannot literally do business, if you cannot raise money.

And so, I didn't actually report this out, but my guess is what happened is Dario then had to shift to not just being a scientist, but also being a businessman.

And he had to learn how to storytell.

And I think, honestly, he tries to match, you know, Sam Altman is a really successful storyteller, able to accrue a lot of capital. I think Dario tries to match the stories that Altman tells in order to try and accrue the same amount of capital, and maybe to take capital away, because ultimately they are personal arch-nemeses, and Anthropic and OpenAI are competitors. They hate each other so much.

Is it just because Sam Altman doesn't like that Wario walked off?

I don't know. I can't figure out whether Sam genuinely ever hates anyone, but people certainly hate him.

And Dario hates him for sure.

Does Dario hate him?

I think it goes back to this idea of, do you agree or do you disagree with Sam about something fundamental, and therefore do you perceive him as the greatest asset ever or the greatest threat ever? And in Dario's case, he fundamentally disagreed with Sam about certain key decisions around safety, AI safety, the doomer brand of AI safety, where Dario was the one that decided to blow up the amount of computer chips that were being used to train a single model.

So he did that from GPT-2 to GPT-3.

They went from a couple dozen chips to 10,000 chips all at once to train GPT-3.

And Dario wanted to do this because he wanted to create an internal lead, in order to then have some time to do research on this model that would emerge from 10,000 chips. And Altman does this thing where he will ally with people. So he was like, oh, 10,000 chips, that's a brilliant idea, we should totally do that. But then once it was done, he sort of shifted to, okay, now we should release it, or now we should give it to Microsoft, because we have this deal with Microsoft.

We need to make them happy.

We need to give them some kind of really exciting deliverable to justify the first $1 billion they gave us, so that they can then give us more money.

More billions.

And so it was actually both Altman and Amodei together that I would credit as being responsible for dramatically accelerating the AI race, because Amodei was the one that decided we need to blow it up to 10,000 chips.

And then Altman was the one that persuaded him, yes, you should do it because I agree with you.

And then kind of flipped to, okay, now we need to get this out in the world as quickly as possible.

And Amodei, I think, feels like Altman, as a politically savvy person, was able to use his intelligence against him to achieve exactly the opposite of what he ultimately wanted, which was to slow things down rather than accelerate things.

This sounds like colonial Britain.

It's just white guys getting angry at each other over tiny grievances from years ago.

Here's a weird question.

Well, first of all, do any of them seem happy in any way?

Do any of them seem to enjoy anything?

I ask this seriously.

I genuinely do.

They seem miserable.

That is the consistent theme from all of them, Jack Clark included.

They all seem pissed off, scared, paranoid, weird. It's like they're being driven mad by this.

Yeah, I think that is an entirely accurate description. I think you cannot not be driven mad in this world where you have convinced yourself that the stakes are the future of humanity. You know, like, how do you not buckle under that pressure?

I mean, skill issue.

I think I'd be fine.

Give me $1 billion.

But it does make me think that right now, and Bloomberg came out with a headline just as we were recording this, that Stargate, SoftBank's Stargate, is hitting snags over tariff fears. They can't seem to raise the money.

I wonder if we're going to see new levels of paranoia, anxiety with all of these people as the AI trade starts to collapse a bit.

Yeah, this has been an interesting theme that I've picked up on with the way that Altman operates: when he starts sounding incredibly optimistic in public about the future of OpenAI, the future of AGI, the future of all these things, it means that something is going wrong.

Like, it will become the opposite signal, because he will roll out the most grandiose language when he needs to cover up something that is really stressing him out.

And so we're seeing this happening again more recently, where, I mean, in the beginning of the year, he had this post where he was like, we are no longer just building AGI, we are now on our path to building superintelligence.

Like, he was sort of upping the ante, saying, okay, continue to hold on, continue to stay with the program, because we are about to supercharge, turbocharge this ten times more.

And it was at the time when OpenAI was starting to really feel weak, because it had just lost a string of executives, including some of its most important ones, Ilya Sutskever and Mira Murati.

And it was under massive amounts of scrutiny, and it wasn't making the clip of research progress that it needed to solve what it itself defined as the key challenges to reaching AGI.

And so, yeah, I think the more it becomes clear that people are no longer really buying into this AI future that they've painted, the stronger they're going to paint it, the more they're going to roll out this rhetoric.

I mention this because there was a tweet from April 15th where he said, the OpenAI team is executing just ridiculously well at so many things right now.

The coming months and years should be amazing.

So I'm going to guess things were bad.

Yeah, yeah.

I mean, like, cool.

I now know how to read Sam's tweets.

Yeah.

This was a thing that, consistently, every time I was reporting on things that were going really badly, sure enough, Altman would roll out some really crazy statement in public.

So that tweet's actually a perfect example, because he says things will be awesome in the coming months and years.

It's always like, hold on.

Stick with me.

Yeah, stick with me.

Things might look a little bit weird now, but oh boy, just you wait for what I'm seeing inside, that you need to just have patience for.

You know, it's always that kind of thing. May 7th: pictures, great to see progress on the first Stargate in Abilene with our partners at Oracle today.

Will be the biggest AI training facility in the world.

The scale, speed, and skill of the people building this is awesome.

And then this story comes out a week later.

Bloody hell.

This tells.

So, final question.

How do you feel?

What do you think of this Fiji Simo, forgive me if I messed up the name there? What do you think about her becoming CEO of applications and Sam Altman doing something else?

Yeah, so I haven't actually done any reporting on this myself, but my sense of what's happening is, Altman's not a good manager.

He's not actually, like, he's a fundraising CEO.

He's not someone that can run the company.

And I think probably what happened is that Mira Murati was the one that was actually doing the day-to-day operations and running the show. After she left, he then, you know, made a big show of, I'm going to be much closer to the work now. I'm going to do the day-to-day running.

And probably his time is up in doing that, because in my book, I report, like, talk a lot about how he's not good at that. He's not good at making decisions. He's very conflict-averse.

So what he does is he'll just agree with every single team, even when they're disagreeing with one another, and it causes chaos, and it causes rifts, where the person at the top is not able to make a decision and say, we are all going to go this way now, and some of you are going to be unhappy. He does not do that. And so it just leads to a lot of tumult, chaos. Part of the reason why OpenAI has had so many product releases and features and things like that, I think, is actually also a product of this. All of these product releases and features are different teams working on these things, and he doesn't want to tell any team no.

And he doesn't want to tell any team, like, we're going to have this person release first and have their moment in the sun, and then we're going to work a little bit more, and then you get your moment in the sun, you know, a year later.

He's like, everyone gets their moment in the sun.

Like, we're going to do releases.

We're going to do, like, 12 days of shipmas. We're going to just release...

That was insane.

...in 12 days.

So, 12 days of shipmas, for the listeners that don't remember, that was when they claimed they were going to release 12 new products.

12 new products over the 12 days.

And it wasn't 12 new, it was like four new products.

And some of them were like an API for an API.

It's just so strange.

It feels like while you're describing an empire, you're also describing this kind of very petty underpinning.

It really does mirror British colonialism.

Right.

You've got a guy who doesn't want to rule, who wants the power of a ruler and all the assets, but someone else, ideally in another country, should take responsibility.

Yeah.

Truly awful.

I mean, this is the paradox of empire: it feels inevitable because it feels so strong, and it also feels so weak when you start to look under the surface.

It was a really great book, and I really appreciate your time.

Where can people find you?

I am on LinkedIn and Bluesky these days, and also on my website, karendhao.com.

And yeah, reach out.

I have a contact form there and I try to respond to as many people as possible.

Wonderful.

Thank you so much for joining us.

I'm, of course, Ed Zitron.

You'll now get a thing I recorded over a year ago, that people still complain about, about where you can find stuff.

Thank you for listening.

Thank you for listening to Better Offline.

The editor and composer of the Better Offline theme song is Matt Osowski.

You can check out more of his music and audio projects at mattosowski.com.

M-A-T-T-O-S-O-W-S-K-I dot com.

You can email me at ez@betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter.

I also really recommend you go to chat.wheresyoured.at to visit the Discord and go to r/betteroffline to check out our Reddit.

Thank you so much for listening.

Better Offline is a production of CoolZone Media.

For more from CoolZone Media, visit our website, coolzonemedia.com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

This is an iHeart podcast.