Should we be worried about OpenAI?

A year ago, we saw a stand-off between OpenAI's non-profit board and its leader, Sam Altman. Since then, the board has been reshuffled, Altman has consolidated power, and under his leadership, some strange things have happened. If AI might change the world, and OpenAI is leading the field -- how worried should we be? We check in with tech reporter Casey Newton of the newsletter Platformer and the podcast Hard Fork.
Listen to our previous OpenAI episode (https://pjvogt.substack.com/p/who-should-be-in-charge-of-ai)
Support the show: searchengine.show

To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy

Learn more about your ad choices. Visit https://podcastchoices.com/adchoices


Transcript

This episode is brought to you in part by Odoo.

Running a business is hard enough, so why make it harder with a dozen different apps that don't talk to each other?

One for sales, another for inventory, a separate one for accounting.

Before you know it, you're drowning in software instead of growing your business.

That's where Odoo comes in.

Odoo is the only business software you'll ever need.

It's an all-in-one, fully integrated platform that handles everything.

CRM, accounting, inventory, e-commerce, HR, and more.

No more app overload, no more juggling logins, just one seamless system that makes work easier.

And the best part? Odoo replaces multiple expensive platforms for a fraction of the cost.

It's built to grow with your business, whether you're just starting out or already scaling up.

Plus, it's easy to use, customizable, and designed to streamline every process.

So you can focus on what really matters: running your business.

Thousands of businesses have already made the switch.

Why not you?

Try Odoo for free at odoo.com.

That's odoo.com.

This episode is brought to you in part by Vuori.

A new perspective on performance apparel.

Perfect if you're sick and tired of traditional old workout gear.

Vuori clothes are incredibly versatile and comfortable, perfect for whatever your day brings.

They're designed to look great beyond the gym, whether you're running errands, heading to the office, or meeting up with your friends.

One specific Vuori item I would recommend is the Core Short.

This is the short that started it all for Vuori.

It is one short for every sport, you know, for whatever sports you play.

It's ideal for fitness, running, and training, but also genuinely stylish and comfortable enough to just wear all day.

Vuori is an investment in your happiness.

For search engine listeners, they are offering 20% off your first purchase.

Get yourself some of the most comfortable and versatile clothing on the planet at vuori.com/pjsearch.

That's vuori.com/pjsearch.

Exclusions apply.

Visit the website for full terms and conditions.

Not only will you receive 20% off your first purchase, but enjoy free shipping on any U.S. orders over $75 and free returns.

Go to vuori.com/pjsearch and discover the versatility of Vuori clothing.

Exclusions apply.

Visit the website for full terms and conditions.

Welcome to Search Engine.

No question too big, no question too small.

This week, should we be worried about OpenAI?

So last fall, we reported a story about OpenAI, the leading company in artificial intelligence, led by charismatic co-founder Sam Altman.

The company was famous not just for its runaway success, but also for their unusual ethos and structure.

Rather than simply being a for-profit company, it was a non-profit in charge of a for-profit company.

And that nonprofit could seemingly disable the for-profit company at any point if it decided that the company was acting in a way that was dangerous for society.

It was like a tech company with a doomsday switch built into it.

A recognition both of AI's potential power to reshape society, as well as an understanding perhaps that the last round of technological innovation has not been completely wonderful for the world.

Anyway, our last story was about how OpenAI's non-profit guardians had decided that the company had in fact gone off course.

In November 2023, they deposed their own leader, suddenly and dramatically.

Sam Altman is out as CEO of OpenAI, the company just announcing a leadership transition.

The godfather of ChatGPT kicked out of the company he founded.

It looks like things were over for Sam Altman until his loyalists got on board with a counter coup.

Nearly every rank and file employee at the company signed a petition demanding his return.

90% of the company's 770 employees signed a letter threatening to leave unless the current board of directors resigned and reinstated Altman as head of OpenAI.

Finally, Microsoft, OpenAI's biggest shareholder, also stepped in in support of Sam.

Quickly thereafter, he was reinstated.

Sam Altman back as CEO of OpenAI.

OpenAI posting on X that Sam Altman will now officially return as CEO.

It's also overhauling the board that fired him with new directors, ending a dramatic five-day standoff that's transfixed Silicon Valley and the artificial intelligence industry.

So OpenAI's rebellious board was basically replaced with a compliant one.

Sam Altman, who was temporarily deemed too dangerous to run his own company, instead consolidated power there.

That was a year ago.

In the year since, OpenAI has not turned on an army of Terminators to kill us all, but the company has transformed into a somewhat different-seeming institution, with lots of strange public errors in judgment along the way.

We hoped to talk to someone at OpenAI for this story.

They did not make anyone available for comment.

So instead, I called a tech journalist I know.

Want to see something crazy? Of course I want to see something crazy. Okay.

Oh, I guess I can only go one way. Wait, what are you doing? I just got a new webcam and it follows my face. But it didn't follow your face. Now it's not following it. Damn it.

You stood up and just sashayed out of frame, and I was like trying to figure out what I was supposed to pay attention to.

Casey Newton, founder and editor of the Platformer newsletter, co-host of the Hard Fork podcast, and perhaps a sometimes too-early adopter of exciting technologies.

Casey's the reporter we spoke to last year when everything seemed to be exploding at OpenAI, and he's continued covering all the strange happenings at the company since then.

I wanted to talk to him not because I'm a gossip hound for Silicon Valley, but because I really wondered, if AI is a technology that can really change the world, how concerned should I be about some relatively erratic behavior from the company leading the field?

Casey was happy to fill me in on what had been going on with Sam Altman and his very valuable startup since I last wondered about these things 12 months ago.

Well, I think on the business side, OpenAI has had an incredible year.

The New York Times recently reported that its monthly revenue had hit $300 million in August, which was up 1,700% since the beginning of 2023.

And it expects about $3.7 billion in annual sales this year.

I went back to February and back then it was predicted that OpenAI was going to make a mere $2 billion this year.

So just this year, the amount of money they expected to make doubled.

They further believe that their revenue will be $11.6 billion

next year.

So those are growth rates that we typically see only for kind of once-in-a-generation companies that really manage to hit on something new and novel.

And what about like, how are they actually running the place?

Because I will tell you, my perception as a person who follows this less closely than you is like, I feel like I see as many stories about OpenAI tripping over its clown shoes as I do stories about how the new GPT is slightly better than the one that preceded it.

Like, can you give me like the timeline of the last year, like which stories stuck out to you and how you thought about them?

So I think at a high level, and somewhat to my surprise, Sam Altman changed very little about the way that he led OpenAI in the last year. Like, if the concern that came up last year was that Sam was not being very collaborative, that he was not empowering other leaders, that he was operating this as a sort of very strong CEO who was not delegating a lot of power, I haven't seen a lot of change in the past year.

I have seen him continue to pursue his own highest priorities, like fundraising to build giant microchip fabrication plants, for example, which has been a huge priority for him.

At the same time, there have been stories that have come out along the way that reminded you why people were nervous about the company last year.

One that comes to mind is that it was revealed this spring that OpenAI had been forcing employees when they left to sign non-disclosure agreements, which is somewhat unusual.

But then very unusually, they told those employees, if you do not sign this NDA, we can claw back the equity that we have given you in the company.

So how unusual is that?

Like, how unusual is that in tech for a tech company to say like, like if a person quits Facebook and then they say Facebook was a bad company,

how unusual would it be for Facebook to be like, we are taking back your stock?

It would be impossible.

They don't do that.

They don't do that.

No, they don't do that.

So, this is just extraordinarily unusual.

You know, sometimes with like a C-suite executive or someone very high up in the company, if they maybe let's say they're fired, but the company doesn't want them to run around bad mouthing them to their competitors, they might make that person sign an NDA in exchange for a lot of money.

But this thing was just hitting the rank and file employees at OpenAI, and that was really, really unusual.

And afterwards, Sam Altman posted on X

saying that he would not do this, and that it was one of the few times he had been genuinely embarrassed running OpenAI.

He did not know this was happening and he should have is what he said.

And just to like, like, I feel like journalists have this bias, which is like, we believe in transparency, we believe in disclosure.

Sometimes I think non-journalists care less than we do because we kind of have a rooting interest in transparency and disclosure, but it's also been really confusing, not as a reporter, but just as a human being.

I don't know.

There's a lot of things I worry about.

Most of them are selfish and personal.

Like what happens with OpenAI is maybe in the top 500 or a couple hundred, but there is a part of my mind that worries about it.

And when I worry about it and I try to like, my prediction ledger activates, I'm always like, well, it seems like a lot of people are quitting.

A lot of the people work on the "let's stop this from screwing up the world" team, but they always quit.

And they're like, well, we just have a difference of agreement, can't say more.

And it's really confusing.

Yeah, absolutely.

And, you know, I will say that there has been great reporting over the past year by other journalists who have gotten at what some of those concerns are.

And a lot of them wind up being the same thing, which is we launched a product and I think we should have done a lot more testing before we launched that product, but we didn't.

And so now we have accelerated this kind of AI arms race that we are in.

And that will likely end badly because we are much closer to building super intelligence than we are to understanding how to safely build a super intelligence.

I see.

It's like what I've noticed as a user of AI, I actually noticed the safeguards.

The other day, I saw somebody was making a meme making fun of a celebrity online.

And as often happens these days, I like didn't recognize the celebrity.

And I plugged the picture into ChatGPT and I was like, who's this?

Which is the main way I use ChatGPT is to say, what's this?

And it was like, I don't identify human beings.

I was like, okay.

That's a rule that you're following.

But what you're saying is that in these fast rollouts, smart rules like that, which would stop people from using AI in a bad way, or stop AI from just deciding to do things that are bad, those might be getting overridden.

And that if all these companies are competing with each other to build the most powerful thing the fastest, one company ignoring safeguards means all the other companies ignore safeguards.

Exactly.

And we have seen this time and time again.

I mean, this is really fundamental to the DNA of OpenAI.

When they released ChatGPT, other companies had developed large language models that were just as good, but Sam got spooked that his rival, Anthropic, which had an LLM named Claude, was going to release their product first and might steal all of their thunder.

And so they released ChatGPT to get out in front of Claude.

And that was essentially the starting gun that launched the entire AI race.

And so I think it is fundamental to how Sam sees the world that all of this stuff is inevitable.

And if it's going to happen anyway, all other things being equal, you would rather be the person who did it, right?

And got the credit and the glory and the users and the revenue.

So that is our overarching problem here.

AI developers might care about safety, but in the rush to be first in the field, the company who wins could actually be the company who cares about safety the least, which is why we are talking about worrying incidents from the industry leader, OpenAI.

So one of the incidents was this NDA incident first reported by Vox this May, and the company did backtrack on those NDAs.

An OpenAI spokesperson told Vox, quote, we have never canceled any current or former employees' vested equity, nor will we if people do not sign a release or non-disparagement agreement when they exit, end quote.

A separate incident Casey and I dug into was the Scarlett Johansson incident.

Do you want to tell that story?

Yeah.

So for a while, OpenAI had been working on a voice mode for ChatGPT.

So instead of just typing in a box, you could tap a button on your phone and interact with the model using a voice.

And a movie that has long inspired people in Silicon Valley is the Spike Jonze film, Her.

And in that film, Joaquin Phoenix, who plays the protagonist of that film, talks constantly to an AI companion who is voiced by Scarlett Johansson.

Do you want to know how I work?

Yeah, actually.

basically, I have intuition.

I mean, the DNA of who I am is based on the millions of personalities of all the programmers who wrote me.

But what makes me, me,

is my ability to grow through my experiences.

So basically, in every moment, I'm evolving, just like you.

And I just wanted to say, before you even continue with your story, what is so weird about this movie being a huge inspiration to people in Silicon Valley is it is a cautionary, dystopian film.

I saw this movie.

This is not a joke.

I saw this movie and it upset me so much at the time.

I was talking to a friend afterwards and she said, I think you should probably talk to a psychiatrist and go on antidepressants, which I did for several years.

I'm not on them any longer.

I went on them because of the movie Her.

Oh my gosh.

So strange to me that people saw this movie and were like, ah,

we should have this.

But anyway, they love it.

They want to make it the future.

Well, look, you could take different lessons from Her.

You know, I think a bad lesson to take would be human companionship is worthless at the moment that we invent AI superintelligence because we can just talk to super intelligence all day long and turn our backs on humanity.

That would be a bad lesson to learn.

I think a lot of people in Silicon Valley looked at her and they thought, oh, that's a really good natural user interface.

Like if we could just wear earbuds all day long and you could answer any question you ever had just by saying, hey, her, what's going on with this?

That would be great.

And then in fact, you do start to see the arrival of products like Siri and Alexa and sort of baby steps toward this new world.

So I completely agree with you.

Her is a dystopian film.

It should not be viewed as a blueprint to build the future.

At the same time, I do feel like I see what Silicon Valley saw in it.

Right.

You could see Star Wars and be like, oh, spaceships one person could pilot could be a good idea.

It doesn't mean you're trying to build like TIE fighters to take over Alderaan or whatever.

Right.

And lightsabers are a good idea and we should build it.

I completely agree.

And I still think about it.

So

Her comes out.

Tech people are like, oh, it would be really good to have an AI you could talk to.

That's like one lesson from the movie.

Lightsabers would be good too.

And when OpenAI releases their voice agent, which is sort of, you know, a real life version of part of this movie, the thing that a lot of people notice is that one of the possible voices for the voice agent sounds quite a bit like Scarlett Johansson, the voice from the movie.

Hey, how's it going?

Hey, Rocky.

I'm doing great.

How about you?

I'm awesome.

Listen, I got some huge news.

Oh, do tell.

I'm Williams.

Well, in a few minutes, I'm going to be interviewing at OpenAI.

Have you heard of them?

OpenAI?

Huh?

Sounds vaguely familiar.

Kidding, of course.

That's incredible, Rocky.

What kind of interview?

Not only did the voice sound very much like Scarlett Johansson, it was also presented in this very flirty way.

When they did this demo, it was like, it's a man using an assistant who has the voice of a woman who sounds a lot like Scarlett Johansson.

And she's like, oh, PJ, you're so bad.

You know, I feel like that was pretty, that was like kind of the tone of it.

And

it was sort of like, what are you doing here exactly?

After the product launched, a user on TikTok even asked ChatGPT itself if it believed it was a Johansson clone.

Hey, is your voice supposed to be Scarlett Johansson?

No, my voice isn't designed to replicate Scarlett Johansson or any specific person.

Hilariously, the voice has never sounded more similar to Johansson's to me than when it was denying the resemblance.

Casey said the company itself had also contributed to this confusion.

Sam Altman had primed everyone to think that way because a couple days before they do this demonstration where they show off the voice for the first time, Sam Altman tweets the word her,

or I should say he posts it on X.

And so, of course, when this demo happens, everyone is like, oh.

And so everyone was sort of primed to think, oh, wow, OpenAI has realized Silicon Valley's decade-long dream of making the movie her a reality.

And then what happens?

Then it turned out that Scarlett Johansson was really mad because Sam Altman had gone to her last year and said, hey, would you like to be a voice for this thing?

And she thought about it and she said, no, I don't want to.

And then apparently, just in the couple days before the demo, he'd gone back to her agents and tried to renegotiate this whole thing and said, are you sure you don't want to be the voice for this thing?

And she said no.

And they showed it off anyway.

And they never said, this is Scarlett Johansson, but they absolutely let everyone believe it.

A new controversy tonight in the world of artificial intelligence as one of Hollywood's biggest movie stars says her voice was copied without her consent by one of the most powerful AI companies.

Actress Scarlett Johansson claims OpenAI's ChatGPT mimicked her voice for its latest personal assistant program.

This bizarre moment led to Scarlett Johansson then making the rounds on TV, advocating for legislation to protect the intellectual property, really the identity of actors like herself.

Obviously, we're all waiting and supporting like this, like the passing of legislation to protect everybody's individual rights.

And I think, you know, it's, yeah, we're like, we're still, we're still waiting for it, right?

So like until this is just maybe sort of highlights like how vulnerable everybody is to it.

I think this was the story for me of all the stories that like really stuck with me.

And maybe it was because the message it gave me was a kind of impunity.

And like the promise, as I've understood from OpenAI, has been exactly the opposite of impunity.


And obviously, like of all the choices they make, whether they find a soundalike voice actress and do a voice that sounds a lot like Scarlett Johansson and then kind of smudge the truth, I could see a person getting overenthusiastic and making that mistake.

It's the kind of mistake a podcast would make in its first couple of years.

You're like, oh, geez.

Oh, God, we're really sorry.

But it seems careless.

Also, this is a product where one of people's concerns is the copyright implications, where these AI companies are hoovering up a lot of people's creative work to make their products.

And it just felt like what you expect from a company that doesn't care what you think and wants to do what it wants.

And I don't know if I'm overreading, but it was a moment that kind of like gave me a little bit of future nausea.

I agree with you.

And I think you framed it really well because this is the company that has told us from the beginning, we're working on something very powerful.

We think it could solve a lot of problems.

If it falls into the wrong hands, it could also be extremely dangerous.

And so that's why we're going to come up with a very unusual structure for ourselves and try to do absolutely everything we can do in our power to proceed safely, cautiously, and responsibly.

And so you look at the Scarlett Johansson thing, and like none of that squares with their behavior in that case.

So that was the Scarlett Johansson incident.

Casey told me about another incident, this one from this past August.

Let's call that one the lazy student problem.

I mean, this is a kind of short and funny one, but there was reporting this year that they built a tool that detects when students are using ChatGPT to do their homework, but they won't release it.

How do they explain why they're not releasing it?

As someone who has had to have a conversation with a teenager about why they shouldn't cheat using OpenAI and really stumbled on the part where I was like, listen, it's the wrong thing to do and you probably won't get caught.

And also, yes, probably all your friends are doing it.

And then, like, there were several ellipses of hems and haws while I realized, like, the hole I dug myself into.

Why won't they just release the homework checker?

So, I should say, the Wall Street Journal broke this story, and the statement OpenAI gave them was: the text watermarking method we're developing is technically promising, but has important risks we're weighing while we research alternatives.

We believe the deliberate approach we've taken is necessary given the complexities involved and its likely impact on the broader ecosystem beyond OpenAI.

That is what they said.

The journal sort of made an alternate case, which is that if you can't use ChatGPT to cheat on your homework, you will stop paying the company $20 a month.

It's so funny to imagine what part of their revenue is coming from high schoolers and college kids.

And also, like, I don't know, maybe there's an argument that sort of like the same way we don't need to do long division.

Nobody needs to be able to think or reason in an essay form, but I kind of think people still need to be able to think or reason in an essay form.

I mean, maybe long division is important too.

I don't know.

Right.

So if we're trying to decide if we trust OpenAI to be not just a profitable company, but also a kind of unusually ethical AI standard bearer, their willingness to accept a bunch of grubby $20 bills from high schoolers who want to skip their homework and play more Fortnite, it's not the end of the world, but it is behavior unethical enough that you'd probably fire a babysitter over it.

Casey also told me about an additional incident that had given some people pause.

The investments incident.

This one had to do with Sam Altman personally, specifically the way he's been quietly spending his money, investing in companies like Stripe, Airbnb, and Reddit.

We did learn about Sam Altman's investment empire this year, thanks to some reporting in the Wall Street Journal.

And they really dug into all of the stakes that he has in many startups and found that he controls at least $2.8 billion

worth of holdings.

And he's used those holdings to create a line of debt, which he has from JP Morgan Chase, which gives him access to hundreds of millions of more dollars, which he can put into private companies.

And, like, why is this interesting?

Well,

one, that's kind of a pretty risky gamble to have a lot of your

net worth tied up in like debt that you raised using your venture investments as collateral.

Like that's kind of like a rickety ladder of investments right there.

But it also creates questions around what companies is OpenAI doing deals with?

Are those companies that Sam has investments in?

Of course, you know, Sam doesn't own equity in OpenAI right now.

And so his own wealth is tied up in these investments.

And while nobody really thinks that Sam is doing any of this for the money, there was just kind of also this financial element to what we learned about him this year that I think raised some questions for people.

I feel like one of the things where I feel a little bit disabused is I think a couple of years ago, I hadn't made up my mind, but I felt very willing to entertain the possibility that Sam Altman was a very unusual kind of person, that he didn't seem to be motivated by

accumulating wealth to the same degree as maybe other people are, that he might not be entirely motivated by accumulating power,

that he might just have a vision for a technology that could be really useful or could be really dangerous, and thought he might be the best person to be a steward of that.

I'm not saying I was right then, I'm not saying I was wrong then, but like, do you feel like you have a changed or refined view of what motivates this person who has a lot of power?

I essentially have the same view of his motivations.

And I think the generous version of it is that he is in a long line of Silicon Valley entrepreneurs who thought they could use innovation to solve some of the world's biggest problems and that that is how they want to spend their lives.

I think the less generous version of it is that this person coming out of that tradition

found himself working on this technology that could essentially be like the technology that ends all other technologies.

Because if the thing works out, the thing you've created just creates all other innovation automatically for the rest of time.

And

that

is a position of extraordinary power to put yourself into.

And I do think that he is attracted to the power and the influence that will come from being one of the people that invents this incredibly powerful thing.

After a short break, Casey already mentioned that there have been a lot of senior level departures at OpenAI.

We're going to dive deeper into who left and what they seemed to believe about the company they were quitting.

Plus, we'll look at a fairly worrying warning manifesto published by an ex-OpenAI employee.


This episode of Search Engine is brought to you in part by LinkedIn.

As a small business owner, you don't have the luxury of clocking out early.

Your business is on your mind 24-7.

When you clock out, LinkedIn clocks in.

LinkedIn makes it easy to post your job for free, share it with your network, and get qualified candidates that you can manage all in one place.

I have actually tried posting a job to LinkedIn jobs.

It is exactly as easy as they advertise.

LinkedIn's new feature can actually help you write job descriptions and then quickly get your job in front of the right people with deep candidate insights.

Either post your job for free or pay to promote.

Promoted jobs get three times more qualified applicants.

At the end of the day, the most important thing to your small business is the quality of candidates.

And with LinkedIn, you can feel confident that you're getting the best.

Based on LinkedIn data, 72% of small businesses using LinkedIn say that LinkedIn helps them find high-quality candidates.

Find out why more than 2.5 million small businesses use LinkedIn for hiring today.

Find your next great hire on LinkedIn.

Post your job for free at linkedin.com/pjsearch.

That's linkedin.com/pjsearch to post your job for free.

Terms and conditions apply.

This episode is brought to you in part by Robert Half.

Need contract help for those workload peaks and backlogged projects?

You're not alone.

Robert Half found that 67% of companies surveyed say they will increase their use of contract talent.

That's why their recruiters leverage their experience and use award-winning AI to quickly find the skilled candidates they want.

Learn about their specialized talent in finance, accounting, technology, marketing, legal, and administrative support at Robert Half.

They know talent.

Visit roberthalf.com/talent today.

Welcome back to the show.

So if you, like me, were at best quarter-paying attention to developments at OpenAI the past 12 months, the thing you still may have noticed was just a very unusual number of senior-level people leaving their jobs.

It was the kind of turnover you'd expect to see at a Halloween store in November, not typically at one of the most valuable new American technology companies.

We've already mentioned this, but OpenAI employees were in many cases discouraged from criticizing the company.

And yet, there's still been some evidence about why they left and what they saw before they did.

So we're going to get into that.

This part is not so much an incident as it is a series of incidents, a trend.

Let's call this bit sudden departures.

So the first big one out the door this year is this guy, Andrej Karpathy, who was part of the founding team.

He left for a while to go to Tesla.

He comes back for exactly one year and then leaves.

Okay.

In May, Ilya Sutskever, who was one of the board members who had forced Sam out last year, he announces that he is leaving the company and doesn't really say much about why he's leaving.

But within a month, it's revealed that he's working on his own AI company called Safe Superintelligence and raises a billion dollars just to get it off the ground.

Oh, wow.

Yeah.

He had a guy on his research team named Jan Leike.

So this was somebody else who was trying to make sure that AI is built safely.

He leaves to go to Anthropic to work on that problem there.

Gretchen Kruger, who's another policy researcher, leaves in May.

Then in August, John Schulman, who was one of the members of the founding team, he announced that he was going to Anthropic and he had previously helped to build ChatGPT.

And then Greg Brockman, who is the president of OpenAI and one of its kind of main public-facing spokespeople, he announces that he is taking an extended leave of absence.

Basically, just says he really needs a break.

I'm not entirely sure what happened there.

And then finally, Mira Murati announces that she is leaving in September.

She had also been part of this board drama last year.

And on the same day that she left, it was revealed that the company's chief research officer, Bob McGrew, and another research VP, Barret Zoph, were also leaving the company.

That's just a lot of talent walking out the door, PJ.

And I can say, if you look at the other major AI companies, so like a Google, a Meta, and Anthropic, there has been nothing comparable this year in terms of that level of turnover.

So you have like huge turnover at the top of a company that, in theory, people should want to stay at because it's like leading the industry, it's incredibly valuable, it's the winning team, and people are walking out the door saying they don't want to play for it.

Yeah, totally.

But you know, another really important story about Mira Murati is that before Sam was ousted last year, she had written a private memo to Sam raising questions about his management, and had shared her concerns with the board.

Oh, interesting.

And my understanding is that that had weighed heavily on the board when they fired Sam, because to have the CTO of the company

coming to you and saying, hey, this is a real problem.

Yeah.

That's going to get your attention in a way that, you know, maybe a rank and file employee might not have been able to get their attention.

So we have known for some time now that Mira has had longstanding concerns with Sam's management style.

And so when she finally left, it felt like the end to a story that we had been following for some time.

And so has she said anything publicly that is very decipherable about her reason for exiting?

So, you know, she said there's never an ideal time to step away from a place one cherishes, which I felt like was just an acknowledgement that this seemed like a pretty bad time to step away.

But she said that she wanted the time and space to do her own exploration.

And on the day that we recorded this, The Information reported that she's already talking to some other recently departed OpenAI people about potentially starting another AI company with them.

So

because that is what people do.

Like most people, when they leave OpenAI, they start an AI company that looks shockingly similar to OpenAI, just without Sam.

And why is that?

Well, my glib answer is that the high-ranking people who leave OpenAI seem to feel like the problem with OpenAI is Sam Altman.

And that if you could build AI without Sam Altman, you would probably be having a better time.

I see.

I see.

And then there's this one other guy who left that I want to talk about.

Yeah.

It's this guy named Leopold Aschenbrenner.

Okay.

Have you heard of this guy?

No, I've not.

So he is

quite young.

He's still in his 20s.

He was a researcher at OpenAI.

He is fired, he says, for taking some concerns to the board about safety research.

OpenAI denies this.

But he goes away and he comes back in June and he publishes a 50,000-word document online called Situational Awareness.

Were you aware of situational awareness?

I was not aware of situational awareness.

Okay, well, I'm here to make you aware of situational awareness.

It's this very long document that was the talk of Silicon Valley for a week or so.

And in it, Leopold says, essentially, the rest of you out there in the world don't seem to be getting it.

You don't understand how fast AI is developing.

You don't understand that we're actually running out of benchmarks to have it blow past.

And this technology really is about to change everything just within a few years.

And it sure seems like outside our tiny little bubble here, not enough people are paying attention.

And this document winds up getting circulated all throughout the Biden White House.

It's circulated in the Trump campaign.

And I think Leopold Aschenbrenner, you know, might in a Trump administration have talked himself into a role like leading the Homeland Security Department or something.

But yeah, he was another one of the interesting departures this year.

That's a crazy document.

Like, what do you make of it?

I think that while you might take issue with some of his logic and some of his graphs, and maybe he's hand waving past certain potential limits in the development of this technology, he is getting at something real, which is that it does seem like even though AI is essentially topic number one in tech, it doesn't feel like people are really reckoning with the potential consequences, maybe as much as they should have.

You know, some people may listen to this and say, well, you know, Casey has sort of fallen for all of the hype here.

You know, there remains this contingent of people who believe that this whole thing is a house of cards and that once the successor to GPT-4 comes out, we will see that the rate of progress has slowed.

And in fact, no one is going to invent super intelligence anytime soon.

And all of these things are just going to sort of wash away.

It might just be an effect of who I spend my time with and the conversations that are happening at dinners and drinks in San Francisco every day.

But I am more or less persuaded that we are very close to having technology that is smarter than very smart humans in most cases.

And that if you are the person who controls the keys to that technology, then yes, you will be extraordinarily powerful.

Listening to Casey, I started to imagine a potential world where AI continues to grow at whatever pace it grows at, but where OpenAI squanders its early lead in the industry and just becomes less important over time.

I wanted to know what Casey thought of this possibility.

Do you think there's a world where OpenAI becomes less important to the future of this thing?

And,

you know, we'll end up talking more about these other companies because these other companies have absorbed so much of the talent of that place.

Yes.

And there's actually this really fascinating precedent for this in Silicon Valley.

So we call Silicon Valley Silicon Valley because it was where the semiconductor industry was founded.

And the biggest early semiconductor company was called Fairchild.

And much like OpenAI, in the early days of chip manufacturing, it attracted all the best talent.

But one by one, for various reasons, a lot of people leave Fairchild and they go on to start their own companies, companies with names like Intel.

And there wind up being so many of these companies that they start calling them the Fairchildren because they were born out of this initial company that sort of seeded the ecosystem with talent, made some of the key early discoveries, and then lost all that talent.

My guess is, you probably didn't know the name Fairchild before I said it just now, but you do know the name Intel.

And the question is: do Anthropic and some of these other upstarts become the actual winners of this race?

And OpenAI, 50 years from now, is just a footnote in history.

So, how much should we be worried about OpenAI?

I guess the answer for now seems to be somewhat.

If you think AI really could be powerful, and if you think AI safety is then important, it doesn't really seem like the incentives in a race to dominate the AI market are that aligned.

OpenAI might end up leading the field.

It might end up being a Fairchild, but it's hard to imagine how any AI company could succeed while also moving forward with an abundance of caution.

At least, not without some regulation.

After a quick break, we're going to switch tracks a little bit.

We talked a lot about why this technology may be concerning.

A lot of people agree, so much so that in some quarters of social media, you can get shamed just for using AI products.

But I am one of the people who both worries about AI and uses AI.

And in the last year, as the technology has gotten much more powerful, I find I'm using it in stranger ways.

When we come back, I'm going to talk to Casey a little bit about how he thinks about the ethical concerns here and also about the very bizarre way he has begun talking intimately with a machine.

Let's go listen to some ads.

This episode of Search Engine is brought to you in part by Rosetta Stone.

I'm always threatening to learn a new language.

This month, I would like to learn German.

I spent some time among Germans this summer and I found them to be very friendly and I wanted to be able to communicate in their language instead of mine.

The easy way to learn German is Rosetta Stone.

They've been the trusted leader in language learning for over 30 years with 25 different languages to choose from.

Spanish, French, German, Japanese, and more.

What makes Rosetta Stone work is their immersive method.

No clunky English translations, just a natural build from words to phrases to full sentences, so you can actually start thinking in the language.

So don't wait, unlock your language learning potential now.

Search engine listeners can grab Rosetta Stone's lifetime membership for 50% off.

That's unlimited access to 25 language courses.

for life.

Visit rosettastone.com slash search engine to get started and claim your 50% off today.

Don't miss out.

Go to rosettastone.com slash search engine and start learning today.

This episode of Search Engine is brought to you in part by Perfectly Snug.

I am a hot sleeper.

I do all the right things and still wake up at 3 a.m.

wide awake and sweaty.

Perfectly Snug is a fix for that.

It's a two-inch mattress topper with whisper-quiet fans that actively move heat and humidity away from your body.

The sensors actually work.

You don't have to fiddle with settings in the middle of the night.

And if you, like me, need quick relief, burst mode cools things down in about 10 minutes.

I recommend their dual zone setup so that one side of the bed can run cooler than the other side of the bed.

And there's no water to refill and nothing snaking off the bed.

It's just a slim topper that sips power.

Setup is a few minutes.

They offer a 30-night risk-free trial with free shipping and returns.

If you, like me, are tired of sweating through the night, try Perfectly Snug.

This episode of Search Engine is brought to you in part by Mint Mobile.

You know it's not on my summer bucket list?

Paying a sky-high wireless bill.

If you, like me, do not want to fork over way too much money every month for the same service that you're already getting, you can pay way less thanks to Mint Mobile.

And right now, you can get three months of unlimited premium wireless for just 15 bucks a month.

Switching could not be easier.

You can keep your phone, your number, all your contacts, no contracts, no hidden fees, and no surprise overages.

The best part, you'll save enough money each month to put toward actual fun stuff.

Trips, concert tickets, you name it, instead of wasting it on your phone bill.

This year, skip breaking a sweat and breaking the bank.

Get this new customer offer and your three-month unlimited wireless plan for just $15 a month at mintmobile.com slash search.

That's mintmobile.com slash search.

Upfront payment of $45 required, equivalent to $15 a month.

Limited time, new customer offer for first three months only.

Speeds may slow above 35 gigs on the unlimited plan.

Taxes and fees extra.

See Mint Mobile for details.

Welcome back to the show.

So I wanted to ask Casey about this AI question I've been personally conflicted on and remain somewhat personally conflicted on.

It's the first time in my life I've seen a new digital technology that some people despise so much, they don't want to use it at all.

I see people shaming each other online for using AI at all.

And that feels like a very online response to something, but it doesn't feel like a strategy.

But I also like understand where the impulse to shame comes from.

Like, how do you square it for yourself? People's jobs are important, people having jobs is important. All that money just sort of getting swept into a big pile for OpenAI doesn't feel totally socially advantageous. At the same time, like, I use ChatGPT. It's not replacing anybody's job in my usage of it. But I don't think, as it became more useful, there'd be a point where I would say, it's immoral for me to use it, I'm going to stop.

Yeah, I mean, we have always used software tools, since their advent, to try to automate away drudgery.

And that has traditionally been seen as a good thing, right?

It's nice that you have a spreadsheet to do your financial planning and aren't trying to do it all on a legal pad.

Presumably that brought a benefit to your life, made you better at your job and also helped you do it faster.

And I view the AI tools I use as doing that.

They take something that used to take me a lot of time and effort and now make it simpler.

For just one example, like, I have a human editor who reads my column before I send it out, but I also will, most of the time, just run it through Claude, actually, which is Anthropic's model, and just see if it can find any spelling or grammatical errors.

And every once in a while, it really saves my bacon.

And all it costs me is $20 a month.

So I don't think there is any shame in using these tools as a kind of backstop to prevent you from making a mistake or, you know, from doing some research, because that's just the way that we've always used software and technology.

I understand the anxiety about this.

I understand people who, for their own principled reasons, decide, well, I don't want to use this in my work.

Maybe I'm a creative person.

It's very important to me that all the work that I do is 100% human and has no AI in it.

These are all like very reasonable positions to strike.

But I think that to tell someone, you shouldn't use this particular kind of software because

it is evil, I don't understand that argument.

Can I tell you about another way I've been using AI this year?

Yeah.

And I was actually thinking about you because during one of our conversations, we were reflecting on the fact that there were only a couple of things that people could do to improve their mental health.

And one was therapy and the other was meditation.

And you were saying how frustrating it is to know what the answer is and to not want to do it, right?

It's like, yes, if you started a meditation practice, like that would obviously be very helpful, but then you have to like sit quietly with your thoughts for 20 minutes a day.

Like, obviously that seems horrible.

Yes.

So recently I've been experiencing these feelings of burnout related to my newsletter where I love doing it, but it also feels harder than it has.

And I've been doing it at least three times a week, sometimes as many as five for seven years.

And so I think this is just sort of a natural thing.

And so I felt like I need to maybe break glass in the case of this emergency and try something that I had never previously wanted to do, which was meditate.

Oh, wow.

So I'm only a few days into this.

I don't want to tell you that I've solved anything here.

I did enjoy my first few experiences, but one of the things that I did both in the run-up to and the aftermath of these meditation experiences was to just chat with Claude.

Because Claude lets you create something called a project where you can upload a few documents and you can chat with those documents.

And then you can just also kind of check in with it from day to day and tell it what you're noticing or observing, or if you have questions.

And to me, this was a perfect use case for this technology because I truly know nothing about meditation.

I, you know, people have talked to me about it.

I've, you know, done it a couple of times before, but I've never read a book about it.

I've never talked with any of my friends at length about it.

So I'm just as fresh as you can be.

And the level of knowledge that is inside Claude, which was, of course, just stolen from the internet, you know, without paying anyone for their labor, is actually quite high.

And it was able to help give me a good start.

And then afterwards, I could come back and say, well, you know, here's what I noticed.

And I struggle with this thing.

It was like, oh, well, you might want to try that.

Or, hmm, you know, I sort of wish it was a little bit more like this.

And it would say, oh, well, then you might want to try this other kind of meditation.

Tell me more about that.

Okay.

Yeah.

Sure.

Here's everything.

And I was talking earlier about like, what will it be like when you have an AI coworker?

It's like, well, I have a meditation coach that I pay 20 bucks a month for.

Some people are laughing.

Some people are saying, Casey, you can meditate for free.

You don't need a coach.

I get that.

I am somebody who likes to like pay for access to expertise and I feel like I have it.

And first of all, I am going to go meditate after this because I want to recenter myself.

And I didn't get to do it this morning.

I don't know if I'm still going to be doing this in like two or three weeks.

But if I am, I think the AI is actually going to be part of that story because it's giving me a place where I can go after these experiences to reflect.

Again, I hear people saying, Casey, you realize that journals exist.

You could like write this down.

Yeah, I know.

I get what you're saying.

What I'm telling you is this is a journal that talks back to you.

This is a journal that is an expert about the thing that I'm journaling about that is holding my hand through a process.

None of this existed two years ago, right?

Totally.

The challenge of talking about any of this stuff is

when the rate of change in your day-to-day is high, sometimes it feels quite obvious.

Other times it becomes this weird blind spot where you don't even realize that the conditions around you have changed, right?

This is what Leopold is getting at in situational awareness is like, you need to stop and collaborate and listen, as Vanilla Ice once said, right?

You need to do what you're doing on on this podcast, PJ, which is like, it's been a year.

What happened?

This is the right question, right?

You know, we were talking so much earlier about these AI critics that are like, it's all hype.

It's constantly wrong.

Screw these Silicon Valley bros, right?

And I totally get all of the animus and resentment that powers that.

But something that those folks do, to their detriment, is they tune out everything that is happening in AI, because they think, I've already made up my mind about this stuff.

I already know that I hate everyone involved.

I hate the output and I hope it chokes and dies, right?

Like, this is how these people feel.

And again, I get it.

I understand all of those emotions.

What I'm saying to you, though, is you actually have to look around.

You have to engage.

You have to keep trying out these chatbots every two or three months, if only to get a sense of what they can do now that they couldn't do two to three months ago, because otherwise you are going to miss what is happening here.

And it is wild.

It is wild.

It's, to me, it's really interesting that it is.

in a strange way, a tool you are using to know yourself.

And I don't mean to overstate it.

Like it is also just a journal that is talking to you and giving you the pointers, but like, I find that interesting.

I also feel like for whatever reason, I think because there's such a culture of like, we don't want to be enthusiastic about technology anymore, particularly this technology, which you don't want to end up looking like the person who was gleefully celebrating the arrival of our doom.

And so there's kind of a weird lack of just like 10 years ago, I think had this come out, there'd be a tech press that would say, here's 10 new ways you can use this.

Here's how I'm using it.

Kind of nobody wants to be seen doing that.

So no one's using it.

I had a thing happen a couple of days ago.

I think Sam Altman, he was retweeting someone whose suggestion was, like, ask your agent from all of our interactions, what is one thing that you can tell me about myself that I may not know about myself?

And I asked it this question, and I got an answer.

And it wasn't like a fortune cookie horoscope, like, vague enough that it would apply to anybody and maybe be useful anyway.

Like it was a real thing that I hadn't noticed.

It was like the preponderance of your questions to me are about trying to put structure and precision around processes in your life that do not have them.

You are constantly asking how long things should take and how much time to allocate.

It is clearly something you're struggling with.

Wow.

Which is the kind of thing like a good friend would tell me.

Yeah.

And it is not an experience I've had with software.

And I don't know, like I find myself in a moment where I'm trying to hold everything in my head at the same time to say, these are technologies we should be skeptical of.

And to your point, keep paying attention to.

And also

in the time before this possibly changes the world in ways I might not enjoy,

pretty useful.

Absolutely.

Absolutely.

I mean, it's interesting because like, I think you're right.

I think we've always used software to automate drudgery.

And one way you could think of that is like, it does eliminate human labor.

And the people who have drudgy jobs and I've had drudgy jobs aren't like, I'm so glad that I've been freed to pursue something else.

They're, like, upset that their source of income is being taken from them.

Why do you think AI is the

place where these anxieties finally come to a head?

Because in previous eras of software, whatever skepticism people had about it, this skepticism actually feels new to me.

That's a great question.

I think there's a lot that goes into it.

I think that we're living at a time where there's kind of a low-water mark in trust in our technology companies.

I think the social media era really destroyed most of the goodwill that Silicon Valley had in the world, because people see technologies like Facebook, Instagram, and TikTok as mainly just things that steal our time and reshape the way we relate to each other in ways that are obviously worse.

And the whole time, the people building these technologies insist that actually they're saving the world and that there's nothing wrong with them.

Yeah.

And so when another generation comes along and says, oh, hi, we are actually here to invent God.

There's going to be a lot of skepticism about that.

And, you know, it is the AI companies themselves who told us: this thing will create massive job loss, it will create massive social disruption, we may have to come up with a new way of organizing society when we are done with our work.

That is something that every CEO of every AI company believes, PJ, is that we will have to reorganize society because essentially capitalism won't make sense anymore.

So, most people will agree that like they don't like change, you know, change is bad.

And when they say they don't like change, it usually means, well, I have a new manager at work.

The change that these people are talking about is that capitalism won't exist anymore.

And it's unclear.

Like, it's so funny, because, I mean, I'm speaking a little bit broadly here.

Many people in our generation are like, I would love for capitalism to not exist anymore.

By which they don't mean robots do the work now and robots are your boss and robots take all the money and you're hoping for maybe universal basic income.

Like no one meant for capitalism to go away like that.

Yeah, yeah, exactly.

And nobody wanted capitalism to go away and be replaced with something where Silicon Valley seemed to be in control of everyone's future.

Right.

And so we continue to pay attention to this, because while who knows how true these promises will turn out to be, the idea that this is socially disruptive seems like a safe bet.

Yeah, you know, maybe something else to say that's important is that the way all of this is unfolding is anti-democratic, right?

No one really asked for this, and the average person does not get a vote, right?

If you're just like an average person, you don't want AI to replace your job.

There's really nothing you can do about it.

And so I think that actually breeds a ton of resentment against these companies.

And while the government is starting to pay attention, at least here in the United States, they're being very, very gentle about everything.

And so if you wanted to change the course of AI, it's not actually clear how you would go about that.

And so I think that's another really big reason why people often resent it.

And it's funny, there's always a part in my mind when you see these stories of all these departures to say, okay, that's like the internal drama of a company that I do not have an internal view on.

And it might matter, it might not.

I would have to know more than I know to know.

But to your point, if part of the problem is that these technologies can restructure society, and we have a democratic society but the way they're restructuring it is not democratic, then the fact that even within these companies they're becoming more like monarchies does seem like something that's worth paying attention to.

Yeah, yeah, absolutely.

Casey Newton, he writes the newsletter Platformer.

Go check it out.

You can also listen to him every week on the podcast Hard Fork.

We're going to keep using you to monitor this.

Yeah, let me just say I'm going to keep paying attention to it.

Casey, thank you.

You're welcome.

Search Engine is a presentation of Odyssey and Jigsaw Productions.

It was created by me, PJ Vogt, and Sruthi Pinnamaneni, and is produced by Garrett Graham and Noah John.

Fact-checking this week by Mary Mathis.

Theme, Original Composition, and Mixing by Armin Bazarian.

Our executive producers are Jenna Weiss-Berman and Leah Reis-Dennis.

Thanks to the team at Jigsaw, Alex Gibney, Rich Perello, and John Schmidt.

And to the team at Odyssey, J.D.

Crowley, Rob Morandi, Craig Cox, Eric Donnelly, Kate Rose, Matt Casey, Maura Curran, Josefina Francis, Kurt Courtney, and Hilary Schove.

Our agent is Oren Rosenbaum at UTA.

If you would like to help support the making of this show, if you would like to vote for our existence, you can sign up for a premium subscription at searchengine.show.

You'll get ad-free episodes of the show as well as the occasional bonus episode.

You can follow and listen to Search Engine with PJ Vogt now for free on the Odyssey app or wherever you get your podcasts.

Thanks for listening.

We'll see you next week.

This episode of Search Engine is brought to you in part by ChiliPad.

Will my kids sleep tonight?

Will I wake up at 3 a.m.

again?

Am I going to wake up hot and sweaty because my partner leaves the heat on?

Those are the thoughts that bounce around my head when I can't sleep too.

And let's face it, sleep slips away when you're too hot, uncomfortable, or caught in a loop of racing thoughts.

But cool sleep helps reset the body and calm the mind.

That's where Chilipad by SleepMe comes in.

It's a bed cooling system that personalizes your sleep environment.

So you'll fall asleep faster, stay asleep longer, and actually wake up refreshed.

I struggle with sleep constantly, and I have found that having a bed that is cool and temperature controlled actually really does make a huge difference.

ChiliPad works with your current mattress and uses water to regulate the temperature.

Visit www.sleep.me slash search to get your ChiliPad and save 20% with code search.

This limited offer is available for search engine listeners and only for a limited time.

Order it today with free shipping and try it out for 30 days.

Even return it for free if you don't like it with their sleep trial.

Visit www.sleep.me slash search and see why cold sleep is your ultimate ally in performance and recovery.