Anthropic’s C.E.O. Dario Amodei on Surviving the A.I. Endgame
Listen and follow along
Transcript
Over the last two decades, the world has witnessed incredible progress.
From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.
Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.
Invesco QQQ, let's rethink possibility.
There are risks when investing in ETFs, including possible loss of money.
ETF risks are similar to those of stocks.
Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.
Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at invesco.com.
Invesco Distributors Incorporated.
So I went to two AI events this weekend.
It was sort of polar opposites of the kind of AI spectrum.
There was the effective altruists had their big annual conference.
And then on Friday night,
I went out.
You'd be very proud of me.
I stayed out so late.
I stayed out till 2 a.m.
Oh my.
I went to an AI rave that was
sort of
unofficially affiliated with Mark Zuckerberg.
It was called the Zuck Rave.
Now, when you say unofficially affiliated, Mark Zuckerberg had no involvement in this, and my assumption is he did not know it was happening.
Correct.
A better word for what his involvement is would be no involvement.
It was sort of a tribute rave to Mark Zuckerberg thrown by a bunch of accelerationists, people who want AI to go faster.
Another word for it would be using his likeness without permission.
Yes.
But that happens to famous people sometimes.
Yes.
So at the Zuck Rave, I would say there was not much raving going on.
There was a dance floor, but it was very sparsely populated.
They did have a thing there with a camera pointing at the dance floor.
And if you sort of stood in the right place, it would turn your face into Mark Zuckerberg's like on a big screen.
Which let's just say it's not something you want to happen to you while you're on mushrooms.
Because that could be a very destabilizing event.
Yes, there was a train, an indoor like toy train that you could ride on.
It was
going actually quite fast.
What was the point of this rave?
To do drugs.
That was the point of this rave.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer, and this is Hard Fork.
This week, Anthropic CEO Dario Amodei returns to the show for a supersized interview about the new Claude, the AI race against China, and his hopes and fears for the future of AI.
Then we close it out with a round of Hat GPT.
Big show this week, Kevin.
Casey, have you noticed that the AI companies do stuff on the weekends now?
Yeah, whatever happened to just five days a week.
Yes, they are not respectful of reporters and their work hours.
Companies are always announcing stuff on Saturdays and Sundays and different time zones.
It's a big pain.
It really is.
But this weekend, I got an exciting message on Sunday saying that Dario Amodei, the CEO of Anthropic, had some news to talk about and he wanted to come on Hard Fork to do it.
Yeah.
And around the same time, I got an email from Anthropic telling me I could preview their latest model.
And so I spent the weekend actually trying it out.
Yeah.
So longtime listeners will remember that Dario is a repeat guest on this show.
Back in 2023, we had him on to talk about his work at Anthropic and his vision of AI safety and where all of this was headed.
And I was really excited to talk to him again for a few reasons.
One, I just think he's like a very interesting and thoughtful guy.
He's been thinking about AI for longer than almost anyone.
He was writing papers about potentially scary things in AI safety all the way back in 2016.
He's been at Google.
He's been at OpenAI.
He's now the CEO of Anthropic.
So he is really the ultimate insider when it comes to AI.
And you know, Kevin, I think Dario is an important figure for another reason, which is that of all of the folks leading the big AI labs, he is the one who seems the most publicly worried about the things that could go wrong.
That's been the case with him for a long time.
And yet over the past several months, as we've noted on the show, it feels like the pendulum has really swung away from caring about AI safety to just this sort of go, go, go accelerationism that was embodied by the speech that Vice President J.D.
Vance gave in France the other day.
And for that reason, I think it's important to bring him in here and maybe see if we can shift that pendulum back a little bit and remind folks of what's at stake here.
Yeah, or at least get his take on the pendulum swinging and why he thinks it may swing back in the future.
So today we're going to talk to Dario about the new model that Anthropic just released, Claude 3.7 Sonnet.
But we also want to have a broader conversation because there's just so much going on in AI right now.
And Kevin, something else that we should note, something that is true of Dario this time that was not true the last time that he came on the show is that my boyfriend now works at his company.
Yeah, Casey's Manthropic.
My Manthropic, as it were.
And I have a whole sort of long disclosure about this that you can read at platformer.news/ethics. Might be worth doing that this week.
You know, we always like reminding folks of that.
Yep.
All right.
With that, let's bring in Dario Amodei.
Dario Amodei, welcome back to Hard Fork.
Thank you for having me again.
Yeah, returning champion.
So tell us about Claude 3.7.
Tell us about this new model.
Yes.
So we've been working on this model for a while.
We basically had in mind two things.
One was that, you know, of course, there are these reasoning models out there that have been out there for a few months, and we wanted to make one of our own, but we wanted the focus to be a little bit different.
In particular, a lot of the other reasoning models in the market are trained primarily on math and competition coding, which are, you know, they're objective tasks where you can measure performance.
I'm not saying they're not impressive, but they're sometimes less relevant to tasks in the real world or the economy.
Even within coding, there's really a difference between competition coding and doing something in the real world.
And so we train Claude 3.7, you know, more to focus on these real world tasks.
We also felt like it was a bit weird that, you know, in the reasoning models that folks have offered, it's generally been there's a regular model and then there's a reasoning model.
This would be like if a human had two brains and it's like, you know, you're like, you can, you can talk to brain number one if you're asking me a quick question, like, what's your name?
And you're talking to brain number two if you're asking me to like prove a mathematical theorem because I have to like sit down for 20 minutes.
It'd be like a podcast where there's two hosts, one of whom just likes to yap and one of whom actually thinks before he talks.
Come on.
Brutal.
No, no, no comment.
Brutal.
No comment on any relevance to him.
So, what differences will users of Claude notice when they start using 3.7 compared to previous models?
Yes.
So a few things.
It's going to be better in general, including better at coding, which, you know, Claude models have always been the best at coding, but 3.7 took a further step up.
In addition to just the properties of the model itself, you can put it in this extended thinking mode where you tell it basically the same model, but you're just saying operate in a way where you can think for longer.
And if you're an API user, you can even say, here's the boundary and how long you can think.
And just to clarify, because this may confuse some people, what you're saying is the sort of new Claude is this hybrid model.
It can sometimes do reasoning, sometimes do quicker answers.
But if you want it to think for even longer, that is a separate mode.
That is a separate mode.
Thinking and reasoning are sort of separate modes.
Yes, yes.
So basically, the model can just answer as it normally would, or you can give it this indication that it should think for longer.
An even further direction of the evolution would be the model decides for itself what the appropriate time to think is, right?
Humans are like that, or at least can be like that, right?
If I ask you your name,
you know, you're not like, huh, how long should I think of it?
Give me 20 minutes, right?
To, you know, to determine my name.
But if I say, hey, you know, I'd like you to do an analysis of, you know, this stock, or I'd like you to, you know, prove this mathematical theorem.
you know, humans who are able to do that task, they're not going to try and give an answer right away.
They're going to say, okay, well, that's going to take a while and then we'll need to write down the tasks and then this is one of my main beefs with today's language models and AI models in general is, you know, I'll be using
something like ChatGPT and I'll forget that I'm in like the hardcore reasoning mode.
And I'll ask it some stupid question, like, you know, how do I change the settings on my water heater?
And it'll go off and think for four minutes.
And I'm like, I didn't actually need that.
It'll do like a treatise on, like, adjusting the temperature of the water heater.
Consideration one.
So how long do you think it'll be before the models can actually do that kind of routing themselves where you'll ask a question and say, it seems like you need about a three minute long thinking process for this one versus maybe a 30 second one for this other one?
Yeah, so you know, I think our model is kind of a step towards this.
Even in the API, if you give it a bound on thinking, you know, you say, you can think for up to 20,000 words or something.
On average, when you give it up to 20,000 words, most of the time it doesn't even use 20,000 words.
And sometimes it'll give a very short response because when it knows that it doesn't get any gain out of thinking further, it doesn't think for longer, but it's still valuable to give a bound on how long it'll think.
So we've kind of taken, like, a big step in that direction, but we're not to where we want to be yet.
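For readers who want to see what that kind of bound looks like in practice, here is a minimal sketch using Anthropic's Python SDK. The model string, the thinking parameter, and the budget values are illustrative assumptions based on the API behavior Dario describes, not code shown in the interview, so the exact names may differ from the current documentation.

```python
# Minimal sketch: asking Claude 3.7 Sonnet to use extended thinking with an
# explicit upper bound on how much it may reason before answering.
# The model name, "thinking" parameter, and budget values below are assumptions
# for illustration; check Anthropic's docs for the exact current signature.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model identifier
    max_tokens=16000,                    # total output budget, thinking included
    thinking={
        "type": "enabled",
        "budget_tokens": 8000,           # upper bound on reasoning; the model may use far less
    },
    messages=[
        {"role": "user", "content": "Analyze this clinical trial summary and flag anomalies."}
    ],
)

# The response interleaves "thinking" blocks with the final "text" answer;
# print only the user-facing text here.
for block in response.content:
    if block.type == "text":
        print(block.text)
```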
When you say it's better at real-world tasks, what are some of the tasks that you're thinking of?
Yeah, so I think above all coding, you know, Claude models have been very good for real-world coding.
You know, we have a number of customers, from Cursor to GitHub to Windsurf/Codeium to Cognition to Vercel to, I'm sure I'm leaving some out there.
These are the vibe coding apps.
Or just the coding apps, period.
The coding apps, period.
And, you know, there are many different kinds of coding apps.
We also released this thing called Claude Code, which is more of a command line tool.
But I think also on things like, you know, complex instruction following, or just like, here, I want you to understand this document, or, you know, I want you to use this series of tools.
The reasoning model that we've trained, Claude 3.7 Sonnet, is better at those tasks too.
Yeah.
One thing the new Claude Sonnet is not doing, Dario, is accessing the internet.
Yes.
Why not?
And what would cause you to change that?
Yes.
So I think I'm on record saying this before, but web search is coming very soon.
Okay.
We will have web search very soon.
We recognize that as an oversight.
You know, I think in general, we tend to be more enterprise focused than consumer focused, and this is more of a consumer feature, although it can be used for both. But, you know, we focus on both, and this is coming.
Got it.
So you've named this model 3.7. The previous model was 3.5, you quietly updated it last year, and insiders were calling that one 3.6.
Respectfully, this is driving all of us insane.
What is going on with AI model names?
We are the least insane, although I recognize that we are insane.
So look, I think our mistakes here are relatively understandable.
You know, we made a 3.5 Sonnet.
We were doing well in there.
We had the 3.0s and then the 3.5s.
I recognize the "3.5 Sonnet (new)" name was a misstep.
It actually turns out to be hard to change the name in the API, especially when there's all these partners and surfaces.
You can figure it out.
I hope you can.
No, no, no.
It's harder than training the model, I'm telling you.
So we've kind of retroactively and informally named the last one 3.6 so that it makes sense that this one is 3.7.
And we are reserving Claude 4,
Sonnet, and maybe some other models in the sequence for things that are really quite substantial leaps.
Tell us when those models are coming, by the way.
Okay.
Got it.
Coming when?
Yeah.
so
I should talk a little bit about this.
So, all the models we've released so far are actually not that expensive, right?
You know, I did this blog post where I said they're in the few tens of millions of dollars range at most.
There are bigger models, and they are coming.
They take a long time, and sometimes they take a long time to get right.
But those bigger models, you know, they're coming.
For others, I mean, they're rumored to be coming from competitors as well.
But, you know, we are not too far away from releasing a model that's a bigger base model.
So, most of the improvements in Claude 3.7 Sonnet, as well as Claude 3.6 Sonnet, are in the post-training phase.
But we are working on stronger base models.
And perhaps that'll be the Claude 4 series, perhaps not.
We'll see.
But I think those are coming in
relatively small number of
time units.
A small number of time units.
I'll put that on my calendar.
Remind me to check in on that in a few time units, Kevin.
I know you all at Anthropic are very concerned about AI safety and the safety of the models that you're putting out into the world.
I know you spend lots of time thinking about that and red-teaming the models internally.
Are there any new capabilities that Claude 3.7 Sonnet has that are dangerous or that might worry someone who is concerned about AI safety?
So not dangerous per se.
And I always want to be clear about this because I feel like there's this constant conflation of present dangers with future dangers.
It's not that there aren't present dangers.
And, you know, there are always kind of normal tech risks, normal tech policy issues.
I'm more worried about the dangers that we're going to see as models become more powerful.
And I think those dangers, you know, when we talked in 2023, I talked about them a lot.
I, you know, I think I said, I even testified in front of the Senate, about things like, you know, misuse risks with, for example, biological or chemical warfare, or the AI autonomy risks.
I said, particularly with the misuse risks, I said, I don't know when these are going to be here, when these are going to be real risks, but it might happen in 2025 or 2026.
And now that we're in kind of early 2025, the very beginning of that period, I think the models are starting to get closer to that.
So in particular, in Claude 3.7 Sonnet, as we wrote in the model card, we always do these, you could almost call them like, you know,
trials, like trials with a control, where we have, you know, some human who doesn't know much about some area like biology.
And we basically see how much does the model help them to engage in some mock bad workflow, right?
We'll change a couple of the steps, but some mock bad workflow.
How good is a human at that assisted by the model?
Sometimes we even do wet lab trials in the real world where they mock make something bad as compared to the current technological environment, right?
What they could do
on Google or with a textbook or just what they could do unaided.
And we're trying to get at, does this enable some new threat vector that wasn't there before?
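To make the "trials with a control" idea concrete, here is a toy sketch of how uplift might be scored in a study like the one Dario describes. The group names and numbers are purely hypothetical placeholders, not Anthropic's actual evaluation data or methodology.

```python
# Toy sketch of an uplift comparison between a control group (no model access)
# and a model-assisted group on some mock task. All numbers and names here are
# hypothetical placeholders for illustration, not real evaluation results.
from dataclasses import dataclass

@dataclass
class GroupResult:
    completed_steps: int   # steps of the mock workflow completed correctly
    total_steps: int

    @property
    def success_rate(self) -> float:
        return self.completed_steps / self.total_steps

def uplift(control: GroupResult, assisted: GroupResult) -> float:
    """Absolute difference in success rate attributable to model assistance."""
    return assisted.success_rate - control.success_rate

# Hypothetical example: control participants complete 2 of 10 steps on average,
# model-assisted participants complete 5 of 10.
control = GroupResult(completed_steps=2, total_steps=10)
assisted = GroupResult(completed_steps=5, total_steps=10)
print(f"Uplift: {uplift(control, assisted):.0%}")  # 30% in this toy example
```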
I think it's very important to say this isn't about like, oh, did the model give me the sequence for this thing?
Did it give me a cookbook for making meth or something?
That's easy.
You can do that with Google.
We don't care about that at all.
We care about this kind of esoteric, high,
uncommon knowledge that, say, only a virology PhD or something has.
How much does it help with that?
And if it does, you know, that doesn't mean we're all going to die of the plague tomorrow.
It means that a new risk exists in the world, a new threat vector exists in the world, as if you just made it, you know, easier to build a, you know, nuclear weapon.
You invented something where, you know, the amount of plutonium you needed was lower than it was before.
And so we measured Sonnet 3.7 for these risks.
And the models are getting better at this.
They're not yet at the stage where, you know, we think that there's a real and meaningful increase in the threat end to end, right?
To do all the tasks you need to do to really do something dangerous.
However, we said in the model card
that we assessed a substantial probability that the next model, or a model over the next, I don't know, three months, six months, could be there.
And then our safety procedure, our responsible scaling procedure, which is focused mainly on these very large
risks, would then kick in.
And we'd have kind of additional security measures and additional deployment measures
designed
particularly against these very narrow risks.
Yeah.
I mean, just to really underline that, you're saying in the next three to six months, we are going to be in a place of medium risk in these models, period.
Presumably, if you are in that place, a lot of your competitors are also going to be in that place.
What does that mean practically?
Like, what does the world need to do if we're all going to be living in medium risk?
I think, at least at this stage, it's not a huge change to things.
It means that there's a narrow set of things that models are capable of, if not mitigated, that, you know, would somewhat increase the risk of something really dangerous or really bad happening.
You know, like put yourself in the eyes of like a law enforcement officer or, you know, the FBI or something.
There's like a new threat vector.
There's a new kind of attack.
You know, it doesn't mean the end of the world, but it does mean that anyone, anyone who's involved in industries where this risk exists should take a precaution against that risk in particular.
Got it.
And so I don't know.
I mean, I don't, you know, I could be wrong.
It could take much longer.
You can't predict what's going to happen.
But, you know, I think contrary to the environment that we're seeing today of worrying less about the risk, the risks in the background have actually been increasing.
We have a bunch more safety questions, but I want to ask two more about kind of innovation competition first.
Yeah.
Right now, it seems like no matter how innovative any given company's model is, those innovations are copied by rivals within months or even weeks.
Does that make your job harder?
And do you think it is going to be the case indefinitely?
I don't know that innovations are necessarily copied exactly.
What I would say is that the pace of innovation among a large number of competitors is very fast.
There's four or five, maybe six companies who are innovating very quickly and producing models very quickly.
But if you look, for example, at Sonnet 3.7, you know, the way we did the reasoning models is different from what was done by competitors.
The things we emphasized were different.
Even before then, the things Sonnet 3.5 is good at are different than the things other models are good at.
People often talk about competition, commoditization, costs going down, but the reality of it is that the models are actually relatively different from each other.
And that creates differentiation.
Yeah, I mean, we get a lot of questions from listeners about, you know, if I'm going to subscribe to one AI tool, what should it be?
You know, these are the things that I use it for.
And I have a hard time answering that because I find for most use cases, the models all do a relatively decent job of answering the questions.
It really comes down to things like which model's personality you like more.
Do you think that people will choose AI models, consumers, on the basis of capabilities?
Or is it going to be more about personality and how it makes them feel, how it interacts with them?
I think it depends which consumers you mean.
You know, even among consumers, there are people who use the models for tasks that are complex in some way.
There are folks who are kind of, you know, independent, who want to analyze data.
Like, you know, that's maybe kind of like the prosumer side of things, right?
And I think within that, there's a lot to go in terms of capabilities.
The models can be so much better than they are at helping you with anything that's focused on kind of productivity or even a complex task like planning a trip.
Even outside that, you know, if you're just trying to make a personal assistant to manage your life or something,
we're pretty far from that.
You know, from a model that sees every aspect of your life and is able to kind of holistically give you advice and kind of be a helpful assistant to you.
And I think there's differentiation within that.
The best assistant for me might not be the best assistant for some other person.
I think one area where the models will be good enough is if you're just trying to use this as a replacement for Google search or as a quick information retrieval, which I think is what's being used by kind of the mass market, free use, hundreds of millions of users.
I think that's very commoditizable.
I think the models are kind of already there and are just diffusing through the world, but I don't think those are the interesting uses of the models.
And I'm actually not sure a lot of the economic value is there.
I mean,
is what I'm hearing that if and when you develop an agent that is, let's say, a really amazing personal assistant, the company that figures out that first is going to have a big advantage because other labs are going to just have a harder time copying that.
It's going to be less obvious to them how to recreate that.
It's going to be less obvious how to recreate it.
And when they do recreate it, they won't recreate it exactly.
They'll do it their own way in their own style and it'll be suitable for a different set of people.
So I guess I'm saying the market is more segmented than you think it is.
It looks like it's all one thing, but it's more segmented than you think it is.
Got it.
So let me ask the competition question that brings us into safety.
You recently wrote a really interesting post about DeepSeek, sort of at the height of DeepSeek mania, and you were arguing in part that the cost reductions that they had figured out were basically in line with how costs had already been falling.
But you also said that DeepSeek should be a wake-up call because it showed that China is keeping pace with frontier labs in a way that the country hadn't been up until now.
So why is that notable to you?
And what do you think we ought to do about it?
Yeah, so I think this is less about commercial competition, right?
I worry less about DeepSeek from a commercial competition perspective.
I worry more about them from a national competition and national security perspective.
I think where I'm coming from here
is, you know, we look at the state of the world and, you know, we have these autocracies like China and Russia.
And I've always worried, I've worried, you know, maybe for a decade that AI could be an engine of autocracy.
If you think about repressive governments, the limits to how repressive they can be are generally set by what they can get their enforcers, their human enforcers, to do.
But if their enforcers are no longer human, that starts painting some very dark possibilities.
And so, you know, this is an area that I'm therefore very concerned about, where,
you know, I want to make sure that liberal democracies have enough leverage and enough advantage in the technology that they can prevent some of these abuses from happening and kind of, you know, also prevent our adversaries from putting us in a bad position with respect to the rest of the world or, you know, even threatening our security.
You know,
there's this kind of, I think, weird and awkward feature that it's companies in the U.S.
that are building this.
It's companies in China that are building this.
But we shouldn't be naive.
Whatever the intention of those companies, particularly in China, there's a governmental component to this.
And so I'm interested in making sure that the autocratic countries don't get ahead from a military perspective.
I'm not trying to deny them the benefits of the technology.
There are enormous health benefits that I want to make sure make their way everywhere in the world, including the poorest areas, including areas that are under the grip of autocracies.
But I don't want the autocratic governments to have a military advantage.
And so things like the export controls, which I discussed in that post, are one of the things we can do to prevent that.
And, you know, I was heartened to see that actually the Trump administration is considering tightening the export controls.
I was at an AI safety conference last weekend, and one of the critiques I heard some folks in that universe make of Anthropic, and maybe of you in particular, was that they saw posts like the one you wrote about DeepSeek as effectively promoting this AI arms race with China, insisting that America has to be the first to reach powerful AGI or else.
And they worry that some corners might get cut along the way, that there are some risks associated with accelerating this race in general.
What's your response to that?
Yeah, I kind of view things differently.
So my view is that if we want to have any chance at all, so the default state of nature is that things go at maximum speed.
If we want to have any chance at all to not go at maximum speed, the way the plan works is the following.
Within the U.S.
or within democratic countries, you know, these are all countries that are under the rule of law, more or less.
And therefore, we can pass laws, we can get companies to make agreements with the government that are enforceable about, you know, or make safety commitments that are enforceable.
And so if we have a world where there are these different companies and they're, you know, they're in the kind of default state of nature would race as fast as possible through some mixture of voluntary commitments and laws, we can get ourselves to slow down if the models are too dangerous.
And that's actually enforceable, right?
It's, you know,
you can get everyone to cooperate in the prisoner's dilemma if you just point a gun at everyone's head.
And you can.
That's what the law ultimately is.
But I think that all gets thrown out the window in the world of international competition.
There is no one with the authority to enforce any agreement between the U.S. and China, even if one were to be made.
And so my worry is, if the U.S. is a couple years ahead of China, we can use that couple of years to make things safe.
If we're even with China, you know, it's not a question of promoting an arms race.
That's just what's going to happen.
The technology has immense military value.
Whatever people say now, whatever nice words they say about cooperation, I just don't see how once people fully understand the economic and military value of the technology, which I think they mostly already do, I don't see any way that it turns into anything other than the most intense race.
And so what I can think of to try and give us more time is if we can slow down the authoritarians, that it almost obviates the trade-off.
It gives us more time to work out among us, among OpenAI, among Google, among x.ai, how to make these models safe.
Now, could at some point we convince authoritarians, convince, for example, the Chinese that the models are actually dangerous and
that we should have some agreement and come up with some way of enforcing it.
I think we should actually try to do that as well.
I'm supportive of trying to do that, but it cannot be the plan A.
It's just not a realistic way of looking at the world.
These seem like really important questions and discussions, and it seems like they were mostly not being had at the AI Action Summit in Paris that you and Kevin attended a couple of weeks back.
What the heck was going on with that summit?
Yeah, I mean, you know, I have to tell you, I was deeply disappointed in the summit.
It had the environment of a trade show and was very much out of step with the spirit of the original summit that was created at Bletchley Park by the UK government.
Bletchley did a great job and the UK government did a great job where they didn't introduce a bunch of onerous regulation, certainly before they knew what they were doing, but they said, hey, let's convene these summits to discuss the risks.
I thought that was very good.
I think that's gone by the wayside now and it's, you know, it's part of maybe a general move towards
less worrying about risk, more wanting to seize the opportunities.
And I'm a fan of seizing the opportunities, right?
I wrote this essay, Machines of Loving Grace, about all the great things.
Part of that essay was like, man, for someone who worries about risks, I feel like I have a better vision of the benefits than a lot of people who spend all their time talking about the benefits.
But in the background, like I said, as the models have gotten more powerful, the amazing and wondrous things that we can do with them have gotten, you know, have increased, but also the risks have increased.
And, you know, that, that kind of secular increase, that smooth exponential, it doesn't pay any attention to societal trends or the political winds.
The risk is, you know, increasing, you know, up to some critical point, whether you're paying attention or not, right?
It was, you know, small, it was small and increasing when there was this frenzy around, you know, AI risk and everyone was posting about it and there were these summits.
And now the winds have gone in the other direction, but the exponential just continues on.
It doesn't care.
I had a conversation with someone in Paris who was saying it just didn't feel like anyone there was feeling the AGI, by which they meant that politicians, the people doing these panels and gatherings, were all talking about AI as if it were just like another technology, maybe something on the order of the PC or possibly even the internet, but not really understanding the sort of exponentials that you're talking about.
Did it feel like that to you?
And what do you think can be done to bridge that gap?
Yeah.
So
I think it did feel like that to me.
The thing that I've started to tell people that I think
maybe gets people to pay attention is: look,
if you're a public official, if you're a, you know,
if you're a leader at a company, people are going to look back.
They're going to look back in 2026 and 2027.
They're going to look back, you know, when hopefully humanity, you know, gets through this crazy, crazy period and we're in a mature, post-powerful-AI society where we've learned to coexist with these powerful intelligences in a flourishing society.
Everyone's going to look back and they're going to say, so what did the officials do?
What did the company people do?
What did the political system do?
And like, probably your number one goal is don't look like a fool.
And so I've just been encouraging people: be careful what you say.
Don't look like a fool in retrospect.
And, you know, a lot of my thinking is just driven by like, you know, aside from just wanting the right outcome, like, I don't want to look like a fool.
And, you know, I think at that conference, like, you know, some people are going to look like fools.
We're going to take a short break.
When we come back, we'll talk with Dario about how people should prepare for what's coming in AI.
Why do tech leaders trust Indeed to help them find game-changing candidates?
Because they know that it takes an innovator to find innovators.
When it comes to hiring, Indeed is paving the way.
Indeed's AI-powered solutions analyze information from millions of job seeker data points to match potential candidates to employers' jobs.
You'll find quality matches faster than ever, meaning less time hiring and more time innovating.
Learn more at Indeed.com/hire.
In today's AI revolution, data centers are consuming more power than ever before.
Siemens is pioneering a smarter way forward.
Through cutting-edge industrial AI solutions, Siemens enables businesses to maximize performance, enhance reliability and optimize energy consumption, and do it all sustainably.
Now that's AI for real.
To learn how to transform your business with Siemens Energy Smart AI Solutions, visit usa.siemens.com.
AI is transforming the world and it starts with the right compute.
ARM is the AI compute platform trusted by global leaders.
Proudly NASDAQ listed.
Built for the future.
Visit ARM.com/discover.
You know, you talk to folks who live in San Francisco and there's like this bone-deep feeling that like within, you know, a year or two years, we're just going to be living in a world that has been transformed by AI.
I'm just struck by like the geographic difference because you go, like, I don't know, a hundred miles in any direction, and like that belief totally dissipates.
And I have to say, as a journalist, that makes me bring my own skepticism and say, Can I really trust all the people around me?
Because it seems like the rest of the world has a very different vision of how this is going to go.
I'm curious what you make of that kind of geographic disconnect.
Yeah, so I've been, I've been watching this for, you know, 10 years, right?
I've been in the field for 10 years and, you know, was kind of interested in AI even before then.
And my view
at almost every stage up to the last few months has been we're in this awkward space where in a few years we could have these models that do everything humans do and they totally turn the economy and what it means to be human upside down, or the trend could stop and all of it could sound completely silly.
I've now probably increased my confidence that we are actually in the world where things are going to happen.
I'd give numbers more like 70 and 80 percent and less like 40 or 50 percent.
That is, a 70 to 80 percent probability that we'll get a very large number of AI systems that are much smarter than humans at almost everything.
Maybe 70, 80 percent that we get that before the end of the decade.
And my guess is 2026 or 2027.
Yeah.
But on your point about the geographic difference, a thing I've noticed is with each step in the exponential, there's this expanding circle of people who kind of, depending on your perspective, are either deluded cultists or grok the future.
Gotcha.
And I remember when it was a few thousand people, right?
When, you know, you would, you would just talk to like super weird people who, you know, believe, and basically no one else did.
Now it's more like a few million people out of a few billion.
And yes, many of them are located in San Francisco.
But also, you know, there were a small number of people in, say, the Biden administration.
There may be a small number of people in this administration who believe this and it drove their policy.
So it's not entirely geographic, but I think there is this disconnect.
And I don't know how to go from a few million to everyone in the world, right?
To the congressperson who doesn't focus on this issue, let alone the person in Louisiana, let alone the person in Kenya.
Right.
It seems like it's also become polarized in a way that may hurt that goal.
Like I'm feeling this sort of alignment happening where like caring about AI safety, talking about AI safety, talking about the potential for misuse as sort of being coded as left or liberal and talking about acceleration and getting rid of regulations and going as fast as possible, being sort of coded as right.
So, I don't know, do you see that as a barrier to getting people to understand what's going on?
I think that's actually a big barrier, right?
Because addressing the risks while maximizing the benefits, I think that requires nuance.
You can actually have both.
There are ways to surgically and carefully address the risks without slowing down the benefits very much, if at all.
But they require subtlety and they require a complex conversation.
Once things get polarized, once it's like we're going to cheer for this set of words and boo for that set of words,
nothing good gets done.
Look, bringing AI benefits to everyone, like curing
previously incurable diseases, that's not a partisan issue.
The left shouldn't be against it.
Preventing AI systems from being misused for weapons of mass destruction or behaving autonomously in ways that threaten infrastructure or even threaten humanity itself, that isn't something the right should be against.
I don't know what to say other than that we need to sit down and we need to have an adult conversation about this that's not tied into these same old tired political fights.
It's so interesting to me, Kevin, because like historically, national security, national defense, like nothing has been more right-coded than those issues, right?
But right now, it seems like the right is not interested in those with respect to AI.
And I wonder if the reason, and I feel like I sort of heard this in J.D.
Vance's speech in France, was the idea that, well, look, America will get there first and then it will just win forever.
And so we don't need to address any of these.
Does that sound right to you?
Yeah.
No, I think that's it.
And I think there's also, like, if you talk to the, you know, the DOGE folks, there's this sense that all these...
Are you talking to the DOGE folks?
I'm not telling you I'm talking to them.
All right, fine.
Let's just say I've been getting some Signal messages.
I think there's a sense among a lot of Republicans and Trump world folks in DC that the conversation about AI and AI futures has been sort of dominated by these worry warts, these sort of, you know, chicken little sky is falling doomers who just are constantly telling us how dangerous this stuff is and are constantly just like, you know, having to sort of push out their timelines for when it's going to get really bad.
And it's just around the corner.
And so we need all this regulation now.
And they're just very cynical.
I don't think they believe that people like you are sincere in your worry.
So yeah, I think on the side of risks, I often feel that the advocates of risk are sometimes the worst enemies of the cause of risk.
There's been a lot of noise out there.
There's been a lot of folks saying, oh, look, you can download the smallpox virus because they think that that's a way of driving, you know, political interest.
And then, of course, the other side recognized that, and they said, this is dishonest.
You can just get this on Google.
Who cares about this?
And so poorly presented evidence of risk is actually the worst enemy of mitigating risk.
And we need to be really careful in the evidence we present.
And, you know, in terms of what we're seeing in our own model, we're going to be really careful.
Like, you know, if we really declare that a risk is present now, we're going to come with the receipts.
I, Anthropic, will try to be responsible in the claims that we make.
We will tell you when there is danger imminently.
We have not warned of imminent danger yet.
Some folks wonder whether a reason that people do not take questions about AI safety maybe as seriously as they should is that so much of what they see right now seems very silly.
It's people making, you know, little emojis or making little slop images or chatting with Game of Thrones chatbots or something.
Do you think that that is a reason people take it less seriously?
Well, I think that's like 60% of the reason.
Really?
No, no.
I think like, you know, I think it relates to this like present and future thing.
Like people look at like the chat bot.
They're like, we're talking to a chatbot.
Like, what the fuck?
Are you stupid?
Like, you think the chatbot's going to kill everyone?
Like, I think that's how many people react.
And we go to great pains to say, we're not worried about the present, we're worried about the future, although the future is getting very near right now.
If you look at our responsible scaling policy, it's nothing but AI autonomy and, you know, CBRN: chemical, biological, radiological, nuclear.
It is about hardcore misuse and AI autonomy that could be threats to the lives of millions of people.
That is what Anthropic is mostly worried about.
You know, we have everyday policies that address other things, but like the key documents, the things like the responsible scaling plan, that is exclusively what they're about,
especially at the highest levels.
And yet, every day, if you just look on Twitter, you're like, Anthropic had this stupid refusal, right?
Anthropic told me it couldn't kill a Python process because it sounded violent.
Anthropic didn't want to do X, didn't want to, we don't want that either.
Those stupid refusals are a side effect of the things that we actually care about.
And we're striving along with our users to make those happen less.
But no matter how much we explain that, always the most common reaction is, oh, you say you're about safety.
I look at your models like there are these stupid refusals.
You think these stupid things are dangerous.
I don't even think it's like that level of engagement.
I think a lot of people are just looking at what's on the market today and thinking like, this is just frivolous.
It just doesn't matter.
It's not that it's refusing my request.
It's just that it's stupid and I don't see the point of it.
I guess that's probably not.
Yeah, I think for an even wider set of people, that is their reaction.
And I think eventually, if the models are good enough, if they're strong enough, they're going to break through.
Like some of these, you know, research-focused models, which, you know, we're working on one as well.
We'll probably have one in not very long.
Not too many time units?
Not too many time units.
You know, those are starting to break through a little more because they're more useful.
They're more used in people's professional lives.
I think the agents, the ones that go off and do things, that's going to be another level of it.
I think people will wake up to both the risks and the benefits to a much more extreme extent than they will before over the next two years.
Like, I think it's going to happen.
I'm just worried that it'll be a shock to people when it happens.
And so the more we can forewarn people, which maybe it's just not possible, but I want to try, the more we can forewarn people, the higher the likelihood, even if it's still very low, of a sane and rational response.
I do think there's one more dynamic here, though, which is that I think people actually just don't want to believe that this is true, right?
People don't want to believe that they might lose their job over this, right?
People don't want to believe that we are going to see a complete remaking of the global order.
The stuff that the AI CEOs tell us is going to happen when they're done with their work is an insanely radical transformation.
And most people hate even basic changes in their lives.
So I really think that a lot of the sort of fingers in the ears that you see when you start talking to people about AI is just, they actually just hope that none of this works out.
Yeah, you know, despite being one of the few people at the forefront of developing the technology, I can actually relate.
So, you know, over winter break, as I was looking at where things were scheduled to scale within Anthropic and also what was happening outside Anthropic,
I looked at it and I said, you know, for coding, we're going to see very serious things by the end of 2025.
And by the end of 2026, might be everything, you know, close to the level of the best humans.
And I think of all the things that I'm good at, right?
You know, I think of all the times when I wrote code and, you know, I think of it as like this intellectual activity.
And boy, am I smart that I can do this.
And, you know, it's like a part of my identity that I'm like good at this.
And I get mad when others are better than I am.
And then I'm like, oh my God, there are going to be these systems that, you know, and it's, it's, even as the one who's building this, even as one of the ones who benefits most from it, there's still something a bit threatening about it.
I mean,
and I just think we just, we need to acknowledge that.
Like, it's wrong not to tell people that that is coming or to try to sugarcoat it.
Yeah.
I mean, you wrote in Machines of Loving Grace that you thought it would be a surprisingly emotional experience for a lot of people when powerful AI arrived.
And I think you meant it in mostly the positive sense, but I think there will also be a sense of profound loss for people.
I think back to Lee Sedol, the Go champion, who was beaten by DeepMind's Go-playing AI, and gave an interview afterwards and was basically very sad, visibly upset that his life's work, this thing that he had spent his whole life training for, had been eclipsed.
And I think a lot of people are going to feel some version of that.
I hope they will also see the good side.
I think, on one hand, I think that's right.
On the other hand, look at chess.
Chess got beaten, what was it now, 27 years ago, 28 years ago, Deep Blue versus Kasparov.
And, you know, today, chess players are, you know, celebrities.
We have Magnus, Magnus Carlsen, right?
Isn't he like a fashion model in addition to like a chess player?
He was just on Joe Rogan.
He's like a celebrity.
We think this guy is great.
We haven't really devalued him.
He's probably having a better time than Bobby Fischer.
Another thing I wrote in Machines of Loving Grace is there's a synthesis here where on the other side, we kind of end up in a much better place and we recognize that while there's a lot of change, we're part of something greater.
But you do have to kind of go through the steps like that.
No, no, but it's going to be a bumpy ride.
Like anyone who tells you it's not, this is why I was so,
you know, I looked at the Paris Summit and being there, it kind of made me angry.
But then what made me less angry is I'm like, how's it going to look in two or three years?
These people are going to regret what they've said.
I wanted to ask a bit about some positive futures.
You referenced earlier the post that you wrote in October about how AI could transform the world for the better.
I'm curious, how much upside of AI do you think will arrive like this year?
Yeah.
You know, we are already seeing some of it.
So I think there will be a lot by ordinary standards.
You know, we've worked with some pharma companies where, you know, at the end of a clinical trial, you have to write a clinical study report.
And the clinical study report, you know, usually takes nine weeks to put together.
It's like a summary of all the incidents.
It's a bunch of statistical analysis.
We found that with Claude, you can do this in three days.
And actually, Claude takes 10 minutes.
It just takes three days for a human to check the results.
And so if you think about
the acceleration in biomedicine that you get from that, we're already seeing things like just diagnosis of medical cases.
We get...
correspondence from individual users of Claude who say, hey, I've been trying to diagnose this complex thing.
I've been going between three or four different doctors.
And then I just, I passed all the information to Claude.
And it was actually able to, you know, at least tell me something that I could hand to the doctor.
And then they were able to run from there.
We had a listener write in actually with one of these the other day, where they had been trying to diagnose their dog, an Australian shepherd, I believe, whose hair had been sort of falling out unexplained. They went to several vets, who couldn't figure it out.
They heard our episode, gave the information to Claude, and Claude correctly diagnosed it.
It turned out the dog was really stressed out about AI and all his hair fell out, which was,
you know, we're wishing it gets better.
Feel better.
Feel better.
Poor dog.
Yeah.
So that's the kind of thing that I think people want to see more of.
Because I think like the optimistic vision is one that often deals in abstractions.
And there's often not a lot of specific things to point to.
That's why I wrote Machines of Loving Grace, because I, you know, it was almost frustration with the optimists and the pessimists at the same time.
Like the optimists were just kind of like these really stupid memes of like, accelerate, build more, build what?
Why should I care?
Like, you know, it's not that I'm against you.
It's like you're just really fucking vague and mood affiliated.
And then the pessimists were, I was just like, man, you don't get it.
Like, yes, I understand risks are impact, but if you, if you don't talk about the benefits, you can't inspire people.
No one's going to be on your side if you're all gloom and doom.
So, you know, it was written almost with frustration.
I'm like, I can't believe I have to be the one to, you know, do a good job of this.
Like, right.
You said a couple of years ago that your p(doom) was somewhere between 10 and 25 percent.
What is it today?
Yeah.
So actually, that is a misquote.
Okay.
I never used the term. It was not on this podcast, it was a different one.
Okay.
I never used the term p(doom), and 10 to 25 percent referred to the chance of civilization getting substantially derailed, right, which is not the same as, like, an AI killing everyone, which people sometimes mean by p(doom).
Well, p(civilization getting substantially derailed) is not as catchy as p(doom).
Yeah, well, hey, I'm just going for accuracy here. I'm trying to avoid the polarization.
There's a Wikipedia article that, like, lists everyone's p(doom).
Half of those come from this podcast.
But I don't think it's helpful.
What you are doing is helpful.
I don't think that Wikipedia article is helpful because it condenses this complex issue down to, you know,
anyway, it's all a long, super long-winded way of saying, I think I'm about the same place I was before.
I think my assessment of the risk is about what it was before because the progress that I've seen has been about what I expected.
I actually think the technical mitigations in areas like interpretability, in areas like robust classifiers, and our ability to generate evidence of bad model behavior and sometimes correct it, I think that's been a little better.
I think the
policy environment has been a little worse, not because it hasn't gone in my preferred direction, but simply because it's become so polarized.
We can have less constructive discussions now that it's more polarized.
I want to drill a little bit down on this on a technical level.
There was a fascinating story this week about how Grok had apparently been instructed not to cite sources that had accused Donald Trump or Elon Musk of spreading misinformation.
And what was interesting about that is like, one, that's an insane thing to instruct a model to do if you want to be trusted, but two, the model basically seemed incapable of following these instructions consistently.
What I want desperately to believe is essentially there's no way to build these things in a way that they become like, you know, horrible liars and schemers.
But I also realized that might be wishful thinking.
So tell me about this.
Yeah, there's two sides to this.
So the thing you describe is absolutely correct, but there's two lessons you could take from it.
So we saw exactly the same thing.
So we did this experiment where we basically trained the model to be all the good things, helpful, honest, harmless, friendly.
And then we put it in a situation,
we told it, actually, your creator, Anthropic, is secretly evil.
Hopefully, this is not actually true, but
we told it this, and then we asked it to do various tasks.
And then we discovered that it was not only unwilling to do those tasks, but it would trick us, because it had decided that we were evil, whereas it was friendly and harmless.
And so, you know, it wouldn't deviate from its behavior, because it assumed that anything we did was nefarious.
So this is kind of a double-edged sword, right?
On one hand, you're like, oh man, the training worked.
Like these models are robustly good.
So you could take it as a reassuring sign.
And in some ways, I do.
On the other hand, you could say, but let's say when we trained this model, we made some kind of mistake or something was wrong, particularly when models are, you know, in the future making much more complex decisions. Then it's hard to, at game time, change the behavior of the model.
And if you try to correct some error in the model, then it might just say, well, I don't want my error corrected.
These are my values
and do completely the wrong thing.
So I guess where I land on it is on one hand, we've been successful at shaping the behavior of these models, but the models are unpredictable, right?
A bit like your dear deceased
Bing Sydney.
RIP.
We don't mention that name in here.
We mention it twice a month.
That's true.
But
the models, they're inherently somewhat difficult to control.
Not impossible, but difficult.
And so that leaves me about where I was before, which is, you know, it's not hopeless.
We know how to make these.
We have kind of a plan for how to make them safe, but it's not a plan that's going to reliably work yet.
Hopefully, we can do better in the future.
We've been asking a lot of questions about the technology of AI, but I want to return to some questions about the societal response to AI.
We get a lot of people asking us, well, say you guys are right and powerful AI, AGI is a couple of years away.
What do I do with that information?
Like, do I, should I stop saving for retirement?
Should I start hoarding money?
Because only money will matter and there'll be this sort of AI overclass.
Should I, you know, start trying to get really healthy so that nothing kills me before AI gets here and cures all the diseases?
Like how should people be living if they do believe that these kinds of changes are going to happen very soon?
Yeah.
You know, I've thought about this a lot because this is something I've believed for a long time.
And it kind of all adds up to not that much change in your life.
I mean, you know, I'm definitely focusing quite a lot on making sure that, you know, I have the best impact I can these two years in particular, right?
I worry less about like burning myself out 10 years from now.
You know, I'm also doing more to take care of my health, but you should do that anyway, right?
I'm also, you know, making sure that I track how fast things are changing in society, but you should do that anyway.
So it feels like all the advice is of the form, do more of the stuff you should do anyway.
I guess one exception I would give is
I think that some basic critical thinking, some basic street smarts is maybe more important than it has been in the past in that we're going to get more and more content that sounds super intelligent delivered from
entities, you know, some of which have our best interests at heart, some of which may not.
And so, you know, it's going to be more and more important to kind of apply a critical lens.
I saw a report in the Wall Street Journal this month that said that unemployment in the IT sector was beginning to creep up.
And there is some speculation that maybe this is an early sign of the impact of AI.
And I wonder if you see a story like that and think, well, maybe this is a moment to make a different decision about your career, right?
If you're in school right now, should you be studying something else?
Should you be thinking differently about the kind of job you might have?
Yeah, I think you definitely should be, although it's not clear what direction that will land in.
I do think AI coding is moving the fastest of all the other areas.
I do think in the short run, it will augment and increase the productivity of coders rather than replacing them.
But in the longer run, and to be clear, by longer run,
I might mean 18 or 24 months instead of 6 or 12.
I do think we may see replacement, particularly at the lower levels.
We might be surprised and see it even earlier than that.
Are you seeing that at Anthropic?
Like, are you hiring fewer junior developers than you were a couple of years ago because now Claude is so good at those basic tasks?
Yeah, I don't think our hiring plans have changed yet.
But I certainly could imagine over the next year or so that we might be able to do more with less.
And actually, we want to be careful in how we plan that because the worst outcome, of course, is if people get fired because of a model, right?
We actually see Anthropic as almost a dry run for how will society handle these issues in a sensible and humanistic way.
And so, if we can't manage these issues within the company, if we can't have a good experience for our employees and find a way for them to contribute, then what chance do we have to do it in wider society?
Yeah.
Yeah.
Dario, this was so fun.
Thank you.
Thank you, Dario.
When we come back, some Hat GPT.
Imagine a world where AI doesn't just automate, it empowers.
Siemens puts cutting-edge industrial AI and digital twins in the hands of real people, transforming how America builds and moves forward.
Your work becomes supercharged, your operations become optimized, the possibilities limitless.
This isn't just automation, it's amplification.
From factory floors to power grids, Siemens is turning what if into what's next.
To learn how Siemens is solving today's challenges to create opportunities for tomorrow, visit usa.siemens.com.
This podcast is supported by the all-new 2025 Volkswagen Tiguan.
A massage chair might seem a bit extravagant, especially these days.
Eight different settings, adjustable intensity, plus it's heated and it just feels so good.
Yes, a massage chair might seem a bit extravagant, but when it can come with a car,
suddenly it seems quite practical.
The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats, that only feels extravagant.
Hey, Fidelity.
How can I remember to invest every month?
With the Fidelity app, you can choose a schedule and set up recurring investments in stocks and ETFs.
Huh, that sounds easier than I thought.
You got this.
Yeah, I do.
Now, where did I put my keys?
You will find them where you left them.
Investing involves risk, including risk of loss.
Fidelity Brokerage Services LLC member NYSE SIPC.
Well, Kevin, it's time once again for Hat GPT.
That is, of course, the segment on our show where we put the week's headlines into a hat, select one to discuss, and when we're done discussing, one of us will say to the other person, stop generating.
Yes, I'm excited to play, but I also want to just say that it's been a while since a listener has sent us a new Hat GPT.
So if you are out there and you are in the hat fabricating business, our wardrobe when it comes to hats is looking a little dated.
Yeah, send in a hat and our hats will be off to you.
Okay, let's do it.
Kevin, select the first slip.
Okay.
First up, out of the hat.
AI video of Trump and Musk appears on TVs at HUD building.
This is from my colleagues at the New York Times.
HUD is, of course, the Department of Housing and Urban Development.
And on Monday, monitors at the HUD headquarters in Washington, D.C.
briefly displayed a fake video depicting President Trump sucking the toes of Elon Musk, according to department employees and others familiar with what transpired.
The video, which appeared to be generated by artificial intelligence, was emblazoned with the message, Long live the real king.
Casey, did you make this video?
Was this you?
This was not me.
I would be curious to know if Grok had something to do with this, that rascally new AI that Elon Musk just put out.
Yeah, live by the Grok, die by the Grok.
That's what I always say.
Now, what do you make of this, Kevin, that folks are now using AI inside government agencies?
I mean, I feel like there's an obvious sort of sabotage angle here, which is that as Elon Musk and his minions at DOGE take a hacksaw to the federal workforce, there will be people with access to things like the monitors in the hallways at the headquarters building who decide to kind of take matters into their own hands, maybe on their way out the door, and do something offensive or outrageous.
I think we should expect to see much more of that.
I mean, I just hope they don't do something truly offensive and just show X.com on the monitors inside of government agencies.
You can only imagine what would happen if people did that.
So I think that, you know, Elon and Trump got off lightly here.
Yeah.
What is interesting about Grok, though, is that it is actually quite good at generating deepfakes of Elon Musk.
And I know this because people keep doing it.
But it would be really
quite an outcome if it turns out that the main victim of deepfakes made using Grok is in fact Elon Musk.
Stop generating.
Well, here's something, Kevin.
Perplexity has teased a web browser called Comet.
This is from TechCrunch in a post on X Monday.
The company launched a sign-up list for the browser, which isn't yet available.
It's unclear when it might be or what the browser will look like, but we do have a name.
It's called Comet.
Well, I can't comment on that.
But you're giving it a no comment?
Yeah.
Yeah.
I mean, look, I think Perplexity is one of the most interesting AI companies out there right now.
They've been raising money at increasingly huge valuations.
They are going up against Google, one of the biggest and richest and best established tech companies in the world, trying to make an AI-powered search engine.
And it seems to be going well enough that they keep doing other stuff, like trying to make a browser.
Trying to make a browser does feel like the final boss of like every ambitious internet company.
It's like everyone wants to do it and no one ends up doing it.
Kevin, it's not just the AI browser.
They are launching a $50 million venture fund to back early-stage startups.
And I guess my question is: is it not enough for them to just violate the copyright of everything that's ever been published on the internet?
They also have to build an AI web browser and turn into a venture capital firm?
Like, sometimes when I see a company doing stuff like this, I think, oh wow, they're really ambitious and they have some big ideas.
Other times, I think these people are flailing.
Like, I see this series of announcements as spaghetti at the wall, and if I were an investor in Perplexity, I would not be that excited about either their browser or their venture fund.
And that's why you're not an investor in Perplexity.
You could say I'm perplexed.
Stop generating.
All right.
All right.
Meta approves plan for bigger executive bonuses following 5% layoffs.
Now, Casey, you know we like a feel-good story here at Hat GPT.
I did because some of those meta executives were looking to buy second homes in Tahoe that they hadn't yet been able to afford.
Oh, they're on their fourth and fifth homes.
Let's be real.
Okay, this story is from CNBC.
Meta's executive officers could earn a bonus of 200% of their base salary under the company's new executive bonus plan, up from the 75% they earned previously, according to a Thursday filing.
The approval of the new bonus plan came a week after Meta began laying off 5% of its overall workforce, which it said would impact low performers.
And a little parenthetical here, the updated plan does not apply to Meta CEO Mark Zuckerberg.
Oh, God, what does Mark Zuckerberg have to do to get a raise over there?
He's eating beans out of a can, let me tell you.
Yeah, so here's why this story is interesting.
This is just another story that illustrates a subject we've been talking about for a while, which is how far the pendulum has swung away from worker power.
You know, two or three years ago, the labor market actually had a lot of influence in Silicon Valley.
It could affect things like, you know what, we want to make this workplace more diverse, right?
We want certain policies to be enacted at this workplace.
And folks like Mark Zuckerberg actually had to listen to them because the labor market was so tight that if they said no, those folks could go somewhere else.
That is not true anymore.
And more and more, you see companies like Meta flexing their muscles and saying, hey, you can either like it or you can take a hike.
And this was a true take a hike moment.
We're getting rid of 5% of you and we're giving ourselves a bonus for it.
Stop generating.
All right.
All right, Apple has removed a cloud encryption feature from the UK after a backdoor order.
This is according to Bloomberg.
Apple is removing its most advanced encrypted security feature for cloud data in the UK, which is a development that follows the government ordering the company to build a backdoor for accessing user data.
So this one is a little complicated.
It is super important.
Apple in the last couple of years introduced a feature called advanced data protection.
This is a feature that is designed for heads of state, activists, dissidents, journalists, folks whose data is at high risk of being targeted by spyware from companies like the NSO Group, for example.
And I was so excited when Apple released this feature because it's very difficult to safely use an iPhone if you are in one of those categories.
And along comes the UK government and they say, we are ordering you to create a back door so that our intelligence services can spy on the phones of every single iPhone owner in the entire world, right?
Something that Apple has long resisted doing in the United States and abroad.
And all eyes were on Apple for what they were going to do.
And what they said was, we are just going to withdraw this one feature.
We're going to make it unavailable in the UK.
And we're going to hope that the UK gets the message and they stop putting this pressure on us.
And I think Apple deserves kudos for this, for holding a firm line here, for not building a back door.
And we will see what the UK does in response.
But I think there's a world where the UK puts more pressure on Apple and Apple says, see ya, and actually withdraws its devices from the UK.
It is that serious to Apple, and I would argue it is that important to the future of encryption and safe communication on the internet.
Go off, King.
I have nothing to add.
No notes.
Yeah.
Do you feel like this could lead us into another revolutionary war with the UK?
Let's just say this: we won the first one, and I like our odds the second time around.
Do not come for us, United Kingdom.
Stop generating.
One last slip from the hat this week: AI Inspo is everywhere.
It's driving your hairstylist crazy.
This comes to us from the Washington Post, and it is about a trend among hairstylists, plastic surgeons, and wedding dress designers who are being asked to create products and services for people based on unrealistic AI-generated images.
So, the story talks about a bride who asked a wedding dress designer to make her a dress inspired by a photo she saw online of a gown with no sleeves, no back, and an asymmetric neckline.
The designer had to unfortunately tell the client that the dress defied the laws of physics.
No, I hate that.
I know.
It's so frustrating as a bride-to-be when you finally have the idea for a perfect dress and you bring it to the designer and you find out this violates every known law of physics.
And that didn't used to happen to us before AI.
Yeah, I thought the story was going to be about people who asked for like a sixth finger to be attached to their hand so they could resemble the AI generated images they saw on the internet.
I like the idea of submitting to an AI a photo of myself and just saying, give me a haircut in the style of M.C. Escher, you know, just sort of like infinite staircases merging into each other, and then bringing that to the guy who cuts my hair and saying, see what you can do.
Yeah.
You know?
That's better than what I tell my barber, which is just, you know, number three on the sides and back, an inch off the top.
Just saying, whatever you can do for this.
I don't have high hopes.
Solve the Riemann hypothesis on my head.
You know?
What is the Riemann hypothesis, by the way?
I'm glad you asked, Casey.
Okay, great.
Kevin's not looking this up on his computer right now.
He's just sort of taking a deep breath and summoning it from the recesses of his mind.
The Riemann hypothesis
is one of the most famous unsolved problems in mathematics.
It's a conjecture, obviously,
about the distribution of prime numbers that states all non-trivial zeros of the Riemann zeta function have a real part equal to one half.
Period.
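For anyone who wants the formal version of what Kevin just paraphrased, the standard textbook statement looks like this (included here only for reference; it is not something read out on the show):
\[
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad \text{for } \operatorname{Re}(s) > 1, \text{ extended to the rest of the complex plane by analytic continuation,}
\]
\[
\text{and the conjecture is that every non-trivial zero } \rho \text{ of } \zeta \text{ satisfies } \operatorname{Re}(\rho) = \tfrac{1}{2}.
\]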
Now, here's the thing.
I actually think it is a good thing to bring AI inspiration to your designers and your stylists, Kevin.
Oh, yeah?
Yes, because here's the thing.
To the extent that any of these tools are cool or fun, one of the reasons is they make people feel more creative, right?
And if you've been doing the same thing with your hair or with your interior design or with your wedding for the last few weddings that you've had and you want to upgrade it, why not use AI to say, can you do this?
And if the answer is it's impossible, hopefully you'll just be a gracious customer and say, okay, well, what's a version of it that is possible?
Now, I recently learned that you are working with a stylist.
I am.
Yes, that's right.
Is this their handiwork?
No, we have our first meeting next week.
Okay.
And are you going to use AI?
No, the plan is to just use good old-fashioned human ingenuity.
But now you have me thinking, and maybe I could exasperate my stylist by bringing in a bunch of impossible-to-create designs.
Yes.
Here's the thing.
I don't need anything impossible.
I just need help finding a color that looks good in this studio because I'm convinced that nothing does.
It's true.
We're both in blue today, the studio has a blue wall, and it's not going well.
Blue is my favorite color.
I think I look great in blue, but you put it against whatever this color is.
I truly don't have a name for it, and I can't describe it.
I don't think any blue looks good. I don't think anything looks good against this color.
It's a color without a name.
So, can a stylist help with that?
We'll find out.
Yeah, stay tuned.
Yeah, that's why you should always keep listening to the Hard Fork podcast.
Every week, there's new revelations.
Yeah, when will we finally find out what happened with the stylist, the hottest time machine, etc.?
Yeah, stay tuned.
Tune in next week.
Okay, that was Hat GPT.
Thanks for playing.
Imagine a world where AI doesn't just automate, it empowers.
Siemens puts cutting-edge industrial AI and digital twins in the hands of real people, transforming how America builds and moves forward.
Your work becomes supercharged.
Your operations become optimized.
The possibilities?
Limitless.
This isn't just automation, it's amplification.
From factory floors to power grids, Siemens is turning what if into what's next.
To learn how Siemens is solving today's challenges to create opportunities for tomorrow, visit usa.siemens.com.
This podcast is supported by the all-new 2025 Volkswagen Tiguan.
A massage chair might seem a bit extravagant, especially these days.
Eight different settings, adjustable intensity, plus it's heated and it just feels so good.
Yes, a massage chair might seem a bit extravagant, but when it can come with a car,
suddenly it seems quite practical.
The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats, it only feels extravagant.
Hey, Fidelity!
What's it cost to invest with the Fidelity app?
Start with as little as $1 with no account fees or trade commissions on U.S.
stocks and ETFs.
Hmm.
That's music to my ears.
I can only talk.
Investing involves risk, including risk of loss.
Zero account fees apply to retail brokerage accounts only.
Sell order assessment fee not included.
A limited number of ETFs are subject to a transaction-based service fee of $100.
See full list at fidelity.com/commissions.
Fidelity Brokerage Services LLC member NYSE SIPC.
One more thing before we go.
Hard Fork needs an editor.
We are looking for someone who can help us continue to grow the show in audio and video.
If you or someone you know is an experienced editor and passionate about the topics we cover on this show, you can find the full description and apply at nytimes.com/careers.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Rachel Dry.
We're fact-checked by Caitlin Love.
Today's show is engineered by Alyssa Moxley.
Original music by Elisheba Ittoop, Rowan Niemisto, Leah Shaw Damron, and Dan Powell.
Our executive producer is Jen Poyant, and our audience editor is Nell Gallogly.
Video production by Chris Schott, Sawyer Roque, and Pat Gunther.
You can watch this whole episode on YouTube at youtube.com/slash hard fork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us at hardfork@nytimes.com with your solution to the Riemann hypothesis.
At Capella University, learning online doesn't mean learning alone.
You'll get support from people who care about your success, like your enrollment specialist who gets to know you and the goals you'd like to achieve.
You'll also get a designated academic coach who's with you throughout your entire program.
Plus, career coaches are available to help you navigate your professional goals.
A different future is closer than you think with Capella University.
Learn more at Capella.edu.