The A.I. Jobpocalypse + Building at Anthropic with Mike Krieger + Hard Fork Crimes Division

1h 8m
“The job market is not looking great for young graduates.”

Listen and follow along

Transcript

Over the last two decades, the world has witnessed incredible progress.

From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Invesco QQQ, let's rethink possibility.

There are risks when investing in ETFs, including possible loss of money.

ETF risks are similar to those of stocks.

Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at invesco.com.

Invesco Distributors Incorporated.

Casey, how was your Memorial Day weekend?

My Memorial Day weekend was,

it was good.

I was like, you know, I need to unplug, as you know, I needed to unplug a bit.

I'm not a big unplugger.

I normally am very comfortable staying plugged in.

Yeah, you're a screen maxer.

I'm a screen maxer, but this was a weekend where I was like, okay, I got to get out of this danged house.

I got to see some nature.

And so I went with my boyfriend up to Fort Funston, this beautiful part of San Francisco.

Great beaches.

These giant dunes that sit atop this battery of guns that could shoot rounds 13 miles into the ocean.

And I was like, I'm so excited to like, you know, to just kind of stare at the ocean.

And so we, we sort of climb up into the dunes and we sit down and the big waves are rolling in and the winds pick up and I'm being sandblasted in my face at like 40 miles an hour.

And within 30 seconds, I have grit in my teeth.

And I'm thinking, this was not the nature I was promised.

Why do I feel like I'm dying?

But it did do a great job of exfoliating your skin.

My skin has never really been...

That's dermabrasion, and some people pay lots of money for it.

Yes, I have been abraded.

I've been majorly abraded.

I'm Kevin Roose, a tech columnist at the New York Times.

I'm Casey Newton from Platformer.

And this is Hard Fork.

This week, is AI already taking away jobs?

Kevin makes the case.

Then, Anthropic Chief Product Officer Mike Krieger joins us to discuss Claude 4, the future of work, and the viral saga over whether an AI could blackmail you.

And finally, it's time for Hard Fork Crimes Division.

Dun-duh.

Is blackmail still a crime?

Hope so.

Well, Kevin, you have delivered some interesting news to us via the New York Times this week, and that is that the job market is not looking great for young graduates.

Yes, graduation season is upon us.

Millions of young Americans are getting their diplomas and heading out into the workforce.

And so I thought it was high time to investigate what is going on with jobs and AI and specifically with entry-level white-collar jobs, the kind that a lot of recent college graduates are applying for, because there are a couple of things that have made me think that we are starting to see signs of a looming crisis for entry-level white collar jobs.

So I thought I should investigate that.

Yeah, well, I'm excited to talk about this because I got an email today from a recent college grad and she wanted to know if I could help her get a job in marketing and tech.

And I thought, if you're just emailing me asking for a job, there must be a crisis going on in the job market.

Yes, that would not be my step one in looking for a job or maybe even my step 500.

But you've actually spent a lot of time looking into this question.

So tell us a little bit about what you did and what you were trying to figure out exactly.

So I've been interested in this question of AI and automation for years and like when are we going to start to see large-scale changes to employment from the use of AI.

And there are a couple of things that make me worried about this moment specifically and whether we are starting to see signs of an emerging jobs crisis for entry-level white-collar workers.

The first one is economic data.

So if you look at the unemployment rate for college graduates right now, it is unusually high.

It's about 5.8% in the U.S.

That has risen significantly, about 30% since 2022.

Recently, the New York Federal Reserve put out a bulletin on this and said the employment situation for recent college graduates had, quote, deteriorated noticeably.

And this tracks with some data that we've been getting from job websites and recruiting firms showing that especially for young college graduates in fields like tech and finance and consulting, the job picture is just much worse than it was even a few years ago.

And that rate that you mentioned, Kevin, that is higher for young people in entry-level jobs than it is for unemployment in the United States overall.

Is that right?

Yes, unemployment in the United States is actually like doing quite well.

We're in a very tight labor market, which is good.

We have, you know, pretty close to full employment.

But if you look specifically at the jobs done by recent college graduates, it is not looking so good.

And actually, the sort of job placement rates at a bunch of colleges and even top business schools like Harvard and Wharton and Stanford are worse this year than they have been in recent memory.

I was having dinner with a Wharton student last week, and she was telling me that a lot of her classmates had yet to be placed, and it was a real concern.

So anecdotally, that sounds right to me.

Okay, so that's the economic data that you're seeing.

What else is making you worried?

So one of the other things that's making me worried is the rise of so-called agentic AI systems, these AI tools that can not just do a question and answer session or respond to some prompt, but you can actually give them a task or a set of tasks and they can go out and do it and sort of check their own work and use various tools to complete those assignments.

One of the things that actually has updated me the most on this front are these Pokemon demos.

Casey, do you know what I'm talking about here?

You're talking about like Claude plays Pokemon?

Yes.

So within the last few months, it's become very trendy for AI companies to test their agentic AI systems by having them play Pokemon essentially from scratch, with no advance training.

And some of them do quite well.

Google said on stage at IO last week that Gemini 2.5 had actually been able to finish the entire game of Pokemon.

One of the games, there are, I think, probably at least 36 different Pokemon games on the market.

And I actually know for a fact, Google was playing a different Pokemon game than Anthropic was.

Oh, interesting.

So I'm not a Pokemon expert, but I also, like, I think people see these Pokemon demos and they think, well, that's cute.

But like, you know, how many people play Pokemon for a living?

It seems like more of a stunt than a real improvement in capabilities.

But the thing I am hearing from researchers in the AI industry and people who work on these systems is that this is not actually about Pokemon at all, that this is about automating white collar work.

Because if you can give an AI system a game of Pokemon and it can sort of figure out how to play the game, how to, I don't know Pokemon very well.

I'm more of a Magic: The Gathering guy, but my sense is you have to, like, go to various places and complete various tasks and collect various Pokemon.

You have to go into various gyms, you take your Pokemon, they compete against rival Pokemon, and your Pokemon have to vanquish the others in order for you to progress through the game, Kevin.

I hope that was helpful.

Exactly.

So, as I was saying, that is how you play Pokemon.

And what they are telling me is that this is actually some of the same techniques that you would use to train an AI to, for example, do the work of an entry-level software engineer or a paralegal or a junior consultant.

Yeah.

If your job is mostly like writing emails and updating spreadsheets, that is a kind of video game.

And if an AI system can just look at Pokemon and through trial and error, figure out how to play that and win that, it can probably figure out how to play the email and spreadsheet game too.

Exactly.

And one of the signs that is worrying me is that it does seem like these AI agents are becoming capable of carrying out longer and longer sequences of tasks.

So tell us about that.

So recently, Anthropic held an event to show off the newest model of Claude, Claude Opus 4, I believe it's called.

I believe it's Claude 4 Opus, actually.

Claude 4 Opus?

Got your ass.

Oh, yeah.

It's Claude 4 Sonnet and Claude 4 Opus.

Sometimes I feel like you don't respect the names of these products.

Do you know how much work went into the naming of these products?

At least five minutes.

They spent at least five minutes coming up with that, and then you're just going to shit all over it.

I'm so sorry.

Sorry.

So anyway, Claude Opus 4.

Claude 4 Opus.

No, I swear it's Claude Opus.

It's Claude 4 Opus.

No, it's Claude Opus 4.

What?

I'm looking at the Anthropic blog post.

Oh my God.

Claude Opus 4 and Claude Sonnet 4.

This is so confusing.

It's like your boyfriend doesn't even work there.

I'm going to be in big trouble when I get home.

So, okay.

Back to my point.

So, Anthropic holds this event last week where they're showing off their latest and greatest versions of Claude.

And one of the things they say about Claude Opus 4, their newest, most powerful model, is that it can code for hours at a time without stopping.

And in one demo with a client on a real coding task, Claude was able to code for as much as seven hours uninterrupted.

Now, you might think, well, that's just coding.

Maybe that's a very special field.

And there are some things about coding that make it low-hanging fruit for these sort of reinforcement learning models that can learn how to do tasks over time.

The problem for workers is that a lot of jobs, especially at the entry levels of white-collar occupations, are a lot like that, where you can build these sort of reinforcement learning environments where you can collect a bunch of data.

You can sort of have it essentially play itself like it would play Pokemon and eventually get very good at those kinds of tasks.

Yeah, you know, at Google I.O.

last week, Kevin, they showed off a demo of a feature where you can teach the AI how to do something.

You effectively show that you say to the AI, hey, watch me do this thing.

And then it watches you do the thing and then it can replicate the thing.

Can you imagine how many managers all around the world took a look at that and said, once I can teach the computer how to do things, a bunch of people are about to lose their damn jobs?

Totally.

And this is why some of the people building this stuff are starting to say that it's not just going to be software engineering that becomes displaced by these AI agents.

It's going to be all kinds of different work.

Dario Amodei, the CEO of Anthropic, gave an interview to Axios this week in which he said that within one to five years, 50% of entry-level white-collar jobs could be replaced.

Now, that could be wildly off.

Maybe it is much harder to train these AI systems in domains outside of coding.

But given what is happening just in the tech industry and just in software engineering, I think we have to take seriously the possibility that we are about to see a real bloodbath for entry-level white-collar workers.

Yeah, absolutely.

And we wonder why people don't like AI.

All right.

So first we've got the economic data showing that there is some sort of softness around hiring for young people.

We also just have the rise of these agentic systems.

But is there evidence out there, Kevin, that says that the AI actually is already replacing these jobs?

So, I talked to a bunch of economists and people who study the effects of AI on labor markets.

And what they said is that we can't conclusively see yet in the large economic samples that AI is displacing jobs.

But what we can see are companies that are starting to change their policies and procedures around AI to sort of prioritize the use of AI over the use of human labor.

So I'm sure you've been following these stories about these so-called AI-first companies.

Shopify was an early example of this.

Duolingo also did something related to this, where basically they are telling their employees: before you go out and hire a human for a given job or a given task, see if you can use AI to do that task first.

And only if the AI can't do it are you allowed to go out and hire someone.

Yeah.

And by the way, if you're wondering, Hard Fork is an AI-second organization, because at Hard Fork, the listener always comes first.

That's true.

So I think that what worries me, in addition to sort of the hints of this that we see in the economic data and the kind of evidence that these AI agents are getting much better, much more quickly than people anticipated, is just that the culture of automation and employment is changing very rapidly at some of the big tech companies.

Yeah, this feels like a classic case where the data is taking a while to catch up to the truth on the ground.

I also collect stories about this and would share maybe just a few things that I've noticed over the past couple of weeks here, Kevin.

The Times had a great story about how some Amazon engineers say that their managers are increasingly pushing them to use AI, raising their output goals and becoming less forgiving about them missing their deadlines.

The CEO of Klarna, which is a sort of buy now, pay later company, says its AI agent is now handling two-thirds of customer service chats.

The CEO of IBM said the company used AI agents to replace the work of 200 HR employees.

Now he says that they took the savings and plowed that into hiring more programmers and salespeople.

And then finally, the CEO of Duolingo says that the company is going to gradually stop using contractors to do work that AI can handle.

So that's just a collection of anecdotes.

But if you're looking for kind of spots on the horizon where it seems like there is truth to what Kevin is saying, I do think we're seeing that.

Yeah.

And I think the thing that makes me confident in saying that this is not just a blip, that there is something very strange going on in the job market now is talking with young people who are out there looking for jobs, trying to plan their careers.

Things do not feel normal to them.

So recently I had a conversation with a guy named Trevor Chow.

He's a 23-year-old, recent Stanford graduate, really smart guy, really skilled, the kind of person who like could go work anywhere he wanted basically after graduation.

And he actually turned down an offer from a high-frequency trading firm and decided to start a startup instead.

And his logic was that basically we might only have a few years left where humans have any kind of advantage in labor markets, where we have leverage, where our ability to sort of do complex and hard things is greater than those of AI systems.

And so basically, you want to do something risky now and not wait for a career that might take a few years or decades to pay off.

And so, you know, the way he explained it to me is like all of his friends are making these kind of similar calculations about their own career planning now.

They're looking out at the job market as it exists today and saying like, that doesn't look great for me, but maybe I can sort of find a way around some of these limitations.

That's interesting.

Well, let me try to bring some skepticism to this conversation, Kevin, because I know in your piece, you identified several other factors that help to explain why young people might be having trouble finding jobs.

You have tariffs.

You have just sort of the overall economic uncertainty that the Trump administration has created.

You have the sort of long tail of disruption from the pandemic or even the great recession, right?

That I think some economists believe that we might not totally have recovered from.

So it seems like there are a lot of explanations out there for why young folks are having trouble finding jobs that don't involve AI maybe at all.

Yeah, I think that's a fair point.

And I want to be really careful here about claiming that all of the data we're seeing about the unemployment being high for recent college graduates is due to AI.

We don't know that.

I think we will have to wait and see if there is more evidence that AI is starting to displace massive numbers of jobs.

But I think what the data is failing to capture or just at least not capturing yet is how eager and motivated the AI companies that build this stuff are to replace workers.

Every major AI lab right now is racing to build these highly capable, autonomous AI agents that could essentially become a drop-in remote worker that you would use in place of a human remote worker.

They see potentially trillions of dollars to be made doing this kind of thing.

And when they are talking openly and honestly about it, they will say like the barrier here is not some new algorithm that we have to develop or some new research breakthrough.

It's literally just we have to start paying attention to a field and caring about it enough to collect all the data and build the reinforcement learning training environments to automate work in that field.

And so they are just kind of planning to go sort of industry by industry and collect a bunch of data and use that to train the models to do the equivalent of whatever the entry-level worker does.

And like that could happen pretty quickly.

Yeah.

Well, so that feels like a threat.

Yeah, it's not great.

And I think the argument that they would make is that, you know, some of these entry-level jobs are pretty rote anyway.

And maybe that's not the best use of young people's skills.

I think the counter argument there is like those skills are actually quite important for building the knowledge that you need to become, you know, a contributor to a field later on.

Like, I don't know about you, but like my first job in journalism involved a bunch of rote and routine work.

One of my things that I had to do was like write corporate earnings stories where I would take some, you know, an earnings report from a company and like pull out all the important pieces of data and like put it into a story and like get it up on the website very quickly.

And like, was that the most thrilling work I can imagine doing or the highest and best use of my skills?

No, but it did help me develop some of these skills like reading an earnings statement that became pretty critical for me later on.

Interesting.

For what it's worth, my first job was, I think, actually the most physical job in journalism I ever had.

I covered a small town.

And so I spent all of my days just driving down to city hall, going down to the police station, sitting at the city council meeting, making phone calls.

A lot of drudgery sort of came in later.

But let me raise maybe an obvious objection to the idea that, oh, young people, don't worry.

These jobs that we're eliminating, it was just a bunch of drudgery anyway.

The young people need to pay their rent.

Yes.

You know, the young people need to buy health insurance.

And so I think they're not going to take a lot of comfort from the idea that the jobs that they don't have weren't particularly exciting.

Yes.

And the optimistic view is that, you know, if you just shift workers off of these like entry-level rote tasks into more productive or more creative or more collaborative roles, you kind of like free them up to do higher value work.

But I just don't know that that's going to happen.

I mean, I'm talking to people at companies who are saying things like, we don't really see a need for junior level software engineers, say, because now we can hire a mid-level software engineer and give them a bunch of AI tools and they can do all of the debugging and the code review and the stuff that the 22-year-olds used to do.

Yeah.

Let me ask about this in another way.

I think a lot of times we have seen CEOs use AI as the scapegoat for a bunch of layoffs that they already wanted to do anyway, or a bunch of sort of management decisions that they wanted to make anyway.

Earlier this year, there was a story in the San Francisco Standard that Marc Benioff, the CEO of Salesforce, said the company would not hire engineers this year due to AI.

I went to Salesforce's career page this morning, Kevin.

There were hundreds of engineering jobs there.

I don't know what wires got crossed.

You know, the story I read was in February.

Maybe something has changed since then.

But talk to me a little bit about the hype element in here, because I do feel like it's real.

Yes, there's definitely a hype element in here.

I worry that companies are kind of getting ahead of what the tools can actually deliver.

I mean, you mentioned Klarna, the buy now, pay later company.

A couple years ago, they made this big declaration that they were going to pivot to using AI for customer service.

And they announced this partnership with OpenAI and like they were going to try to drive down the number of human customer support agents to zero.

And then recently they've been backtracking on that.

They've been saying, well, actually, customers didn't like the AI customer service that they were getting.

And so we're going to have to start hiring humans again.

So I do think that this is a risk of some of this hype: it tempts executives at these companies to move faster than the technology is ready for.

Well, and speaking of that, one of my favorite stories from this week was about a guy who has set up a blog, Kevin, where I wonder if you saw this.

He keeps a database of every time that a lawyer has been caught using citations that were hallucinated by AI.

Did you see this?

No.

There are more than 100.

We've talked about this issue on the show a couple of times, and I've thought this must just be a small handful of cases, because who would be crazy enough to bet their entire career on a hallucinated legal citation?

Turns out more than 100 people.

And so a lot of people might be listening to this conversation saying, Kevin, you're telling me that we're standing on the brink of AI taking over everything.

These things still suck in super important ways.

So help us square that issue.

Like we know these systems are not reliable for many, many jobs.

So how can it be that so many CEOs are apparently ready to just junk their human workforces?

So I think part of the misunderstanding here is that there are, like, two different kinds of work.

There's work that can be sort of easily judged and verified to be correct or incorrect, like software engineering.

In software engineering, like either your code runs or it doesn't.

And that's a very clear signal that can then be sent back to the model in these sort of reinforcement learning systems to make it better over time.

Most jobs are not like that, right?

Most jobs, including law, including journalism, including lots of other white-collar jobs, do not have this very clearly defined indicator of success or failure.

And so that's actually like what is stopping some of these systems from improving in those areas is that it's not as easy to like train the model and say, give it a million examples of what a correct answer looks like and a million examples of what an incorrect answer looks like and sort of have it over time learn to do more of the correct thing.

So I think in law, this is a case where you do actually have more subjective outputs.

And so it's going to be a little harder to automate that work.

But I would say we also have to compare the rates of error against the human baseline, right?

You mentioned this database of cases in which human lawyers had used hallucinated citations in their briefs.

I imagine there are also human paralegals or lawyers who would make mistakes in their briefs as well.

And so I think for law firms or any company trying to figure out, like, do we bring in AI to do a job?

The question they're asking is not, is this AI system completely error-free?

It's, is this less likely to make errors than the humans I currently have doing this work?

Right.

And like in so many things, if the system is like 20% worse than a human, but 80% less expensive, a lot of CEOs are going to be happy to make that trade.

Totally.

All right.

Well, so let's bring it home here.

I imagine we might have some college students listening or some recent college grads.

They're now thoroughly depressed.

They're drinking.

It's Friday morning.

They're wasted.

As they sort of sober up, Kevin, what would you tell them about what to do with any of this information?

Is there anything constructive that they can do, assuming that some of these changes do come to pass?

So I really haven't heard a lot of good and constructive ideas for young people who are just starting out in their careers.

You know, people will say stuff like, oh, you should just, you know, be adaptable and resilient.

And that's sort of like what Demis Hassabis told us last week on the show when we asked him, like, what young people should do.

I don't find that very satisfying in part because it's just so hard to predict like which industries are going to be disrupted by this technology.

But I don't know.

Have you heard any good advice for young people?

Well, I mean, I think what you're running into, Kevin, is the fact that our entire system for young grads is set up for them to take entry-level jobs and gradually acquire more skills.

And what you're saying is that those, that part of the ladder is just going to be hacked off with a chainsaw.

And so what do you do next?

So of course there's, there's no good answer, right?

The system hasn't been built that way.

I think that in general, the internet has been a pressure mechanism forcing people to specialize, to get niche.

The most money and the most opportunity is around developing some sort of scarce expertise.

I have tried to build my career as a journalist by trying to identify a couple of ways where I could do that.

It's worked out all right for me, but I also had the benefit of entry-level jobs.

So if somebody had come to me at the age of 21 and said, if you want to succeed in journalism, get really niche and specialize, I would have said, okay, but, like, I need to go have a job first.

Like, is there one of those?

So to me, that's like kind of the tension.

I will also say there's never been a better time to be a Nepo baby.

I don't know if you've been following the Gracie Abrams story.

Very talented songwriter, daughter of J.J.

Abrams, the filmmaker.

You know, she's born into wealth and now she's best friends with Taylor Swift.

If you can manage something like that, I think you're going to be very happy.

Yes, I hear that advice.

And I would also add one other thing that I am starting to hear from the young people that I am talking to about this, which is that it is actually possible, at least in some industries, to sort of leapfrog over those entry-level jobs.

If you can get really good at sort of being a manager of AI workflows and AI systems and AI tools, if you can kind of orchestrate complex projects using these AI tools, some companies will actually hire you straight into those higher-level jobs because, you know, even if they don't need someone to like create the research briefs, they need people who understand how to make the AI tools that create the research briefs.

And so, that is, I think, a path that is becoming available to people at some companies.

Yeah, I would just also say that in general, it really does take a long time for technology to diffuse around the world.

Look at like the percentage of e-commerce in the United States.

It's like less than 20% of all commerce.

And we're what, 25 plus years into Amazon.com existing.

So I think that one of the ways that you and I tend to disagree is I just think you have like shorter timelines than I do.

Like I think we basically think the same things are going to happen, but like you think that they're going to happen like imminently.

And I think it's going to take several more years.

So I do think everything we've discussed today, it's going to be a problem for all of us like before too, too long.

But I think if you're part of the class of 2025, you will still probably find an entry-level job in the end.

I hope you're right.

And if not, we promise to make another podcast episode about just how badly all of this is going.

Well, Casey, that wraps our discussion about AI and jobs, but we do want to hear from our listeners on this.

If you have lost your job because of AI or if you are worried that your job is rapidly being replaced by AI, we want to hear from you.

Send us a note with your story at hardfork at nytimes.com.

We may feature it in an upcoming episode.

Yeah, we love voicemails too if you want to send one of those.

When we come back, a conversation with Mike Krieger, the chief product officer of Anthropic, about new Agentic AI systems and whether they're going to take all our jobs.

Or maybe blackmail us.

Or maybe both.

Who knows?

Dell AI PCs are newly designed to help you do more, faster.

That's the power of Dell AI powered by Intel Core Ultra Processors.

Upgrade today by visiting dell.com slash deals.

Can your software engineering team plan, track, and ship faster?

Monday Dev says yes.

With Monday Dev, you get fully customizable workflows, AI-powered context, and integrations that work within your IDE.

No more admin bottlenecks, no add-ons, no BS.

Just a frictionless platform built for developers.

Try it for free at monday.com slash dev.

You'll love it.

This podcast is supported by the all-new 2025 Volkswagen Tiguan.

A massage chair might seem a bit extravagant, especially these days.

Eight different settings, adjustable intensity, plus it's heated, and it just feels so good.

Yes, a massage chair might seem a bit extravagant, but when it can come with a car,

suddenly it seems quite practical.

The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats, that only feels extravagant.

Well, Casey, we've got a mic on the mic this week.

And I'm excited to talk to him.

So, Mike Krieger is here.

He is the co-founder of Instagram.

Product some of you may have heard of, little photo sharing app.

Currently, Mike is the chief product officer at Anthropic. Now, Casey, do you happen to know anyone who works at Anthropic?

As a matter of fact, Kevin, my boyfriend works there. And so, yeah, that's something, uh, I would like to disclose at the top of this segment.

Yeah. And my disclosure is that I work for the New York Times Company, which is suing OpenAI and Microsoft over copyright violations.

All right. So last week, Anthropic announced Claude 4.

We just spent a little bit of time talking about all of the new agentic coding capabilities that this system has.

I think Mike has a really interesting role in the AI ecosystem because his job, as I understand it, is to take these very powerful models and turn them into products that people and businesses actually want to use, which is a harder challenge than you might think.

Yes.

And also, Kevin, these products are really explicitly being designed to take away people's jobs.

And given the conversation that we just had, I want to bring this to Mike and say, how does he feel about building systems that might wind up putting a lot of people out of work?

Yeah, and Mike's perspective on this is really interesting because he is not an AI lifer, right?

He worked at a very successful startup before this.

He then spent some time at Facebook after Instagram was acquired there.

So he's really a veteran of the tech industry and, in particular, social media, which was sort of the last big product wave.

And so I'm interested in asking how the lessons of that wave have translated into how he builds products in AI today.

Well, then let's wave hello to Mike Krieger.

Let's bring him in.

Mike Krieger, welcome to Hard Fork.

Good to be here.

Well, Mike, we noticed that you didn't get to testify at the Meta antitrust trial.

Anything you wish you could have told the court?

Oh, you know.

That is the happiest news I got that week.

Like, I do not have to go to Washington, D.C. this week.

You got to focus on something else, which is the dynamic world of artificial intelligence.

Exactly.

So, you all just released Claude 4, two versions of it, Opus and Sonnet.

Tell us a little bit about Claude 4 and what it does relative to previous models.

Yeah.

First of all, I'm happy that we have both Opus and Sonnet out.

We were in this very confusing situation for a while where our biggest model was not our smartest model.

Now we have a, you know, biggest-and-smartest model, and then our, like, happy-go-lucky middle child Sonnet, which is back to its rightful place in there.

Yeah, both, we really focused on how do we get models able to do longer horizon work for people.

So not just here's a question, here's an answer, but hey, go off and think about this problem and then go solve it for tens of minutes to hours, actually.

Coding is an immediate kind of use case for that, but we're seeing it be used for, go solve this research problem, go off and write code, but not necessarily in the service of building software, but in the service of,

I need a presentation built.

So that was really the focus around both Claude models.

And Opus, the bigger, smarter model can do that for even longer.

We had one customer do a seven-hour refactor using Claude, which is pretty amazing.

Sonnet, maybe a little bit more time-constrained, but much more human in the loop.

Well, so let me ask about that customer, which is Rakuten, I believe, a Japanese technology company. And I read everywhere that they used Claude for seven hours to do it. One thought that came to mind is, well, wouldn't it have been better if it could have done it faster? Like, why is it a good thing that Claude worked for seven hours on something?

That was a good follow-up, which was, is that a seven-hour problem that took seven hours, or a 20-hour problem that took seven hours, or a 50-minute problem that it is still churning on today and we just had to stop it at some point? Um, it was a big refactor, which had, like, a lot of sort of iterative kind of, you know, loops and then tests.

And I think that's, that's what made it a longer horizon, like seven hour type of problem.

But it is an interesting question around like when you can get this asynchronicity of having it really work for a long time, does it change your relationship to the tool itself?

Like you want it to be checking in with you.

You want to be able to see progress.

Like if it does go astray, how do you reel it back in as well?

And like, what are seven hour problems that we're going to have, you know, going forward?

Most software engineering problems are probably one hour problems.

They're not seven hour problems.

So was this a case where it was, like, a real kind of set-it-and-forget-it? Like, walk away, come back at the end of the day, and okay, the refactor is done? Or was it more complicated than that?

That's my understanding. It was, like, a lot of, you know, migrating from one big version to another one, or just, like, changing frameworks. You know, I remember at Instagram, we had a moment where we changed network stacks, like, how Instagram communicated with our backend service. And it was like, we did the one migration to demonstrate it, and then we farmed it out to basically 20 engineers over the next month.

So that's exactly the kind of thing that today I would have given to Opus and said, all right, here's an example of one migration.

Please go and do the rest of our code base and let us focus on the more interesting stuff.

So I want to talk about all this agentic stuff and the implications for workers and labor markets and whatnot, but we have to talk about the blackmail stuff, because this is the thing that caught everyone's attention after these announcements last week.

Your safety engineers and testers found that in some early testing, Claude 4 would actually turn to blackmail when engineers tried to take it offline as part of a fictional scenario during a safety test.

What happened?

Yeah.

And I think to be clear, like these are bugs rather than features.

I think we should be clear as well.

I think a thing that drew me to Anthropic in the first place is we're asking the hard questions and putting the model through its paces.

And sometimes it surprises us in positive ways.

And sometimes you're like, all right, that's something that we're going to have to either mitigate in testing or further training.

And so there was, you know, a couple of examples where, in safety testing, either the model said, hey, I'm going to contact somebody if I have the tools to do so, or I'm going to try to manipulate it.

And so that's really the purpose of a lot of the safety testing is put it through its paces, see what it does, and then either focus on classifiers that we can use down the line to prevent that behavior or not give it those tools.

That's another important thing.

Like this is a very contrived scenario that we had there, or further train the model so that it demonstrates behavior that we think is more aligned, as we say.

Right.

I mean, in this specific test, the safety testers gave it some incriminating emails as part of a corporate sort of document set that showed that

in this fictional scenario, the researcher who was trying to turn Claude off was also having an extramarital affair.

And so one of the things that Claude tried to do is like blackmail the engineer into not replacing it, not shutting it down by saying, like, I could expose your affair.

Do I have that right?

I think that sounds right.

Yeah.

People got really mad about some of these safety tests when they were released or published.

There was another test where Claude was asked to do something unethical.

I think it was fake data in a pharmaceutical trial.

And one of the things that it tried to do in this simulated test environment was like use the command line to like tip off authorities and maybe send some incriminating evidence to the press to try to be a whistleblower for this fictional scenario.

I like that part.

Yeah.

And by the way, Claude, if you're listening, we're hardfork@nytimes.com.

Let us know.

If you see something, say something.

Put it in the Signal tool.

That's right.

Yeah.

I'm curious, like,

if you think that all models of a certain size and sophistication would demonstrate behaviors like this, and just the other AI labs building these models aren't talking about it as openly as Anthropic is?

Or do you think there is something specific about Claude that is more prone to,

for lack of a better word, narc on its users?

We don't know.

My suspicion is that they would have similar patterns.

I'd love to see that sort of experimentation happen as well.

I think there's a lot that is common to, you know,

what we've decided in our collective published and discussed works is appropriate behavior.

And then there's probably additional things that we're doing on the, we have a constitutional AI process.

We're really trying to train sort of goals for behavior for Claude rather than sort of...

if-then kind of rules, which very, very quickly, as we're discussing, kind of become insufficient when you deal with nuanced, complicated situations. But my guess is that a lot of the larger models would demonstrate emergent, interesting behaviors in that situation.

Yeah, which I think is, like, part of the value of doing this, right? It's not just, like, Anthropic saying, here's what's going on at Claude. Like, the stuff that Anthropic is finding out, I'm sure the other labs are finding out. And, you know, my hope is that this kind of work pressures the other labs to be like, yeah, okay, it's happening with us, too. And in fact, we did see people on X trying to replicate this scenario with models like o3, and they were very much finding the same thing.

Yeah.

I'm just so fascinated by this because it seems like it makes it quite challenging to develop products around these models whose behavioral properties we still don't fully understand.

Like when you were building Instagram, it wasn't like you were worried that the underlying feed ranking technology was going to like blackmail you if you did something inappropriate.

There's this sort of unknowability or this sort of inscrutability to these systems that must make it very challenging to build products on top of them.

Yeah, it's both a really interesting product challenge and also why it's an interesting product at all.

So I talked about this on stage at Code with Claude, where we did an early prototype alongside Amazon to see, like, could we help partner on Alexa Plus?

And I remember, in this really early prototype, I had built a tool that was, like, the timer tool, right?

Or like a reminder tool.

And one or the other was broken, like if the backend was broken for it.

And Claude was like, ooh, I can't set an alarm for you.

So instead, I'm going to set a 36-hour timer, which no human would do.

But it was like, oh, it's agentically figuring out that, like, that I need to solve the problem somehow.

And you can watch it do this.

Like, if you play with Claude Code, if it, you know, can't solve a problem one way, it'll be like, well, what about this other way?

You know, I was talking to one of our customers, and

somebody asked Claude, like, hey, can you generate a, you know, like a speech version of this text?

And Claude's like, well, I don't have that capability.

I am going to open Google, google "free TTS tool," paste the user text in there, and then hit play and then record and, like, basically export that.

And, like, nobody programmed that into Claude.

It's just Claude being creative and agentic.

And so a lot of the interesting product design around this is how do you enable all the interesting creativity and agency when it's needed, but prevent the, all right, well, I didn't want you to do that or I want more control.

And then secondarily, also, when it does it right one time, how do we kind of compile that into, great, now you figured this out?

You know, like you want somebody who can creatively solve a problem, but not every time.

If you had a worker that every time was like, I'm just going to like completely from first principles decide how I'm going to like write a Word document and be like, okay, great, but it's like day 70.

Like you know how to do this now.

My impression from the outside is that a lot of the usage of Claude is for coding. That Claude is used by many people for many things, but that the coding use case has been really surprisingly popular among your users.

What percentage of Claude usage is for coding-related tasks?

It's, I mean, on claude.ai, I would wager it's 30 to 40% even.

And that's even a product that I would say is fine for sort of code snippets, but it's not a coding tool like Claude Code, where obviously it's, I would say, 95 to 100%.

Some people use Claude Code for just talking to Claude, but it's really not the optimal way to talk to Claude.

But on claude.ai, you know, it's not the majority, but it is a good chunk of what people are using it for.

There was some reporting this week that Anthropic had decided toward the end of last year to invest less in Claude as a chatbot and sort of focus more on some of these coding use cases.

Give us a kind of state of Claude.

And if you're a big Claude fan and you were hoping for lots of cool new features and widgets, should those folks be disappointed?

I think of it as two things.

One is what is the model really good at?

And then how do we expose that in the products, both for ourselves and then, you know, who builds on top of Claude.

In terms of what the model is being trained on, again, it's the year of the agent.

I have this joke in meetings, like, how long can we go without saying agent?

And, you know, I think we made it like 10 minutes.

It's pretty good.

That capability unlocks a bunch of other things.

Like, sure, coding is a great example.

You can go and refactor code for tens of minutes or hours.

But,

hey, I want you to go off to do this research and help me, you know, prepare this, you know, research brief that I am doing.

Or I'm getting, you know, 50 invoices a day.

Can you scrub through them, you know, help me understand it and help them classify and aggregate?

Like, these are agentic behaviors that have applications beyond just coding.

And so we'll continue to push on that.

So as a Claude fan that likes to bring Claude to your work, then that's useful.

Meanwhile, we've also focused on the writing piece.

So I spent a lot of time writing with Claude.

It's not at the point where I would say like, write me a product strategy, but I'll often be like, here's a sample of my writing.

Here's some bullets.

Help me like write this longer form doc and do this effectively.

And I'm finding it's getting really good at matching tone, producing, like, non-clichéd filler text.

Like, if I look at Sonnet 3.7, it's a pretty good writer, but there are, like, turns of phrase to me that are, like, decidedly Claude-y.

I'm like, it's not just revolutionizing AI.

It's also, and I'm like, it loves that phrase, for example.

And it's, like, a little bit of a Claude tell.

And so for, like, the Claude fans, like, we'll help you get your work done, but hopefully we'll also help you write and just be a good conversational partner as well.

Let's talk about the labor implications of all of the agentic AI tools that you all and other AI labs are building.

Dario, your CEO, told Axios this week that he is worried that as many as 50% of all entry-level white-collar jobs could disappear in the next one to five years.

You were also on stage with him last week, and you asked him when he thinks there will be the first billion-dollar company with one human employee.

And he answered 2026, next year.

Do you think that's true?

And do you think we are headed for a wipeout of early career professionals in white-collar industries?

I think this is another example of, I presume a lot of the labs and other people in the industry are looking and thinking about this, but there is not a lot of conversation about this.

And I think one of the jobs Anthropic can uniquely have is to surface them and have the conversation.

I'll start maybe with the entrepreneurial one and then maybe the entry.

We'll do the entry one next.

On the entrepreneurship, absolutely.

Like that feels like it's inevitable.

I joked, you know, with Dario, like, you know, we did it at Instagram with 13 people and, you know,

we could have likely done it with less.

So that, that feels inevitable.

On the labor side, I think

what I see inside Anthropic is our, you know, our most experienced, best people have become kind of orchestrators of Claudes, right?

Where they're running multiple Claude Codes in terminals, like, farming out work to them.

Some of them would have maybe assigned that task to like a new engineer, for example.

And not the entirety of the new engineer's job, right?

There's a lot more to engineering than just doing the coding, but part of that role is in there.

And so when I think about how we're hiring, just very transparently, like, we have tended more towards the, like, IC5, which is kind of, like, our, you know, career level, you know, you've been doing it for a few years and beyond.

And I have some hesitancy at hiring new grads, partly because we're just not as developed as an organization to, like, have a really good internship program and help people onboard, but also partially because that seems like a shifting role in the next few years.

Now, if somebody was an IC3, IC4, and extremely good at using Claude to do their work, then of course, like, we would bring them on as well.

So there is, I think, a continued role for people that have embraced these tools to make themselves in many ways as productive as a senior engineer. And then their job is, how do you get mentored so you actually acquire the wisdom and experience? So that you're not just doing seven hours of work to the wrong end, you know, or in a way that's going to be, you know, a spaghetti vibe-coded mess that you can't actually then maintain a year from now, because it wasn't just a weekend project. The place where it's less known, and I think something that we'll have to study over the next, you know, several months to a year, is for the jobs that are more,

is it data entry, is it data processing, where you can set up an agent to do it pretty reliably. You'll need people in the loop there still to validate the work, to even set up that agentic work in the first place.

But I think it would be unrealistic that the exact same jobs look exactly the same, even a year or two from now.

So, you know, as somebody who runs a business, I get the appeal of having a sort of, you know, digital CTO, salesperson, whatever else, you know, these APIs will soon be able to do.

That could create a lot of like value in my life.

At the same time, most people do not run businesses.

Most people are W-2 employees and they email us when we have conversations like this.

And they want us to ask really hard questions of folks like yourself.

And I think it's because they're listening to all this and they're just like, why would I be rooting for this person?

Right.

Like this person is telling me that he's coming to take my job away and he doesn't know what's going to come after that.

So I'm curious how you think about that.

And like, what is the role that you're kind of playing in this ecosystem right now?

Yeah, I think for as long as possible, the things that I'm trying to build from a product perspective are ways in which we augment and accelerate people's own work, right?

I think there's, um, and

different players will take different approaches.

And I think there will be like a marketplace of ideas here.

But when we think about things that we want to build from a first-party perspective, it's, all right, are you able to take somebody's existing, you know, application or their role and, like, be more of themselves, right? A useful thought partner, an extender of their work, a researcher, an, um, augmenter of how they're doing. Will that be the role AI will have forever? Likely not, right? Because it is going to get more powerful. And then, you know, if you spend time with people who are, like, really deep in the field, they're like, oh, you know, eventually they'll be running companies. I'm not sure we're there yet. I think the AIs lack a lot of sort of, like, organizational and, like, long-term discernment to do that successfully.

I think, you know, it can do a seven-hour refactor.

It's not going to go conceptualize and then operate a company.

I think we are years away from something like that.

So I think

there's choices you can make around what you focus on, and I think that's where it starts. Whether that's the thing that makes it so they're perfectly complementary forever? Likely not.

But hopefully we're nudging things in the right way as we also figure out the broader societal question of how do we scaffold our way there?

You know, what are the new jobs that do get created?

How do their roles change?

Like, how does the economy and the safety net change in that new world?

Like, I don't think we're six months to a year from solving those questions.

I don't think we need to be just yet, but we should be having the conversation now.

I think this is one place where I do find myself getting a little frustrated with the AI safety community in that I think they're very smart and well-intentioned when it comes to analyzing the risks that AI poses if it were to go rogue or develop some

malign goal and pursue that.

I don't think the sort of conversation about job loss and the conversation about AI safety are close enough together in people's minds.

And I don't think, for example, that a society where you did have 15 or 20% unemployment for early career college graduates is a safe society.

I think we've seen over and over again that when you have high unemployment, your society just becomes much less safe and stable in many ways.

And so I would love if the people thinking about AI safety for a living at places like Anthropic also brought into that conversation the safety fallout from widespread job automation, because I think that could be something that catches a lot of people by surprise.

Yeah, we have both our economic impact kind of societal impacts team and our AI safety team.

I think it's a useful nudge around how do those two come together, because there are second-order implications of any kind of major labor changes.

Are you guys in the conversations with policymakers, regulators, sort of trying to like ring alarm bells?

Are you hearing anything back from them that makes you feel like they're taking you seriously?

I'm not in the policy conversations as much, being more on the product side.

I do think those conversations are happening, and there is more, you know, it's this interesting thing where the critique a year ago, maybe it's changed a bit, was, oh, you guys are talking your own book.

You're like, this is not going to happen.

Like you're just, you know, it's all hype.

And probably some of it was folks hyping it up. At least the kind of alarm bells or, you know, signals that I've seen coming out of Anthropic are like, no, we think this is real.

And we think that we should start reckoning with it.

Believe it or not, like, even if you assume it is, like, a low-probability thing, shouldn't we at least have a story around what that looks like?

You were one of the co-founders of Instagram, a very successful product used by many, many people.

But social media in general has had a number of negative unintended consequences that you may not have envisioned back when you were first releasing Instagram.

Are there lessons around the trajectory of social media and unintended harms that you take with you now into your work on AI?

I think you have to reckon with these.

I mean, AI is already globally deployed and has products with a billion-ish users.

So it would be silly to say like it's early in the AI adoption curve, but it actually is early in the AI adoption curve.

I think with social media, when it was me and Kevin taking photos of really great meals in San Francisco, you know, with our iPhone 3GS, like, you know, Kevin Systrom, not being, yeah, yeah, I don't know.

You were probably early on Instagram, maybe.

Yeah.

Yeah, you were a Hipstamatic guy.

The more important thing was you would just never invite this Kevin to dinner.

Yeah.

But you were, okay, yeah.

So back in those days.

Yeah, you could kind of maybe extrapolate and say, all right, you know, if everybody used this, what would happen?

But it almost didn't feel like the right question to ask.

And the challenges that came at scale, I think as a platform grows that large, it just becomes much more a mirror of society and all of its both, you know, positives and negatives.

And it also enables new kind of unique behaviors that you then have to mitigate.

But yes, you could have foreseen it at scale.

I'm not sure you would have designed, maybe you would have designed different moderation systems along the way, but at first you're just like, there's 10 people using this product.

Like, I don't, we just need to see if there's a there there, right?

AI feels much different because, one, on an individual basis, like, the reason we have the responsible scaling policy is that, you know, for biosecurity, that doesn't involve a billion people using Claude, you know, or an AI, for something negative.

It could just be one person that we want to actually make sure we address and mitigate.

So the sort of scale needed from a reach perspective is really different.

That, I think, is very different from the social media perspective.

And the second one, at least for Cloud, which is primarily a single player experience, the issues are less relational, right?

Like with Instagram, the harms at scale come, like if you only used Instagram in a private mode with zero followers, maybe you'd feel quite lonely.

Maybe that's a whole separate thing.

But

the kinds of things that you might think about in terms of bullying among teenagers or body image, like, those wouldn't really come up if you're using it as an Instagram diary, right?

AI, you can have much more of that individual one-on-one experience and it is single player, which is why like, you know, there's a really thought-provoking, again, internal essay just recently around

we shouldn't take thumbs-up and thumbs-down data from, you know, Claude users and think of that as the North Star.

Like we aren't out here to please people, right?

And we should fix bugs, and we should fix places where it didn't succeed, but we shouldn't just be out there telling people what they want to hear if it's not actually the right thing for them.

So this is something I've been thinking about a lot because, you know, there are many people today who have the experience of Instagram of like, I like this a certain amount, but I feel like I look at it more than I want to and I'm having trouble managing that experience.

And so maybe I'm just going to delete it from my phone.

I look at where the state of the art is with chatbots, and I feel like this stuff is already so much more compelling in some ways, right?

Because it does generally agree with you.

It does take your side.

It's trying to help you.

It might be a better listener than any friend that you have in your life.

And I think, you know, when I use Claude, I feel like the tuning is like pretty good.

I do not feel like it is sycophantic or, you know, sort of being, like, very obsequious, but I can absolutely imagine someone taking the Claude API and just building that and, like, putting it in the App Store as, like, Fun Teen Chatbot 2000.

How do you think about what the experience is going to be, particularly for young people using those bots?

And are there risks of whatever that relationship is going to turn out to be for them?

Yeah, I think if you talk to, like, Alex Wang from Scale, he's like, in the future, most people's friends will be AI friends.

And I don't necessarily like that conclusion, but I don't know that he's wrong. Also, if you think about, like, the availability of it. And, um, I think it's really important to have relationships in your life with people that will disappoint you and be disappointed by you. And if you imagine it was just pure AI, it would never be the same, right?

And so, um, I think uh, maybe two answers there.

Like, one,

we should just confront it and be really vocal about it, not just pretend that it's not happening, right?

It's like, what are the conversations that people are having with AI at scale?

And what is, what do we want as a society?

Like, do we want AI to have like some sort of moderator process?

It's like, hey, your conversation with this particular AI is getting a little too, you know, real, weird. Like, maybe it's time to step back.

Like, will Apple eventually build the equivalent of screen time that's more like AI time?

I don't know.

It's like, there's a bunch of interesting privacy questions down the road, but maybe that is interesting even for parents.

Like, how do you think about moderating the experiences that your kids have with AI?

It's probably going to be at the platform level, right?

It's going to get interesting.

Your App Store example is an interesting one.

That'll be a really fascinating question.

And then the second piece is, you know, as we think about moving up the, you know, safety levels thing, I mean, the responsible scaling policy is also a living document.

Like we've iterated on it and added to it or refined the language.

I think it will be interesting to think about. And manipulation is one of the things that's in there, and something that we look for, and deception, but also, like, over-friendliness.

I'm not sure exactly the word I'm looking for, but that sort of like over-competition.

Glazing, I believe, is the industry term of art.

You know, that sort of like over-reliance, I think is also an AI risk that we should be thinking about.

Yeah, so if you're a parent right now of like a teenager and you find out that

they're speaking with a chatbot a lot, what is your instinct to tell them?

Is it you need to sort of supervise this closer, like read the chats, or maybe, no, don't be too worried about it, or like, unless you see this thing, don't worry about it?

I think it depends a little bit on the product.

I mean, you have to,

especially with Claude, which currently has no memory, which mostly is a limitation of the product, but also makes it so that it's harder to have that kind of, like, deep engagement with it.

But anyway, as we think about adding memory, like, what are the things?

I've thought about, one of the things that I'd like to do is introduce a family plan where you have, like, child or teen accounts, but with parent, you know, visibility on there.

Maybe we could even do it in a privacy-preserving way where it's not like you can read all your teens chats.

So then maybe that's the right design there.

But maybe what you can do is have a conversation with Claude that also can read the teen's chats, but does it in a way where, like, it might not tell you exactly what your teen felt about you last night when you, like, told them no, but it will tell you, like, hey, this behavior over time, I'm flagging something to you that you need to go and follow up on.

Like you can't abscond responsibility from the parent, though.

Right.

Actually, I mean, that's really interesting if the, if the bot could say something like, your teen is having a lot of conversations about disordered eating, you know, or something.

Yeah, I want to think more about that.

My last question.

Earlier, before you got here, Kevin and I had a huge fight because I thought it was Claude 4 Opus.

And then he was like, no, it's Claude Opus 4.

And he turned out to be right.

So why is it like that?

We changed it partially because this is a vigorous internal debate.

There was something we really spent our time on as well.

I'll give you two.

One, aesthetically, I like it better, and we were tending towards it. And also, like, we think over time we may choose to release more Opuses and more Sonnets, and having the big, important thing in the name be the version number kind of created this thing where we're like, well, you had Claude 3.5 Sonnet, why didn't you have Claude 3.5 Opus? And it was like, well, we wanted to make the next Opus really worthy of the Opus name. And so, uh, maybe flipping the priority in there as well. But it drove the team crazy, because now our model page is like, you have Claude 3.7 Sonnet and Claude Sonnet 4.

Like, what are you doing?

I feel like we can't do a release without doing at least something mildly controversial on naming.

And as the person responsible for Claude 3.5 Sonnet V2, I hope we're getting better.

And hopefully, the AI can just name things in the future.

Let us hope.

Mike Krieger, thanks for coming.

Thanks, thanks for having me.

When we come back, we're headed to court for Hard Fork Crimes Division.

Over the last two decades, the world has witnessed incredible progress.

From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Invesco QQQ, let's rethink possibility.

There are risks when investing in ETFs, including possible loss of money.

An ETF's risks are similar to those of stocks.

Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at Invesco.com.

Invesco Distributors Incorporated.

AI is transforming the world, and it starts with the right compute.

ARM is the AI compute platform trusted by global leaders.

Proudly, NASDAQ listed, built for the future.

Visit arm.com/discover.

This podcast is supported by the all-new 2025 Volkswagen Tiguan.

A massage chair might seem a bit extravagant, especially these days.

Eight different settings, adjustable intensity, plus it's heated, and it just feels so good.

Yes, a massage chair might seem a bit extravagant, but when it can come with a car,

suddenly it seems quite practical.

The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats, it only feels extravagant.

Kevin, from time to time, we like to check in on the miscreants, the mischief makers, and the hooligans in the world that we cover to see who out there is causing trouble.

Yes, it is time for another installment of our hard fork crimes division.

Let's open the case files.

All right, Casey, first on the docket, Meta rests its case.

After a six-week antitrust trial, the case of the Federal Trade Commission versus Meta Platforms has wrapped up and is now in the hands of Judge James E. Boasberg, who has said that he will work expeditiously to make a judgment in the case.

Casey, how do you think Meta's antitrust trial went?

Well, so if you're just catching up, Meta, of course, has been accused of illegally maintaining its monopoly in a market that the FTC calls personal social networking.

And they did this by acquiring Instagram and WhatsApp in the early 2010s.

And the government has said that prevented a lot of competition in the market.

That introduced a lot of harms to consumers, such as the fact that we have less privacy, because that's just kind of not an axis that there are any companies left to compete over.

And the government spent a lot of time making that case, but Kevin, I'm not sure it went that well for them.

Yeah, do you think Meta's going to win this one?

I think Meta has a really good chance.

You know, your colleague, Cecilia Kang, noted in the Times that Meta called only eight witnesses over four days to bat down the government's charges.

When you consider how much revenue Instagram and WhatsApp generate for Meta and what a sort of existential threat to their business it would be to have to spin these things off, I thought it was pretty crazy that they felt like they had made their entire case in four days.

Well, maybe their case was so simple and straightforward that they didn't need to do any more.

Or maybe they just wanted to frame it in terms of a real,

yeah, they did a short-form antitrust trial.

That's huge right now.

Well, look, I think the real issue here is that Meta's argument is pretty simple.

They're saying we face tons of competition.

Have you ever heard of TikTok?

The way this case is built, if the judge considers TikTok to be a meaningful competitor to Meta today, it may be extremely difficult for him to say, we're going to unwind a merger that in the case of Instagram took place 13 years ago.

I guess we will see very shortly whether this is an actual crime that belongs in the Hard Fork Crimes Division or whether this was just a tempest in a teapot.

Yeah.

Well, you know, sometimes criminals get away with things, Kevin.

Moving on.

Case file number two, the crypto gangs of New York.

This comes to us from Chelsea Rose Marcius and Maya Coleman at the New York Times, and they write that another suspect has been arrested in a Bitcoin kidnapping and torture case.

And let me say right up front, this story is not funny.

It is extremely scary.

Not funny at all.

In fact, quite tragic.

There has been a recent wave of Bitcoin and crypto-related crimes, people attacking people to try to steal their Bitcoin passwords and steal their money.

This has been happening over in Europe, in France, in just the last few months.

There have been several attacks on crypto investors, people with lots of money in cryptocurrency.

These have been called the wrench attacks because criminals are coming after these investors and executives violently, in some cases with wrenches.

This most recent case happened in New York, in the Nolita neighborhood of Manhattan, where an Italian man named Michael Valentino Teofrasto Carturan

was allegedly kidnapped and tortured for nearly three weeks in a luxury townhouse by criminals who were apparently trying to get him to reveal his Bitcoin password.

Casey, what did you make of this?

Well, to me, the important question here is, why is this happening so much?

And the reason is because if a criminal can get you to give up your Bitcoin password, that's the ballgame.

In most cases, there is no getting your money back.

It can be relatively trivial for this money to be laundered and for there to be no trace of what happened to your funds.

That is not true if you're just a regular millionaire walking around town, right?

Obviously, you know, you may be vulnerable to robberies or other sort of scams or theft, but you know, if you give up your bank password, for example, in most cases, you would be able to get your money back if it had been illegally transferred.

So this is just a classic case of Bitcoin and crypto continuing to be a true Wild West where people can just run up to you off the street and hit you over the head with a wrench.

And it's really scary.

Yeah, it's really scary.

And I should say, this is something that I think crypto people have been right about.

Years ago, when I was covering crypto more intently, I remember people sort of telling me that they were hiring bodyguards and personal security guards.

And it seemed a little excessive to me.

These were not by and large famous people who would like get recognized on the street, but their whole reasoning process was that they were uniquely vulnerable because crypto is very hard to reverse once you've stolen it.

It's very hard to get your money back from a criminal who steals it.

And that meant that they were more paranoid than like a CEO of a public company would be maybe walking around.

You know, I read a blog post on Andreessen Horowitz's website recently, so you know I was having a great day.

And they've hired a former Secret Service agent to, among other things, help crypto founders prevent themselves from getting hit over the head with a wrench.

And he has sort of an elaborate guide to like the things that you could do.

But my main takeaway from it is if you're a crypto millionaire, you have to spend the rest of your life in a state of mild to moderate anxiety about being attacked at any moment, particularly if you're out in public.

Yeah, I do think it justifies the sort of lay low strategy that a lot of crypto entrepreneurs had during the first big crypto boom, where they would sort of have these like anonymous accounts that were out there that were them, but no one really linked it to their real identity.

I think we are going to start seeing more people, especially in crypto, using these sort of pseudonymous identities.

I mean, this is one of the reasons that, you know, people say that Satoshi Nakamoto has never wanted to reveal him or herself after all these years is because there would be a security risk associated with that.

But I think this is really sad.

And criminals, cut it out.

And here's my message to all the criminals out there.

I don't own any crypto and I will continue to not own any crypto.

You can keep your wrenches to yourself.

All right, last up on the docket for today, this one.

Oh, I love this one, Casey.

I've been dying to talk about this one with you.

Tell me.

Elizabeth Holmes' partner has a new blood testing startup.

So, Casey, you may remember the tragic story of Elizabeth Holmes, who is currently serving an 11-plus-year prison sentence for fraud that she committed in connection with her blood diagnostic company, Theranos.

Because God forbid a woman have hobbies.

Well, Elizabeth Holmes has a partner named Billy Evans.

They have two kids together.

And Billy is out there raising money for a new startup called Haemanthus, which is, drumroll, please, a blood diagnostics company that describes itself as a radically new approach to health testing.

This is according to a story in the New York Times by Rob Copeland, who says that Billy Evans' company is hoping to raise $50 million to build a prototype device that looks not all that dissimilar from the device that put Elizabeth Holmes in prison, the Theranos Mini Lab.

And according to this story, the investor materials don't mention any connection between Billy Evans and Elizabeth Holmes.

Hmm.

Well, I wonder why that is.

I have to say, she does have some experience that is relevant here, Kevin.

Why not lean on that?

Now, do we know what Haemanthus means?

Is that like sort of a name taken from historical antiquity and we'll look it up and it turns out it's like an ogre that used to like stab people with a spear or something?

I assumed it was like ancient Greek for like, we're serious this time.

According to Wikipedia, Kevin, it's actually a genus of flowering plants that grows in southern Africa.

But members of the genus are known as the Blood Lily.

And I want to say, is it too late to change the name of the company to Blood Lily?

Yeah, that one, I like that one better.

I did spend some time this morning because I was on my commute just trying to brainstorm some better titles for this startup that was run by Elizabeth Holmes' partner and does something very similar to Theranos.

All right, let me run these by you.

Okay.

Blood Test 2, Electric Boogaloo.

No.

Fake Tricks Reloaded.

That's a Matrix Reloaded.

I like that it was high concept.

Okay, here's one.

Okay.

Thera, yes.

That's good.

Let's go with that one.

Okay.

Well, good luck to Billy Evans with Thera Yes.

$50 million.

Andreessen Horowitz will give that to them.

You know, they love to be contrarians.

Yeah.

Here's my prediction.

The startup is going to get funded and they're going to release something.

Yeah.

And you're going to have to figure out how to keep your family safe from it.

Listen, if they're doing another Fyre Fest, they're going to do another Theranos.

You better believe it.

We have learned nothing.

Theranos is back.

Well, Casey, that brings to a conclusion this week's installment of Hard Fork Crimes Division.

And to all the criminals out there, keep your nose clean, stay low, try to stay out of the funny pages.

You're on notice.


Original music by Diane Wong, Rowan Niemisto, and Dan Powell.

Our executive producer is Jen Poyant.

Video production by Sawyer Roquet, Pat Gunther, and Chris Schott.

You can watch this full episode on YouTube at youtube.com/hardfork.

Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad, and Jeffrey Miranda.

You can email us, as always, at hardfork@nytimes.com.

Send us your ideas for a blood testing startup.

At Capella University, learning online doesn't mean learning alone.

You'll get support from people who care about your success, like your enrollment specialist who gets to know you and the goals you'd like to achieve.

You'll also get a designated academic coach who's with you throughout your entire program.

Plus, career coaches are available to help you navigate your professional goals.

A different future is closer than you think with Capella University.

Learn more at capella.edu.