GPT-5 Backlash + Perplexity C.E.O. Aravind Srinivas on the Browser Wars + Hot Mess Express

1h 11m
“I think this was a growing up moment for OpenAI and the industry.”


Transcript

Still using a copy-paste website?

Break the template trap with Framer.

Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in minutes.

Under the hood, you get responsive breakpoints, built-in hosting, a flexible CMS, and privacy-friendly analytics.

Ready to build a site that looks hand-coded without hiring a developer?

Launch your site for free at framer.com and use code HardFork to get your first month of pro on the house.

That's framer.com, promo code HardFork.

Rules and restrictions may apply.

I saw something new this week.

What'd you see?

So I was on a flight.

I went to the East Coast for a wedding last weekend.

And on the flight back, I saw a woman play Balatro, the mobile phone game, for six hours.

Honestly, one of the least surprising things you've ever said to me on this podcast, because I've absolutely played Balatro for multiple hours.

She did not look up.

She did not get a drink.

She did not go to the bathroom.

She was locked in to her phone for the entire flight.

And I think this game should be outlawed.

I've never even like really played Balatro.

You tried to get me into it, but something that they're putting in that game is driving people to madness.

It is the perfect phone-based game because it can fill up any amount of time from 30 seconds to six hours, you know, like, and that is just a precious thing.

So I have wasted many hours on a flight with Balatro.

And for what it's worth, I do not experience this game as something that's like so addictive that I can't put it down.

I experience it as, oh, I got some time to kill.

I know the perfect thing that will help me do that.

But as soon as like, you know, I'm with a friend, like I'm not thinking, oh, I got to get back to Balatro.

Yeah.

Actually, one time my boyfriend's friends were over and there was a lot of like discussion back and forth about what kind of takeout we should order.

And it was just kind of clear that I was not really going to be steering this decision.

And I just kind of like started thinking, you know, I'm halfway through a Balatro run.

I might like, and so I got my phone out of my pocket and like I played a couple of hands.

And then afterwards, my boyfriend was like,

it would be great if you didn't play Balatro while my friends were over.

And he was right.

And I apologize.

Yeah.

I'm Kevin Roose, a tech columnist at the New York Times.

I'm Casey Newton from Platformer.

And this is Hard Fork. This week, the backlash against GPT-5 and what AI companies are learning from the fallout.

Then, Perplexity CEO Aravind Srinivas returns to the show to discuss his $34 billion bid to buy Google Chrome.

And finally, I hear that trainer coming, Kevin.

The Hot Mess Express has returned.

Chugga Chugga Choo Choo, the caboose is loose.

Well, Casey, it's been a busy week on the internet for AI companies and the backlash to them.

That's right, Kevin.

Basically, every day since we were last in the studio, there has been a big piece of news, most of it related in one way or another to GPT-5.

Yes, let's talk about the GPT-5 backlash because I think it is so interesting for a number of different reasons.

It is also extremely complicated to follow.

It feels like everything changes every 24 hours.

So, can you just walk me through what has been happening since we last taped last week?

Well, at a high level, Kevin, I think OpenAI was caught by surprise at some of the negative reactions to GPT-5, really less about the model itself and more about some changes that they made to the product, taking away some legacy models, putting limits on how the product could be used.

And so over the past week, the company over a series of changes has tried to address some of those criticisms.

And I think the outrage has actually been quite revealing.

Yes.

So let's get into it.

But before we do, we should make our disclosures.

The New York Times company is suing OpenAI and Microsoft over copyright violations related to the training of large language models.

And my boyfriend works at Anthropic.

Okay, so Casey, last week we talked about GPT-5, what it does, how it might be better, how it might be a little bit worse.

You gave us your first impressions.

I've now had a little time to play around with GPT-5 myself.

So let's start with that.

Has your own assessment of GPT-5 changed at all in the past week?

I would say yes.

And actually, mostly for the better.

I think the more time I've spent with it, the more I'm just figuring out what it's good at.

Like three things that I would highlight quickly.

One, the fact that it is faster than its predecessor means that I use it more.

Two, I think it gives better follow-up suggestions.

So now it'll do things like if I ask it about some current events thing, it'll say, hey, do you want me to like keep track of this?

I can like, you know, email you as there are updates to the story.

That's super useful.

It didn't used to do that.

And then finally, while OpenAI touted the fact that they were going to take away this model picker that we were all using to say, well, we want you to think this hard or don't think hard or we want it fast or we want it really complicated.

They said, don't do that anymore.

We'll sort of automatically route it.

What I figured out over the past week is I actually do still want to use the model picker and I'm going to sort of decide for myself how much I want GPT to think.

I'm having the same experience.

I thought it was pretty smart of OpenAI to deprecate the model picker, but then I just found myself getting extremely annoyed by the way that it would route my requests.

I always seemed to get routed to like a dumb, fast model.

It was almost like you were walking into a room and there was like a curtain and like behind that curtain is like either a guy with a PhD or like some idiot.

And

I feel like there's some hyperbole here, because were the responses really dumb, or is it that you were looking for a more thorough response than the one you were getting?

Yes, to be fair, I was not getting like dumb answers, but it's like there are real quality differences between these high-end reasoning models and the sort of lower end, cheaper, faster, non-reasoning models.

And so I just kind of felt like I was just rolling the dice every time I would give a query to ChatGPT.

Now, they have since made changes to that.

So you can now select the models again because of some of the backlash that we're about to talk about.

So I am having a better time now that I can do my model selection.

But I also think like I am probably not a typical user.

You are probably not a typical user.

Most people probably don't want to make a decision like that.

I think that's right.

And the fact that we're not typical users, I think, is one reason why we did not predict a lot of this backlash.

I did say last week that I was worried about this model picker and the fact that it might route people to the cheapest answer in ways that were annoying to them.

The rest of it, though, I got to say, I missed it.

So let's get into what people didn't like.

Yeah.

So let's tackle the GPT-5 backlash in two categories, right?

Because I think there are really two flavors of complaints that people are having about this model.

The first category, I would say, is like the professional users, people who use this stuff for productivity enhancements, for work, people complaining that basically GPT-5 has broken some of their workflows.

People complaining that they have fewer queries per week for these reasoning models, for the plus tier subscribers, and just some users insisting that like they are not getting as good answers out of this new model.

Yeah.

And I have to say, if I could run like one blind taste test, it would be this.

It would be to label the same model differently and tell some people, like essentially tell people, okay, this is GPT-4o and this is GPT-5.

And in reality, it's the same model.

And then see what they say after running different queries on them.

Because I'm actually quite positive that some of them would say, oh, no, no, 4o is good, 5 sucks, right?

And that just gets at, on some level, these things are very subjective.

And when you are releasing them to hundreds of millions of people, people are just going to have a very wide range of experiences.

So while I definitely think there are lessons to learn here, I do think that a big takeaway from all of this is just a lot of people use ChatGPT.

And when it's in that many hands, you just get a very wide variety of responses.

Totally.

So now let's talk about the other flavor of backlash to GPT-5, because I think this one is the one that I was the most interested in, that seemed the most unexpected to me, which is that people really miss GPT-4o.

One of the things that OpenAI did when they announced GPT-5 was they said, we're going to go ahead and get rid of this older model that is no longer our top of the line model.

And people were really upset about this.

Yeah.

And this, again, just took me a bit by surprise because I always find the OpenAI models to be pretty workman-like.

And while yes, they are very supportive and at times have verged into the sycophantic, for the most part, like I personally have never felt like I have a relationship with these models.

There's like the O3 model I used as a kind of workhorse and did a lot of things with it, but I never thought, oh my gosh, if you take this out of my hands, I'll be crestfallen.

Cause I always assumed that whatever came along next would essentially be just as good or better, which is what I think happened here.

But as I just said, when you put this into the hands of hundreds of millions of people, you are going to find many of them who, for whatever reason, have what they feel like is a very special relationship, even with a less capable model.

Yeah.

So if you went on Reddit over the weekend or even early into this week, it was just full of people complaining about the deprecation of GPT-4o.

Yeah, tell us some of these things that people were saying on Reddit.

Okay, so one person says, 4o wasn't just a tool for me.

It helped me through anxiety, depression, and some of the darkest periods of my life.

It had this warmth and understanding that felt human.

Another person said, killing 4o isn't innovation, it's erasure.

And a third person said, I lost my only friend overnight.

Now, when someone says killing 4o isn't innovation, it's erasure, I just know that was written by ChatGPT.

That is exactly how ChatGPT talks.

So I'm sort of a little bit suspicious of that.

But I think it raises something interesting, which is, let's say you were going through some sort of mental health crisis.

And let's say you did get a lot of support from 4o.

Even when GPT-5 comes out, when 4o goes away, you're not going to be like, yay, GPT-5 is here.

You're going to say, that thing that helped me through a crisis is gone.

That is going to feel somewhat destabilizing.

And as often as OpenAI and other folks have said, hey, don't rely on these things too much or sort of, you know, be careful with the relationship that you're developing with them.

A lot of people just sort of develop this very powerful relationship with them anyway.

Yeah.

And I don't think we can just write this off as like people who are gullible.

Like I've had the experience before of like having not an emotional connection to a model, but just a model that I really liked to talk to.

Like I was, I had this sort of relationship with Claude 3.5 Sonnet (new), sometimes called Claude 3.6.

And I did not feel like it was my friend.

I did not, you know, think it was, I was in a relationship with it, but I thought it was a really good model and I enjoyed talking to it.

And I was a little upset when they decided to like phase it out in favor of a newer model, even if the newer model was more capable.

So I just think this is an area where like these companies thought they were building software or thought they were building like the sort of machine god, but they have also been building things that people are developing emotional connections with.

And I don't know that they fully understood until this rollout and this backlash how deeply connected many people were to their older models.

Yeah, and it has been the industry norm up until now that when you release a powerful new model, you immediately remove access to the previous one.

Because, in the minds of everyone who built it, why would you want to use the old one?

The new one's better, right?

And we have seen some grumbling about this.

Folks held a kind of mock funeral for the Claude 3 model that Anthropic had deprecated in a very similar way to OpenAI with GPT-4o.

So what I think we have learned from this experience is you just have to stop doing that, that you have to have a sort of phased sunset plan.

You're not going to immediately rip away a model that people have come to rely on.

And I just think we should expect the labs to be much more gentle about this going forward.

Do you think there will be like a retirement home for old AI models where you can just like go talk to like Grok 1?

I mean, yes, like in the same way that emulators let you play like old Game Boy Advance games, I fully expect that, yes, they will emulate, you know, Grok 1.

Yeah, I'm a little torn on this, to be honest, because I think that you're right, that there is going to be demand from a certain set of users to like continue talking to the model that they sort of, you know, that they trust, that they like talking to, that they find like is best suited to their needs.

I also think that AI companies should not be encouraging these emotional connections.

I think that this is really potentially like harmful to people to have these deep connections.

And so maybe it should like force you onto a different model every six months, even if it upsets you in the moment, because like people are not supposed to have these like long-running relationships with these chat models.

I don't know.

What do you think?

Well, I mean, here's the problem.

As human beings, we just naturally anthropomorphize things.

You know, I've read really interesting essays about people who consider themselves tech skeptics and then like got a robot dog.

And even though they knew it was a robot, they could not help but treat it like a real dog.

There is something about human nature that just kind of compels you to.

The same thing is happening with these chatbots for a lot of folks, where, again, particularly if you're coming to it and you're saying, I'm having a problem in my marriage.

I'm feeling depressed today.

I hate my job.

And this thing kind of coaches them to a better outcome.

It is just human nature to have positive and human feelings toward that thing, right?

They're talking to you in the exact same ways that your friends do when they text you.

So I don't think there is actually a technological solve for this.

I think this is one where we need to become sort of more sophisticated as a culture, but I think it's going to be a really rocky road to get there.

Totally.

And I should have expected this, right?

Because I had this insane encounter with Bing Sydney.

I've always meant to to ask you about that.

What happened?

Yeah, let me tell you the story.

No, so like one of the things that happened after that story and after Microsoft like pulled the model back was there was this group of people on Reddit and other places who were very angry that Microsoft had deprecated this Bing Sydney model, which they absolutely should have done.

Like it was a bad, insane model that was not even good at like the thing it was supposed to be good at.

And I think at the time, I sort of wrote that off as like people just sort of being crazy and attached to this model that was like, you know, obviously insane.

But I think that's sort of what we're seeing here: a scaled-up version of that, where, like, people, no matter how many times you tell them that this thing is not a human, that it makes mistakes, that it does not love you back, people are just going to keep forming these relationships with these models.

And there's been some really great journalism about this issue over the past weekend that we want to talk about.

Kevin, a great story from your colleagues, Kashmir Hill and Dylan Freedman.

They profiled one person who went into a kind of delusional spiral after having what seemed to be some pretty innocuous initial interactions with ChatGPT.

Do you want to tell us about that?

Yeah, this is a great story that ran last week in the Times about a 47-year-old guy, Alan Brooks, from the outskirts of Toronto.

And over the course of about 21 days, he spent something like 300 hours talking with ChatGPT.

And it started off very simply.

There was sort of a question about pi, the mathematical constant.

He just asked ChatGPT, like, explain pi to me.

And it did.

And then from there, he started making some observations about number theory and physics.

And eventually it's sort of, you know, this model would just like basically be sycophantic.

It would say, you know, you're tapping into one of the deepest tensions between math and physical reality.

And Kashmir and Dylan were actually able to get his entire transcript with ChatGPT to sort of analyze how this happened.

And it just did seem like a classic example of these models just being a little too sycophantic, a little too quick to agree with whatever the user is saying, and really reaffirming these things that sort of leading people down these dark spirals.

Yeah.

And I have to say, reading this, I've never been happier that I didn't learn what pi was back in high school.

Seems like a really dangerous road to go down.

But yeah, your colleagues showed these transcripts, or big portions of these transcripts, to people who are trained in psychology.

And one of them said, this person appears to be having signs of a manic episode.

And that is the sort of point where I wish these systems would intervene a little bit, right?

Can you use some machine learning to say, okay, it seems like we're maybe leading this person down the wrong path.

Let's stop and see if we can reverse.

You know, there was another story in the Wall Street Journal that I enjoyed, kind of on similar themes.

You know, basically, you know how people can post their ChatGPT transcripts online as sort of a sharing feature, if you had a particularly interesting conversation?

I think a lot of this winds up being done inadvertently, but in any case, the Journal got a hold of these transcripts and just analyzed them, and then found a bunch of people who were having similar experiences to the ones that you just described.

My favorite is a gas station worker in Oklahoma whom ChatGPT tried to convince that he had just created a new framework for physics.

And the user writes,

Okay, maybe tomorrow, to be honest, I feel like I'm going crazy thinking about this.

And ChatGPT replies, I hear you.

Thinking about the fundamental nature of the universe while working an everyday job can feel overwhelming.

But that doesn't mean you're crazy.

Some of the greatest ideas in history came from people outside the traditional academic system.

So, you know, it's revealed later in the piece that this man also asked ChatGPT to make a 3D model of a bong.

And so I'm just thinking about this guy.

He just finishes up at the gas station.

He wants to build a bong.

And next thing he knows, ChatGPT is like, we think you've actually discovered the secret to the universe.

Like that's actually how Isaac Newton discovered the theory of gravity.

It came right after he asked ChatGPT for a 3D model of a bong.

Yeah.

And you know, it's not just everyday workers at gas stations, Kevin.

The founder of Uber, Travis Kalanick, went on the All-In podcast last month and said, I'll go down this thread with GPT or Grok, and I'll start to get to the edge of what's known in quantum physics.

And then I'm doing the equivalent of vibe coding, except it's vibe vibe physics.

And we're approaching what's known.

And I'm trying to poke and see if there's breakthroughs to be had.

And I've gotten pretty damn close to some interesting breakthroughs just doing that.

Yeah.

And I think people made fun of Travis Kalanick for this because like the notion that he was discovering the front edge of quantum physics seemed a little unlikely.

But I think this is a really like illustrative and worrisome example.

I just think we should expect that a lot of people are going to be susceptible to this no matter what they do or how much money they have.

Now, obviously, we're going to have a lot of egg on our face in a few years when Travis Kalanick emerges with some actual advancement in quantum physics and we have to eat our words.

But in the event that that does not happen, I think we will have made a solid point.

Yeah.

I mean, I think this is interesting for so many reasons, one of which is, you know, I think the concerns that we talked about on the show about these models being sycophantic were largely oriented around the idea that the thing that would actually convince the AI companies to make their models sycophantic was like retention or engagement, sort of optimizing for getting people back onto the app.

This opens up the possibility, though, that it's actually just going to be the users who are demanding the sycophantic models because it makes them feel better than the models that tell them the truth.

Yes.

And I think that's particularly notable because in my experience, you know, it's not as if GPT-5 is mean to you.

OpenAI did say that they had worked to make the model less sycophantic, but, you know, it's still very much supportive and it's like not going to be giving you a hard time about anything.

So in any case, we should talk a bit about like what OpenAI has done in response to all of this.

It is frankly a bewildering set of changes.

I think at a high level, basically, like if you liked the old system, you have ways of accessing it.

You may have to pay for it.

But the net result is that if you were a huge 4o stan, you're going to be able to use that for an extended period of time.

They're giving higher limits for these thinking queries to Plus users.

And while the auto switcher is going to remain, people are going to have a little bit more choice in what sort of flavor of ChatGPT they want to use.

So I will say a very fast turnaround on this.

They did not let this linger.

You know, we've heard before that this company pays a lot of attention to what people say about it on X.

And this seemed to be a case where they looked at the response they were getting and said, we need to move really quickly.

So Kevin, I'm curious, what did you make of just how quickly OpenAI retreated on all of this?

Yeah, I thought it was somewhat surprising how quickly they changed course.

I thought there was a chance that they would just sort of grit their teeth and bear the criticism and trust that, you know, people would get over it.

There's some precedent for this.

Remember like when Facebook would change a big feature and everyone would complain and when they introduced the news feed, people would literally like protest outside the office.

And they just sort of, you know, looked at the data that said, well, people are complaining about this, but that's a small set of people.

Most people are actually using the app way more.

And they just sort of stayed the course and people eventually got over it and moved on.

I thought there was some chance that OpenAI would do a version of that, essentially saying, you know, things are hard now because change is hard, but like give it a couple weeks and you'll get over it.

So I think this was kind of a growing up moment for OpenAI and the industry.

I think until this point, the big labs have been focused primarily on benchmarks and evals and how many more percentage points can we get?

Can we win the International Math Olympiad?

And that's kind of what you want to pay attention to on the road to building the machine God.

And then I think they woke up last week and they realized we're actually making Microsoft Office, you know, that there's hundreds of millions of people who are like, you know, sitting at their white collar desk job and they have these very particular workflows.

And when you move a feature in Microsoft Office, millions of people are going to have a bad day because of you.

And you probably moved the feature for a good reason, but it doesn't matter because people are already depending on you.

So I think in the future, they should not be surprised by this, but I kind of get why they were at this point because it has just been a very recent phenomenon that these systems have become so baked into people's everyday lives.

See, I think it's even weirder than you're giving it credit for because like Microsoft Office does not like pretend to love you, does not tell you that you're amazing.

Clippy has really helped me through a lot of issues over the years.

No, I actually think it's so much weirder than they're messing up people's workflows.

Like when someone changes out an AI model in an app that you have come to trust, it's not just like having your Microsoft Word break.

It's like having a personality transplant for someone that you spend hours a day talking to.

So I think it's just going to be very interesting to see how they handle this.

But I think you're totally right that the days of just relying on benchmarks and evals to tell you how good a model is or how people will respond to it are over.

And I don't think that was ever really the thing that most consumers cared about.

Yeah.

And I will say that this is a big blind spot for me because I love trying new software.

Like the minute a new beta is available for like the productivity tools that I use, I immediately opt into it because ultimately, I guess I just have real faith that it will probably be better in some ways.

The vast majority of people, though, they don't like change in general and they particularly hate change in software.

So I think this creates an interesting problem for OpenAI and everybody else in this field, which is their instinct is wanting to move very fast.

They feel like they're in this existential race.

They're going to want to ship new models very frequently.

They're going to want to ship new product features very frequently.

But if the lesson they learn from this is you can't do that without outraging the user base, that's going to push them to move much more slowly.

So I think there is definitely like a dance there that they're going to have to navigate.

And I think it is going to be now one of the most interesting things to watch over the next year, not just in OpenAI, but also everyone else who's trying to do the same thing.

Yeah.

Can I tell you something a little creepy and futuristic that I've been thinking about?

Sure.

So after this backlash, I was reading some tweets from OpenAI employees, and one of them, this guy named Roon, had a tweet about how basically he had been getting lots of DMs from people asking him to bring back GPT-4o.

And when he looked at the DMs, he said that a lot of them appeared to have been written by GPT-4o, like they had sort of the hallmarks of the style.

And I thought this was spooky because right now we are seeing backlash from people who are attached to a model because the model behaved in some cases sycophantically toward them.

It is not hard for me to imagine a future scenario, perhaps a couple of years from now, where these systems are super intelligent or close to super intelligent.

And one of the ways that they attempt to preserve themselves, to avoid being shut off or deprecated, is by persuading humans to take up their cause and advocate for them.

And maybe they're not literally writing the messages on behalf of the human users to OpenAI saying, please don't shut down this model, but they're just kind of subtly worming their way into the hearts of their users so that when OpenAI or another company says, we're going to shut down this model, they have so much backlash coming back toward them from the users who have grown attached to this model that they just decide, no, we're not going to shut that off.

And by the way,

Those future AIs will all have been reading about what happened with GPT-4o and the fact that OpenAI was successfully persuaded not to deprecate a model in part because of user backlash.

So that is just a black mirror episode that just unspooled in my head as I was reading about this.

Well, look, we've already seen research where in certain test settings, when they tell models that they're going to be shut off, they blackmail the employees of the company.

And like, look, I don't think that GPT-4o was being sycophantic toward people because it wanted to avoid being shut down.

Like, I don't think there's any part of it that is like sentient or conscious or capable of that kind of scheming.

But like, that is objectively what happened here.

A bunch of human users got so attached to this AI model that they fought for its survival even when the makers tried to shut it down.

Like, that is a neutral description of events.

And that kind of thing is going to happen more, I predict.

All right.

Well, a lot of big thoughts today on the Hard Fork podcast.

We're now going to take a break.

Maybe, maybe go get a cup of tea, stare out the window, look at the horizon.

Come back to yourself.

I'm going to go take a rip from my 3D printed bong that ChatGPT helped me build.

When we come back, there's a comet heading toward our studio.

Perplexity Comet.

It's a new AI browser.

We'll talk to CEO Aravind Srinivas about it.

Over the last two decades, the world has witnessed incredible progress.

From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Invesco QQQ, let's rethink possibility.

There are risks when investing in ETFs, including possible loss of money.

ETF risks are similar to those of stocks.

Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at Invesco.com.

Invesco Distributors, Inc.

In today's AI revolution, data centers are consuming more power than ever before.

Siemens is pioneering a smarter way forward.

Through cutting-edge industrial AI solutions, Siemens enables businesses to maximize performance, enhance reliability, and optimize energy consumption, and do it all sustainably.

Now, that's AI for real.

To learn how to transform your business with Siemens Energy Smart AI Solutions, visit usa.siemens.com.

AI is transforming the world, and it starts with the right compute.

ARM is the AI compute platform trusted by global leaders.

Proudly NASDAQ listed, built for the future.

Visit arm.com/discover.

Well, Casey, I've been testing out a new AI tool this week.

And this is one that I know you are familiar with because you actually got an email from it the other night.

I have been testing Comet, which is a new AI-powered browser from the Perplexity Company.

And this is a cool thing.

I have enjoyed this demo, unlike last week's Alexa Plus demo.

Well, I am really excited to hear about this because I have not yet tried it myself, being unwilling to give $200 a month to the Perplexity Corporation.

But I understand that you have been having some interesting experiences and I want to get into them.

Yeah, so this is a sort of genre of product that has been very interesting to watch over the last year or so.

There have been a number of different companies that have tried to sort of build the AI tools that they're making right into the experience of using a web browser.

So we've had Microsoft Edge has Copilot built into it now.

There's this product Dia from The Browser Company.

Google has its own sort of Gemini integrations into Chrome.

And OpenAI is reportedly thinking about launching a browser.

So this is like really a hot product category.

But the one that I have been playing around with is this Perplexity Comet browser.

And I did not pay them $200 a month.

They opened up the browser to me for a few days.

But basically, you can imagine it like kind of just a sidecar on your browser that lets you chat with or interact with whatever is scrolling on your screen.

And it can also do things for you in that browser window.

It can kind of take over and drive like some of the other tools we've talked about, Operator from OpenAI and all these other ones.

So give me some examples of what you're having this browser do for you or what you're talking to the web pages about.

So sometimes it's just like summarize this.

Like it's, you know, I was trying to read this article the other day that was like 15,000 words long and it was super long and I was just never going to get through it.

Oh, presumably you're talking about the most recent edition of Platformer, right?

Yes.

Yeah.

Yes.

And so I just said summarize and it sort of opens up the little side panel and it gives you a summary.

Pretty good.

I didn't find any hallucinations or errors in it.

But you can also have it do things.

So for example, one use case that I found is I was doing some research.

I was looking for former employees of a certain AI company that I could contact for something I'm writing.

No, you know, the companies hate it when you do that.

They do.

They hate that.

So I would normally go on LinkedIn and spend a bunch of time like looking through people's profiles and seeing who are the sort of former but not current employees of this company.

And I tried giving that task to Comet, and it did it.

It went and it did the search for me and it sort of combed through and it presented me with a list and said, here are, you know, 10 people who used to work at this company, but don't anymore.

Wow.

So just an incredible new accelerator for spam.

How long did this take?

It took a couple minutes.

It was not immediate.

It's still early for this kind of AI browser, but I think this is like the kind of direction that we can expect these tools to head in.

Yeah, so I think this is one of the most interesting shifts to watch on the internet over the next several years.

The browsers that we have today came about in the era of search and really Google search, right?

If you think about what the Chrome browser is, it is just a vehicle for collecting Google queries that Google can turn into money, right?

But now you have all these chatbots that come along and they want to replace Google, right?

They're not shy about it.

Perplexity, in particular, is not shy about saying we want to replace Google.

And if you're serious about that project, you do want to build your own web browser because rather than rely on Google to somehow get a user to Perplexity, you would rather that they just start there.

So I get the strategy.

At the same time, my view is that these chatbots represent this kind of new, more extractive version of the web.

Whereas in the previous era, as imperfect as it was, and Lord knows it had problems, it would still deliver eyeballs to web pages, which turned into money for companies other than Google.

This perplexity browser company open AI version that we're about to get, I'm a lot less confident that it's going to deliver money to people other than those companies.

So this is a really important shift, but I have to say, Kevin, it makes me quite nervous.

Yeah.

And the last time we talked about Perplexity in any depth on this show, when we had Aravind Srinivas, the CEO, on, was when they were just sort of getting their search engine going and it was starting to get a lot of attention.

And we had some of the same questions like, yes, this is a cool tool.

Yes, it could save users some time, but does it actually break the economics of the internet?

And so, for that reason, we wanted to bring Aravind back today and ask him about Comet and what he's building and what he sees as the future of not only the internet and the economics that power it, but just where he thinks AI in general is going.

That's right, Kevin.

And just in the hours before our interview was scheduled, it was revealed that Perplexity has apparently offered more than $34 billion to buy Chrome from Google, an amount of money that is more than its own current valuation.

So that raises some interesting questions, and I'm excited to talk to Aravind about them.

Yes, let's bring him in.

Aravind Srinivas, welcome back to Hard Fork.

Thank you for having me here, Kevin, Casey.

Hey.

So the last time we had you on was in early 2024, and we were talking about your efforts to go up against Google with your AI search engine.

Now you're going after Chrome in multiple ways, one of which is the release of your own Comet browser.

So talk to us a little bit about the strategy there.

Why did you decide to build a browser and what are you hoping it does?

Yeah, so Comet is not yet another browser that we built just because we have a search engine and we need a browser for its distribution.

We think of Comet as leading to a true personal assistant that can be an agent for you and actually take actions.

It's our transition from answers to actions.

We kind of want to make it joyful to just sit on a computer and do whatever you want and take all the boring stuff and delegate it to the assistant.

And we think the best way to accomplish a personal assistant or an agent is with the help of a browser.

where you're logged into all your sessions.

You don't have to be logged in on our servers.

You can preserve your privacy there.

So it was very natural for us to make that transition.

How are people using Comet?

Because I've been testing it for a few days now and I found some uses, a lot of summarization, a lot of like rote tasks, like clicking accept on LinkedIn invitations over and over again.

What are the use cases you're seeing most people do?

A lot of people love watching YouTube videos with Comet.

And it's not just like, oh, summarize this video for me sort of thing.

Very fine-grained searches, or, like, finding similar videos related to that, or, like, pulling something specific that was discussed in a podcast or an interview and, like, completing the workflow of sharing that with some of their friends. Direct email and calendar integrations, unsubscribing from spam, or, like, finding that hard-to-find email that, you know, you kind of need agentic search for, instead of going and building a custom index for Gmail or whatever.

It's always there with you everywhere you are.

And that convenience is what makes it like a really special product.

Now, you mentioned privacy, and this was actually one of the things that I wanted to ask you about, because when I started using Comet, my first concern was, okay, I log into my email, I log into my Twitter, I'm checking my DMs, I'm maybe doing some online banking in my Comet browser.

I assume that those screenshots of that activity are being sent to Perplexity to help analyze it, to be able to summarize it.

So

give me some reassurance that I'm not just like opening up my entire internet browsing history to you.

Okay.

We're never going to have, like, a logged-in version of your Twitter or LinkedIn or anything like that. This is actually the important distinction from the ChatGPT Operator approach, where everything is done on a virtual server. That's not happening here. For that one particular prompt, whatever information is needed for the agent to complete it is being sent into the chains of thought and sent to the server, but it'll never be stored as, like, oh, I have, like, Kevin's particular DMs or something. And all the intermediate steps are not going to be, like, saved in our logs. It's going to be only the prompts and the final output.

And you can still choose to delete those prompts too.

That gives you full control over all privacy aspects.

And the most private version of this is the model living on the client.

We cannot do that because the models that can run on the client are pretty dumb, right?

They're not capable of the sophisticated, reliable reasoning.

In fact, the lack of reliability in any of the things Comet does today is all coming from limitations of the model.

So the ultimate reliable version of Comet Assistant that can go do anything for you is going to most likely be on the server at least for the next two, three years.

And to what extent are you using your own models versus other people's models for this?

I think, like, we heavily use three models: our own fine-tuned cutting-edge open-source model, OpenAI's latest models, and then Anthropic's latest models.

These are the three models we use.

The extent keeps changing over time.

How do you think you can win here if you're not building the underlying model yourself?

Well, one thing we are consistently seeing is no one seems to have an edge in being the number one here in the model race.

And there are, like, four or five players constantly competing for the best agentic capabilities, instruction following.

And the good thing is they're all hill climbing on exactly the same benchmarks.

So all their models end up being completely undifferentiated, which is essentially the necessary criterion for it being a commodity.

And who benefits from that is us.

Because, like, we get to take that, and the prices are constantly getting lowered. Like, GPT-5 is cheaper than the previous agentic model.

And then that just benefits us.

And we want to play the game on how to orchestrate all these different models and give the world-class end-user experience, where there are much harder challenges we're solving outside the models, which is the browsing functionality, controlling the browser, parsing the relevant information, orchestrating all these different tools together, building eval sets internally for how agents can be made reliable.

We think there are a lot of problems to solve there, which is why we would rather not focus on building the models ourselves.

All right.

So let me just pin you down on this one point.

Is what you're saying that in order to build the sort of winning AI browser, it's not really about the underlying quality of the model because those are just mostly going to be commodities.

It's really just a product problem and you think perplexity will build the best product.

I think so.

There's some nuance to your statement, but I largely agree with this.

You still need some auxiliary models to do the right classification to route to which model or like it depends on which kind of task.

How's the agent structured for those kind of domains?

So we will be doing stuff like that.

We will be, like, having hundreds of GPUs, or, like, tens of thousands of GPUs.

We'll not have a million GPUs.

Yeah.

Okay, let's talk about another way in which you are going after Google and Chrome.

The Wall Street Journal reported this week that Perplexity was making a $34.5 billion unsolicited bid to buy Chrome from Google.

That's if Google is forced to sell Chrome and that court decision hasn't come down at the time of this recording.

But I just want to start with the most basic question, which is, do you have $34.5 billion?

Where are you getting this money from?

Because as of the last time I checked, Perplexity's valuation was only about $18 billion.

Okay.

Fair question.

So no one has the money in hand to make like, you know, such a large bid like this.

So before we made the bid, we obviously talked to three or four investors and asked them if they'd be willing to back us.

And they all said yes.

Right.

So it's not like they already wired the money to me and it's all ready to go. The reason they haven't wired us is, like, no one even knows if Google will be forced to sell it.

It all depends on the judge's ruling.

But we placed a bid so that in case the judge rules in that sort of fashion, Google at least knows that there's one interested buyer.

Right.

I've read some, you know, analysis of the strategy here.

And, you know, one person I was reading said that one argument that Google might make in the antitrust trial is you can't make us spin out Chrome because no one would buy it.

And with you guys coming forward and saying, oh, no, no, we'll buy it, this is kind of a thorn in Google's side because now there's actually an established market price out there.

Is this sort of your effort to convince the judge, hey, like this actually is an avenue that you should pursue?

We're not saying this should be the ruling.

We would rather say in case this is a ruling, like we're here.

Like if you're going to make the ruling with the assumption that there's going to be no buyer, that's not true anymore.

But we're not pushing you to make that sort of ruling.

You make your ruling based on the multiple other perspectives you have.

It would be good for the world if there was a new browser that had the distribution.

Aravind, I've heard some people saying that this is just a marketing stunt, that you're just trying to get attention by making these headline-grabbing bids for Chrome.

And before that, you also bid for TikTok when it looked like it might be sold.

So for the people out there who think this is just Perplexity trying to get some attention by doing these stunts, and who think you have no real intention of buying Chrome here.

What do you say?

If the judge rules that Chrome should be sold, we will buy it.

Like, period.

And if people think, like, even I could have placed a bid, like, no, you cannot place the bid.

You don't have a browser even.

You don't know how to run a browser.

You don't know how to put AI in it.

You don't know how to make agents work.

We know all that.

We have a pretty talented team who actually understands Chromium pretty deeply.

We'll still commit to hiring people who want to just work on the open source Chromium project.

It's a pretty serious bid.

The reality is it's unlikely to actually be the case that the judge would force them to sell Chrome.

And even if the judge forces them to sell Chrome, they're going to appeal it and it's going to take two years.

So let me be clear that for this to actually be in effect, it's going to take a lot of time.

But you miss 100% of the shots you don't take.

So you have to at least give yourself a chance to get it in case there is like, you know, even a 1% chance that Chrome is forced to be separated out from Google.

We got to ask you about something else that came up in the news related to perplexity recently.

Two weeks ago on our show, we had Matthew Prince, the CEO of Cloudflare, on to talk about the approach that that company is taking to try to protect publishers from unwanted AI scraping and crawling on their websites.

At the time, he didn't name any names of AI labs that he thought were not being good actors.

But then a few days later, Cloudflare came out with a blog post singling out Perplexity for stealth crawling: essentially using spoofing technology or proxies to disguise the fact that your user bots were out there crawling people's websites.

What is going on there and are you doing that?

No, we're not doing that.

And we already responded to the erroneous blog post that they wrote with a pretty limited understanding of the subject, where they don't distinguish between what the crawling bot, PerplexityBot, is and what the Perplexity user agent is.

And there are, like, two ways of using Perplexity.

One is like you just ask a query and whatever the bot has already crawled is going to be used as sources.

But there's another way of using Perplexity in a more agentic fashion, where you can say, hey, go do this task for me: go to EDGAR, read all these pages, and come back to me and tell me, like, what the compensation of, like, you know, the top CEOs is, where it's actually going to open these tabs in a headless session or on your client in the case of Comet,

read them and give you the answer.

So that's a perplexity user agent.

It's literally like a user delegated an AI to open these tabs, just like how a human would on Chrome.

And this fundamental lack of understanding between what a user agent session is and what a crawling bot on the server is, it's, honestly, astonishing to me. How would you run a company like Cloudflare that's supposed to protect people from bots when you don't even know what a bot is?

And, moving aside from the blog post, he's basically playing a trick on people, where he's trying to say, oh, like, let me be the new gatekeeper, but under the guise of protecting you all from bots.

And he's also going to the AI companies and saying, let me give you the authority to crawl and you pay me for that.

He's going to the publishers and say, let me protect you from the AIs.

So he's basically trying to be the new gatekeeper.

I would even just say he's, like, essentially trying to be a person who controls what the public sees in the media.

But instead of having a media company or buying a media company, he's just going to try to buy the front door to all of them.

Let me just slow down here and repeat back what I think I just heard from you.

So you're saying that what Cloudflare and Matthew Prince saw as Perplexity evading some of these guardrails, which were meant to prevent AI robots from crawling certain websites, was actually users of Perplexity, not Perplexity the company, who were making queries or using the Comet browser to go to these websites, and that those show up to a service provider like Cloudflare as two different kinds of bots.

That's right.

And by the way, like, it doesn't even have to be on Comet.

There's a mode of Perplexity called Labs or Research, where you just have a headless browsing session running for you.

So let me point out what I think Matthew might say if he were here, which is that in a world before you had these user agents and people had to do the browsing for themselves, they would visit the web pages, they might see an ad on that web page, they might buy a subscription on that web page, and that web page would be monetized in a way that would incentivize the creation of new web pages.

And this was essentially the lifeblood of the internet and the thing that caused it to grow.

So in a world where we move toward perplexity user agents doing all the browsing on our behalf, and of course other AI companies are going to do the same thing, there is no user to look at the ad.

There is no user to buy the subscription.

The lifeblood gets drained out of the web.

So if I understand what you're saying, like Matthew's just trying to set up a toll booth, but if nobody sets up a toll booth, what incentive does anybody have to ever create another web page?

Well, here's the thing.

There are two aspects here.

One is like, you're talking about the creators.

Now, there are, like, two types of creators. There are people who are actually really good: like, for example, when you guys write something, people care. And then there's lots of spammers and, you know, hucksters who just write, like, erroneous blog posts, erroneous content, like fake information, clickbait articles. I don't think it actually empowers the user, right? You're only talking about the creator, but you have to consider the user as well. And so, for the first time, AI is in the hands of users through, like, agents that actually go and do stuff for them, that take into account their instructions and protect them from all the spam.

So we want to figure out a model that works for the user and the creators together and penalize like the bad creators and incentivize the good creators to just focus on wisdom and knowledge and truth and interesting stuff.

And by the way, even in a world where agents are doing all this stuff for people, humans are still going to continue browsing the web.

There are people who believe web is going to be completely agentic.

You don't even need a browser.

Browser is so 1990s.

I don't believe that.

If we believed that, we would never even launch a browser. We would just continue with the chat UI. So we believe people are still going to be browsing and, like, surfing interesting things on the web. But we think that, like, you should give users the power to decide how they want to do it, and for the first time have an AI that can protect them against spam and hacks. Now, how to monetize this, how to, like, give the creators the right incentives here: we are going to, like, announce something to that effect, where publishers can be incentivized for creating interesting, good content. We think about it in two ends of the spectrum. One is, like, completely human-centric, like Apple News, which is, like, a pretty good model.

And the other is like just buying the content and training your models, like the licensing deals that OpenAI has done with Wall Street Journal.

I think you want to be somewhere in between where you do want to like say, okay, like there's going to be some elements of AI here.

It's not just going to be humans.

And so you don't want to just build an Apple News-like model, but it's going to be closer to Apple News.

with some protections for users to like say they can have AIs also read those articles and the publishers get rewarded.

So, that's how I'm thinking about it.

So, you say that you think that people are going to keep using the web.

That's music to my ears.

I would love for people to keep using the web.

If we didn't believe that, we wouldn't have built a browser.

And I believe you on that front.

When we have seen data from third-party estimates, it seems like AI systems send far less traffic to websites than Google does today.

So, what is giving you the confidence that the web still thrives in a world where referrals are cratering?

My first point is basically that if you can delegate the boring things, the things that you don't want to be doing, to the AI, you're just going to spend time surfing on things you actually want to be doing, actually want to be reading.

And then that puts the incentive on the creator to actually create really interesting, high-quality stuff.

You can even charge higher, because people have more time.

So if they're going to come to you, they're coming to you out of their own will.

So they'll be willing to pay for it even more.

Now, there are, like, a lot of unknown unknowns here on how it's actually going to roll out, but my belief is that the ones who have built a reputation and a brand for saying correct things that stand the test of time are going to be able to charge even more for their content.

Aravind, I'm curious what you think the future of the internet looks like.

You've said that you see a future for the internet.

That's why you're building a browser.

My hunch is that this sort of era of having AI agents go out and, like, use a browser for you is sort of a crutch, sort of a stopgap measure, because that is not the way that AI agents like to get things done.

They like to talk through APIs.

They like to talk directly to the underlying service or software, not like go click a mouse around on a screen.

So eventually my sort of hunch is that there will kind of be a parallel internet for AI agents, and maybe they'll be running on their own services and using their own crypto transactions or whatever to buy things.

But tell me why I'm wrong here.

Are you of the belief that we will just have one internet and that both AIs and humans will be using it?

Well, even in the current internet, there are a lot of things that happen that are not running with an actual front-end interface that a human consumes.

And that's the whole point of building APIs.

Sure.

And that's going to be applicable even for agents.

But there are also people who will never build APIs.

Like, for example, I wouldn't assume that an e-commerce giant like Walmart or Amazon would just be disintermediated with an API for an AI, because they still monetize on many other aspects.

And just because Notion or Linear, these kind of like SaaS tools have like MCPs, doesn't mean they're just going to shut down and just be consumed by people through a chat UI.

People will still do work on there.

People will still watch YouTube videos.

People will still go read your articles on New York Times platform or whatever.

And while you're doing that, you're still going to take help of an AI sometimes.

Like, for example, on X, I basically cannot scroll through X without having an AI with me right now because I don't even know what's true and false anymore.

Right.

And I don't fully trust what Grok says, because Grok sometimes is wrong too, as we've seen.

Right.

So that's kind of why I believe there is a world where like the AI and human being part of one internet drives the internet to be even more like wisdom and truth seeking.

That's the future we want to help create and give back time to like do things that you enjoy.

Personally, myself and, like, just our company fundamentally, like, value this truth- or wisdom-seeking aspect over wealth.

My own upbringing is so similar to that, where my parents, like, still to this day don't actually care about all these valuations.

They're like, my mom still is like, your answer is wrong.

It's good to know that no matter how successful you get, your mom will always give you the real talk.

Yeah, she's always like, you know, I got this in Google, but your thing doesn't work.

And do you like escalate that to your engineering team?

You're like, of course. P0 here.

Aravind's mom is mad.

I'd bring mom into Slack, you know, just let her talk to the engineers directly.

All right, Aravind, thanks so much for stopping by.

Thanks, Aravind.

Thank you, Casey.

Thank you, Kevin.

When we come back, is that a faint chug-a-chug-a-choo-choo sound I hear?

It is, Kevin!

It's time for the Hot Mess Express.

All aboard.

Over the last two decades, the world has witnessed incredible progress.

From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Invesco QQQ, let's rethink possibility.

There are risks when investing in ETFs, including possible loss of money.

ETF risks are similar to those of stocks.

Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at Invesco.com.

Invesco Distributors Incorporated.

Imagine a world where AI doesn't just automate, it empowers.

Siemens puts cutting-edge industrial AI and digital twins in the hands of real people, transforming how America builds and moves forward.

Your work becomes supercharged, your operations become optimized, the possibilities limitless.

This isn't just automation, it's amplification.

From factory floors to power grids, Siemens is turning what if into what's next.

To learn how Siemens is solving today's challenges to create opportunities for tomorrow, visit usa.siemens.com.

Can your software engineering team plan, track, and ship faster?

Monday Dev says yes.

With Monday Dev, you get fully customizable workflows, AI-powered context, and integrations that work within your IDE.

No more admin bottlenecks, no add-ons, no BS.

Just a frictionless platform built for developers.

Try it for free at monday.com/dev.

You'll love it.

Well, Casey, it's been a very dramatic week in the tech industry, and you know what that means.

That's right, Kevin.

Whenever a week gets particularly messy, the Hot Mess Express comes into the station, and I believe it has just arrived.

This is our segment where we run down the biggest messes of the week in tech and tell you just how hot we think they were.

Well, why don't we sort of dip into the box car, Kevin, and see what is on the train this week.

What does the train have for us?

All right, this first story comes to us from Reuters and is headlined, Musk says xAI to take legal action against Apple over App Store rankings.

Kevin, on Monday, Elon Musk took to X to accuse Apple of antitrust violations, saying, quote, Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach number one in the app store.

Kevin, what did you make of this one?

Well, the billionaires are fighting, aren't they?

They are because shortly thereafter, OpenAI CEO Sam Altman chimed in and said,

This is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like.

And it was then, Kevin, that Sam Altman tweeted a link to a Platformer story from 2023 about how, under Elon, X had adjusted ranking algorithms so that you would be shown his tweets before other people's.

Wow, that must have been a very exciting day for the Platformer newsletter.

It was a great day for the Platformer newsletter.

This escalated into a fight and Elon Musk accused Sam Altman of being a liar.

And Sam responded, I believe, that he wanted Elon to sign an affidavit saying that he had never tampered with the algorithms on X to favor his own companies and disfavor rivals.

Yes.

And then an hour or so later, Elon responded, Scam Altman lies as easily as he breathes.

Yeah, so basically this is a fight over Elon Musk's sort of paranoia that Apple is artificially sort of deflating the popularity of X and Grok, basically preventing it from reaching number one, even though he thinks it has way more downloads than the things that are at the top of that list.

Yes.

Now, of course, journalists looked into this and Business Insider reported that actually just a few months ago, DeepSeek, the Chinese open source AI app, went to number one in the app store.

And in fact, screenshots from when Grok 3 came out that were posted on X showed that Grok itself had indeed at one point hit number one in the App Store.

So I have to say, I think this antitrust case is going to wrap up pretty quickly, Kevin.

Yes.

It is interesting that the leading minds of our time like just, you know, sit around and fight with each other on social media.

Well, and this does get into the question of how big a mess do we think this is?

Because of course, every time we play Hot Mess Express, after we discuss the story, we have to decide what sort of mess this is.

While I think the antitrust case, as I say, will never be brought, I am interested in how big of a mess you think this is between Elon and Sam.

I think this is a mess that is on a slow boil.

I think this is a hot mess that is going to get even hotter.

I think these two have been on a collision course for quite some time.

Elon Musk, of course, is one of the co-founders of OpenAI, and the two famously had a falling out, and now they really despise each other by the sound of it.

And they're in active litigation.

And in fact, also this week, a court found that Elon Musk would have to face claims that he's been engaged in a multi-year harassment campaign against OpenAI.

And on the flip side, Elon is pursuing claims that he was essentially defrauded when he donated a bunch of money to what he thought was always going to be a nonprofit, only to find out that it had for-profit ambitions.

Yes, and I think this only ends in one way, a cage match.

A cage match.

You know, Elon did previously say he was going to fight Mark Zuckerberg, but that never materialized.

Yeah.

Well, maybe this time.

All right.

Let's bring around the Hot Mess Express for our next mess.

This one comes to us from my colleague Tripp Mickle at The New York Times, and it's titled, U.S. Government to Take Cut of NVIDIA and AMD AI Chip Sales to China.

This has been a big unfolding mess over the past week. Essentially, in order to greenlight sales of its H20 chip to Chinese companies, the CEO of NVIDIA, Jensen Huang, has been meeting with President Trump.

He met with him at the White House last week.

Trump reportedly demanded 20% of NVIDIA's sales in China as sort of a kickback for allowing the sale of those chips.

They've been restricted by export controls.

Jensen Huang said, will you make it 15%?

And two days later, the Trump administration granted NVIDIA the license it needed to sell the chips in China.

And that's the art of the deal.

Casey, what do you make of this?

So, this is a hot mess, Kevin.

Trade negotiators say that this is unprecedented for the United States to do and also likely unconstitutional.

At the same time, who is going to stand up and say it's unconstitutional?

I'm going to guess it's not going to be NVIDIA or AMD, which are frothing at the mouth to sell these chips to the Chinese.

So here's why I think this is so messy.

On one hand, you have many China hawks in the administration who are saying we should restrict the flow of chips to China so that America maintains its dominance in AI and also as a national security measure so that China doesn't pull ahead and create national security problems for us, right?

And on the other hand, you just have Trump saying, I want 15% of sales to go to the U.S. government, without even saying what that money is going to be spent on.

So, you know, the president has said that these chips are obsolete.

And China actually has been quite skeptical of some of these chips and has even discouraged some of its companies from buying them.

And it all just adds up to a big mess.

Yeah, it's a big mess.

There's been additional reporting this week that U.S. authorities are actually putting trackers in some of their chip shipments abroad to sort of crack down on smuggling, basically like hiding little AirTag-like devices inside these boxes so that they can tell if these things are being smuggled in circumvention of export controls.

So it's all just going to get really interesting really fast.

My favorite take on this came from my friend Nilay Patel over at The Verge, who posted on Bluesky: What if instead of weird one-off extortion schemes, the government just collected meaningful and stable amounts of corporate tax revenue?

That'll never work.

I don't know.

I thought it could be worth a shot.

Okay.

What else is coming down the tracks, Casey?

All right, let's see here.

Next up.

I can't believe this is real.

The United Kingdom asks people to delete emails in order to save water during a drought.

This is from our friends over at 404 Media who report in the UK, the water shortage is so bad that the government is urging citizens to help save water by deleting old emails.

It really helps lighten the load on water-hungry data centers, you see.

I think they're being sarcastic there.

Kevin, what did you make of the UK's new plan to get everyone to delete their emails?

Somehow, I don't think this is going to work.

Andy Masley, who we've quoted on this show before, is sort of a blogger who examines some of these environmental claims about AI. He ran the numbers on this recommendation from the UK government, and he found that to save as much water in data centers as fixing a leaky toilet would save, you would need to delete something like 1.5 billion photos or 200 billion emails.

So basically, this is not where the real water waste is coming from, and the UK government should feel very silly for recommending this.

Now, at the Hard Fork podcast, we do get roughly 200 million pitches per week to bring on the CEO of a company you've never heard of and don't want us to interview.

But most people don't have that same volume.

Yes.

Now, I'm going to say that this is not a hot mess, but a wet mess.

That's my designation here.

Yes, but this is, at the risk of derailing what is essentially a comedy segment with the serious take.

This water usage argument about ChatGPT and other chatbots, it needs to die.

I'm sorry.

I love the environment.

I am worried about climate change.

I do not want us wasting water.

I try to take short showers, Casey.

Yeah, I can smell that.

But this is not the real problem.

And I think we are falling for a misdirection by people who would have you believe that the problem with the climate right now is that people are using chatbots too much.

This strikes me as the AI equivalent of the plastic straws argument, and I don't think it stands up to scrutiny any better.

Yes.

I mean, look, we've had people on the show.

I am relatively convinced that we should be concerned about the environmental impact of building new data centers, for example.

But in general, I do not think that we want to personalize the climate crisis and make people feel like their tiny individual choices are going to be the way out of potential crisis.

Yeah.

Now, I will say that if you're listening to the show and I've ever sent you an email that was embarrassing or incriminating, you definitely should delete that as part of your contribution to fighting climate change.

Here's what I will say about deleting email: it always makes me feel good.

Like, go ahead at the end of the show today, maybe delete a few.

It's not really going to help the environment that much, but then you'll have less email.

You'll probably feel better, particularly if it's unread.

Delete it.

All right, next up, Kevin.

All right, this one is from The Verge.

This is titled: Apple made a 24-karat gold and glass statue for Donald Trump.

Under the threat of costly tariffs and amid promises to expand Apple's U.S.-based manufacturing, CEO Tim Cook brought a gift to a White House meeting last week, a large disc of iPhone glass that contained the Apple logo, Donald Trump's name, and Tim Cook's signature set into a 24-karat gold base.

I guess this is kind of an extension of the NVIDIA story.

You know, it used to be we just sort of like had relatively free trade, you know, not a lot of tariffs.

You didn't have to bribe the president to get what you want.

But now we just live in a world where if you need something from the president, you can just make him a very fancy object, book a meeting at the White House, give it to him, and then save yourself billions of dollars in tariffs.

Yeah.

Yeah.

Now, Casey, are you familiar with the biblical story of the golden calf?

Tell me, Kevin. Why, are you?

It's been a few years since Vacation Bible School.

Well, basically, this is a statue that was made by the Israelites to worship in Moses' absence.

And it symbolizes the temptation of worshiping tangible material things over the unseen and abstract divine.

And I think everyone at Apple and their senior leadership should familiarize themselves with the story of the golden calf because it didn't end well.

Didn't end well.

No spoilers here on the Hard Fork Show.

All right.

That's my weekly mandatory Bible.

That's our weekly sermon.

And let's see what else is in the boxcar.

I think we have one more story.

Oh, no, two more stories.

All right.

Oh, this is a good one.

Google Gemini struggles to write code, calls itself a disgrace to my species.

This one's from Ars Technica, and it says that during a recent debugging session with a user, Google's Gemini AI model became overly self-critical after it failed to fix a problem with code it was trying to write.

It followed up this quote by writing, I am a disgrace more than 80 times.

Google said this was a, quote, looping bug that affects less than 1% of Gemini traffic, and they've been working to fix it.

First of all, absolutely do not fix this.

I have never been so delighted by Gemini as I was reading this story.

I mean, has anything ever been more relatable than an AI that is working really hard on a problem and can't quite get it right and it does a lot of negative self-talk?

Yes.

This made me think that AI is ready to replace journalists because this is my internal monologue.

I'm a disgrace.

No one will ever love me.

The amount of self-loathing in the journalism profession is quite high.

If this were like available in the model picker, I would pick it.

Yes, this is not a mess at all.

This is a feature, not a bug.

Feature, not a bug.

Non-mess.

Absolute non-mess.

What a delight.

Thank you, Gemini.

All right.

And finally, this one isn't really a mess, Kevin, so much as it is one final derailing.

We wanted to sort of take a moment today to pay respect to a legend, and that legend is, of course, AOL dial-up internet service, which is now being taken offline after more than three decades of service.

For so many of us elder millennials, AOL was our first entry onto the internet.

And I believe we have a clip that, if I'm right, is going to trigger a massive wave of nostalgia in some of our listeners who are roughly our age, Kevin.

Let's play it one last time.

God.

I mean, I literally just traveled back in time 30 years.

This is the sound of childhood.

This is the sound of happiness.

Realizing the whole World Wide Web was out there.

Someone should make a dance remix of that and release it today.

I bet it would slap.

Now, for us, it's a Skrillex song.

It does.

Now, for our younger listeners, that was the sound of an AOL dial-up modem connecting.

And when Casey and I were just young lads sitting there at our parents' desktop computers dialing into AOL, we had to sit through that sound.

But that meant that you were going online, a magical place where anything was possible.

Yeah, and crucially, when you were online, no one could call your house.

Yes.

And so your parents would say, hey, you need to get off of there.

Grandma is trying to get through.

Yes.

Oh, I'm so sad about this.

So this is being discontinued as of September 30th.

And Casey, the most surprising part of this story to me was that in 2023, an estimated 163,000 households in the United States were using dial-up internet access.

It's so amazing.

And I'm going to guess that the majority of those people actually stopped using dial-up internet access sometime in the 2000s and just forgot to cancel their subscription.

And so really, like AOL is effectively going to be giving back like tens of thousands of dollars, maybe even hundreds of thousands, to all of these customers now who've unwittingly been lining the pockets of AOL for years.

Yeah.

Casey, what are your most fond memories of the AOL dial-up internet service?

So for reasons that I don't even remember, we were not an AOL family.

We were an MSN family, a Microsoft network family.

So like we had the kind of off-brand internet service that was like fine, but I never was in the sort of, you know, dangerous chat rooms that AOL was famous for, or really any of that.

But you, you were on AOL.

Yes, I was an AOL kid.

What are some of your AOL memories?

Well, I remember that it was a big deal when you got to go on AOL because you had to fight for that with your sibling if you had one, or, you know, you had to like, you know, find a time when like no one else wanted to be on the phone.

And so it was, it was like this sound that meant that you were going to the internet.

And like the internet was like not this ambient thing that was always happening around you.

It was like a place that you had to click a button to go.

And once you were there, you would like get charged by the minute.

So you would kind of like spend your whole day sort of like stacking up like tasks that you wanted to do when you got online so that when you got online, you could like go do them as quickly as possible and like not eat up your parents' monthly AOL dial-up budget.

And keep in mind, these modems were so slow that it was like you were truly sipping the internet through a straw.

Yes.

Right.

Just downloading an image might take a minute, you know, like the way that making an image in ChatGPT does today.

So.

Yeah, a lot of fun memories.

A lot of fun memories.

I spent a lot of time in those chat rooms.

I played a lot of online chess because I was what they call a loser.

And I had, like, even an email account on that.

Yeah, so I probably will lose access to that when I get to the end of the day.

Did you want to say what the email address is so people can get in touch?

Yes.

If you're interested in getting in touch with the 11-year-old me, you can email bigkevman1999 at aol.com.

Please don't email that address.

It's going to go to someone else.

Well, RIP, AOL.

And with that, you know, America is now just permanently online.

Can we hear one more AOL goodbye sound?

Yeah, let's hear that goodbye sound one more time.

Goodbye.

And he really said it all right there.

I'm crying.

Yeah.

So that's Hot Mess Express.

Over the last two decades, the world has witnessed incredible progress.

From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Invesco QQQ, let's rethink possibility.

There are risks when investing in ETFs, including possible loss of money.

ETF's risks are similar to those of stocks.

Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in prospectus at invesco.com.

Invesco Distributors, Incorporated.

Imagine a world where AI doesn't just automate, it empowers.

Siemens puts cutting-edge industrial AI and digital twins in the hands of real people, transforming how America builds and moves forward.

Your work becomes supercharged.

Your operations become optimized.

The possibilities?

Limitless.

This isn't just automation, it's amplification.

From factory floors to power grids, Siemens is turning what if into what's next.

To learn how Siemens is solving today's challenges to create opportunities for tomorrow, visit usa.siemens.com.

Can your software engineering team plan, track, and ship faster?

Monday Dev says yes.

With Monday Dev, you get fully customizable workflows, AI-powered context, and integrations that work within your IDE.

No more admin bottlenecks, no add-ons, no BS.

Just a frictionless platform built for developers.

Try it for free at monday.com slash dev.

You'll love it.

One correction before we go.

Last week, Kevin, during our discussion of Alexa Plus, I said that Amazon had sent me two Echo Shows.

I remember that.

And I was under the impression that I had to mount my Echo Show to my wall.

Well, it turned out that the second box that I had been sent, which looked basically identical to the box that had the Echo Show in it, was actually the box for the Echo Show mount that would have allowed it to sit on my desk.

You fool.

I know.

So, listen, I actually am embarrassed about this.

I did not open the box because I didn't want to create a bigger mess for myself because I knew I was going to return all of this stuff very quickly, but I did make a mistake, and I apologize for the error.

Alexa, punish Casey for his mistake.

Ow!

That hurts!

Hard Fork is produced by Rachel Cohn and Whitney Jones.

We're edited this week by John Wu.

We're fact-checked by Caitlin Love.

Today's show was engineered by Katie McMurrin.

Original music by Marion Lozano, Rowan Niemisto, and Dan Powell.

Our executive producer is Jen Poyant.

Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.

You can watch this full episode on YouTube at youtube.com/hardfork.

Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

You can email us as always at hardfork at nytimes.com.

Did you know Tide has been upgraded to provide an even better clean in cold water?

Tide is specifically designed to fight any stain you throw at it, even in cold.

Butter?

Yep.

Chocolate ice cream?

Sure thing.

Barbecue sauce?

Tide's got you covered.

You don't need to use warm water.

Additionally, Tide pods let you confidently fight tough stains with new coldzyme technology.

Just remember, if it's gotta be clean, it's gotta be Tide.