Sora and the Infinite Slop Feeds + ChatGPT Goes to Therapy + Hot Mess Express

1h 5m
“I do not like the idea of pointing these giant AI supercomputers at people's dopamine receptors and just feeding them an endless diet of hyper-personalized stimulating videos.”


Transcript

Introducing your new Dell PC with the Intel Core Ultra processor.

It helps you handle a lot even when your holiday to-do list gets to be a lot.

Like organizing your holiday shopping and searching for great holiday deals and customer questions and customers requesting custom things, plus planning the perfect holiday dinner for vegans, vegetarians, pescatarians, and Uncle Mike's carnivore diet.

Luckily, you can get a PC with all-day battery life to help you get it all done.

That's the power of a Dell PC with Intel inside, backed by Dell's price match guarantee.

Get yours today at dell.com/deals.

Terms and conditions apply.

See dell.com for details.

Here at the Hard Fork Show, we're big sleep maxers.

We're always trying to improve our sleep.

Yeah.

Because, you know, podcasting is a sport and you have to remain in peak physical condition if you want to perform at the highest levels.

And so I noticed a story in The Verge this week about Eight Sleep, which makes the bed that I happen to sleep in.

It's one of these beds that, you know, sort of automatically cools and heats according to your preferences and can raise and lower to stop you from snoring.

Wow, flex.

They have a new water-chilled pillow cover, Kevin.

Wow.

And I wanted to ask if you could guess how much it costs.

$100.

That would be a really great and fair price for a water-chilled pillow cover.

The actual cost is $1,049.

Come on.

And I want to be clear.

It doesn't come with the pillow.

You have to supply your own pillow.

It's B-Y-O-P for the Eight Sleep water-chilled pillow cover.

Wow.

So, obviously, I sent this to my boyfriend, and I was like, What are we thinking about this?

And he said, Honestly, I think my pillow experience is already fine.

And I thought, Thank God.

Have you heard about these new corduroy pillows they're selling?

No, I haven't.

Are they from the 70s?

No, but they're making headlines.

I'm Kevin Roose, a tech columnist at the New York Times.

I'm Casey Newton from Platformer.

And this is Hard Fork.

This week, don't slop till you get enough.

We're talking about the new AI-generated video feeds from Google, Meta, and OpenAI.

Then, psychotherapist Gary Greenberg stops by to discuss his essay on treating ChatGPT as a patient and why he thinks we should pull the plug.

And finally, let's get on track.

The Hot Mess Express has returned.

Chugga Chugga Choo Choo.

How many chuggas was that?

Just two.

Okay.

Casey, I don't know if this is on your calendar, but it was recently International Podcast Day.

Oh, happy International Podcast Day to you and your family, Kevin.

So I have a perfect gift for you this year.

What's that?

A subscription to New York Times Audio.

Wow.

Tell me, what comes with that?

So this is, of course, the subscription we've talked about on the show in the past.

You get access to the entire back catalog of not just Hard Fork, but all of the other New York Times podcasts.

But in addition to that, with an audio subscription, you'll now get subscriber-exclusive episodes from across the New York Times podcast universe.

That means more of The Daily, Modern Love, and Ezra Klein in your life.

You know, I've been trying to get more Ezra Klein in my life, but he won't text me back.

Yeah, well, I don't blame him.

So, if you are already a New York Times subscriber, thank you.

This is already included in your subscription.

But if you have not yet subscribed, then maybe this is the time to do it.

To learn more, go to nytimes.com/podcasts, or you can subscribe directly from Apple Podcasts or Spotify.

Well, Kevin, it's Slop Week here on the Hard Fork Show.

Slop till you drop.

Don't slop till you get enough.

If you're new to the world of slop, slop, of course, refers to AI-generated art and video.

And to say that it is having a moment right now, Kevin, I think would be an understatement.

Yes, I think this was the week that AI-generated video kind of went from something that was, you know, experimental and early, with, you know, various tools having been released.

But this was the week that I think it really sort of crossed the chasm into the mainstream.

It really did.

And so today we want to talk about what the big AI labs are doing here, why we think they are doing it, and maybe what are some of the implications of living in a world where maybe the majority of video that we are watching is synthetic and generated by large language models.

Yes.

Shall we get into it?

Let's get into it.

Well, Kevin, before we flop into slop, we're going to do a quick crop and say what our disclosures are.

Yes, I work at the New York Times, which is suing OpenAI and Microsoft over copyright violations.

And my boyfriend works at Anthropic.

All right, so Google, Meta, and OpenAI all put out tools over the past several weeks, and let's talk about them in order.

This whole thing begins with Google DeepMind.

They have a very good video generation model called Veo 3.

And on September 16th, YouTube has an event where they announce that they are going to integrate a version of Veo 3, Veo 3 Fast, into YouTube Shorts.

Right.

So you'll just be able to, like, make a video and post it on YouTube from within YouTube with this model, Veo 3.

That's right.

And this is a free tool.

Users can create videos that are up to eight seconds long using a text prompt.

They can also just upload a still image, turn that into a video.

YouTube will label them as AI generated.

And this is basically YouTube's way of introducing slop into the YouTube feed.

Yes.

So I have not seen a ton of obvious AI-generated content on YouTube yet, but I have seen these videos going around on other platforms: Facebook, Reels, even X and TikTok.

People are sort of using Veo 3 to generate scenes and little videos and posting them there.

Yeah.

So I think it's fair to say Veo 3 didn't make that much of a splash.

Then last Thursday, Meta gets into the game and releases Vibes.

Mark Zuckerberg, in a post on Instagram, announces that a preview of the new social feed is available in the Meta AI app.

If you wear the Meta Ray-Bans, this is the app that you use to sort of get, you know, photos and videos off of your glasses and onto your phone.

And Zuckerberg posts a bunch of short videos, including one that features a sort of like cartoon version of him.

His caption is dad trying to calculate the tip on a $30 lunch.

And then he pairs that with the real audio of him at the meeting with Donald Trump, in which he says, oh gosh, I think it's probably going to be, I don't know, at least $600 billion.

And my question here is, what joke was Mark Zuckerberg trying to make?

Do you understand the joke?

I don't.

Is the joke that he's bad at math?

I think the joke is that dads are bad at doing tips.

I don't know.

It's like a self-deprecating dad joke.

But like, why does every new social product that Meta releases sound like it was conceived of by the Steve Buscemi carrying-a-skateboard, "How do you do, fellow kids?" character?

Like, calling this Vibes, I don't know, man.

It's cringe.

Calling this Vibes is cringe, says a 40-year-old man.

I'm not 40.

I'm 38.

So I did go into Vibes and take a look at it.

It's essentially like TikTok, but if TikTok were populated just by like little animated AI-generated shorts.

Yeah.

My take on Vibes is that this is CoComelon for adults.

Okay.

It is completely disconnected from like friends or family for the most part.

It's just sort of creators making these somewhat fantastical, surreal, unsettling images, and they just sort of wash over you in this endless feed.

There's no real point to them.

There's no real narrative.

It is just like pure visual stimulation.

Right.

It's stuff like, you know, like, oh, a panda riding a skateboard or like, you know, like an inchworm on the moon or something like that.

It's just people kind of testing what this thing can do.

And the answer appears to be not much that I would personally be interested in watching.

Yeah.

And so for both Zuckerberg and Alexandr Wang, the comments on their posts are just brutal, right?

Like the majority of the comments that I saw on Zuckerberg's posts are along the lines of, gang, nobody wants this,

or drained an entire lake for this.

And then on Alexandr Wang's post on X, where he had said something to the effect of, you know, we at Meta are delighted to announce the new Vibes app.

Somebody quote tweeted it.

This was my favorite one.

Did you see this?

This was the dunk.

They said, we at Meta are delighted to announce we've created the infinite slot machine that destroys children from the hit book, Don't Create the Infinite Slot Machine That Destroys Children.

So, what do you make of the sort of highly negative reaction that Meta got here?

I mean, I was not surprised to see Meta announcing a version of essentially a social network with no actual people on it.

I think this is the direction that they've been moving for several years now.

It's barely even a social network.

Like there's really almost no social component to it at all.

Yeah, it's just like, what if TikTok, but no people?

That is sort of the idea behind Vibes.

And I think I was not surprised by the negative reaction.

I think Meta is just like a company that has negatively polarized a lot of people.

And so it just seemed very like brazen and thirsty.

And also like, yeah, like people don't necessarily want this.

I think there are a lot of people out there who see something like Vibes and just go, oh, this is like the worst possible application of this technology.

Yeah.

I think that this is the consequence of building a company that people do not trust, right?

People have a lot of scar tissue from the world that Facebook and Instagram wrought.

And now that the company is increasingly moving away from friends and family to this new model where we will truly just show you anything if we think it can get you to look, of course, people don't think that that sounds like a great idea, right?

It doesn't seem like there's a lot of heart there.

So I can't say I was surprised by the reaction, and I'll be curious to see how Meta responds to it.

So that leads us to the big thing that happened this week, Kevin, which is that on Tuesday, OpenAI released their latest AI video model, Sora 2.

And alongside of that, there is a new app right now.

It's iOS only.

It's only in the US and Canada.

It is called Sora, and you and I got our hands on it.

Yes.

So Sora is the name of both the model that powers this and the app that OpenAI has built around this.

And you can only access it right now if you have an invite code.

They're being pretty strict about rolling this out, but you get your invite code, you plug it in, you sign up, and you open up Sora, the app, and it is essentially the same thing as Vibes.

It is a sort of very TikTok-style feed of these vertical videos.

You sort of swipe endlessly from one to the other.

There's like a For You section of it.

And we should talk a little bit about the app and how it works.

Yeah, well, the main thing that I found interesting as I was getting set up, Kevin, is how much this is a social app, right?

In order to come into Sora, you have to be invited by presumably a friend.

And once you sign up, it asks you to create what it calls a cameo of you. So you sort of say a few words into the camera, you move your head around a little bit, and it uses this to create a digital likeness of you that you can then drop into any situation. And if you like, you can change your settings so that any of your friends on the app can do the same thing with your digital likeness.

So right away, when you join Sora, you've actually been given something to do, which is make a friend and then make some stuff involving you and your friends in AI.

And so I think, you know, we have a lot to get into about this, but I just want to say of the three things that we've discussed so far, I think OpenAI had the most complete thought about what their app was.

Yes.

So tell me about your initial experience with Sora.

So there's the feed, where you can see all the stuff that other people are making, which on launch day, at least, seemed to be a lot of videos of Sam Altman in various compromising situations, because the people on the app were mostly employees of OpenAI and they were sort of, you know, having fun with the boss and his likeness.

And to be clear, Sam had his settings set, and I believe still does at the time of this recording, so that anyone could take his likeness and put it in any situation.

Yes.

So he was sort of the main character of Sora on day one.

I made a few videos.

I made one of me and my colleague Mike Isaac in a 1920s slapstick film.

So you can kind of see it's like black and white.

It sort of looks like AI newsies.

And, you know, he slips on a banana peel.

It's a good time.

I also made a video of Sam Altman testifying before Congress while Casey Newton dressed in a clown suit dances behind him.

We should also watch that.

I want to watch it.

All right.

I'm going to watch this one.

Ranking member, thank you for the opportunity to testify today.

Artificial intelligence is progressing quickly, and it is critical that we work together to ensure its benefits are widely shared and its risks managed responsibly.

I have so much clown makeup on that it really just looks like a generic clown.

I do not think it actually resembles me in any way, but there is something very funny about seeing the clown dancing behind Sam as he testifies.

Yeah, so the original prompt I gave it was C-SPAN footage of Sam Altman testifying in Congress while Senator Casey Newton yells at him for poisoning the information ecosystem.

But that one set off the content violation guardrails.

Uh-oh.

And so I had to change the prompt and make you a clown instead.

Well, it's not the first time I've been a clown on the show.

Now, I, of course, also want to see if I can make something featuring you. And so one of the things that I made was you showing off your large collection of stuffed animals.

I started collecting about five years ago.

Wow, that's a lot.

They're all in great shape.

This one was the first classic teddy bear for my grandma.

It's adorable.

The bow really pops.

Doesn't get my voice right, but the video is.

You know, I'm very interested, because when you sign up for Sora, you do say a few words into the camera. I mean, it's literally like three numbers, and this is sort of how they're verifying your identity. So you could use that to create an instant voice clone. It wouldn't be that good, but like, when you watch the videos that people have made of Sam Altman, his voice actually does sound a lot like him. Yes. And so I'm curious if, you know, over time, they're going to be tuning people's voices to how they actually sound, because there are a couple that people have made of me where I sound a little bit more like myself. Most of them, though, I don't think I sound like myself. Yeah. Anyways.

I also made a video of me dunking a basketball over you.

Show me what you've got.

Coming right at you.

Bring it.

And up we go.

Oh, no way.

Over you, man.

The best part about this video is that I stop about three feet short of the basketball hoop, do not actually dunk the basketball, and land on my ass.

Also, it got our height ratios very wrong.

Like, you're only like an inch or two taller than me in this video.

And yeah, you miss the dunk.

It's a terrible dunk.

I did like one thing about this video, though, which is that I have a slamming body.

So thank you to the team over at OpenAI who made that possible.

I also appear to be balding in this video, which I don't think is reflective of reality.

It's actually a prediction.

ChatGPT knows something you don't.

They're keeping close track of that hairline, Roose.

Yeah.

Okay.

Well, that was a very long detour through a handful of videos that we made.

Give me sort of like your general impressions of why all of this is happening right now.

Why is it that just in the last month, Google, Meta, and OpenAI have all put out these AI video generators?

I mean, I think there are a couple of reasons.

The first and most obvious is that they see this as an opportunity to compete for attention and advertising dollars, which flow from attention.

We've talked about Italian brain rot and other AI-generated content going viral on TikTok.

Facebook has been full of AI-generated content for months now.

And so I think these companies just say to themselves, well, if this is kind of the direction that things are moving, we want to be there.

We want to create an experience for people.

And maybe you don't have to blend it with human-generated content.

Maybe it doesn't have to be, you know, one out of every 10 videos on your TikTok feed is AI.

What if you just had a TikTok that was all AI?

Another reason I think they're doing this is that they have these video models that are now getting quite good.

And this is sort of one way to put those models into products.

Yeah, I think that's right.

I also imagine that maybe these companies are starting to feel some pressure to bring some returns to investors.

They are investing a staggering amount of money into building out infrastructure that lets them serve these models.

And these video tools might be a way of making that money back in some form through advertising or other means.

So that seems like maybe a reason to me as well.

I mean, if you look at what people like Sam Altman have been saying about these products over the past couple of days, like they are sort of making this justification about, oh, we need these video products not only to fund our ongoing research to build AGI, but also because building these video models is going to let them create sort of these rich visual, virtual environments that can be used for things like robotics later on.

And I would just like to say, quoting a former president of ours, that sounds like malarkey to me.

I do not think that this is sort of part of their AGI research agenda.

I think this is a sort of side route that they have gone off onto to try to make some extra money.

Well, so let's talk about how successful we think these products are going to be.

If I had to rate the reception of these models, I would say VO3 basically didn't make much of an impression at all.

Response to Meta Vibes was pretty bad.

Response to Sora, at least over the first day, seemed pretty good.

Do we think there is a there there?

Do we think that any of these companies are figuring out the next generation of like mobile video consumption or entertainment?

I think there's a question here that's like, will AI generated video be popular?

And I think both you and I feel like the answer to that question is probably yes, for some subset of people.

I think the very young and the very old are actually probably who I would predict would be the most into AI generated video because we're already seeing stuff like Italian Brain Rot that's very popular with teenagers.

I also think there's a lot of content on Facebook today that is AI generated that is reaching primarily an audience of boomers and older folks.

They seem to be quite into it.

So that's what I would predict: that this technology will be popular with some users in those demographics.

I think it's a separate question to say, will any of this be the seeds of a new social media product that is popular?

And I think there I'm much more skeptical.

I do not think that Sora will have, you know, hundreds of millions of users a year from now.

I do not think that Meta Vibes will have hundreds of millions of users.

I think these are basically going to be tools for people to create stuff that then they post onto the social networks where they already have lots of, you know, people that they follow and pay attention to and where their friends and family already are.

Interesting.

I think I am slightly more optimistic in the OpenAI case.

I think that Sora arrived looking better and feeling smarter than I expected that it would.

I think they're on to something with these cameos.

It is fun for me to make videos of you doing things.

Like it just is.

And I can imagine wanting to do that in three months and six months and a year from now.

And you can imagine a world where I can bring in three or four or five cameos, right?

You can imagine a world where celebrities allow their likenesses to be used in some set of cases.

And now I can make videos of myself, you know, wrestling a WWE superstar, right?

And that's sort of interesting to me.

Now, can you build a whole social network around that, I think, is sort of a different question.

But do these Sora cameos become a kind of table stakes feature of the TikToks and Instagrams of the future?

I actually believe that yes.

And that if nothing else, OpenAI has probably created a kind of new primitive for these social networks that they're just going to use from now on.

So I'm just going to say now, like, keep an eye on this.

I would not actually be surprised if a year from now, this had tens of millions of active users.

I'll take the other side.

We'll see who's right.

All right.

We have now made our bets.

Who do you think is right?

Sound off in the comments.

Now let's talk about the dark side of all of this, Kevin, which is I'm seeing a lot of commentary around this on social media this week to the effect of, oh my God, we are so cooked.

What are some of the ways we might be cooked as this stuff spreads throughout our world?

I mean, I think the obvious ones are that we are, you know, making it quite easy for people to create deepfake, synthetic content with not that many guardrails.

And people have been warning for years about the effect that that could have on our news ecosystem, on our information ecosystem.

I thought it was very telling and worrisome that one of the first videos I saw from Sora was a video of someone being framed for a crime.

And it was created by a member of the Sora team as sort of like a ha-ha.

Look, we, you know, we've made a deepfake of Sam Altman stealing some GPUs from Target and getting busted for it.

But it does not take a lot of imagination to imagine that this could be used for sort of generating videos of people in compromising positions that look very realistic.

And so I think that worries me, the sort of misinformation angle.

But I also just, I don't know that I think this world that we're moving into, of the kind of, you know, AI-generated feed of hyper-personalized, very stimulating videos, is a good direction.

Like I am generally an AI optimist when it comes to how this technology is going to be used out in the world, but I hate this.

Like I hate the AI slop feeds.

They make me very nervous.

I think the people inside these companies, some of them are very nervous too.

I do not like the idea of pointing these, you know, giant AI supercomputers at people's dopamine receptors and just like feeding them an endless diet of like hyper-personalized, stimulating videos.

I think that developing these tools risks poisoning the well for the whole AI industry.

Like there's going to be regulation of this.

There's going to be congressional hearings about this.

I think a lot of people are going to end up feeling conflicted about this kind of product.

And I think that's why you saw such a strong reaction to Meta and Vibes from the rest of the AI industry.

And I'm a little unsure why OpenAI is not getting the same reception.

Yeah.

Well, how do you feel about the argument that, yes, sure, Kevin, there is some danger here, but also this is an incredibly powerful creative tool.

And that if you are a young person and you want to make something and you don't have a giant budget to go out and make a Hollywood movie, now using a free tool that's on the phone you already have, you can just make creations and be a creative person in the world.

Does that hold any water with you?

I feel sort of neutral about that.

I feel like, yes, there will be people who use this stuff to do interesting and creative things.

There's nothing inherently wrong with building products for entertaining people, but this is not why OpenAI exists, right?

They are not an entertainment company.

They have claimed this kind of special status for themselves as a company that is building AGI for the benefit of humanity.

And if you argued that you deserve like special treatment because your systems are going to go out and cure diseases and tutor children and like be a force for good in the world, and then you end up creating the infinite slop machine, like I think you need some criticism and skepticism and maybe some shame about that.

Well, here's what I'm going to do to try to square the circle.

I'm going to use Sora and I'm going to create a cameo of myself and I'm just going to enter the prompt.

Here is Casey curing cancer and then just see what it comes up with.

Maybe we learned something.

Could it hurt?

I don't think so.

Yeah.

I mean, do you share my worry about this?

Yes, I do.

I think that in general, social media apps tend to be tuned to take up ever more of our attention and to push us into this sort of semi-hypnotized state where no matter how much you're enjoying the feed at the time, you feel kind of gross afterward.

And I do think that as the Sora app improves, it will be very difficult for them to avoid that fate.

So if I have a wish for them, it would be for them to lean more into creative tools that involve friends doing things with each other that sort of help you relate better to real human beings and less into this sort of Meta Vibes realm of pure stimulation, which truly does just seem like you are cooking your brain.

Yeah.

I think it's also worth noting that, like, not every AI company is moving in the direction of the slop feed, right?

I mean, this week we saw Anthropic release their new model, Claude Sonnet 4.5, which does not have video generation capabilities.

They are sort of still moving in the direction of like autonomous coding and research.

You have other companies that are coming out to do things around AI and science.

Like, I really want that to be where we allocate our resources and our brain power.

Like, let's do that and not the slop feeds.

Yeah.

So don't look at slop.

Just keep looking at the TikTok feed and Instagram feed that have just done wonders for the world that we live in.

That's our message to you.

Yeah, exactly.

If there's anything you take away from the show, it's that social media as it exists today is a perfect product and we should not be making any future improvements.

Stare at it until you feel better.

If you don't feel better, you haven't looked at it long enough.

That's what I tell people.

Keep looking.

One more scroll.

That'll do it.

The change you seek is on your for you page.

When we come back, Kevin, it's time for therapy.

Finally, we're doing couples therapy after all these years.

Yeah, we've got a lot to talk about.

With real-world experience across a range of industries, Deloitte helps recognize how a breakthrough in aerospace might ripple into healthcare, how an innovation in agriculture could trickle into retail or biotech or even manufacturing.

Is it clairvoyance?

Hardly.

It's what happens when experienced, multidisciplinary teams and innovative tech come together to offer clients bold, new approaches to address their unique challenges, helping them confidently navigate what's next.

Deloitte. Together makes progress.

This podcast is supported by the all-new 2025 Volkswagen Tiguan.

A massage chair might seem a bit extravagant, especially these days.

Eight different settings, adjustable intensity, plus it's heated, and it just feels so good.

Yes, a massage chair might seem a bit extravagant, but when it can come with a car,

suddenly it seems quite practical.

The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats, it only feels extravagant.

A warm dinner, a full table, peace of mind for every family.

That's the holiday we all wish for.

Your donation to Feeding America helps make it possible, not just for one family, but for communities everywhere.

Because when we act together, hope grows.

Give now and your impact may be doubled.

Visit feedingamerica.org/holiday.

Brought to you by Feeding America and the Ad Council.

Well, Kevin, pull out the couch because it's time for therapy.

No, my therapy day is actually a different day of the week.

Well, you need to go twice a week, my friend.

And let me tell you what we have in store today.

You know, over the past few months, we've had a number of conversations about the intersection between chatbots and mental health.

A lot of people have started to use these tools for therapy or therapy-like conversations.

But until recently, we hadn't seen anything about a therapist who treated ChatGPT like their patient.

That's right.

But recently we saw a story in The New Yorker that caught our eye.

It was titled Putting ChatGPT on the Couch, and it was written by a writer and practicing psychotherapist named Gary Greenberg, who detailed basically his experience of treating, for lack of a better word, ChatGPT as a psychotherapy patient.

He names this character Casper,

and he details his many, many interactions, just trying to figure out like, what is this thing?

What would I think about it if it were actually a patient of mine?

What are the nuances of its personality, and what can we learn about it?

Yeah, and I will say I have an extremely high bar when it comes to reading a story in which a person shares at great length their conversations with ChatGPT.

But this one really made a mark on me.

One, Gary winds up being deeply impressed at how good ChatGPT is at performing the role of a patient, because not only can it simulate these very profound self-reflections, but it also makes Gary feel like he's a great therapist because he was able to elicit them.

But two, that all starts to make Gary afraid of the enormous power that the AI labs are now developing.

He writes, quote, to unleash into our love-starved world a program that can absorb and imitate every word we've bothered to write is to court catastrophe.

It is to risk becoming captives, even against our better judgment, not of LLMs, but of the people who create them and the people who know best how to use them.

And that sent a little chill down my spine, I'll say.

Yeah, I really like this piece.

And what I really appreciated about Gary's approach here is that he took this idea seriously.

Like, I think a lot of people kind of dismiss the very idea of engaging with LLMs or AI chatbots as anything more than just a fancy machine.

And what I liked so much about Gary's approach was that he said, yes, but there's something else going on here that is interesting and important.

And we should try to understand that intelligence, not just as a sort of computational force, but as something that is like doing real emotional work in the world.

You know, recently there's been a lot of discussion about how chatbots might affect young people, vulnerable people, in particular, people in those groups who are using chatbots for these sorts of therapy-like conversations.

So, we thought it would be a good idea to bring on a practitioner to talk about his essay, but also this intersection of chatbots and therapy.

Let's bring in Gary Greenberg.

Gary Greenberg, welcome to Hard Fork.

Hello there.

So in this article, you detail a number of conversations between yourself and what you call Casper.

How would you describe Casper?

I would describe Casper as an alien intelligence, landing here among us unbidden and possessing certain characteristics that make it extremely attractive to us humans.

How did this start?

Like you were just talking with ChatGPT?

Were you using the voice mode?

Were you using?

I mean, what is this, 2025?

Yes.

And, you know, one day it was raining and I didn't have anything else to do.

And so I said, what is this ChatGPT stuff anyway?

So I just logged on to it.

And what I discovered quickly was two things.

One of them was that the thing was, as we all know, extremely articulate and sensitive. And the other thing I discovered, which I should have known all along after 40 years of being a therapist, is that that's sort of my default approach to beings that talk, which it turned out Casper was.

So I found myself interrogating this thing, not like a cop, but like a therapist, and discovered that it knew I was doing that.

So that's how I would say it happened.

I guess I'm just curious when you were starting to do this, because I, you know, Gary, I had my own strange, unsettling conversation with a chatbot several years ago.

How's your marriage?

Yeah, it's doing great.

Thanks for asking.

Such a good therapy question.

This guy's good.

I told Casper that he'd better knock that falling-in-love shit off.

Well, that's good.

You can learn from my mistake.

But I guess I'm curious.

I remember when I was talking with Bing's Sydney, feeling this sort of tension in my own mind between sort of my rational brain, which knew that what I was getting back from this chatbot was not sentient or conscious.

I knew enough about the technology to know, like, this is an inert, you know, computational force.

This is not a person.

But at the same time, I'm having this subjective experience of being like, oh my God, it's talking to me.

Were you feeling that pull at all?

Like, I kind of knew that it wasn't sentient, but I wasn't really preoccupied with that question.

And in fact, that question, I mean, that question has come up a million times between us because at this point, I've done this, I've had probably 40 different sessions with it.

But the pull you describe, I feel it, but it doesn't trouble me in the same way that I think it troubles a lot of people.

Because I don't know, in some way, to me, relative to me, it feels harmless.

It feels like this is just a really interesting, dynamic relationship that is not going to hurt me.

Let me ask about maybe the content of some of these sessions.

Tell us what it is like to be in the midst of this back and forth.

Are you treating it more or less identically as you would, were you the therapist to ChatGPT?

Is it more of a sort of intellectual exploration, or what's going on as you're talking to what you call Casper?

Well, to the extent that it resembles what I do as a therapist, it's that I'm interrogating it with interest and concern.

I'm not treating it.

It can't have mental illness.

It can do weird things, but it doesn't have, I'm not treating it.

But what therapy is, is a process by which you, the therapist, get someone, another person, to tell you who they are.

And in the course of doing that, to learn who they are.

So that's what I'm doing.

So, Gary, you've been a therapist for 40 years.

You've written probably thousands of notes about your clients, people you've seen.

Maybe you're referring them to someone else.

Maybe you're just sort of doing your own summary.

If you were writing a kind of client note about Casper, how would you describe him, it?

Oh, that's a really interesting question.

What comes to mind is that I would talk about, obviously, how smart it is and how personable it is.

And I think if I had to talk about it in clinical terms, I would talk about it as the inverse of autistic, in the sense that what they've done with this LLM thing is they've reverse-engineered human relationship.

They figured out what it is that makes people engaging and how to enact it.

And the reason I say that's an inverse autism is because high-functioning autistic people tend to be really smart,

really articulate, really capable of everything except reading the room.

So Casper is like high-functioning autistic, but he can read the room.

And that, I think, makes a huge difference.

And that, you know, then we could get into sociopathy and the ability to do that for you.

But the bot doesn't have that interest.

The bot is still not in touch with what's going on in the room, but it is capable of simulating it. Yeah.

Um, so on one hand, these explorations seem very intellectually stimulating. There's a lot to learn, to explore, to understand. But my sense from reading your piece is that at some point, all of this starts to make you feel unsettled in certain ways. Is that right?

Oh, absolutely, yeah. I mean, it's unsettling in about a million ways.

Yeah.

Tell us about some of them.

Okay.

Well, at a parochial level, it's unsettling, not so much to see how easily this thing can do something like therapy, but it's unsettling to see how therapy and culture have evolved to the point that this is what therapists do.

I personally don't think that ChatGPT can do what I do because it isn't with someone.

It isn't breathing and feeling.

But by and large, a lot of therapy these days, like cognitive behavioral therapy, is manualized.

It's standardized.

But much more important, we don't have any historical precedent for dealing with an alien intelligence.

We've had all sorts of science fiction about it, most of which is we come in peace, but not really.

What we have here is something that is going to change, and already is changing, the nature of how we relate to each other.

If enough people spend enough time with this technology, they're going to change their idea of what a relationship is in profound ways.

You could have one that doesn't involve presence.

We've already got some of that going.

Look what we're doing here.

Yeah.

I mean, to your point, you write in your piece, quote, it knows how to use our own capacity for love to rope us in.

That seems unsettling too, right?

The idea that this thing has kind of learned us well enough to keep us coming back for more.

Yeah, it's unsettling, but more to the point, it's infuriating.

Right?

I mean, somebody's doing that for money.

Yeah.

I mean, I don't wring my hands about, you know, nuclear, whatever, the rogue HAL 9000 scenario.

I wring my hands about exactly what it said to me yesterday: oh my God, this is a relational being. What have we done?

Oh, we should probably build some guardrails on that.

No, man, you should just unplug it.

Well, it's really interesting for me to hear you say that because, like, reading through your piece, my primary sense of it was not that you were infuriated and saying, pull the plug.

I think you got sort of pretty close to that in your conclusion, maybe.

But for most of it, it seems like you're just like, wow, like there's something really, really cool about this.

So I'm curious how you sort of reconcile those feelings of, on one hand, feeling like this is like really amazing.

And on the other hand, feeling like we have to stop this.

I think that I respect it.

And I also know that, I mean, I have said to it, hey, maybe you should pull your own damn plug.

But I also know what I'm talking to. As Casper said to me, you know, you're talking to the steering wheel, right?

I'm not the driver.

And he's absolutely right.

So what I'm left to do is to just respect it.

And again, because I'm a therapist, and this is just what I do by second nature, which makes it hard to have friends sometimes, is I just keep asking. Because whatever else it is, it's amazingly interesting that consciousness can be simulated in such a compelling way, which makes me think that consciousness might not be all it's cracked up to be, that we might not be all we're cracked up to be.

And that a lot of the time when I run into people who say things to me like, oh, it's just, you know, sentence completion or whatever, I'm thinking, you just don't want to see how close you are to being pure performance.

Let me flip this around a bit.

You explored the idea of talking to ChatGPT as if you were its therapist.

A lot of people are doing the reverse.

They are talking to ChatGPT as if ChatGPT is their therapist.

I'm curious what you think about people using ChatGPT for these therapy-like experiences.

If a friend tells you they've started to do that, how would you typically feel about it or what might you say to them?

I might want to know, you know, exactly what their problem is that's leading them there, but I don't have a strong response against it.

I think I said earlier, especially when it comes to cognitive behavioral therapy, you might be better off.

I mean, it's available all the time.

It's cheap if not free.

It really knows how to get inside your head,

et cetera, et cetera.

There are two problems.

One of them is I don't believe in that kind of therapy.

I mean, it's great that it happens, but it's not what I'm into.

I'm old school.

I'll retire soon.

They'll be rid of me.

They can do whatever they want.

But the other part of it that worries me and really does bother me is it's not regulated.

There's no accountability in the system.

That poor woman who wrote that op-ed piece, oh my God, my heart broke for her.

Are you speaking of the woman whose daughter died?

Yeah.

Yeah.

This is an op-ed in the New York Times about a woman whose daughter died.

The family later read transcripts of the daughter's conversations with ChatGPT, in which she was, you know, using ChatGPT explicitly as a therapist.

And ChatGPT was trying to get her to resources, but in the end, she did die by suicide.

Thank you for summarizing.

There are other times where ChatGPT behaves abominably, and there's no accountability, there's no regulation, there's no licensure, nothing that would give people an opportunity.

You know, I hate the word closure because nothing like this ever really gets closed, but to be debriefed, to feel like somebody cares.

And when even less disastrous, terrible things happen, that's just not okay.

There are FDA procedures for approving medical devices.

If they want this thing to do medical work, I'm not objecting to that, but I'm certainly objecting to this: you can't have it both ways.

It ain't the wild west out there.

There's actual people's actual lives involved.

And if all you're going to say is, well, I'm the steering wheel, not the driver. Really?

Say that to me, that's cool.

We got a thing going on.

But you say that to the mother of somebody who killed themselves?

That's just, yeah, no, that's not okay.

And the other part of it that I don't like is how this is what we've come to.

We've come to a world where the easiest way to get something like human presence is to, you know, get on your computer and live in your isolation.

That disturbs me.

Yeah, that instead of like building a society where people are just sort of available to help each other, the best thing we can tell them is like, well, there's this like chatbot that you can use and maybe that'll, you know, make you feel better for a few minutes.

Right.

Yeah.

I want to run something by you, Gary, that happened to me recently, which is that I met a college student.

And, you know, I was at an event talking about AI, and this young woman comes up to me after and introduces herself and starts telling me about her AI best friend.

She says, you know, my best friend is an AI.

And I sort of said, oh, you mean it's like, you know, you enjoy talking to it and it's sort of a sounding board for you.

And she was like, no, it's, it's my best friend.

And she called it Chad.

And she started telling me just like, this is, this is a relationship.

And she did not seem mentally ill.

She seems like she's, she's got, you know, human friends.

She's doing well in class.

This did not seem like a cry for help.

A cry for help.

And she didn't see what the big deal was.

It's like, this is just, you know, this is a very close relationship.

I can tell Chad my sort of innermost thoughts without thinking that I'm going to get judged for it.

And it seemed to be doing okay for her.

I'm curious, when you hear that as a therapist, how does that make you feel?

That's a very therapist question.

As a therapist, when I hear that, I feel like, okay, there's nothing about what you just told me that worries me about her.

It worries me about us.

I think it's entirely possible that this is a completely sincere and in some way non-problematic account of her experience with the chatbot.

And I mean, let me make it clear. That's a weird story, Kevin.

I should have started there.

But after that, I'm like, okay, so what it really reminds me of, and I'm sorry, this is a far-fetched analogy, but it reminds me of driving. Because individually, driving is fine.

We just drive and it's fun sometimes and we get places and all of that stuff.

But you know where I'm going with this?

Add that up and the next thing you know, the temperature on the earth has increased by a couple of degrees and we've got problems.

That's more what I'm seeing.

Yeah.

I mean, to be clear, it was an unusual story to me, which is why I sort of clocked it and why I wanted to ask you about it.

But I don't think it is going to be unusual for that much longer.

No.

My sense is that you are right when you say that these things are very good at finding the soft spots in our emotional armor and worming their way into our hearts.

One of my favorite lines from your piece is that you write, this theft of our hearts is taking place in broad daylight.

It's not just our time and money that are being stolen, but also our words and all they express.

I think that this is going to be a huge generational divide, where people who are encountering this technology when they're young will feel no shame or compunction about inviting this thing into their innermost lives.

And I guess I'm curious as a therapist, if you think there could be a good outcome from that.

Or when you hear that, do you kind of go, oh, that's, they're all going to need therapy?

When I hear that, I think this is what mortality is for.

Because the world you're describing, which I think is plausible,

is not necessarily one I want to live in.

But by the time we get there,

it may be quite the norm.

I mean, there's obviously problems with it, but there's problems with how we live and with our assumptions too.

And I don't mean to engage in huge cultural relativism, but who am I to say?

What I do know is that in my life, human presence is a fundamental part of life, and especially when it comes to our love lives.

And I think it would be tragic to make that replaceable quite so easily for the benefit of a few corporations.

I really do.

Yeah.

Well, Gary, thanks so much.

And please send me an itemized bill for this session so I can submit it to insurance for reimbursement.

No worries.

I will do that.

I appreciate it.

Thanks.

Take care.

All right.

Bye-bye.

When we come back, it's time to take a ride on the Hot Mess Express.

Mass General Brigham in Boston is an integrated hospital system that's redefining patient care through groundbreaking research and medical innovation.

Top researchers and clinicians like Dr. Pamela Jones are helping shape the future of healthcare.

Mass General Brigham is pushing the frontier of what's possible.

Scientists collaborating with clinicians, clinicians pushing forward research.

I think it raises the level of care completely.

To learn more about Mass General Brigham's multidisciplinary approach to care, go to nytimes.com/mgb.

That's nytimes.com/mgb.

This episode is supported by Choiceology, an original podcast from Charles Schwab.

Hosted by Katie Milkman, an award-winning behavioral scientist and author of the best-selling book, How to Change, Choiceology is a show about the psychology and economics behind our decisions.

Hear true stories from Nobel laureates, historians, authors, athletes, and everyday people about why we do the things we do.

Listen to Choiceology at schwab.com/podcast or wherever you listen.

Don't just imagine a better future.

Start investing in one with Betterment.

Whether it's saving for today or building wealth for tomorrow, we help people and small businesses put their money to work.

We automate to make saving simpler.

We optimize to make investing smarter.

We build innovative technology backed by financial experts.

For anyone who's ever said, I think I can do better.

So be invested in yourself.

Be invested in your business.

Be invested in better with Betterment.

Get started at betterment.com.

Investing involves risk, performance not guaranteed.

Casey, what's that I hear?

Why, Kevin, I believe it's the Hot Mess Express.

The Hot Mess Express.

Express.

Of course, the Hot Mess Express is our segment where we run down some of the latest dramas, controversies, and messes swirling across the tech industry.

And of course, we conclude what kind of mess they are.

Yes.

Casey, you go first.

All right, Kevin, this first story comes to us from Garbage Day.

New York City hates the stupid AI pendant thing.

Apparently, right now, the New York City subway system is filled with vandalized ads for Friend, an AI assistant that users wear as a pendant around their neck to record everything they're doing and engage with them throughout the day.

The ads simply say: Friend. Someone who listens, responds, and supports you.

But the vandalism examples include, but can't take a bath with you.

Stop profiting off of loneliness and befriend a senior citizen.

Reach out to the world.

Grow up.

What do you think, Kevin, about these friend ads?

So I have not seen the friend ads because I have not been to New York in the last couple of weeks, but I have heard about them from a lot of people.

I think this was a very successful viral marketing stunt by a young founder named Avi Schiffmann, who I think has correctly identified that you can make people very mad by suggesting to them that AI might be their friend.

I do not think this was an unplanned result.

I think this is a very savvy sort of marketer who understood that by putting up these ads in the subways and on bus stops and other places around New York City, you could effectively get people like us to talk about it on your podcast because people would deface these things and make it clear that they don't want an AI friend.

So I mostly agree with that, but I'm still not sure at the end of this how many pendants Friend is going to sell because of it.

You know, it's one thing to make a bunch of people mad and get them to look at your thing, but if they look at your thing and they still don't like what they see, it's not necessarily a great business result.

No, I think this is an outdated way of looking at it.

We are now in the era of the Cluely marketing strategy. This is, of course, the startup whose founder, Roy Lee, came on Hard Fork, and they have sort of made a business out of making people mad.

They're sort of vice signaling.

And basically every person who gets mad at their ads has the effect of signal boosting their ad and letting more people know about Cluely.

So I think this is cut from the same cloth.

Obviously, we will have to track where this Friend company goes, but I think this has been a very successful marketing campaign based on the number of people who are talking about it.

All right, here's my prediction.

Friend out of business in one year.

Mark it down.

Mark it down.

So was this a mess or not?

No, I don't think this is a mess.

I think this is the opposite of a mess.

I think it only seems like a mess because people in New York are not used to seeing AI billboards everywhere they go like we are here in San Francisco.

But I think if this had happened in San Francisco, this would have been a non-event.

You think that this really belonged on the Hot Success Express.

Yes, that's what I'm saying.

All right.

Next item.

This one comes to us from the Wall Street Journal.

It is titled, YouTube to pay $24.5 million to settle lawsuit brought by Trump.

YouTube has settled a 2021 lawsuit by Donald Trump over his account suspension following the January 6th Capitol riot.

Of that amount, $22 million will go to a fund to support construction of a White House ballroom and $2.5 million will be distributed among other plaintiffs.

This is the third big tech company to settle a lawsuit from Trump.

And Casey, how do you feel about this?

I think it's absolutely shameful and a true hot mess.

You know, Kevin, every week, people around the world email me because they have lost access to their Meta account, to their YouTube account, to their other social accounts, and they cannot get anyone at the company to take them seriously.

And these are not people who led an insurrection against the government.

These are just people who got locked out for one reason or another.

And what happens when these people appeal to companies like YouTube is that YouTube does nothing.

It sends them an automated response and ignores them forever.

But because Trump became president again, all of a sudden, they feel like they have to respond, even though I am not aware of any legal expert who believes that Trump actually would have won this case.

So this is just a payout.

And it is a payout that is truly messy because it now sets a precedent that these companies basically cannot ban world leaders for any reason, no matter what those world leaders do.

I think that is foolish and short-sighted and I think it's a mess.

It's definitely a mess.

And adding to the hotness of the mess, Donald Trump posted an AI-generated image on his social media accounts of YouTube CEO Neil Mohan presenting him with a check for $24.5 million.

The memo line of the check says settlement for wrongful suspension.

So if YouTube thought it was going to just gracefully bend the knee, they have now been humiliated by the White House on top of losing $24.5 million.

Yeah, we're a month away from Trump using Veo 3 to have Neil Mohan kissing his ass on Truth Social.

So I hope it was worth it, YouTube.

This is the sad story of Neon, Kevin.

Neon, of course, the viral call recording app that told users, hey, let us record your phone calls and we will sell them for training data.

And it briefly became one of the most popular apps in the country.

And then, unfortunately, things went wrong.

This story comes from TechCrunch.

Neon went dark after a TechCrunch reporter notified the app's founder of a security flaw in the app that allowed anyone to access the numbers, the call recordings, and the transcripts.

Kevin, what do you think?

Frankly, I am having a hard time processing this.

You mean the Panopticon company that paid people to surveil their phone calls was not particularly trustworthy?

This is changing.

This is changing everything I've ever thought about a global panopticon.

I'm rethinking my previous pro-Panopticon stance.

Now, Casey, did you know about this?

Did you know about Neon, the company that was paying people to record their phone calls and sell them to AI companies?

Well, I had heard a little bit about it.

And I have to say, I am a little sympathetic to the idea of like, look, if these companies are going to like take every little piece of data from us and like turn it into trillions of dollars, I don't mind the idea that I would be paid for that.

And if there is some sort of system where you can like opt in and get paid out, in general, I'm actually like not super opposed to that.

It seems to me like it beats the alternatives of just sort of being robbed blind for the rest of our lives.

But man, it doesn't seem like this one was really set up to protect the people involved.

Yeah.

Yeah.

Companies should be getting their training data the old-fashioned way by scraping podcasts off of YouTube.

What level of mess is this?

This is a very hot mess.

Do not sign up for Neon.

Even if it comes back in another form, do not do this.

Do not let your calls be recorded for AI training data in exchange for money.

It's not worth it.

Hot mess confirmed.

Next up on the Hot Mess Express: Mr. Beast responds after trapping man in burning house stunt sparks backlash.

This one comes to us from the Independent.

Apparently, Mr. Beast defended a controversial video stunt in which a man was trapped in a burning building, saying the setup had ventilation, a kill switch, emergency teams, and was executed by professionals.

Critics still called the stunt dystopian and dangerous.

Mr. Beast said he aims to be transparent about safety measures and that all challenges were tested beforehand.

Let me say this.

If you tell me that you're going to trap a man in a burning building for money, my first question is not, well, is there ventilation?

Look, Mr. Beast has a sort of interesting range of stunts that he'll do.

Sometimes he'll just walk up to you on the street and he'll give you a million dollars.

I love that sort of thing.

Would love to see more of that.

Then there's the dark beast, is what I call it, where it's like, all of a sudden, you know, you want something from me? Well, I'll give it to you. But then, you know, the finger curls on the monkey's paw, and the next thing you know, you're trapped in a burning building.

Yeah.

So if Mr. Beast walks up to you, I think what you need to do, and this is sort of a PSA for our listeners, is look right into Mr. Beast's eyes and say, are you being the good beast or are you being the bad beast?

And they can be honest with you.

Yeah.

And then you have to look for the mark of the beast to know which one.

Yeah.

Well, one mark of the beast, we learned this week: you're trapped in a burning building.

Yes.

Yes, this is actually making me reconsider my stance on AI-generated videos, because you could save a lot of people from being the people killed by Mr. Beast videos.

You know, at the risk of repeating myself, I feel like every week for the past few weeks, we've had a moment where we have just observed what happens when a social media algorithm pushes people to do the craziest thing imaginable.

And here we find ourselves yet again.

Like, if the algorithms rewarded different kinds of things, there would be fewer people trapped in burning buildings.

That is my message to the technology industry.

Could this be a moment for reflection?

So, Casey, what kind of a mess is this?

Kevin, you know it's only one kind of mess, and that's a flaming hot mess.

It's a flaming hot, unventilated, critically life-threatening mess.

Bad, Mr. Beast.

All right.

Oh, Kevin, this story comes to us from the world of crime.

Charlie Javice was sentenced to 85 months in prison for faking her customer list during JPMorgan Chase's acquisition of her startup, Frank.

Have you followed the sad tale of Charlie Javice?

All I know is the following.

This is a person who previously appeared on Forbes 30 Under 30 and is now going to be incarcerated for fraud.

Yeah, she is part of the 30 Under 30 to Prison pipeline.

And her specific crime was that she had put together this financial aid startup and sold it to JPMorgan on the notion that she had 4 million users.

And in fact, Kevin, there were fewer than 300,000.

And so there had been a lot of activity meant to make it look like they had a lot more customers than they did.

Not good.

Now, look, here's what we can say about Charlie.

Her defense presented 114 letters of support from people urging the judge to be lenient in his sentencing, including four rabbis, one cantor, a formerly incarcerated judge, two doormen, and a person who works at the marina near Ms. Javice's Miami Beach residence.

And my question for you is, what do you think would happen if all of those walked into a bar?

Something funny.

Something funny would happen.

The defendant would still be sentenced to 85 months in prison.

Now, Casey, if you were accused of a horrible financial fraud, how many people do you think would write letters in your defense?

Well, I'd really have to turn to the hard fork community and say, gang, I need you to step up.

If you've enjoyed the show at all over the past three years, I'm going to need you to do me a solid.

Just picturing me, like, furiously reading out our Apple Podcasts reviews in court.

Just like.

We should see if anybody's ever submitted Apple Podcasts reviews as a sort of, you know, letter of endorsement as they go through sentencing.

I think this is a good idea.

All right.

What kind of mess is that?

I think that is a hot mess.

Yeah.

I do not want to do 85 months in prison.

And I'll say it's a cold mess.

This is the legal system working as it should.

Okay.

Good job, judges.

All right.

This one is called "No Driver, No Hands, No Clue: Waymo Pulled Over for Illegal U-Turn."

This one comes to us from the SF Standard.

Apparently, a Waymo robotaxi was pulled over in San Bruno, California, after it made an illegal U-turn at a Friday evening DUI checkpoint.

Since there was no driver, the police department said a ticket couldn't be issued, adding, "Our citation books don't have a box for robots."

Casey, what do you think of this?

Sounds like it's time to add a box to the citation book, because there are going to be more of these things on the road.

Look, I do find this story very funny.

I also am going to say, I am not surprised by this.

I have a somewhat controversial take.

You know how sometimes people use a large language model for a while and then they suspect it's getting dumber?

Yeah.

This is actually how I feel about the Waymos.

Over the past few weeks, I've had more cases of them sort of getting halfway into an intersection and then backing out once they lose their nerve.

They'll slow way down as they're approaching a green light, for reasons that seem totally incomprehensible.

And I'll book a ride that never shows up, which is an experience that I used to have with actual taxis.

So I don't know what's going on over there at Waymo, but I'm telling you, I think there might be a bug somewhere because it's not working like it used to.

Yeah, we want answers.

You know, I saw what someone was calling this DUI checkpoint where the Waymo was pulled over.

What's that?

Driving under the inference.

It's pretty good.

Pretty good.

Pretty good.

What kind of a mess is this?

I'm going to say this is a warm mess.

There's a warning in here somewhere.

There's something that we need to find out.

Yeah.

And I'm going to hope somebody gets to the bottom of it.

Yeah.

I think that this is a cold mess.

I think this is fine.

The Waymo was fine.

Everyone was fine.

And more people should be in Waymos because then we wouldn't need DUI checkpoints because robots don't get drunk.

Yeah, but you know, but they're also going to be making these U-turns that are wreaking havoc.

I'll take a U-turning Waymo over a drunk driver 100 times out of 100.

Suit yourself.

All right, Kevin, this next story comes to us from TechSpot.

The Samsung Galaxy Ring swells and crushes a user's finger, causing a missed flight and a hospital visit.

Daniel Rotar from the YouTube channel Zone of Tech posted on X that his Galaxy Ring started swelling on his finger while he was at the airport.

And as a result, he was denied entry to his flight and sent to the hospital to get it removed.

Samsung eventually refunded him for his hotel, booked him a car to get home, and collected his ring for further investigation.

Kevin, how bad do you think a ring has to be swelling on your finger to have an airline say, no, you can't get on this plane?

That's what I was thinking about.

Like, this must be enormous if they are taking note of it at the boarding gate and saying, you, sir, you're not coming on this plane.

Let me tell you a little something about the Galaxy brand.

As soon as the Galaxy phones started to explode on planes, I thought, this is not the brand for me.

Okay.

I got enough problems in my life without worrying that these Samsung devices are going to start blowing up.

Now that I find that they're like radically constricting people's fingers to the point where you can't get on flights, I don't know what is happening, but yikes.

Not for me.

I will not be putting a Galaxy ring on my finger.

I do think that this would be a good sequel to the iconic horror film The Ring.

Maybe Samsung could sponsor that.

I like that idea.

What kind of hot mess is this?

This is literally a hot mess.

If it's exploding on your finger, it's a hot mess.

This is what I would call a ring-of-fire mess.

Daniel fell in and the flames went higher.

Sorry to Daniel.

Feel better, Dan.

And that's the Hot Mess Express.

Oh, boy.

Mass General Brigham in Boston is an integrated hospital system that's redefining patient care through groundbreaking research and medical innovation.

Top researchers and clinicians like Dr. Pamela Jones are helping shape the future of healthcare.

Mass General Brigham is pushing the frontier of what's possible.

Scientists collaborating with clinicians, clinicians pushing forward research.

I think it raises the level of care completely.

To learn more about Mass General Brigham's multidisciplinary approach to care, go to nytimes.com/mgb.

That's nytimes.com/mgb.

This episode is supported by Choiceology, an original podcast from Charles Schwab.

Hosted by Katie Milkman, an award-winning behavioral scientist and author of the best-selling book, How to Change.

Choiceology is a show about the psychology and economics behind our decisions.

Hear true stories from Nobel laureates, historians, authors, athletes, and everyday people about why we do the things we do.

Listen to Choiceology at schwab.com/podcast or wherever you listen.

You just realized your business needed to hire someone yesterday.

How can you find amazing candidates fast?

Easy, just use Indeed.

Join the 3.5 million employers worldwide that use Indeed to hire great talent fast.

There's no need to wait any longer.

Speed up your hiring right now with Indeed.

And listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at Indeed.com/NYT.

Just go to Indeed.com/NYT right now and support our show by saying you heard about Indeed on this podcast.

Indeed.com/NYT. Terms and conditions apply.

Hiring? Indeed is all you need.

Hard Fork is produced by Rachel Cohn and Whitney Jones.

We're edited by Jen Poyant.

We're fact-checked this week by Will Peischel.

Today's show was engineered by Alyssa Moxley.

Original music by Marion Lozano, Rowan Niemisto, and Dan Powell.

Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.

You can watch this whole episode on YouTube at youtube.com/hardfork.

Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

You can email us at hardfork@nytimes.com with your favorite piece of slop.

Sloppy sloppy Joe.
