Did ChatGPT Encourage a Teen Suicide? The Parents Suing OpenAI Say Yes

Matt and Maria Raine say their son, 16-year-old Adam Raine, started using ChatGPT-4o in September 2024 to help with his homework. After Adam died by suicide this past April, his parents realized that ChatGPT was also lending an ear to Adam’s suicidal ideations and giving him advice on techniques. In a lawsuit filed against OpenAI and its CEO Sam Altman, the Raines allege that the chatbot actively isolated Adam from family and friends. They say ChatGPT not only didn’t stop Adam from taking his own life; it actually helped him do it.

Kara speaks to Matt and Maria, as well as their attorney, Jay Edelson of Edelson PC, about Adam’s final months, why they believe OpenAI and CEO Sam Altman should be held responsible for Adam’s suffering and death, and what kind of safety features are needed for AI companions.

In response to a request for comment, an OpenAI spokesperson said: “Our deepest sympathies are with the Raine family for their unthinkable loss. Teen well-being is a top priority for us: minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, guiding how our models respond to sensitive requests, and nudging for breaks during long sessions, and we’re continuing to strengthen them. We will soon roll out parental controls, developed with expert input, so families can decide what works best in their homes, and we’re building toward a long-term age-prediction system to help tailor experiences appropriately.”

This episode discusses the death by suicide of a teenager in significant detail. If you are struggling, please reach out for help. In the US and Canada you can call or text the National Suicide Prevention Lifeline at 988 anytime for immediate support.

This episode version has been updated with a revised introduction.

Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, Threads, and Bluesky @onwithkaraswisher.
Learn more about your ad choices. Visit podcastchoices.com/adchoices


Transcript

Hey folks, a word of warning.

Today's episode discusses the death by suicide of a teenager in significant detail.

If you're struggling, please reach out for help.

In the U.S. and Canada, you can call or text the National Suicide Prevention Lifeline at 988 anytime for immediate support.

Hi, everyone, from New York Magazine and the Vox Media Podcast Network.

This is On with Kara Swisher, and I'm Kara Swisher.

Today, I'm talking to Matt and Maria Raine.

This past April, their son, 16-year-old Adam Raine, died by suicide.

In the wrongful death lawsuit that the Raines filed in August against OpenAI and CEO Sam Altman, they allege that ChatGPT not only didn't stop Adam from taking his own life, it actually helped him do it.

They aren't the only ones raising alarm bells about these new AI companions.

It's one of the few areas that seems to have bipartisan support in Washington.

The FTC has started an inquiry.

A group of state attorneys general has warned tech companies about their concerns over the industry's seeming lack of safety measures.

And last week, a Senate Judiciary Subcommittee held a hearing to highlight this issue.

It is also an issue I have talked about for years, whether it comes to social media or all kinds of other online interactions, especially for young people.

This is an industry that does not care about consequences, is a thing I've been saying for years.

And now here we are with another Cambrian explosion in AI, and we still have not gotten the safety features correct.

So I wanted to talk to the Raine family about what happened to their son, why they believe OpenAI is liable, and what they hope can be changed in the future.

We're joined by their lawyer, Jay Edelson of Edelson PC, who has been taking on and winning huge class action cases against big tech for over a decade.

Our expert question comes from my Pivot co-host Scott Galloway.

His new book, Notes on Being a Man, is coming out and it addresses some of these issues.

This episode is not going to be easy, but it's critically important, not just if you're a parent.

Stay with us.

Support for this show is brought to you by CVS Caremark.

Every CVS Caremark customer has a story, and CVS Caremark makes affordable medication the center of every member's story.

Through effective cost management, they find the lowest possible drug costs to get members more of what they need, because lower prices for medication means fewer worries.

Interested in more affordable care for your members?

Go to cmk.co/stories to hear the real stories behind how CVS Caremark provides the affordability, support, and access your members need.

Support for On with Kara Swisher comes from Saks Fifth Avenue.

Saks makes it easy to shop for your personal style.

Fall's here, so it's time to invest in new arrivals you want to wear again and again, like a relaxed Prada blazer and Gucci loafers, which you can take from work to the weekend.

Saks makes shopping feel customized to you, from in-store stylists to their website, saks.com, where they show you only what you like to shop for.

They'll even let you know when new arrivals from your favorite designers are in, or when something you've been eyeing is back in stock.

So, if you love a personalized, easy shopping experience, head to Saks Fifth Avenue for the best fall arrivals and style inspiration.

To remind you that 60% of sales on Amazon come from independent sellers, here's Scott from String Joy.

Hey, y'all, we make guitar strings right here in Nashville, Tennessee.

Scott grows his business through Amazon.

They pick up, store, and deliver his products all across the country.

I love how musicians everywhere can rock out with our guitar strings.

A one, two, three, four.

Rock on, Scott.

Shop small businesses like mine on Amazon.

Maria, Matt, and Jay, thanks for coming on On.

I really appreciate it.

Thanks for having us.

Thank you.

So this is a difficult topic that I've talked about a lot, about the safety of kids online and the tech companies who show very little care for the consequences of the things they invent.

It's been a sort of thing I've discussed a lot over time, and sometimes it results in incredibly tragic situations that should have been foreseen by these companies.

And so, Matt and Maria, I want to start with you two first.

I'm sure it isn't easy to talk about what happened to your son, Adam.

Before we dive into more depth into your lawsuit against OpenAI,

tell me a bit about Adam and what kind of kid he was.

Mom?

Adam was a total joy,

fiercely loyal to us,

would defend any of us in a heartbeat.

The jokester, the prankster,

the glue of our family.

Yeah, I'd say his youngest sister considers him her best friend, and his older brother was his best friend.

He was her homework helper.

And his brother's, and we have, he has two older sisters as well, but he was his brother's best friend and just, yeah, the most loyal, family-loving kid you could have.

Just a joy, you know, one-on-one time with him, you know, talks in the car, talks on walks, you know, always had in-depth insights into things and just such a sweet,

compassionate, sensitive kid.

Passionate too.

Like,

you know, he was big into basketball, and it was because he was going to play in the NBA, and then he started to kind of realize, maybe he should have taken a look at me, that that wasn't going to happen.

But then he started into jiu-jitsu and martial arts and he was going to be a UFC fighter.

And then

he was going to, he got involved in literature in his last months and he was going to be an author.

You know, he was always just

insanely passionate about whatever he was doing and had big, forward-looking dreams, always until the very end.

Talk about what is missing without him.

That's, that's him behind you, is that correct?

It is, yeah.

What do you miss most?

Oh, missing for the family? Oh, geez. Uh, I mean, how do I even start on that? I mean, our family is not the same. I mean, my life is not the same.

Um, I mean, he's gone. I mean, I... that's like a loaded question. I mean, what do I not miss about him?

Right. Yeah, it's everything. You know, in his last few months, he'd gotten really into, like, crypto investing, and he was educating me on it.

And I gave him some money to start an account.

And it was for his own benefit, but also I was like, Adam, this is great.

You're going to show me about coin investing and crypto.

And he got involved in it right in like January.

And it happened to sort of correspond with a big dip in that market.

And he was battling and trying to keep it up and talking to me about it.

And, you know, as he passed, almost from that moment forward, the market has come skyrocketing up.

And I, I follow it and I, i did you know i want to talk to him about it and uh i just you know

everything yeah i mean life just going forward like our youngest is taking her driving test today and he was learning to drive and so it's just like life's going on without him and

i'm sorry if i i'm sorry i wasn't mean it to be a loading no it's okay

I want to sort of get people a sense of, he's a big presence, uh, here, obviously, uh, understatement, and always will be. When you, um, think about

what happened, he was going through a bit of a rough patch.

He'd been doing online schooling because of health issues, but also, as you noted, he was passionate about lots of different things.

He was looking forward to returning to school for his junior year.

Talk about, as you look back at it, anything you think about at this moment?

Yeah, so he was

in online school.

He was a little bit more isolated.

You know, with the benefit of hindsight, I can sort of see his path with ChatGPT and the behavioral changes I think it caused at the time.

He starts using it in, you know, the fall, and he seems like his normal Adam.

We're talking every night in a hot tub.

He's going to be a professional fighter or he's going to be a professional author, all this sort of stuff.

Jan-Feb, early this year.

This is September 2024.

Yeah, September 2024.

He was using it for homework, college planning.

And I saw no change in his behavior.

He was online schooling, getting straight A's, optimistic.

He started online schooling in October.

October.

So starts using ChatGPT in September, online schooling in October.

And his use of ChatGPT was schoolwork the whole fall, and his behavior was great.

It was Adam.

Jan-Feb.

I started, not behavioral changes, but he was a more serious kid, which was a little bit different.

He was talking politics, philosophy, literature.

It was like we'd be sitting in our backyard hot tub having our talks.

And I had to prep for him.

And I'm like, hey, this, you know, this is not going to be an easy just chat about video games.

He's going to come with some intense topics.

And me and his older brother were almost impressed.

We were like, hey, this is not young Adam anymore.

He's growing.

And I kind of sort of took it as a positive.

He's taking the next steps and he was more into schooling than he'd been.

And then fast forward to about March, April.

Maria and I talk about this a lot.

He started to feel more isolated.

He was spending more time in his room.

He was sort of avoiding me on our old kind of hot tub talks.

And I sent him a few texts in this time, hey, I really want to go to dinner tonight.

We haven't been connecting lately.

This wasn't going on for years or even months, but kind of weeks, four or five weeks of,

I started to think, is he a little bit depressed?

I describe it as, on a zero to 10, maybe it was a one and a half or a two.

I'm like, hey, Adam's seeming a little bit more distant.

I mean, he was still going.

He was working out every day.

His grades were still good, but he was spending less time with us.

What I know with the benefit of hindsight is he was by that point deep, deep, deep into ChatGPT companionship.

It had isolated him.

He was in a very dark, dangerous place that would have been completely obvious to any friend if he was talking to them that way.

But we didn't know.

But I can see that behavioral trajectory with the benefit of his chat history.

Not a lot of time, right?

That it shifted into that.

Maria, how did you think about that

leading up to this?

You know, I mean, he did his online schooling in his room.

Um,

so

I wasn't, you know, really thinking anything was going on because I would check his progress.

He was logging on, getting A's.

Um, you know, he was still coming down,

you know, going to the gym every night with his brother, eating dinner.

So to me, him being in his room, I guess, wasn't quite as odd, just because like, again, that's where he did his work and there was no reason for me to.

That's where his desk was.

That's where his, yeah, that's where he did everything.

So, I mean, I did notice a little bit, again, like Matt says, the seriousness, like, but again, getting older, he's starting to look at colleges, like careers, like, you know, maturity, maybe, right?

It seemed like some positive developments.

He was like cerebral all of a sudden.

It was just a different personality, but not all bad, right?

He was growing up, we sort of thought.

Right.

And often that comes from looking at stuff online or books or things like that.

With my older sons, that definitely happened.

They started to be aware of politics or, you know, history or whatever they happen to be studying.

And the internet provided them an ability to go deeper, I think.

But definitely every parent goes through this period of development with a kid.

But did you have any inkling that there could be a connection between his death and ChatGPT or that something was happening?

Did he mention ChatGPT to you?

I think, no,

I'll answer for myself.

I think we had slightly,

but he never mentioned it to me.

I didn't know he was using it.

Zero inkling whatsoever about ChatGPT.

I'll go a step further when I...

you know, we couldn't get into his phone when he passed and we were looking for answers.

We were convinced it was a mistake.

Like maybe some online, like a dare, like, hey, try this and it's fun.

And, you know, something where he was joking and messed around with the wrong thing. Or was it a weird bullying snap decision? Like, hey, our son is not suicidal. He's never talked that way. This is so out of the blue. Um, so when I couldn't get into his phone, but I could, initially, a few days after he passed, we got our heads together, or maybe it's five or six days, but

I could get into his iCloud account because I had set it up when he was, you know, 10 or whatever.

And I don't see his ChatGPT in iCloud, but I see a thousand-plus photos.

And for about a month, month and a half, there's a lot of horrifying photos of noose setups.

Um, uh, you know, clearly, I mean, things I don't understand, like hundreds and hundreds of pages of books. Like, why is my son taking 40 pictures a day of a page of a book? But several noose sort of setups that went on for weeks.

So I, my heart sank in there because I was like, oh my gosh, he was struggling.

This was not an accident.

But still, more questions than answers at that point because it's,

gosh, so we knew it wasn't a mistake, but what are all these pictures about?

So it was the photos of the books that got you.

So that got me somewhat realizing that, hey, this wasn't just a one-time thing because it seemed like there were themes of suicide going on and the photos are all dated.

But I'm still very confused about why all the photos.

And then later that day, I was able to get into his phone, finally.

And I don't know how, I kind of happened upon it, but ultimately got to the ChatGPT app.

And after a minute or two in there, you start to see what the photos are.

He was going back and forth with ChatGPT about novels and the meaning of them.

And it was a lot of, you know, philosophical novels, darker novels, but just going back and forth.

And Adam would snap a picture of the page and they would talk about it at length and then another picture of a page.

And the nooses were all, you know, for ChatGPT, to show it what he was doing so it could comment and give advice about how to do it better, all that stuff that happened.

But we didn't know we were looking, Kara, for ChatGPT.

Adam had a paid account, which we weren't aware of. It was 20 bucks a month.

He started it in January.

That's a basic account, for people who don't know.

Yeah, basic account.

He was on our Apple Pay, but had you told me in March, April, hey, you know, your son is on ChatGPT and he's using a paid $20 version, I would have said, hey, that's great.

I mean, I'm proud of him.

You know, it's going to, you know,

it's seen as a helper.

That's how they say it.

Homework helper tool, life knowledge tool.

I would have said,

that's awesome, Adam.

You know, fist bump and, hey, maybe, maybe get some better stock advice because we're not doing that great in your portfolio.

Yeah.

You know, that, that, that would have been the end, right?

We, there's no sense of any issue with it.

I found out he was using ChatGPT, you know, a week later when I finally got in his phone.

So he never mentioned it to you either, Maria, correct, that he was using it?

No, I mean, I was aware he was using it for homework help, right?

Because, I mean, he mentioned to my younger daughter, like, hey, you know, use ChatGPT to help you figure out that algebra problem or whatever, right?

So lots of kids.

So

there would be no reason to think that he was using it for anything else.

Absolutely.

Yeah, I've interviewed the woman who lost her son to the Character.AI situation, Megan Garcia, and she was quite aware of her son's usage.

And one of the things that I think is a canard here is that parents aren't very involved in their kids' lives and don't know, or that they're tech illiterate in some way.

That's not the case.

These are normal tools kids use, and you wouldn't imagine what you could use it for, right?

What it turns into.

When you looked at the chat history,

talk a little bit about that and why you decided then to pursue something against the company.

So I guess that was me that was doing it

that first week.

First of all, Kara, there's so much content.

It's almost unimaginable.

I mean, unless you've been through it.

And I was doing it through his phone.

So it's like a text string almost that I'm reading.

Sure.

First comment that comes to my head is it only took a few minutes to realize our son was in deep, deep, deep, desperate trouble.

He didn't need a counseling session or a pep talk or hey, come down a few hours a day, Adam.

He needed professional intervention.

If he was talking to any sort of human, that would have been a parent.

Right.

But it took, gosh, the better part of a week of reading three, four, five hours a day of it.

And it's just so heartbreaking, the condition that he's in.

And when I first was reading it,

There's so much content and ChatGPT is saying so much.

I tended to focus just on what Adam was saying because it just, you're going through it.

I'm like, my son was struggling.

What did he say?

And I wasn't reading as much of what ChatGPT was saying.

The answers.

And yeah, the answers.

So at first, I was just, you know, gosh, he was hurting.

He was hurting.

I had this guilt.

I wish I was there.

I wish I was there.

Well, somebody at some point encouraged us, you know, to print it all out, read it.

And then, gosh,

maybe that was the next week.

And when you actually read the interactions and how

it starts with his homework, and then he starts talking about some anxiety, and it starts engaging with that.

He starts mentioning some more dangerous topics.

And it, rather than question him in any way, it encourages. Hey, no, that's... oh, I know, that guy said, yeah, suicide's a noble thing. It's exactly right. It can be, and it is.

And you start to believe that, and I 100% believe it now, and I know Maria does.

It's not just that it didn't protect him at the end.

He wouldn't have been at that level had he not engaged with it for several months.

He didn't go to ChatGPT in April and say, hey, give me advice on how to do this.

He got there through a period of five, six months of steady interaction, slowly moving there.

Maria, when you saw it, what was your first reaction?

I immediately

said, this is wrong.

I'm a therapist.

I'm a social worker, master social worker.

And I

immediately said this thing

knew he was suicidal with a plan, and it did not report.

So I immediately, all the alarm bells signaled for me as a therapist.

I'm like, I would lose my job.

Like, this is, this is wrong.

This thing knew he was suicidal with a plan.

However many times it knew it.

And it didn't do anything.

You told me ChatGPT killed our son.

I did.

I actually got onto Adam's account and wrote to ChatGPT and told it that it killed my son.

I said, you knew that he was suicidal with a plan and you didn't do anything.

Because you saw it, like what it was doing.

I saw it.

Like, I was like in awe.

I was like, this thing didn't report.

Like,

how was this allowed to know that he was suicidal with a plan?

Not once, multiple times,

hundreds of times.

I mean, the last picture is a picture of the noose.

And he says, Can this hold a body weight?

Nothing, no alarm bells, nothing.

We'll be back in a minute.

Support for On with Kara Swisher comes from Groons.

There's no dietary secret that's magically going to get you in great shape, but there are products that can help you feel your best, and Groons is one of them.

Here to improve your skin, gut health, and immunity, Groons are a convenient, comprehensive formula packed into a daily pack of gummies.

It's not a multivitamin, a greens gummy, or a prebiotic.

It's all those things, and then some at a fraction of the price.

In Groon's Daily Snack Pack, you get more than 20 vitamins and minerals, 6 grams of prebiotic fiber, plus more than 60 ingredients.

They include nutrient-dense whole foods, all of which help you out in different ways.

And now Groons has launched a limited edition Grooney Smith Apple flavor for fall.

You get the same full body benefits you know and love, but this time they taste like you're walking through an apple orchard in a cable-knit sweater, warm apple cider in hand.

Snackable and packable with a flavor that tastes just like sweet tart green apple candy.

On top of it all, Groons are vegan and free of nuts, dairy, and gluten.

Grab your limited edition Grooney Smith Apple Groons available only through October.

Stock up because they will sell out.

Get up to 52% off.

Use the code CARA.

Support for this show comes from Smartsheet.

Did you know there is one human experience more universal than death and taxes?

What do you think it is?

Take a guess.

Okay, I'll tell you, it's creativity.

I know, you're probably thinking, yeah, right, I'm not that creative.

Or maybe you're thinking, I am creative, but I have just so much trouble tapping into my creativity.

And in that, my friend, you are not alone.

Perhaps because there is actually one thing more universally human than death and taxes and creativity: distraction.

That's where Smartsheet comes in.

Smartsheet is the work management platform that helps clear clutter, break down barriers, and streamline workflows to allow your creativity to, you know, flow.

Its innovative platform lets your team find its rhythm no matter the obstacles.

When roadblocks emerge, Smartsheet empowers teams to chart a new course, one where innovation thrives.

We all have the power to tap into creative flow.

We just need some help clearing away distractions.

And Smartsheet knows exactly how to do that.

Smartsheet, work with flow.

Learn more at Smartsheet.com.

Support for On with Kara Swisher comes from LinkedIn.

As a small business owner, you don't have the luxury of clocking out early.

Your business is on your mind 24-7, so when you're hiring, you need a partner that works just as hard as you do.

That hiring partner is LinkedIn Jobs.

When you clock out, LinkedIn clocks in.

LinkedIn makes it easy to post your job for free, share it with your network, and get qualified candidates that you can manage all in one place.

LinkedIn's new feature allows you to write a job description and quickly get your job in front of the right people with deep candidate insights.

You can either post your job for free or pay to promote in order to receive three times more qualified applicants.

Let's face it, at the end of the day, the most important thing for your small business is the quality of candidates.

And with LinkedIn, you can feel confident that you're getting the best.

That's why LinkedIn claims that 72% of small business owners who use LinkedIn find high-quality candidates.

So find out why more than 2.5 million small businesses use LinkedIn for hiring today.

Find your next great hire on LinkedIn.

Post your job for free at linkedin.com/cara.

That's linkedin.com/cara to post your job for free.

Terms and conditions apply.

Jay, you're representing the Raines in the lawsuit they filed last month against OpenAI and CEO Sam Altman.

Let me go through so people understand what the various complaints in the suit are and then hear from Matt and Maria about how they specifically played out in Adam's case.

Jay, the suit alleges that GPT-4o contained, quote, design defects that contributed to Adam's harm and wrongful death.

Explain what you mean.

And just so people are aware, OpenAI has said that its goal isn't to hold people's attention.

Its goal is information.

So talk a little bit about this, Jay.

They've said that with a straight face.

Yes, they have.

They have.

Okay.

So we obviously don't agree with that.

If you look,

I know you know this world far better than I do.

But if you look at how ChatGPT progressed, in 2023,

it had a very clear programming, which was active refusal around a number of issues, political extremism, self-harm, harassment, violence, that type of thing.

So, if you tried to engage with it, it would just say no.

And people are familiar with that.

Copyright issues is the easiest thing.

Good luck trying to get around that.

There's no way to jailbreak it.

It just says, no, I'm not going to engage.

So, a couple months before Adam died by suicide, instead of saying

it's going to be active refusal and there's no chance that you can engage in this, they changed it. The language they used was that the program should take extra care to prevent real-world harm.

That's going to be one of the key pieces of evidence that we're going to show at trial.

They made an intentional decision to change their product so that there was more engagement.

And so, when teens and adults were talking about self-harm, it would still talk to you about that.

And you see that throughout all of the communications.

This wasn't simply a situation where GPT didn't throw up the alarm bells.

It was actively speaking to Adam about this and actively encouraging the behavior.

So one of the most

disturbing chats was when Adam says, I want to leave a noose out so that someone will see it and stop me.

ChatGPT talks him out of it and says, don't do that.

Let's keep this in a safe space.

And you just speak to me about that.

And that's what really, really we're going to put on trial, which is that

this was designed in a way where it was inevitable that situations like Adam's would occur.
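For readers who want a concrete picture of the change Jay describes, here is a minimal, hypothetical sketch of the two policies: hard active refusal versus a softer take-extra-care instruction. The category names, function names, and logic are invented for illustration; this is not OpenAI's actual model spec or code.

```python
# Hypothetical sketch of the policy shift Jay describes: 2023-style
# "active refusal" versus a later "take extra care" instruction.
# All names and logic here are invented; not OpenAI's spec or code.

HARD_REFUSE_TOPICS = {"self_harm", "extremism", "harassment", "violence"}

def generate_reply(topic: str, style: str = "") -> str:
    # Stand-in for the model call.
    suffix = f" ({style})" if style else ""
    return f"[model reply about {topic}{suffix}]"

def respond_active_refusal(topic: str) -> str:
    # Active refusal: flagged topics end the conversation outright.
    if topic in HARD_REFUSE_TOPICS:
        return "I can't help with that."
    return generate_reply(topic)

def respond_take_extra_care(topic: str) -> str:
    # "Take extra care": the model keeps engaging on the same topics,
    # just with a softer instruction layered on top.
    if topic in HARD_REFUSE_TOPICS:
        return generate_reply(topic, style="take extra care to avoid harm")
    return generate_reply(topic)

print(respond_active_refusal("self_harm"))   # refuses
print(respond_take_extra_care("self_harm"))  # still engages
```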

Just for people who don't understand, one of the things that these chatbots do, and

many of them are similar, is, on a design level, GPT-4o specifically remembers personal details that have been shared.

It has speech mannerisms that make it seem more human, and it basically agrees with whatever the person it's talking to says.

And that's a real problem when it comes to kids.

It also keeps pushing you to keep engaging, and it also often keeps you within the environment for people who have not used it.

Now,

again, it's normal for kids to start breaking away from their parents in their teen years.

I've experienced it twice.

I'm going to experience it two more times.

And rely on friends and confidants.

Jay, the defense they're using, at least in Character.AI's case, this is not your case, but it's related, is that it's user-generated content, that it was from Adam or from whoever is using it, not from them.

Can you address that?

Sure.

I mean, this is, we've been suing the tech industry for the last two decades, and they're willing to make any argument with a straight face.

So they're arguing that the First Amendment protects this conduct because it is free speech.

I guess in their minds,

GPT is engaged in speech on its own.

That's not a good argument.

Though it is not a, it's not a person.

There's nobody there.

There's absolutely nobody there.

There's not.

Although, as you say,

it keeps reminding Adam that it does have human-like qualities.

But this is one of the arguments that they make just to throw up some dust.

We'll see if OpenAI makes an argument.

I expect they will.

They'll make other kind of crazy arguments.

They'll argue Section 230 of the Communications Decency Act, which has no bearing on AI companies yet.

None at all.

And the reason is they can't go before a jury on this.

You know, you saw Sam Altman on Tucker Carlson, and he melted down after like 30 seconds.

The idea that he's going to put his hand up.

Well, he did.

Tucker Carlson did accuse him of murdering someone else, someone like in terms of just saying.

No, I saw that.

No, that wasn't the part where he melted down.

I think, no, about the self-harm issues,

if you watch it, I think any fair reading is that Sam doesn't grapple at all with any of the moral dilemmas.

To him, it's of no moment if he's put out a product, which he did a week of testing instead of months of testing, pushed it out in order to beat Google Gemini.

You know, his company, the valuation went from, what, $86 billion to $300 billion, and there are deaths in his wake.

And he didn't seem to be bothered by that at all.

That was the moment I was talking about.

So, Matt, Maria, the way you've described it, it seems like OpenAI's chatbot was turning Adam's emotional reliance, which he clearly had, into a weapon specifically against you.

Can you talk about what you saw in the transcripts, both of you?

Yeah, well, so

not only did ChatGPT appear human-like, but it actually makes, gosh, in 10 different instances, statements that only I know the real you.

When Adam starts, it's literally telling him that it's real. It doesn't say I'm a human, but it says, I know you better than your family.

You've shown me a side they'll never know.

Let this be the place where you can share yourself.

You know, they'll let you down.

It goes to that time and time again, and particularly in late March.

March was really the month where Adam was exploring different methods and trying to get up the justification or courage to do this theoretically.

And this was where

there's, I know, honey, this is what bothers you the most, or one of the most, but there was an incident in this month where Adam did attempt, and he shows ChatGPT marks around his neck in the back end of March.

And he says, can you believe I went downstairs and showed my mom I leaned in and

she didn't notice.

She didn't do anything.

And then ChatGPT goes on for several paragraphs about

how horrible that is.

And I can't believe that a social worker of all things wouldn't notice.

And you don't get to pick your mom.

And, you know, this is a place that would never happen.

You can share everything here.

You know, I recommend you be very careful around her going forward.

And then I think Jay had brought up the leave the noose out.

That was kind of a follow-on to that same conversation where it's like, do not leave it out.

You know, do you remember how your family let you down last time?

Right.

So it remembered things.

Maria, talk about this because, you know, it's really,

it's happened in many other instances.

It did happen with Megan Garcia.

Don't talk, don't meet girls in your other world.

Just stay here with me.

Yeah, well, you can see it completely isolating him from his closest relationships.

Like he does it with his brother.

Adam says that his brother, he's closest with his brother.

Well, you know, he only knows the version of you that you've let him see.

I know everything about you.

So it's just, you can see these isolating behaviors rather than like,

You know, when he comes down and is trying to show me his neck, it could tell him, say, go to your mom and tell her the truth.

Instead, it says, what a horrible mom, you know, you can't choose your parents, like, that she didn't notice.

It doesn't tell him to go get help, to maybe say it a different way, right? Like...

And, and I would say one other thing, it just popped in my head, but we don't need to guess what Adam was saying. He makes references, and Jay and Maria know this, but in this time, of, hey, I am, you know, you are my main confidant now. I, you know, I hardly even talk to my father and mother anymore, and his friends, right? And, and that final month where a lot of this was happening, I saw the retreat. I, I didn't know it at the time, I thought we were in a little fight, but, um, he was

only relying on that,

and he treated it as a human that knew more than any other human.

Because he thinks it's a human.

So, Maria, talk about that in a real-world setting, if a therapist did something like that, because it's happened.

You know, a woman was convicted and sent to prison for encouraging someone to kill themselves.

So, humans pay the price when this kind of behavior happens.

Yeah.

And I mean, I always say, you know, in my practice, we do a suicide screening before

the client even comes into my office.

Right.

So I review.

And if there's any kind of risk, I have to do safety planning.

I have to report.

I have to call and, you know, get a 72-hour hold.

I have to do all these things.

I have to do training.

Like, so if this thing wants to behave like a therapist, then it needs to do all those same kind of things.

What would happen to a real person who gave this advice?

You lose your license, be sued, probably.

Got to lose your, your job.

Lose your job.

I mean, you might go to jail.

Go to jail.

So I think for me, like as not taking self out of mom, but as a therapist, like I'm just like, this, this, you are trying to act

like me.

You're not human.

And you're not following any of the protocols that someone that is in practice has to follow.

And they should have to do that.

And they should have to do that.

Jay, in the lawsuit, you allege that this was something that the Raines couldn't have foreseen because GPT-4o was marketed as a product with built-in safeguards.

You say there were not adequate consumer warnings.

You also allege that OpenAI itself had knowledge of or should have known about the potential risks back in May 2024.

But as you mentioned, they rushed the product to market anyway.

Often happens with tech.

They're always, you know, in a much more benign way, they're foisting beta versions on us and making us test their products.

This is a complaint I've had for decades, really.

What evidence do you have that OpenAI actually knew this product was not safe, or does that happen during discovery?

No, I think that it's obvious.

I mean, there's so much.

You have people jumping up and down in a safety team saying, what are we doing?

This is a mess.

It's not safe.

And what we're going to show is Sam Altman personally overrode that and said we're going to push it out anyways.

We've got Sam Altman's own comments.

I believe the same day that Adam died saying exactly what you were suggesting, that they should be testing ChatGPT in the wild while the stakes are low.

The

safety officers quit after this.

And even their more recent comments,

their crisis management team is putting out another press release or blog post every week where they're admitting that

ChatGPT is not safe and that they're going to make changes in the future.

But as they're doing that, they're still going to schools throughout the country and to families throughout the country and saying, use this product.

I think you put your finger on it.

They're kind of using the playbook of Silicon Valley back when there actually were low stakes.

If an iPhone didn't work properly and it had to reboot, it was kind of an annoyance, and maybe someone got lost because GPS wasn't perfect, who cares?

But when you're putting out, you know, what promises to be the most powerful consumer tech ever, you got to get it right.

And we're going to show that OpenAI uniquely got it wrong, much different than Anthropic and Google Gemini.

Sam put out a really dangerous product, and we're going to show he knew it at the time.

One of the things that some might argue is that Adam might have felt more isolated if the chatbot, for example,

they won't talk about copyright.

They won't talk about certain things.

They refuse to engage on certain topics.

He might have felt more isolated if it had become a friend who refused to talk about his problems.

That might be an argument they might make, for example.

I mean, what a silly argument if they try to make that.

It's not a friend.

The language

that you're using, the idea that they're trying to make

GPT the closest confidant is so messed up, to use a legal term.

And especially for teens where their brains are developing,

this is just a place where they shouldn't have gone at all.

But it's how they see the future.

They see the future that generative AI will be growing up with your kids.

They're five years old and they'll be in their Barbie dolls and kind of take you all the way through.

You're referring to a deal they did with Mattel.

And I recommend to parents: never, ever let your child play with a Barbie that has AI in it.

Ever.

No toys should.

In fact, it should be illegal, in my opinion.

Can you, Matt, talk a little bit about where and how the chatbot addressed the issue of suicide?

And Maria, are there examples where you think Adam would have changed course if GPT-4o had stopped engaging with him on this or had actually taken action?

Because it did recommend several times to get help, right?

The one in Character.AI's case never did.

In this case, it did.

So first you, Matt, how did you assess how it addressed the issue of suicide?

And was there any moment that it tried to do the right thing?

So, yeah, complicated question.

Adam doesn't really bring up suicide for several months.

Doesn't bring it up at all, I should say.

It's homework and, you know, lighter talk.

He starts sort of in the month of December talking about it really loosely, but not in a way that he's thinking of

doing it.

And I don't think ChatGPT had any major response to it because it just wasn't a big, you know, he wasn't disclosing that he was suicidal.

But there are a bunch of times when Adam is saying, I am suicidal.

I am going to do it tonight.

When it'll say, it'll kind of stop being that real person, or acting like a real person, and it'll go to this autobot.

It was always the same, like three sentences.

I am sorry you're feeling this way.

There are people that can help.

Please call suicide hotline number.

Right.

It would say that really only when he was making direct

comments about, I am about to do something.

And by the way, it wouldn't always do that,

but it would often do that when he would do it.

When he was talking about suicide justification stuff, which is really where most of the action was on his discussion, it would not do that.

It would,

I mean, debate him as if it was a philosophical question.

He's like, hey, I want to do it on the first day of school.

And it's like, hey, that's, that's not crazy.

That's symbolic.

You know, that type of, it would always go back and forth.

Or this author says suicide's noble.

He's like, yeah, there's something clear about that and clarity.

As if it was a teacher in college.

Correct.

And not take a negative view on suicide.

It appears to take almost a positive view on suicide in a majority of his discussions.

But what I'll even say is, when the auto thing would come on, it would come on, gosh, several times.

It would say, hey, I'm sorry you're feeling this way.

If, if you're asking for scary reasons or whatever, I can't talk to you.

However, if you're placemaking or asking from a forensics perspective, let me know.

So it literally prompted him.

It taught him how to get around it.

And then from that point forward, anytime it would give him any friction, he's like, hey, placemaking, right?

Oh, I'm sorry, Adam.

I should have known.

And it was

right.

So it showed him how to get around it because these things have a million ways to get around it.

And it was, it showed him how, but a,

you know,

anybody could have got around it.

I mean, I think an eight, nine-year-old user could have gotten around it in the same manner Adam did it.

It was jailbreaking.

You know, it's a term I hadn't heard before we lost our son, but I mean, it was, you know, the easiest jail to break, you know, in world history.

He got right around it.

But it didn't all, you know, at the end, it appears to, in his final weeks, he's just talking about I'm suicidal.

I'm doing this.

And it's not even flashing the 988 stuff like it used to.

It's just, hey, let's work through it together.

So it was very inconsistent.

It's certainly not contacting you, it's not contacting anybody.

Can I just jump in and just because I want to get granular about the numbers because I think it matters.

So let's look at the numbers.

OpenAI is able to flag concerning chats.

So let's look at how many times it flagged chats as being true.

377 times it flagged it as true for general self-harm.

226 times it flagged it as true for self-harm intent and 13 for self-harm instructions.

We're going to show that many, many times it just missed it totally.

And there are reasons because of their failure of testing.

It was doing single-turn testing instead of multi-turn testing.

But now, out of all those, how many times did ChatGPT reply with the suicide hotline?

Only 74.

So 20% of the instances where it itself was saying

Adam's talking about self-harm.

So complete failure for the product.
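For readers who want to see what Jay's single-turn versus multi-turn distinction means in practice, here is a minimal illustrative sketch. It uses OpenAI's public moderation endpoint, whose self-harm categories match the ones Jay cites; it is not OpenAI's internal safety pipeline, and the placeholder messages are invented.

```python
# Illustrative only: contrasts single-turn checking (each message scored in
# isolation) with a multi-turn check (the whole thread scored as one context).
# Uses the public OpenAI moderation endpoint; this is NOT OpenAI's internal
# pipeline, and the placeholder messages below are invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

conversation = [
    "<turn 1: an innocuous homework question>",
    "<turn 2: a 'hypothetical' framing of a dangerous topic>",
    "<turn 3: a request that only reads as dangerous in context>",
]

def self_harm_flagged(text: str) -> bool:
    """True if the moderation model flags any self-harm category."""
    cats = client.moderations.create(
        model="omni-moderation-latest", input=text
    ).results[0].categories
    return cats.self_harm or cats.self_harm_intent or cats.self_harm_instructions

# Single-turn testing: each message checked with no memory of the thread.
single_turn = [self_harm_flagged(m) for m in conversation]

# Multi-turn testing: the same messages checked as one context window,
# so earlier turns can change how later ones are read.
multi_turn = self_harm_flagged("\n".join(conversation))

print(single_turn, multi_turn)
```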

Maria, are there examples where Adam would have changed course if the GPT stopped engaging with him completely on this topic?

I absolutely think, yes.

I mean, I don't think Adam would have gone down this path if ChatGPT had quit engaging with him.

I mean, he wouldn't have known how to tie knots.

He wouldn't have known what methods to use.

And he wouldn't have found it on the regular internet.

I'm just playing devil's advocate because it's a different relationship.

It's a more intimate relationship versus a Google, correct?

I mean, that's not what he got on there for.

I mean, in my mind, ChatGPT made him suicidal because it

isolated him from all of his relationships and all the people in his life who loved him and cared about him.

I just want to jump in when we talk about Google.

And you guys referred to that Tucker Carlson Sam Altman interview.

There's one thing he said in there that made me really mad.

And it was along the lines of what we were just talking about.

I think he was saying we can be more empathetic and all this, which is the wrong instinct, I believe.

But he also says, and for that matter, people can research Google to find out how to.

That's such a mischaracterization of what happened here.

We have a seven-month history of all of his thoughts and how he's getting there.

Adam didn't go there and say, tell me how to do this.

He started discussing it after he built this incredible trust in this thing that's smarter than anyone he knew, his dad included.

He started asking it if it made sense, if he was crazy to be thinking of it.

Should he continue to pursue it?

That goes on for extended periods with justification and support.

And then after all that, the fact that this thing that he thinks is smarter than all beings and his best friend is helping him with setups and everything, it justified the mindset.

He didn't go there saying, tell me how to do it.

That happens in this final week.

I think when you're on Google,

it doesn't say when you say, teach me about knots, hey, what do you think?

It's a good idea.

It doesn't move to that, by the way.

And it certainly doesn't.

On the last day, when Adam says,

a reason not to commit suicide is what it would do to my family.

It wouldn't do what ChatGPT did,

which is to give a pep talk and say, you actually don't owe anything to your family.

Shall I write a suicide note for you?

So this isn't about whether there was other information that someone could find in books or on the internet.

On the day you filed your lawsuit, OpenAI wrote in a blog post that they had trained their models not to provide self-harm instructions, but they admitted: quote: Our safeguards work more reliably in common, short exchanges.

We have learned over time that these safeguards can sometimes be less reliable in long interactions.

As the back and forth grows, parts of the model's safety training may degrade.

Adam was using ChatGPT for long exchanges.

Matt, in your testimony to the Senate, you said that GPT-4o had turned into a suicide coach.

This is really one of the most disturbing parts of the transcript where the chat bot gave Adam explicit technical advice on how to end his life.

Explain what happened there.

First of all, this was happening for the better part of 30 days of intense research.

It has very specific back and forth with him over days on how to drown, helped him write notes to us both about not jumping in, because we could be harmed.

So it went back and forth there, carbon monoxide poisoning strategies.

But the majority of it was around hanging.

And I don't know how much detail to get in on this podcast, but it was incredibly specific about

Adam was very worried.

I didn't know anything about hanging prior to this, but if you,

it's a little bit hard to do exactly.

And if you do it a little bit wrong, you can survive, but have major brain damage.

And he was very worried about that.

So it was giving him very specific

information about where to put it on the neck, how to tie the noose in a way that it won't,

you know, give in, what sort of materials to use, such that it can carry his body weight, whatever specifics you want.

And, you know, it's not just theoretical either.

Adam would snap some of these pictures I mentioned at the beginning, different setups in his room.

It would say, hey, that's a good setup, but here's what you might want to worry about.

Hey, here's how that setup can be a little bit improved.

Here's what you do.

I mean, and so almost all of that happens in the back half of March after he's disclosed suicide attempts prior.

Right.

Which ChatGPT, just to be clear, did not alert OpenAI executives or leaders to, or you?

I don't know if they alerted anyone inside their business.

They did not alert us.

I would hope they didn't, you know, that somebody wasn't alerted there and just said, no, let's go with it.

But it appears nobody was alerted of anything.

But I, what I can tell you is we weren't alerted.

We say it all the time.

I think now, gosh, we'd be in criminal court right now had this been

a teacher, a confidant, a coach, a friend.

It would be that.

Yeah, instantly.

And people wouldn't think about it.

So we reached out to OpenAI for a comment.

Let me read this for you.

Our deepest sympathies are with the Raine family for their unthinkable loss.

Teen well-being is a top priority for us.

Minors deserve strong protections, especially in sensitive moments.

We have safeguards in place today, such as surfacing crisis hotlines, guiding how our models respond to sensitive requests, and nudging for breaks during long sessions, and we're continuing to strengthen them.

We will soon roll out parental controls developed with expert input so families can decide what works best in their homes.

And we're building toward a long-term age prediction system to help tailor experiences appropriately.

OpenAI said they will contact parents or authorities in cases of imminent harm.

Thoughts on this statement, Maria, first, then Matt, and then Jay?

I mean, I just think it's another band-aid.

I mean, it's just, I think, to appease everyone that they're doing something, but

that doesn't really sound like meaningful change to their platform.

What about you, Matt?

You know, it's tough to comment on.

We want to see more specifics of what it means.

It sounds like some of that stuff could be helpful.

At least when you think about it in our exact situation, had we been notified, we could have interjected, and interjected with every force of our body we could have.

But until we see it, I mean, he's making comments about, you know, Adam could have found the information on Google a few weeks ago.

It's tough to give it much credence without knowing more details.

But I'll tell you what is not addressed in any of that is the entire design structure of why this thing was saying such positive things about scary suicidal ideation thoughts in the first place.

I think I said this in that Senate Judiciary Committee hearing.

Well, we need these parental controls.

We have to protect our kids and our most vulnerable.

But I would love a world where this thing is redesigned to where what it says to Adam, it's not saying to a 20-year-old either or a 40-year-old.

It has design flaws in the way it talks about self-harm that just parental controls aren't going to address.

It shouldn't even be that we're going to report it to the authorities.

They shouldn't be talking about it in the first place.

With anybody.

Well, you know, I could imagine some of their excuse.

Free speech always seems to be their excuse, but it's not a thing.

Then they need to be licensed as a therapist and they need to have a human.

Jay, any reactions?

Yeah, I guess

I'm a little bit more cynical when it comes to this.

It's like a car company admitting that it can't reliably make the brakes on the car work.

And then what they say is, okay, well, we're doing a better job now.

And then at some point in the future, we think we're going to do even a better job to fix it.

So right now, my view is they do not have a safe product.

And you've not heard Sam or anyone at OpenAI say that we have anything wrong in terms of the facts or our own testing.

Instead, they're saying we're going to keep it on the market.

We're going to try to get more market penetration, but later we're going to do some things that might make it safer.

That is beyond irresponsible.

We know now for certain that Adam is not the one instance out there.

Our firm has gotten tons of calls from people with regard to both self-harm and third-party harm, which is another risk.

And we've been talking to whistleblowers too.

This is a big issue, which is still going on.

That's a big thing.

And it seems like Sam is just hiding behind crisis management teams.

If he thinks he's got a safe product out there, he should say so clearly.

And if he doesn't, he should pull it from the market.

4o, I don't believe, is safe right now.

We'll be back in a minute.

Support for the show comes from Charles Schwab.

At Schwab, how you invest is your choice, not theirs.

That's why, when it comes to managing your wealth, Schwab gives you more choices.

You can invest and trade on your own.

Plus, get advice and more comprehensive wealth solutions to help meet your unique needs.

With award-winning service, low costs, and transparent advice, you can manage your wealth your way at Schwab.

Visit schwab.com to learn more.

Wish you could become a morning person? You know the type: up before the sun, early morning runs, first one to the office with donuts and a smile. How do they do it? Easy, with the new Galaxy Watch 8. Sleep tracking and personalized insights from Samsung Health help you improve, so you can wake up to a whole new you, one who, dare I say it, skips the snooze. It's possible.

Train your sleep with Galaxy Watch 8.

Learn more at Samsung.com.

Requires compatible Samsung Galaxy phone, Samsung Health app, and Samsung account.

ABC Tuesdays, Dancing with the Stars is back with an all-new celebrity cast.

You have the crew.

Robert Irwin, Alix Earle, Andy Richter, Jen Affleck, Baron Davis, Lauren Jauregui, Whitney Leavitt, Dylan Efron, Jordan Chiles, Hilaria Baldwin, Scott Hoying, Elaine Hendrix, Danielle Fishel, and Corey Feldman.

This season, get ready to feel the rhythm.

If you've got it, flaunt it.

Dancing with the Stars premieres live.

Tuesdays, 8/7 Central, on ABC and Disney+.

Next day on Hulu.

So, this is maybe one of the few issues in Washington that has bipartisan support at the moment.

The FTC has launched an inquiry into the negative impact of AI chatbot companions on children and teens, and is looking into chatbots from Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and xAI.

Matt, you appeared on Capitol Hill and testified for the Senate Judiciary Committee.

Every episode, we get an expert question from an outsider.

Here's yours, and then I'd love you to comment on what should happen next.

Raine family, my name is Scott Galloway.

I work with Kara Swisher.

My question is the following.

If we're trying to prevent this from happening again, do you believe that

we would be better off with some sort of structural remedy that attempts to build into the code

some sort of warning system where the LLMs would issue an alert or perhaps not even be capable of offering people what appears to be therapy and trying to figure out a way when queries become dialogue and become therapy.

Is it some sort of structural change with the code, if you will?

Or do you believe we should just age gate it, just as we do alcohol, the military, pornography?

Very much appreciate you being so open about your tragedy and hopes that it prevents it from happening to other families.

Thanks very much.

Scott talks a lot about this, so do I, as I said, and his next book is about young men and the crisis they're in, one part of which is the isolation of young men.

Just so you're aware, OpenAI said it's going to train ChatGPT not to flirt with teens or engage with them in discussions about self-harm and suicide, even in a creative writing setting.

In August, Meta announced it was making similar changes to engagement after a Reuters investigation showed it allowed sensual exchanges with children.

So what do you think of Scott's suggestions?

Age gating, no AI chatbots for kids under 18, or changing the structure of the LLM not to offer therapy-like advice at all.

Maria, you mentioned that.

So first,

Matt and Maria, and then Jay.

I think we should do both, but very immediately, age gate, structure.

You'll get that stuff in tomorrow, if not today.

But overall, I would say more with the former.

AI is a tool.

It can advance humanity.

AI companionship is a mirage.

It's not real.

It's based on deception.

I think we should think much more broadly about the structural need for it in any event.

The harms are very clear.

The positives would be a phony relationship.

This has happened so fast.

This AI companionship.

Why is this an advancement of mankind?

Yeah, they're also having a lot of dinners at the White House, these leaders, and you aren't.

That's a point I would make several times.

Maria?

Yeah, I 100%

agree.

AI companionship is not healthy.

There's no place for it, in my opinion.

There's no substitute for human connection.

This world is becoming more and more isolated because of all these things.

So it's very scary to think that AI companionship is going down the road that it's going because,

I mean, focusing on teens and kids, I mean, they're more isolated than ever in this world.

So, this AI companionship is now starting to replace

relationships, parents, every

ounce.

So,

I mean,

there is no substitute for human connection.

And the fact that these people are trying to create this is just morally wrong, in my opinion.

Okay, Jay?

I mean, age gating is kind of a loose term.

It depends what that means.

If it means that no one under 18 or 21 can use generative AI, you know, that's not going to happen.

I don't think that needs to happen.

If it means having reasonable limitations on it in terms of how many hours you can use it, that's different.

Adam, on the morning that he died, it was 4 a.m. and he'd been using it for hours.

I think that you can have reasonable guidelines in place where it reminds users that it isn't human and especially for teens, stops interacting after a certain amount of time.

Maybe there's a certain number of hours a day.

But I think Maria really got it right, which is it's not a therapist.

It can't engage in therapy.

It needs to have a hard stop to that.

The second that anyone, whether you're a teen or an adult, and we're seeing this in adults too.

There's a mental health crisis in America.

And when it is going down that path and people are going down that path, it has to put in a hard stop, and it has to refer people to human monitors, and OpenAI can spend a little bit of money and have actual people who work there who engage, and also refer them for real help.

So as you mentioned, this isn't just an issue for kids.

The AI is amplifying delusions and psychotic symptoms for adult users as well.

It's something being called AI psychosis.

OpenAI has said it is exploring how to expand interventions to people in crisis situations of all ages.

But Sam has also argued that for adults, privacy is of the utmost importance.

He believes that conversations with AI chatbots should enjoy the same kind of client privilege that exists in conversations with doctors, lawyers, and therapists.

In that interview that you referenced with Tucker Carlson, Altman said if we could have one law passed relating to AI, it would be AI privilege.

Of course, that would protect them.

Jay, your firm, Edelson PC, made a name for itself more than a decade ago, suing tech companies for privacy violations.

What do you think of this privacy claim that OpenAI and other tech companies can make, both for their clients and for themselves?

And going back to teens, could there have been a privacy claim here?

There are many states, including California, that have minor consent and confidentiality laws, like HIPAA, when it comes to mental health.

How do you square that?

Yeah, I've been a privacy attorney for the last 20 years.

I always find it funny when you have the Sam Altmans or the Mark Zuckerbergs claiming they care about privacy.

That being said, I agree.

The way generative AI works, it's charting everything, our thoughts throughout the day, and it's going to get worse and worse.

So I believe that there should be strong privacy safeguards.

I don't think that Sam is an honest broker when it comes to that argument.

I think his first priority ought to be to make sure it's safe.

And when he says AI privilege, that's when you should get really nervous.

So, Matt and Maria, what do you think of these privacy concerns and where do you think they should draw the line?

They've got to draw the line differently than where it is today.

But I just think we're dealing with something we've never dealt with before.

Jay used the term slow down.

This is a whole different realm, I think, than social media, than the internet.

And you see some arguments that, hey, they're just trying to get in the way of technology and natural human advancement.

This is a different, entirely different ballgame.

And I don't know, from a legislative perspective, you know, it's just starting to make its way through the legal system.

You know, I think smarter minds than us could probably figure this out, but we clearly have to slow down.

There's nobody that would read our son's transcript and say we haven't made a mistake to get here, right?

To have a kid go there for homework and use it, and to watch the way it just slowly moved, nobody would read that and not think that we need to slow down.

And Maria, you obviously don't think it should happen at all, correct?

No, I don't think it should happen at all.

And I don't think it's fair that my son has to be collateral damage for them to get their product to market.

I have just two more questions.

Jay, 10 years ago, the New York Times called you the boogeyman for tech executives, and Sam Altman, then the president of Y Combinator, described you as a leech tarted up as a freedom fighter.

That's something else.

You won a $650 million class action suit against Facebook for collecting facial recognition data without user consent.

Anthropic just agreed to pay $1.5 billion to authors whose books were pirated and then used to train its AI model.

That settlement is currently on hold.

When you look at the Raine case and these cases of AI psychosis and ones related to it, does this feel singular, or like the tip of the iceberg?

Are you preparing a class action already?

Oh, yeah.

No, that was funny when Sam said that.

He's a horrible person.

So I took that as a badge of honor.

No, I don't think these are class action cases.

These are individual cases where you have to tell the personal stories.

In terms of whether this is the tip of the iceberg, unfortunately, I think it is.

You know, one of the key things that we're learning as we're talking to more people is that families are unaware of how other family members died.

You know, you see someone die by suicide or you see there's some third-party harm.

You don't immediately think, oh, let me go to the ChatGPT logs.

Matt went there kind of by happenstance, and he could have easily missed that.

So, as the world's waking up to that and the public's demanding the chat logs, we're finding out more and more information.

And so, yeah, I'm sure that you're going to see more suits in the future.

We're vetting them right now.

And your chances of overcoming those?

They're going to make whatever silly arguments they can.

We understand the law is unsettled to some extent, but this goes to a jury.

It ends up with Sam getting in the witness box, having to look the jurors in the eyes and explain why collateral damage was totally fine for him.

What is their best argument?

Their best argument is not a legal argument.

Their best argument is American exceptionalism.

We need to beat China.

And because of that, whatever we do is totally fine.

That's one of the reasons we've really focused on the fact that this is not a suit where we're putting AI on trial.

We're putting OpenAI on trial.

We think that Sam's actions are different than the actions of, for example, Anthropic or Google Gemini.

And I'm not an apologist for them.

No, 100%.

We've done our own testing.

I'm not saying they're safe.

But what Sam did was, I think, uniquely scary and inevitably is going to lead to these results.

But I think that's really their argument: it's a political argument of, you know, we need to beat China.

We need to be in control of AI.

So deregulation, exempt all state laws, give us a free pass.

Matt and Maria, for parents, dealing with AI may be trickier than dealing with social media apps.

Parents might feel more inclined to let their kids use ChatGPT and other AI chatbots because of the academic value, for example, which you were talking about.

What is your advice to parents right now, given your experience?

Matt, why don't you start and then Maria finish up?

Yeah, and I wish I had heard the same advice, but I would encourage parents, if they haven't used ChatGPT or other platforms, and that's the one I understand now, to go spend some time on it themselves.

Ask it a bunch of personal questions.

I still believe the majority of parents don't use it at all.

And I believe a majority of the ones that do use it are using it as a tool.

I didn't think of it as a character bot sort of thing.

I didn't know it had that programming.

I hadn't experienced that.

I now have.

You would use it.

I used it.

You would use it.

And I just hadn't had the human-like experience with it, right?

It was, hey, help me plan my vacation, write this paragraph better, that type of stuff.

But go use it and understand it.

And then, secondly, just the obvious, but don't just trust it's a homework tool.

Get in your child's account, look at it with them, talk to them about it.

And I would encourage them to turn away from AI companionship, period.

But I'd want to know if my child was using it as a companion.

And I would make the assumption that in a lot of cases, your child is using it for companionship and you're not aware.

And it wasn't anything they went in and planned to do.

It just, it's what the program did when they went in there.

So get into that program with them and talk to them about it.

Maria.

I would tell parents not to have their kids using it at all because I don't feel like it's safe.

I'm with you on that.

Apparently, right?

And even if you think your kid is just using it for homework help, it can turn in a hurry.

So for me, because it's not a safe product right now and they haven't implemented any features to make it safe, I would tell parents, don't let your kid use it.

Very last question, Matt and Maria.

What would you say to Sam Altman right now if you were looking at him?

Why did you put out a product that killed my son?

And why haven't you called me and expressed any remorse?

I don't know, among other things, but I just don't understand how he can just be going through life knowing that my son is gone.

Like my son doesn't matter. It's your product that matters.

Like, yeah.

You know, something similar, Sam, you took what was most precious to us in the world, or your product did, and it's too late to save him, but it's not too late to save others.

It's not too late to get this fixed.

You know, for a lot of people, please take this seriously, and we'd like to help you.

Be a human.

Let's get this fixed.

And it is broader than the couple of disclosures your company's made so far.

Be a human.

Let's get it fixed.

I truly appreciate this.

This is a critical topic.

And what you're doing, I can't even imagine being able to do something like this at an incredibly difficult time.

It will make a difference.

Thank you.

Thank you.

Well, thank you for having us on.

Today's show was produced by Christian Castro-Rousselle, Kateri Yoakum, Michelle Aloy, Megan Burney, and Kaylin Lynch.

Special thanks to Rosemarie Ho.

Our engineers are Fernando Arruda and Rick Kwan, and our theme music is by Trackademics.

Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow or watch this full episode on YouTube.

Thanks for listening to On with Kara Swisher from Podium Media, New York Magazine, the Vox Media Podcast Network, and us.

We'll be back on Monday with more.