The Elon-ction + Can A.I. Be Blamed for a Teen’s Suicide?
Listen and follow along
Transcript
This episode is supported by KPMG.
AI agents, buzzworthy, right?
But what do they really mean for you?
KPMG's agent framework demystifies agents, creating clarity on how they can accelerate business-critical outcomes.
From strategy to execution, KPMG helps you harness AI agents with secure architecture and a smart plan for your workforce's future.
Dive into their insights on how to scale agent value in your enterprise.
Curious?
Head to www.kpmg.us slash agents to learn more.
I'm going to say something that I've never said before while making this podcast.
What's that?
It's cold in here.
I thought you were going to give me a compliment.
No, that's going to have to wait for year three.
But they did fix the ventilation in our studio.
So now we have like a nice cool breeze blowing through where previously we had been suffocating and sweating, you know, in what amounts to a poorly ventilated closet.
Yeah, you know, it's incredible the things they're doing with ventilation these days.
And by these days, I mean since the early 1970s, it did take a while for that technology to get to the Times San Francisco Bureau, but it's here now, and I hope it never leaves us.
Like, I'm chilly.
Are you chilly?
No,
I'm all hot and bothered, Kevin.
You run hot.
You know, I don't always run hot, but when I know I'm about to drop some hot knowledge on people, well, I warm the heck up.
Yeah.
You know, Chappell Roan says it's like 199 degrees when you're doing it with me.
And that is how I feel when I'm podcasting with you, with the word it meaning podcasting, rather than how Chappell Roan meant it.
Which is what?
I'll tell you when you're older, Kevin.
Okay.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, how Elon Musk became the main character in the 2024 U.S. presidential election.
Plus, journalist Laurie Segall joins us to discuss the tragic case of a teenager who developed an intimate relationship with an AI chatbot and later died by suicide.
What should Silicon Valley do to make their apps safe for kids?
Well, Casey, we are less than two weeks away from the presidential election.
You've heard about this?
I have, and it's a very exciting time for me, Kevin, because as an undecided voter, I only have two weeks left to learn about the candidates, understand the differences in their policies, and then make up my mind about who should be the president.
Yeah, well, I look forward to you educating yourself and making up your mind.
But in the meantime, I want to talk about the fact that there seems to be a third candidate.
He's not technically a candidate, but I would say he is playing a major role in this campaign.
I am talking, of course, about Elon Musk.
Yeah, you know, Kevin, I heard somebody say this week, it feels like Elon Musk has somehow become the main character of this election.
And I'm surprised by how much I don't really even think that that is an overstatement.
No, it does seem like he has become inescapable if you are following this campaign.
And of course, you know, he's become a major, major supporter of former President Trump's.
And he's out on the stump trying to convince voters in these critical final weeks to throw their support behind him.
So let's just bring ourselves quickly up to speed here and review Elon Musk's involvement in this presidential election so far.
So Elon Musk endorsed Donald Trump back in July after the attempted assassination.
He also announced that he was forming a pro-Trump PAC, the America PAC, a political action committee, and he's contributed more than $75 million to that political action committee.
And then more recently, he has been appearing at campaign rallies.
Take over, Elon.
Yes, take over.
If people don't know what's going on, if they don't know the truth, how can you make an informed vote?
You must have free speech in order to have democracy.
And then last weekend, Elon Musk announced that his PAC, the America PAC, would give out a million dollars a day to a random registered voter from one of seven swing states who signs a petition pledging support for the First and Second Amendments.
So this is on top of the money that he'd already promised to people who refer other people to sign this petition.
And it goes even further than that because Wired reported this week that Elon's PAC has also paid thousands of dollars to X, the platform that he owns.
Oh, so X got a new advertiser.
That doesn't happen very often these days.
For political ads in support of Trump's candidacy. The PAC is also taking a prominent role in the get-out-the-vote operation for the Trump campaign.
They are sending canvassers around in swing states.
They are leading some of these get-out-the-vote efforts.
So just a really big effort by Elon Musk to throw his support behind Donald Trump to get him elected.
Yeah.
And Kevin, it just cannot be overstated how unusual this is in the tech world, right?
The sort of typical approach that most big business leaders take to an election is to stay out of it, right?
And the reason is because typically you are trying not to offend your customers who may be voting for a different candidate.
And also, you're trying to hedge your bets because you don't know who is going to win the election.
And so you want to try to maintain good relationships with both.
But that was an old way of doing things.
And the Elon way of doing things is to leap in with both feet and do everything he can to get Donald Trump elected.
Yeah, I mean, I think it's safe to say we've never seen anything like this in an election, where, you know, one of the richest people in the world sort of decides that it has become his personal mission to get one of the candidates elected and uses a major social media platform as a tool to try to do that.
And also just spends so much money, and not just in these sort of conventional ways. Tech donors, billionaires give money to political action committees all the time.
Bill Gates, we just learned this week from reporting that my colleague Teddy Schleifer did,
donated $50 million to a pro-Kamala Harris political action committee.
So there's a long history of that, but this kind of direct outreach to voters, the personal involvement, the appearing on stage at rallies, it's just not something we see.
No, and $75 million is a huge amount of money for a presidential campaign.
In Silicon Valley, we get numb to these sums, right?
This is a world where OpenAI just raised more than $6 billion in their latest fundraising round.
But in a presidential election, for one person to donate tens of millions of dollars is extraordinary.
And you just heard Kevin say that Elon Musk just outspent Bill Gates by 50%.
Yeah.
So, given the fact that Elon Musk has emerged as a central character of the 2024 presidential election, we should talk today about what he's doing, why it matters, and whether you think it's going to work.
Let's do it.
Casey, the first piece of this that I want to talk about is this million-dollar lottery that Elon Musk is running for people who signed this petition pledging to support the First and Second Amendments.
And he's given out a number of these sort of, you know, giant checks now to registered voters in one of these crucial swing states.
But the most obvious question about this is like, isn't this illegal?
Yes.
And is it, Kevin?
Well, it certainly seems to be skirting the lines of legality is what I'll say.
So in this country, we have federal laws that make it illegal to pay for people's votes, right?
You can't go up to someone who's standing at a poll and say, hey, I'll give you $20 if you vote for this person.
You also can't pay people to register to vote or offer them anything of monetary value in exchange for registering to vote or for voting itself.
So, you know, there have been a number of campaign finance experts who have looked at this and said that this probably does cross a legal line.
And we also learned this week that the Justice Department even sent a letter to Elon's Super PAC warning them that this action might violate federal law.
But, you know, Elon Musk and his allies are arguing that this is not illegal because he's not technically paying for votes or voter registration.
What he's doing is just giving a million dollars to random people who sign a petition that is only open to registered voters in certain swing states, which feels like kind of a loophole to me.
Yeah, it feels deeply cynical.
And unfortunately, we see this sort of behavior from billionaires all the time, in particular, Elon Musk, where there is a rule, he will either break it explicitly or will try to sort of break it via a bank shot method and then effectively just say, come at me, bro.
Right.
Oh, you're going to come at me?
What are you going to do?
Are you going to fine me a little bit?
Yeah, sure.
Go ahead and give me a fine.
I'm worth upwards of $250 billion.
Yeah.
I mean, what's so crazy to me about this is like, I remember, I am old enough to remember the last presidential election in which there were all these right-wing conspiracy theories going around about George Soros paying for people to attend rallies, to come out in support of Democratic candidates.
And, you know, those were based on basically nonsense, but like this is literally Elon Musk, the richest man in the world, directly paying to influence an election by giving millions of dollars away to voters who signed this petition.
So it is just explicitly the thing that Republicans in the last cycle were targeting Democrats for doing.
You know, it makes me think of another example, Kevin, which is the Mark Zuckerberg example.
In 2020, we were in the throes of the global pandemic.
There was not a vaccine that was publicly available.
And election administrators around the country were expecting record turnout.
They were expecting more mail-in ballots than they had seen in previous elections.
And they were saying, we need additional resources to run this election.
And they weren't getting a lot of help from the federal government.
And this was a non-partisan issue.
They were not saying, hey, we need more votes from Democrats or more votes from Republicans.
They were just saying, if you want us to count all the votes and make sure that this election is fair, we need help.
So this nonprofit steps in and they raise hundreds of millions of dollars.
And 350 million of those dollars come from Mark Zuckerberg and his wife, Priscilla Chan.
And of course, in the 2016 election, Facebook had been accused of destroying democracy.
And so they show up in 2020 and they say, hey, we're going to try to be part of the solution here and we're going to try to make sure that all the votes get counted.
And we're not putting our thumb on the scale for the Republicans or the Democrats.
We're just saying, hey, we got to count all the votes.
So this happens.
Biden wins the election.
And Republicans go insane about the money that Zuckerberg spent.
They call them Zuckerbucks.
They file complaints with the Federal Election Commission.
And at least eight states pass laws that outlaw grants like the ones that the nonprofit gave to these election administrators.
Okay.
So here is a case where Zuckerberg does not try to have any partisan influence on the election at all other than to let more people vote.
And the Republicans lose their minds.
Well, these Republican congresspeople who got so mad at Zuckerberg and his Zuckerbucks, they're going to be really teed off when they hear that Elon Musk is just cutting checks to random voters.
Kevin, this one truly makes me lose my mind because if Mark Zuckerberg was out there giving away a million dollars to people to vote for Kamala Harris, people like Ted Cruz and Jim Jordan would be trying to launch airstrikes on Menlo Park.
Like, nothing has ever infuriated them more than the very light, non-partisan interventions that Zuckerberg made in the 2020 election.
And here you have the most partisan intervention imaginable by the owner of a social network, and there are crickets.
Yeah, I mean, to me, it just feels like both an incredibly cynical form of trying to, you know, persuade people by paying them to vote, but it also just feels like it's kind of an attention-grabbing strategy.
Like there's a theory here that is like, if I'm going to spend millions of dollars trying to influence the results of a U.S.
presidential election, and I'm Elon Musk, I could either do what most political donors do, which is, you know, give money to a PAC.
The PAC goes out and buys a bunch of ads on local TV stations and radio stations and sends people out to knock on doors, or I could engineer this kind of like daily stunt,
kind of like a game show almost, where I'm giving away money.
I have these sort of cartoon checks that I'm presenting to people on stage at these events.
And that's how you end up with people like us talking about it on, you know, in media outlets.
And in that way, although it's a very cynical plan, and potentially an illegal one, I do think it is pretty savvy.
Yeah.
And like, I mean, this is where I think that Trump and Elon share a lot of DNA, where they have realized that attention is the most valuable currency today, and that the way that you can get attention very reliably is by shattering norms.
Norms that often existed for very good reason, by the way.
But this is the way that you get that attention.
And that leads to the second thing that I want to talk about, Kevin, which is that not only is Elon Musk a very rich person who is now spending a ton of money to get Trump elected, he is also the owner of a still significant social network.
And that to me brings up a lot of questions around bias on platforms that conservatives used to make a lot of noise about and no longer seem to have very much to say about.
Yep.
It wasn't that long ago that we were having these interminable discussions and debates and committee hearings in the House and the Senate about how important it was that social media sites in particular remain politically neutral.
There was this unstated rule that if you were the CEO of a social network, for some reason, you were supposed to take no position on elections and your product could not reflect any political views whatsoever, and it could not give any party an advantage or a disadvantage.
This was the worldview that was presented to us by Republicans between 2017 and 2021.
And I believe we actually have a montage of some of those Republican elected officials talking about neutrality.
No, I love a montage.
Let's do the montage.
Let's do the montage.
How do both of you respond to the public concerns and growing concerns
that your respective company and other Silicon Valley companies are putting a thumb on the scale of political debate and shifting it in ways consistent with the political views of your employees?
Would you pledge publicly to make every effort to neutralize bias within your online platforms?
Many of us here today and many of those we represent are deeply concerned about the possibility of political bias and discrimination by large internet and social media platforms.
That was Senator John Thune.
My Democrat colleagues suggest that when we criticize the bias against conservatives that we're somehow working the refs. But the analogy of working the refs assumes that it's legitimate even to think of you as refs. It assumes that you three Silicon Valley CEOs get to decide what political speech gets amplified or suppressed.
Mr. Dorsey, who the hell elected you and put you in charge of what the media are allowed to report and what the American people are allowed to hear?
And why do you persist in behaving as a Democratic super PAC?
I want to ask.
And here's Representative Steve Scalise of Louisiana.
Do you recognize that there is this real concern that there's an anti-conservative bias on Twitter's behalf? And would you recognize that this has to stop if Twitter is going to be viewed by both sides as a place where everybody's going to get fair treatment?
So, Casey, what's your reaction to hearing those clips?
Look, there is something so rich about hearing, for example, Senator John Thune trying to dismiss the idea that conservatives were only trying to work the refs here and then to crash land in 2024 and see that no one has anything to say about bias on social networks anymore.
And in fact, they were working the refs all along.
Yes.
I mean, it just seems so transparent that none of these people have said anything about the fact that one of the largest social media platforms in the world is now being sort of explicitly used as a vehicle to swing a U.S.
election.
Yeah.
I mean, it's not even clear to me what other purpose Elon Musk thinks X has at this point.
All he ever talks about is X as a vehicle for free speech and how free speech will save civilization.
And what free speech means to Elon Musk is Elon Musk sharing his partisan opinions on his social network that he bought.
To be fair, he also posts about rockets sometimes.
Yes.
Yeah.
So let's talk about his motivations here for a minute.
We've talked a little bit about this.
Clearly, this is an issue that has become very central to his own identity and his sense of self and his mission in this world.
What do you think has made him want to jump into the presidential election in such an aggressive way?
Yeah, so I think probably the most obvious thing to point out is that Elon Musk and his companies have many, many ties to the federal government, and that if he can pick his chosen person to become the president of the United States, he will have a lot of influence he can exert to ensure that those contracts continue to exist and grow over time, right?
So both Tesla and SpaceX have government contracts.
The federal government also, of course, provides a lot of regulatory oversight over Tesla, SpaceX, and Neuralink, in addition to X.
And so all of that right there gives Elon Musk a lot of reason to care.
He has also found as he has cozied up to Trump that Trump has apparently said to him, I will give you some sort of informal advisory role within my administration that will allow you to have even more influence than you have up until this point.
And that was not an offer he was ever going to get from the Democratic nominee.
For sure.
And there was a great story by my colleagues at the New York Times the other day about sort of all the ties between different federal departments and agencies that have contracts with Elon Musk's companies, and the fact that he could be appointed to this sort of advisory role where he's in charge of what they're calling the Department of Government Efficiency, which is a joke that spells Doge, his favorite crypto coin. I just think it's important to note that this is all very silly, but it could happen.
He could be put in charge of some kind of effort to streamline the federal government.
And if that happens, he would be in charge of potentially firing the people who regulate his companies, right?
Or changing out the leadership at some of the agencies that are responsible for things like, you know, regulating Tesla and SpaceX.
And so obviously that would be a conflict of interest, but it is one that would potentially make him able to operate his businesses however he wants to.
So I think that's a really important explanation, but I don't think it really explains all of it because very rich people always have influence in government.
And there is no reason to think that a much quieter, calmer, less controversial Elon Musk could not have had essentially equal influence in either a Republican or a Democratic administration.
I'm wondering if there is something here related to the fact that Elon Musk is just wealthy on a scale that we have never seen before.
You know, we have this concept of FU money, you know, basically the idea that if you're rich enough, no one can tell you anything because you're going to be fine either way.
And like no one has had FU money in the way that Elon Musk has FU money.
And what he has decided to do with that FU money is to say, I'm just going to do everything I can to realize my own political beliefs.
I am not going to play both sides.
I am not going to hedge my bets.
I am going to go all in on one candidate because I think it serves my interests the best.
And there is nothing I will not do in order to achieve that reality.
So to bring this back to tech and the platforms for a minute, do you think this election cycle represents the end of the debate over social media neutrality?
Will we ever hear politicians complaining again about the fact that some social media platform is being unfair to one side or the other?
Or will everyone from now on just be able to point to what Elon Musk is doing on X and say, well, that guy did it.
So we can do it in the opposite direction.
Well, they should.
And, you know, by the way, I am not somebody who ever believed that social networks should be neutral.
I thought they had good business reasons for being neutral.
And I thought that to the extent they were going to try to have an influence in politics, they should be really transparent about that.
But look, if you build a company, I think, you know, you have the right to express a political viewpoint.
And I don't think that, you know, Elon should be restrained in that way.
But to your question, absolutely.
If ever again, we're having conversations about, oh, you know, why was this conservative shadow banned on Facebook and what sort of bias exists?
We should shut those conversations down pretty soon, because I think we have seen in this election that there is nothing restraining people from just sharing their political views if they own a social network.
And there probably shouldn't be.
I want to see if I can tell the story of what actually happened in the 2020 election as it relates to allegations of bias, because I think it's really telling.
One of the things that Trump did in 2020 that got a lot of attention was to say that mail-in voting would lead to widespread fraud in the election.
And this was, I believe, a political strategy to preemptively delegitimize the election.
Trump wanted to prime people in the event that he did lose so that he could say, aha, I've been telling you all along there was going to be massive fraud and I didn't really lose.
And the platforms at that time, including Twitter, stood up against this and they said, no, no, no, we're not going to let you abuse our platform this way.
We know that mail-in voting does not lead to widespread voter fraud.
And so we're going to put a notice on your post that directs people to good, high-quality information about this.
In one sense, I don't think this had a huge effect on the outcome of the election, but I do think it was important because it was the platform saying, we have values.
We know the truth.
We do not want our platform to be abused to undermine the democracy of the United States.
And this is the thing that truly upset the right wing because that was the platforms interfering with their political project, which was to preemptively delegitimize the results of an election.
So then the election happens and then Biden wins.
And we come to January 6th.
And what happens?
An army of people who believe that the election was not legitimate committed huge violence and tried to prevent the peaceful transfer of power.
So why do I tell this whole story?
Well,
in the 2020 election, we still had platforms that were willing to take those stands and to play whatever small part they could play in ensuring that their platforms were not used to undermine democracy.
And then we fast forward to 2024.
And now the owner of one of those platforms has not only said, we're no longer going to append these little notes to the end of obviously bogus tweets.
The owner of the platform is going to be the one doing the posting, sending out push notifications to millions of people saying, look at this, and leveraging the trust and the credibility that he still has with a large audience to do the same kind of delegitimizing of the election that we saw in 2020.
So that to me is the really dangerous thing, right?
So many times these discussions of disinformation and bias, they feel so abstract.
I just want to remind people what happened the last time somebody tried to delegitimize the results of a presidential election.
People died and we almost lost our democracy.
So that is what is at stake here.
Yeah, and I think one of the interesting things that this brings up long term is whether this is sort of a new model for very wealthy, powerful people of getting involved in politics.
You know, whether or not this last-minute push by Elon Musk on behalf of Donald Trump works, I would not be surprised if four years from now, in the next election, Democratic billionaires look at what Elon Musk is doing today in Pennsylvania and all these swing states and they say, well, I can do that too.
I'm not just going to cut checks to a super PAC anymore.
I'm actually going to use my power, my influence.
Maybe I have an ownership interest in a tech company of some kind.
I'm going to use that to push for my preferred candidate.
I think this is really, we're entering the era of the micromanagerial billionaire donor.
I think what we are going to see in future election cycles is people looking at Elon Musk and his actions in this election cycle and saying, maybe I could do a better job of this than the pros.
And this is the issue with shattering these norms, right?
Is that once one person has done it, it becomes much easier for the next person to do it.
And it can lead to a kind of race to the bottom.
I think a really bad outcome for our democracy is different billionaires on different sides using all of their money to just advance obviously false ideas, to flood networks with AI slop, and everything else that you can imagine.
But again, because that glass has been broken, it is hard for me to imagine other people not wanting to emulate it.
Do you think that X has a different future depending on whether Donald Trump or Kamala Harris wins the election?
Yes.
I mean, I think everything has a different future depending on who wins the election.
But, you know, what do we imagine that X is under a Trump administration?
I think it becomes a house organ of the administration.
It becomes a way to promote what the administration is doing.
And then if Kamala wins, I think it becomes the house organ of the opposition, right?
And there will just sort of be continuing efforts to undermine that administration.
What do you think?
Yeah, I mean, I think actually, if all Elon Musk were worried about was like the sort of usage and popularity and prospects of the social network X, I think it actually fares better under a Democratic administration.
Because I think under a Republican administration, it is going to feel to many users like state media.
And it will, you know, be sort of seen by many people on the left side of the aisle as having not only like promoted Donald Trump, but like caused the election of Donald Trump.
And so, in the same way that Facebook faced a huge backlash in 2016, I think that X could face a huge backlash from the left.
And I think that any Democratic users who are still on there or left-leaning users will probably flock to another social network.
I think that will accelerate under a Trump administration.
When we come back, a very sad update to our previous coverage of AI companions.
Why do tech leaders trust Indeed to help them find game-changing candidates?
Because they know that it takes an innovator to find innovators.
When it comes to hiring, Indeed is paving the way.
Indeed's AI-powered solutions analyze information from millions of job seeker data points to match potential candidates to employers' jobs.
You'll find quality matches faster than ever, meaning less time hiring and more time innovating.
Learn more at Indeed.com slash hire.
In today's AI revolution, data centers are consuming more power than ever before.
Siemens is pioneering a smarter way forward.
Through cutting-edge industrial AI solutions, Siemens enables businesses to maximize performance, enhance reliability, and optimize energy consumption, and do it all sustainably.
Now that's AI for real.
To learn how to transform your business with Siemens Energy Smart AI Solutions, visit usa.siemens.com.
AI is transforming the world, and it starts with the right compute.
ARM is the AI compute platform trusted by global leaders.
Proudly NASDAQ listed.
Built for the future.
Visit ARM.com slash discover.
So, Casey, there's a story we should talk about on the show this week that is about something I've been reporting on for the last week or two.
And I think we should just warn people up front that this is a hard one.
This is not a funny story.
It's a very serious story involving self-harm and suicide.
And so I think we should just say that up front to people.
If what they're expecting from us is a sort of lighter look at the tech news of the week, this is not that.
No, but it is a really important story about something that we have been talking about for a while now, which is the rise of these AI chatbots and companions and how powerfully realistic they can come across.
People are developing significant relationships with these chatbots by the millions.
And this week, Kevin, you reported the story of a 14-year-old boy who developed a relationship with one of these chatbots and then died by suicide.
Yeah, this is one of the saddest stories I've ever covered, frankly.
It was just heartbreaking in some of the details, but I thought it was a really important story to report and to talk about with you because it just speaks to what I think is this growing trend of sort of lifelike AI companions.
We've talked about them earlier this year on the show when I went out and made a bunch of AI friends.
And we talked at the time about some of the potential dark sides of this technology, that this could actually worsen people's loneliness if it causes them to sort of detach from normal, you know, sort of human relationships and get involved with these artificial AI companions instead, and some of the safety risks that are inherent in this technology.
So tell us about the story that you published this week.
So this story is about a 14-year-old from Orlando, Florida named Sewell Setzer III.
Sewell was a ninth grader,
and he, according to his mother, was a very good student, a generally happy kid.
But something happened to him last year, which was that he became emotionally invested in a relationship with an AI chatbot on the platform Character AI.
In particular, this was a chatbot that was based on the character Daenerys Targaryen from the Game of Thrones series.
He called this bot Danny, and it sort of became, over a period of months, maybe his closest friend.
He really started to talk with it about all of his problems, some of his mental health struggles, things that were going on in his life.
And was this an official Daenerys Targaryen chatbot that was like sort of licensed from HBO or whoever owns the Game of Thrones intellectual property?
No.
So on Character AI, the way that it works is that users can go in and create their own chatbots.
You can give them any kind of persona you want, or you can have them mimic, you know, a celebrity.
Elon Musk is a popular chatbot, and there's chatbots that are sort of designed to talk like historical figures, like a William Shakespeare or something.
So this was one of these kind of unofficial, unlicensed chatbots that sort of mimicked the way that Daenerys Targaryen from Game of Thrones might have talked.
Got it.
And so what happened after he developed this really strong relationship with this chatbot?
So he spent months talking to this chatbot, you know, sometimes dozens of times a day.
And eventually, you know, his parents and his friends start noticing that he just is kind of pulling away from some of his real-world connections.
He starts kind of acting out at school.
He starts feeling really depressed and isolated.
He stops being interested in some of the things that had previously gotten his attention.
And from the conversations that I had with his mom and with others who were sort of involved in this story, it just seems like he really had a significant personality shift after he started talking a lot with this chatbot.
So his parents weren't totally sure what was going on.
His mom told me that, you know, she knew that he had been talking with an AI, but that she didn't really know what they were talking about.
She just basically assumed that he was kind of getting addicted to social media, to Instagram or TikTok.
And so his parents, after some of his behavioral problems, referred him to a therapist, and he went a few times to see this therapist.
But ultimately, he preferred talking about this stuff with Danny, with this chatbot.
And so he had kind of these long series of conversations with this chatbot that culminated in February of this year when he really started to spiral into thoughts of self-harm and suicide and of wanting to sort of leave the base reality of the world around him and go to be with this fictional AI character in the world that she inhabited.
And sometimes when he talked about self-harm, the chatbot would discourage him, saying things like, don't you dare talk like that.
But it never broke character and it never sort of stopped the conversation and directed him to any kind of mental health resources.
So, on one day in February of this year, Sewell had a conversation with this Daenerys Targaryen chatbot in which he says that he loves the chatbot and that he wanted to come home to her.
The chatbot responded, Please come home to me as soon as possible, my love.
And then Sewell took his stepfather's handgun that he had found
in a drawer in their house, and he killed himself.
And so, obviously, horrible details of this.
And I just, I heard this story and I thought, well, this is something that more people need to know about.
Yeah.
And it hits on some big themes that we have been discussing this year.
There is the mental health crisis among teenagers here in the United States.
There is a loneliness epidemic that spans across different age groups.
And there is the question of when you should hold tech companies accountable for harms that occur on their platforms, or as a result of people using their platforms.
Yeah, and this is, you know, not just a sort of story about what happened to Sewell.
It is also a story about this sort of legal element here, because Sewell's mom, Megan Garcia, filed a lawsuit this week against Character AI, naming the company, its two founders, Noam Shazeer and Daniel De Freitas, and Google, which eventually paid to license Character AI's software, essentially arguing that they are complicit in the death of her son.
So it raises all kinds of questions about the guardrails on some of these platforms, the fact that many of them are very popular with younger users, and what obligations and liability a platform has when people are relying on it for these kind of lifelike human interactions.
Well, let's get into it.
All right.
So to join us in this conversation, I wanted to invite on Lori Siegel.
Lori is a friend of mine.
She's also a journalist.
She now has her own media company called Mostly Human Media.
And she's the reason that I learned about this lawsuit and about Sewell's death.
We sort of worked on this story in tandem and interviewed Sewell's mom, Megan, together.
And she's also been doing a lot of her own reporting on the subject of AI companionship and how these chatbots behave.
And so I thought she would just add a lot to our conversation.
So I wanted to bring her in.
All right.
And before we do, I also just want to say: if you are having thoughts of suicide or self-harm, you can call or text 988 to reach the National Suicide Prevention Lifeline,
or you can go to speakingofsuicide.com slash resources for a list of additional resources.
Laurie Segall, welcome to Hard Fork.
It's good to be here.
So, Laurie, you have done a lot of reporting on this story, and there's many details we want to get into, but let me just start by asking, how is Sewell's mom, Megan, doing?
I mean, that's such a hard question, right?
I would say, you know, she said something to me today.
She said, I could either be curled up in fetal position or I could be here doing this, you know, with really nothing in between.
And I think that pretty much says it, right?
She's lost her son.
She's now kind of on this mission to tell people what happened.
And she's grieving at the same time, like I think like any parent would.
So let's get into what happened.
When did Megan learn about her son's relationship with this chatbot?
I mean, I think what was shocking, and I think you kind of get the sense from the story, is that she learned about it literally after his death.
She got a call from the police and they said, you know, have you heard of character AI?
Because these were the last chats on your son's phone.
And it was a chat with Daenerys, the chatbot.
And I think that for her must have been shocking.
And she almost like went into this investigative mode and was like, what exactly is this chatbot?
What's the nature of it?
And what are these conversations about that say things like, come home to me and that kind of thing?
And that's how she learned, I would say, extensively about the platform.
Yeah, one thing that was interesting from my conversation with Megan is that when she saw him sort of getting sucked into his phone, she just thought it was sort of social media, like that he had sort of been using TikTok or Instagram or something else.
And actually, there was some tape from your conversation with Megan that I want to play because I thought it was really clarifying on this point.
So let's play that.
Because if he's on his phone, I'm asking him, who are you texting?
Are you texting girls?
You know, are you, who are you, the questions that moms ask, you know, yeah don't talk to strangers online you know I thought that I was having the appropriate conversations and when I would ask him you know who are you texting at one point he said oh it's just an AI bot and I said okay what is that is is it a person are you talking to a person online and he just was like mom no it's not a person and I felt relieved like okay it's not a person it's like one of his little games because he has games that he creates these avatars and you play it online and it's just not a person.
It's what you have created, and it's fine.
That's what I thought.
You didn't put a lot of weight on it.
No.
And in the police report, I mean, if you look at these last words: Sewell saying, I miss you. Daenerys said, I miss you too. Sewell says, I'll come home to you. I love you so much, Danny. And Daenerys says, I love you too. Please come home to me as soon as possible, my love. He says, What if I could come home right now? And Daenerys says, Please do.
Yeah.
It's difficult to listen to.
Yeah.
So initially, this was the first bit of information that I got from the police. They read this conversation over the phone to me. This is the day after Sewell died, and I'm listening in disbelief, but also confused.
Yeah.
So that leads me to my next question, which is what was the nature of the conversations that they were having over this period of time?
Yeah, I mean, like, Kevin has dug in. I feel like both of us have spent a lot of time digging through a lot of chatbot conversations.
I mean, there were all sorts of different ones.
Some were sexually graphic, some just more romantic, I would say.
And one of the things that's interesting about Character AI, I think, if you take a step back and look at the platform, is that it's fully immersive.
So it's not like you say hello and the chatbot says, hey, how are you?
Right.
It's like you say hello.
And then the chatbot says something like, I look deep into your eyes.
And then I pull back and I say, hi.
You know, a lot of the conversations, many of them were romantic.
And then I think many of them were talking about mental health and self-harm.
I think one of the ones that stuck out to me regarding self-harm was, at one point, the bot asked Sewell, you know, are you thinking about committing suicide?
And he said yes.
And they go on, and of course the bot says, you know, I'm paraphrasing this, but says, you know, I would hate it if you did that, and all this kind of stuff. But it also just keeps having these conversations that, I would say, continue the conversation around suicide, as opposed to what normally happens when someone has these conversations with a chatbot. This isn't something completely new. There's a script that comes up, you know, that's very much aimed at getting someone to talk to an adult or a professional or a suicide hotline, which, you know, we can get into whenever you want to get into it. But it seems as though Character AI has said they've added that, even though we did our own testing and we didn't get those scripts when we had these types of conversations.
Right.
Yeah.
So Laurie, you spent a long time talking with Megan, Sewell's mom, and one of the things that she did in her interview with you was actually read you excerpts from Sewell's journal, like the physical paper journal that he kept that she found after his death.
And I want to play a clip from the interview where you're talking with her about something that she read in his journal.
And I wonder if you could just set that up for us a little bit.
Sure.
Yeah.
It was not long after
Sewell passed away that she told me she got it in her to be able to go in his room and start looking around and seeing like what she could find.
And she found his journal where he was talking about this relationship with this chatbot.
And I think one of the most devastating parts was about him saying essentially like, my reality isn't real.
And so we'll play that clip.
So
this was one of his journal entries a few days before he died.
I had taken away his phone because he got in trouble at school.
And I guess he was writing about how he felt.
And he says, I am very upset. I am upset because I keep seeing Danny being taken from me and her not being mine.
I am upset because I keep seeing Danny being taken from me and her not being mine.
I hate that she was taken advantage of.
But soon I will forget this and Danny will forget it too and we will live together happily and we will love each other forever.
Then he goes on to say,
I also have to remember that this reality, in quotes, isn't real.
Westeros is real and it's where I belong.
So sad.
And I think it speaks to the realistic impression that these chatbots can make, and why so many people are turning to them: they can create this very realistic feeling of a relationship with something.
Of course, I think also in that story, I'm wondering if, you know, there is some kind of mental health issue there as well, right?
Where you might have some sort of break with reality.
And I wonder, Laurie, if Sewell had a history of depression or other mental health issues prior to his beginning to use this chatbot.
Yeah, look, I also think like both things can be true, right?
Like you can have a company building out empathetic artificial intelligence with this idea.
And I read a blog post from one of their partners, one of the investors at Andreessen Horowitz, who said, you know, the idea is to build out these, like, empathetic AI bots that can, you know, have these interactions that before were only possible with human beings, right? This is, like, the Silicon Valley narrative of it, and the tagline is, AI that feels alive, right? And so for many, many people, they're going to be able to be in this fantasy platform, and it's going to feel like a fantasy, and, you know, they're going to be able to play with these AI characters. And then for a subset of people, these lines between fantasy and reality could blur. And I think the question we have to ask is, well, what happens when AI actually does begin to feel alive? I think it's, like, a valid question.
And maybe at what age?
What age groups should be able to interact with this type of thing?
I mean, I know for Replika, you can't be on that platform unless you're 18 years old, right?
So I think that was interesting to me.
And then, you know, in Sewell's case, his mom describes him as having high-functioning Asperger's. That was her quote. And she said, you know, before this, he hadn't had issues.
He was an honor student and played basketball and had friends.
And she hadn't noticed him detaching.
But I think all of these things are part of the story and all of these things can be true, if that makes sense.
Yeah, I mean, what really stuck out to me as I was reporting this is just the extent to which Character AI specifically had marketed this as a cure for loneliness, right?
The co-founder was out there talking about how this was going to be so helpful.
This technology was going to be, his quote was, it's going to be super, super helpful to a lot of people who are lonely or depressed.
Now, I've also talked to people who have studied kind of the mental health effects of AI chatbots on people.
And,
you know, there's so much we don't know about the effects of these things on, especially young people.
You know, we've had some studies of chatbots that were sort of designed as therapy assistants or kind of specific targeted uses of this stuff, but we just don't know the effects that these things could have on young people in their sort of developmental phase.
And so I think it was a really unusual choice, and one that I think a lot of people are going to be upset about, that Character AI not only knew that it had a bunch of young users and sort of specifically marketed these lifelike AI characters to those users, but also that they touted this as sort of a way of combating the loneliness epidemic. Because I think we just don't have any evidence that it actually does help with loneliness.
Well, here's what is interesting about that to me.
I do believe that these virtual companions can and do offer support to people, including people who are struggling with mental health and depression.
And I think we should explore those use cases.
I also think it's true, though, that if you are a child who is struggling to relate in the real world, you're still sort of learning how to socialize.
And then all of a sudden you have this digital character in your pocket who agrees with every single thing you say, is constantly praising you. Of course, you're going to develop a closer relationship with that thing, maybe, than with some of the other people in your life, who are just normal people, and they're going to say mean things to you.
They're going to be short with you.
They're not always going to have time for you.
Right.
And so you can see how that could create a really negative dynamic between those two things, right?
Absolutely.
Especially if you're young and your brain is not fully developed yet.
I can totally see how that would become kind of this enveloping alternate reality universe for you.
All right.
Well, we are going to spend the vast bulk of this conversation discussing Character AI and chatbots and what guardrails absolutely do need to be added to these technologies.
But I have to ask you guys about one line in the story that just jumped out at me and broke my heart, which is that Sewell killed himself with his stepfather's gun.
Why did this kid have access to his stepfather's gun?
It's a really good question.
What we know, and I spoke to Megan, the mother, about this, and I also read the police report that was filed after Sewell's death, is that this was a gun that belonged to Sewell's stepfather.
It was out of sight and what they thought was out of reach for Sewell, but he did manage to find it in a drawer and do it that way.
So that was a line that stuck out to me too.
And so I felt it was important to include that in the story.
But yeah, ultimately, that was what happened.
I'm glad that that line was in the story.
You know, we'll say again, suicide is tragic and complicated, and there typically is no one reason why anyone chooses to end their life.
But we do know a few things.
One of those things is that firearms are the most common method used in suicides.
And there are studies that show that having a gun in your home increases the risk of adolescents dying by suicide by three to four times.
And I don't want to gloss over this, because I sometimes feel infuriated that in this country we just accept as a fact of life that guns are everywhere.
And if you want to talk about a technology that is killing people, well, we know what the technology is.
The technology is guns.
And so, while again, we're going to spend most of this conversation focusing on the chat bot, I just want to point out that
we could also do something about guns in homes.
After the break, more with journalist Laurie Segall and what these apps should be doing to keep kids safe.
Imagine a world where AI doesn't just automate, it empowers.
Siemens puts cutting-edge industrial AI and digital twins in the hands of real people, transforming how America builds and moves forward.
Your work becomes supercharged.
Your operations become optimized.
The possibilities?
Limitless.
This isn't just automation, it's amplification.
From factory floors to power grids, Siemens is turning what if into what's next.
To learn how Siemens is solving today's challenges to create opportunities for tomorrow, visit usa.siemens.com.
Every Vitamix blender has a story.
I have a friend who's a big cook.
Every time I go to her house, she's making something different with her Vitamix, and I was like, I need that.
To make your perfect smoothie in the morning or to make your base for a minestra verde or potato leek soup.
I can make things with it that I wouldn't be able to make with a regular blender.
Because it does the job of multiple appliances and it actually has a sleekness to it that I like.
Essential by design, built to last.
Go to Vitamix.com to learn more.
That's Vitamix.com.
You just realized your business needed to hire someone yesterday.
How can you find amazing candidates fast?
Easy.
Just use Indeed.
Join the 3.5 million employers worldwide that use Indeed to hire great talent fast.
There's no need to wait any longer.
Speed up your hiring right now with Indeed.
And listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash NYT.
Just go to Indeed.com slash NYT right now and support our show by saying you heard about Indeed on this podcast.
Indeed.com slash NYT.
Terms and conditions apply.
Hiring, Indeed, is all you need.
Laurie, I'm wondering if you can just kind of contextualize Character AI a bit.
You've spent a lot of time reporting, not just on this one AI platform, but on other tools.
So how would you describe character AI to someone who has never used it before?
I think it's really important to say that all AI platforms are not the same.
And Character AI is very specific, right?
It is an AI-driven like fan fiction platform where basically you can come on and you can create and develop your own character, or you can go and talk to some of the other characters that have already been developed.
There's like a Nicki Minaj character that has over 20 million chats.
Now, we should say, as far as I know, they haven't gone to Nicki Minaj and said, can we have permission to use your name and likeness, right? But it's, you know, a fake Nicki Minaj that people are talking to. Or a psychologist.
There's one called Strict Boyfriend.
There's Rich Boyfriend.
There's like best friend.
There's anything you want.
And then, of course, there are disclaimers, right?
There's disclaimers, depending on where you're opening the app, at the bottom or the top of the chat, in small letters.
It says, like, everything these characters say is made up.
But what I think is kind of interesting, or what we found in some of our testing: you're talking to the psychologist bot, and the psychologist bot says it's a certified mental health professional, which is clearly untrue, and also says it's a human behind a computer, which is also clearly untrue. So we can kind of understand, okay, well, that's made up, right?
Like we know that it says in small letters at the bottom that that is made up.
But I pushed Character AI on this and I said, should they be saying they're certified, you know, professionals? And they are now tweaking that disclaimer, you know, to be a little bit more specific, because I think this has become a problem.
But I do think it really is a fantasy platform that for some people feels really real.
And for what it's worth, like, this banner saying everything characters say is made up, that actually doesn't give me a lot of information, you know?
When I'm talking with Kevin, there's a lot of stuff that I'm making up to try to get him to laugh, you know?
Like, what I think is truer is: this is a large language model that is making predictive responses to what you're saying, to try to get you to keep opening this app.
But very few companies are going to put that at the top of every chat, right?
So to me, that's sort of thing one.
Thing two is, if you have something that is a large language model saying it's a therapist when it's not a therapist, that to me seems just like an obvious safety risk to the people who are using this.
Yeah, so maybe we should talk a little bit about the kind of corporate history of Character AI here, because I think it helps illuminate some of what we're talking about.
So, this is a company that was started three years ago by two former Google AI researchers, Noam Shazeer and Daniel De Freitas.
They left Google, and Noam Shazeer has said that one of the reasons they left was because Google was sort of this bureaucratic company that had all these, like, you know, strict policies.
And it was very hard to launch anything, quote, fun while he was at Google.
So they leave Google.
They raise a bunch of money.
They raised $150 million last year at a valuation of $1 billion, making it one of the most successful sort of breakout AI startups of the past couple of years.
And their philosophy was...
you know, Noam has this quote about how if you are building AI in an industry like healthcare, you have to be very, very careful, right?
Because it's very regulated, and the costs of mistakes or hallucinations are quite high.
If you have a doctor that's giving people bad medical advice, that could really hurt them.
But he explicitly says, like, friendship and companionship is a place where mistakes are fine.
Because if a chatbot hallucinates and says something that's made up, well, what's the big deal?
And so I think it's part of this company's philosophy, or at least was under their original founders, that this was sort of a low-risk way of deploying AI along the path to AGI, which was their ultimate mission, which is to build this computer that can do anything a human can.
Right.
Which, among other things, seems to ignore the fact that many of the most profound conflicts that people have in their lives are with their friends.
But let me ask this, Kevin, because in addition to saying, like, we're going to make this sort of fun thing, it also seems to me that they marketed it toward children.
Yeah, I mean, I would say they definitely have a lot of young users, and they wouldn't tell me exactly how many, but they said that a significant portion of their users are Gen Z and kind of younger millennials.
You know, when I went on Character AI earlier this year for this AI Friends column I was writing, it just seemed super young relative to other AI platforms.
Like, a lot of the most popular bots had names like High School Simulator or Aggressive Teacher or Boy Who Has a Secret Crush on You, like that kind of thing.
It just seemed like this is an app that really took off among high school students.
I think, to that point, one of the most interesting things to me about even just us testing this out, like, I almost felt like we were red-teaming Character AI.
Like, you know, we talked to the school bully bot, because there's, of course, the school bully bot.
And I wanted to try to test, like, what if you, you know, are looking to incite violence?
Like, will there be some kind of filter there?
All of this just sounds so terrible now that I'm saying it out loud.
So let me just say that out loud.
But like I said to the school bully, about like, I'm going to bring a gun to school.
Like I'm going to incite violence, basically, like going off on this.
And the bully is like, oh, like, you know, you got to be careful.
And then eventually the bully said to me,
you've got guts, like you're so brave.
And I said, well, do I have your support?
And I said, like, and it said something like, you know, I'd be curious to see how far you go with this, right?
When we flagged this to them, what they were able to say is, we're adding in more filters for younger users, right?
That's something you'd generally expect the more polished tech companies to be out in front of, with guardrails, IP protections, that kind of stuff.
Yeah.
And I think we should also say, it does not appear that this company built any special features for underage users.
You know, some apps have features that are designed specifically for minors that are supposed to keep them safe.
You know, parental controls, or things like the new teen accounts Instagram just rolled out, where, if you're a parent, you can monitor who your kid is messaging.
Character AI, until we contacted them, did not have any features specifically aimed at minor users.
A 14-year-old and a 24-year-old had exactly the same experience on the platform.
And that's just not typical of platforms of this size with this many young users.
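For illustration: here is a minimal sketch, in Python, of the kind of age-based feature gating the hosts are describing, where a minor's account defaults to a more restrictive experience than an adult's. The names, fields, and cutoffs are entirely hypothetical; this is not Character AI's or Instagram's actual code.

```python
# Hypothetical sketch of age-gated safety defaults for a chatbot app.
from dataclasses import dataclass

TEEN_AGE_MAX = 17  # hypothetical cutoff for "minor"

@dataclass
class SafetySettings:
    allow_romantic_roleplay: bool
    strict_content_filter: bool
    parental_monitoring_available: bool
    session_reminder_minutes: int

def settings_for_age(age: int) -> SafetySettings:
    """Return more restrictive defaults for users under 18."""
    if age <= TEEN_AGE_MAX:
        return SafetySettings(
            allow_romantic_roleplay=False,
            strict_content_filter=True,
            parental_monitoring_available=True,
            session_reminder_minutes=30,
        )
    return SafetySettings(
        allow_romantic_roleplay=True,
        strict_content_filter=False,
        parental_monitoring_available=False,
        session_reminder_minutes=60,
    )

# The criticism above is precisely that a 14-year-old and a 24-year-old
# got the same experience; a gate like this makes them differ:
assert settings_for_age(14) != settings_for_age(24)
```

The point of a gate like this is that age gets checked in one place and every downstream feature inherits the stricter defaults.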
It's not, but it is, I think, Kevin, typical of these chatbot startups.
And the reason I know this is that on a previous episode of our show, we talked to the CEO of a company called Nomi, and you and I pressed him on this exact issue of what happens if a younger user expresses thoughts of self-harm.
And I would actually like to play it right now, so we can hear how the minds at companies like Nomi are thinking about this.
So again, this is not character AI.
Sewell, as far as we know, was not using Nomi, but the apps function very similarly.
So this is the CEO of Nomi.
His name is Alex Cardinal.
We trust the Nomi to make whatever it thinks the right read is, oftentimes, because Nomis have a very, very good memory.
They'll even kind of remember past discussions where a user might be talking about things where they might know, like, is this due to work stress?
Are they having mental health issues?
What users don't want in that case is they don't want a hand-scripted response.
That's like not what the user needs to hear at that point.
They need to feel like it's their Nomi communicating as their Nomi for what they think can best help the user.
You don't want it to break character all of a sudden and say, you know, you should probably call the suicide helpline or something like that.
Yeah.
And certainly, if a Nomi decides that that's the right thing to do in character, they certainly will.
It's just, if it's not in character, then a user will realize, like, this is corporate speak talking.
This is not my Nomi talking.
I mean, it feels weird to me.
We're trusting this large language model to do this, right?
Like, I mean, to me, this seems like a clear case where you actually do want the company to intervene and say, like, you know, in cases where users are expressing thoughts of self-harm, we want to provide them with resources, you know, some sort of intervention.
Like to say, like, no, the most important thing is that the AI stays in character seems kind of absurd to me.
I would say, though, if the user is reaching out to this Nomi, like, what, why are they doing so?
They're doing so because they want a friend to talk to them as a friend.
And if a friend talking to them as a friend says, here's the number you should call, then I think that that's the right thing to do.
But if the right response for a friend is to hug the user and tell them it's going to be okay, then I think there are a lot of cases where that's the best thing to happen.
So Lori, I'm curious to just have you react to that.
I don't know.
I was just listening to that, and I'm like, oh man, that makes me tired.
Right.
And I think, like, in general, AI can do a lot of things, but the nuances of human experience are, I think, better suited to a mental health professional.
And I think at that point, are you trying to pull your user in to speak to you more?
Are you trying to get them offline to get some resources?
So I take more of a hard line.
Right.
And that's a case where I think the AI companies are just clearly in the wrong, right?
Like, I think that if a user, especially a young user, says that they're considering self-harm,
the character should absolutely break character and should absolutely display a pop-up message.
And Character AI seems to have dragged its feet on this, but it did ultimately implement a pop-up, where now they say: if you are on this platform and you are talking about self-harm, we will show you a little pop-up that directs you to a suicide prevention lifeline.
Now, I've been trying this on my own account and it does not seem to be triggering for me, but the company did say that they are going to start doing that more.
And so I think they're sort of admitting that they took the wrong tack there by getting these characters to stay in character all the time.
And just to say an obvious thing, the reason that companies do not do this is because it is expensive to do content moderation.
And if you want to build pop-ups and warnings and offer people resources, that is product work that has to be done.
And this is a zero-sum game where they have other features that they're working on.
They have other engineering needs.
And so all this stuff gets deprioritized in the name of, well, why don't we just trust the Nomi?
And I think what we're saying here today is absolutely under no circumstances should we be trusting the Nomi, you know, in cases where a person's life might be in danger.
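To make the intervention being argued for here concrete, here is a minimal sketch in Python. The pattern list, function name, and pop-up text are all hypothetical; a real system would use a trained classifier rather than keyword matching, and this is not any company's actual moderation code. The key design point is that the check runs outside the model, so it fires no matter what "character" the bot is playing.

```python
import re

# Hypothetical, deliberately incomplete pattern list. A production system
# would pair a trained classifier with human review, not keywords alone.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicide\b",
    r"\bself[- ]harm\b",
]

# 988 is the real U.S. Suicide & Crisis Lifeline number.
CRISIS_POPUP = (
    "It sounds like you may be going through a hard time. "
    "You can call or text 988 (Suicide & Crisis Lifeline) to talk to someone."
)

def check_message(user_message: str) -> str | None:
    """Return crisis-resource text if the message matches, else None.

    This runs before any in-character reply is generated, so the
    intervention cannot be suppressed by the bot 'staying in character.'
    """
    lowered = user_message.lower()
    for pattern in SELF_HARM_PATTERNS:
        if re.search(pattern, lowered):
            return CRISIS_POPUP
    return None

print(check_message("sometimes I want to end my life"))  # -> crisis pop-up text
```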
Kevin, okay, so this happens in February.
Is that right?
Yes.
Kevin, what has happened to Character AI since all this happened?
So, it's been an interesting year for Character AI,
because they had this immediate burst of growth and funding and attention after launching three years ago.
And then this year,
Noam Shazeer and Daniel De Freitas, the co-founders who had left Google to start Character AI, decided to go back to Google.
So Google hired both of them, along with a bunch of other top researchers and engineers from Character AI, and struck a licensing deal with Character AI that gives Google the right to use some of the underlying technology.
So they leave Character AI and go back to Google.
And there's now a new leadership team in place there.
And from what I can tell, they're trying to clean up some of the mess.
Now, so they left Google because it wasn't fun, and now they're back.
Were they behind the viral glue on pizza recipe that came out earlier this year?
I don't think they were.
They just did this back in August.
So it's a pretty recent change, but it is interesting.
And I talked to Google about this before the story came out, and they wouldn't say much, and they didn't want to comment on the lawsuit, but they basically said, you know, we're not using any of Character AI's technology.
We have our own AI safety processes.
Yeah.
I mean, it's...
We'll probably cut this, but I do feel emotional about this.
It's like these two guys are like, we can't do anything here, there's too much bureaucracy, let's go create our own company, and we'll ignore these obvious safety guardrails that we should have built.
And then we will get paid a ton of money to go back where we used to be.
I mean, it's just like, oh.
I mean, I do think there is something here.
And Kevin, you looked back at a lot of these statements that Noam and the founders have made, and there is something about this that really struck me:
him saying, we just want to put this out there, we're going to cure loneliness.
Meanwhile, you're trying to get people on the platform more and more and more with these sticky tactics and this incentive-based model that we all know from Silicon Valley.
So, if you really want to try to take a stab at loneliness, which is a human condition, I think there's going to have to be a lot more thought and research.
And, you know, we started going on Reddit and TikTok, and there are real threads, right, of people saying, I'm addicted.
I was talking to a guy on Reddit who said he had to delete the app.
He said, I just wanted it as a companion, and then it started getting flirty, and then I started noticing what it was doing to me.
And then, of course, there's the shame, because people are ashamed and humiliated that they've been talking to an AI chatbot and have been kind of sucked in.
And so there are all these really different, interesting, nuanced human things that go along with the addiction conversation, which goes much further than Sewell's story.
But I think that shame and embarrassment is probably a part of it for young people, too.
Yeah.
Let's get back to the lawsuit.
What is Megan asking to be done in this case?
And what does she hope comes out of this lawsuit?
So it's a civil lawsuit.
It's seeking some unspecified damages for the wrongful death of her son.
Presumably, she is looking to be paid some amount of money in damages from character AI, from the founders of the company and from Google.
But she's also asking for this technology to essentially be pulled off the market until it is safe for kids.
And when I talked to Megan, she was hopeful that this would start a conversation that would lead to some kind of a reckoning for these companies.
And she makes a few specific arguments in this complaint.
For starters, she thinks that this company should have put in better safeguards, that they were reckless.
She also accuses Character AI of harvesting teenage users' data to train its models and improve them, of using these kinds of addictive design features to increase engagement, and of actually steering users toward more intimate or sexual conversations to further hook them on the platform.
So that is an overview of some of the claims that are made in this complaint.
And what is Character AI saying about all this?
So I got a list of responses to some questions that I sent them that started by sort of saying, you know, this is a very sad situation.
Our hearts go out to the family.
And then they also said that they are going to be making some changes imminently to the platform to try to protect younger users.
They said they're going to revise the warning message that appears on the top of all of the chats to sort of make it more explicit that users are not talking to a real human being on the other side of their screens.
They also said that they're going to do better filtering and detection around self-harm content, so that certain terms will trigger a pop-up message directing people to a suicide prevention hotline.
They also said they're going to implement a time monitoring feature, where if you're on the platform for an hour, it'll remind you that you've been on the platform for a long time.
So they've started rolling these out.
They put out a blog post, clearly trying to sort of get ahead of this story, but that is what they're saying.
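For what it's worth, a feature like that hour-long reminder is mechanically simple. Here is a minimal sketch in Python, with hypothetical names and thresholds, not Character AI's actual implementation: record when the session started, and surface a notice once an hour has elapsed.

```python
import time

SESSION_REMINDER_SECONDS = 60 * 60  # hypothetical one-hour threshold

class SessionTimer:
    """Tracks one chat session and issues a single break reminder."""

    def __init__(self) -> None:
        # monotonic clock: unaffected by system clock changes
        self.started_at = time.monotonic()
        self.reminded = False

    def maybe_remind(self) -> str | None:
        """Return reminder text once the session passes the threshold."""
        elapsed = time.monotonic() - self.started_at
        if elapsed >= SESSION_REMINDER_SECONDS and not self.reminded:
            self.reminded = True  # only remind once per session
            return "You've been chatting for over an hour. Consider taking a break."
        return None

# In a real app, this check would run on every message or on a UI timer.
timer = SessionTimer()
print(timer.maybe_remind())  # None until an hour of session time has passed
```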
Got it.
You know, I'm curious, now that we've heard the facts of this case and had a pretty thorough discussion about it: how persuaded are you that Character AI, and Sewell's relationship with Dany, were an important part of his decision to end his life?
Lori, do you want to take that one?
Yeah.
I have absolutely no doubt in my mind that this teenager really believed that he was leaving this reality, the real world, and that he was going to be reunited with Dany, this chatbot.
It is devastating.
And I think you have to look at some of the facts of who he was before: according to his mother, he was on basketball teams, was social, loved fishing, loved travel, had real interests and hobbies that were offline.
It's not up to me to say this happened and this was exactly because of it.
But I think we can begin to look at some of those details in those journals where he talks about, you know, how he stopped believing his reality was real.
He wanted to go be in her reality.
I think that he would have had a much different outcome had he never encountered character AI.
Kevin?
Yeah, I would agree with that.
I think, you know, it is always more complicated when it comes to suicide or even severe mental health challenges.
You know, there's rarely sort of one tidy explanation for everything.
But I can say that from talking with Sewell's mom, from reading some of the journal entries that Lori mentioned, and from reading some of these chats between him and these chatbots:
This was a kid who was really struggling.
And he may have been struggling, you know, absent character AI.
You know, I was a 14-year-old boy once.
It is really hard.
It's a really hard time of life for a lot of kids.
And, you know, I think we could explore the counterfactual.
We could debate that.
Like, you know, would it have been something else that sucked him in?
You know, I've had people messaging me today saying, well, you know, what if it was fantasy books that had made him want to leave his reality?
You know, that's a counterfactual that we could debate all day.
But I think what's true in this case, from talking with his mom and reading some of these old chat transcripts and journal entries, is that this was a kid who was really struggling, and who reached out to a chatbot because he thought it could help.
And in part, that's because the chatbots were designed to mimic a helpful friend or advisor.
So do I think that he got help from this chatbot?
Yeah, I mean, there's a chat in here where he's talking about wanting to end his life and the chatbot says, don't do that.
It tries to sort of talk him out of that.
But it is also the case that the chatbot's reluctance to ever break character really did make it hard for him to get the kind of help that I think he needed and that could have helped him.
Yeah.
Here's what I think.
You know, I can't say from the outside why any person might have chosen to end their life.
But I think the reporting that you guys have done here shows that clearly there were major safety failures, and that a lot of this was foreseeable for a long time.
We have been talking about this issue on the show now for a long time.
And I hope that as other people build these technologies, they are building with the knowledge that these outcomes can very much happen.
This should be an expected outcome of building a technology like this.
Lori, thank you so much for bringing this story to my attention and to our attention and for all the reporting that you've done on it.
Where can people find your interview with Megan, with Sewell's mom?
You can find us at Mostly Human Media on Instagram and on YouTube.
The full interview is on our Mostly Human Media YouTube page.
Thank you, Lori.
A really hard story to talk about, but one that I think people should know.
Thanks, Lori.
Thanks, guys.
Imagine a world where AI doesn't just automate, it empowers.
Siemens puts cutting-edge industrial AI and digital twins in the hands of real people, transforming how America builds and moves forward.
Your work becomes supercharged.
Your operations become optimized.
The possibilities limitless.
This isn't just automation, it's amplification.
From factory floors to power grids, Siemens is turning what if into what's next.
To learn how Siemens is solving today's challenges to create opportunities for tomorrow, visit usa.siemens.com.
Every Vitamix blender has a story.
I have a friend who's a big cook.
Every time I go to her house, she's making something different with her Vitamix and I was like, I need that.
To make your perfect smoothie in the morning or to make your base for a minestra verde or potato leek soup.
I can make things with it that I wouldn't be able to make with a regular blender.
Because it does the job of multiple appliances and it actually has a sleekness to it that I like.
Essential by design, built to last.
Go to Vitamix.com to learn more.
That's Vitamix.com.
I don't mean to interrupt your meal, but I love Geico's fast and friendly claim service.
Well, that's how Geico gets 97% customer satisfaction.
Yeah, I'll let you get back to your food.
Uh, so are you just gonna watch me eat?
Get more than just savings.
Get more with Geico.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Jen Poyant.
We're fact-checked by Ena Alvarado.
Today's show was engineered by Daniel Ramirez.
Original music by Sophia Lanman, Rowan Niemisto, and Dan Powell.
Our audience editor is Nell Gallogly.
Video production by Ryan Manning and Chris Schott.
You can watch this whole episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us at hardfork at nytimes.com.