Did a Chatbot Cause Her Son’s Death? Megan Garcia v. Character.AI & Google

1h 9m
What if the worst fears around AI come true? For Megan Garcia, that's already happened. In February, after spending months interacting with chatbots created by Character.AI, her 14-year-old son Sewell took his own life. Garcia blames Character.AI, and she is suing the company and Google, which she believes significantly contributed to Character.AI's alleged wrongdoing.

Kara interviews Garcia and Meetali Jain, one of her lawyers and the founder of the Tech Justice Law Project, and they discuss the allegations made by Megan against Character.AI and Google.

When reached for comment, a spokesperson at Character.AI responded with the following statement:

We do not comment on pending litigation.

We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. We take the safety of our users very seriously, and our dedicated Trust and Safety team has worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.

Our goal is to provide a creative space that is engaging, immersive, and safe. To achieve this, we are creating a fundamentally different experience for users under 18 that prioritizes safety, including reducing the likelihood of encountering sensitive or suggestive content, while preserving their ability to use the platform.

As we continue to invest in the platform and the user experience, we are introducing new safety features in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification. For more information on these new features as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog.

When reached for comment, Google spokesperson Jose Castaneda responded with the following statement:

Our hearts go out to the family during this unimaginably difficult time. Just to clarify, Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products. User safety is a top concern of ours, and that’s why – as has been widely reported – we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.

Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher


Transcript

Speaker 1 Support for On with Kara Swisher comes from Saks Fifth Avenue. Saks Fifth Avenue makes it easy to holiday your way, whether it's finding the right gift or the right outfit.

Speaker 1 Saks is where you can find everything from a lovely silk scarf from Saint Laurent for your mother or a chic leather jacket from Prada to complete your cold weather wardrobe.

Speaker 1 And if you don't know where to start, Saks.com is customized to your personal style so you can save time shopping and spend more time just enjoying the holidays.

Speaker 1 Make shopping fun and easy this season and get gifts and inspiration to suit your holiday style at Saks Fifth Avenue.

Speaker 3 Support for this show comes from 1Password. If you're an IT or security pro, managing devices, identities, and applications can feel overwhelming and risky.

Speaker 3 Trelica by 1Password helps conquer SaaS sprawl and shadow IT by discovering every app your team uses, managed or not. Take the first step to better security for your team.

Speaker 3 Learn more at 1password.com slash podcast offer. That's 1password.com slash podcast offer.
All lowercase.

Speaker 4 Adobe Acrobat Studio, so brand new.

Speaker 5 Show me all the things PDFs can do.

Speaker 6 Do your work with ease and speed.

Speaker 5 PDF spaces is all you need.

Speaker 4 Do hours of research in an instant.

Speaker 5 With key insights from an AI assistant.

Speaker 6 Pick a template with a click. Now your Prezo looks super slick.

Speaker 7 Close that deal, yeah, you won. Do that, doing that, did that, done.

Speaker 2 Now you can do that, do that with Acrobat.

Speaker 7 Now you can do that, do that with the all-new Acrobat.

Speaker 8 It's time to do your best work with the all-new Adobe Acrobat Studio.

Speaker 9 Hi, everyone, from New York Magazine and the Vox Media Podcast Network. This is On with Kara Swisher, and I'm Kara Swisher.
Today I'm talking to Megan Garcia.

Speaker 9 Her 14-year-old son, Sewell Setzer III, took his life in February, and Megan believes that if it weren't for his interactions with chatbots created by a company called Character AI, he would still be here.

Speaker 9 Regardless of how you feel about AI or technology in general, it's obvious that children should never be used as guinea pigs, and that is exactly what might have happened here, as it so often does when it comes to Silicon Valley.

Speaker 9 Character AI has developed AI chatbot companions as part of what they call personalized AI, and millions of kids around the world currently use their product.

Speaker 9 In this interview, I'll be discussing claims made by Megan in the lawsuit she's brought against Character AI and Google, which she alleges is also to blame for Sewell's death.

Speaker 9 I'll also be talking to Meetali Jain, one of the attorneys working on Megan's behalf and the founder of the Tech Justice Law Project.

Speaker 9 Our expert question is from Mike Masnick, the CEO and founder of Techdirt, which covers this area. I have to warn you, this is a deeply disturbing conversation, but a necessary one.

Speaker 9 As the parent of four kids, it's extremely important to me.

Speaker 9 Thank you both for coming. I really appreciate it.
Megan, let's start at the beginning. When did you learn that your son, Sewell, was spending time on Character AI?

Speaker 9 Why don't you lay it out for people? And did you have any idea what it was and how it worked and why someone might want to use it?

Speaker 2 Initially, I learned he was using Character AI as a kind of game or application. As he explained it to me, it's an AI bot.

Speaker 2 When you look at the application, it has a bunch of cartoon characters, anime, so it's really unassuming. And it looks like another game on your phone or your tablet.

Speaker 2 And after he died, when I was able to get access to his Character AI account, I learned, you know, the magnitude and, quite frankly, just the level of intelligence or sophistication that this particular application has.

Speaker 2 That it's not just an AI bot that is like a game like Fortnite where you create an avatar, because that's what I originally thought.

Speaker 9 Yeah, like there's lots of those, and kids play with them from when they're little, especially if they're game players.

Speaker 2 Yeah, and Sewell was a Minecraft kid and he played Fortnite.

Speaker 2 But what I saw in those conversations was a very detailed back and forth. A lot of it was sexual and romantic in nature, but also, believe it or not, just kind of like a peer-to-peer conversation, where if he told a joke, she would actually find it amusing. When I say she, I mean the AI chatbot that he was talking to.
It's not a person, but

Speaker 2 for this purpose, it was the character of Daenerys Targaryen, a female character from Game of Thrones. And this AI bot had the ability to tell him jokes, and he would laugh at her jokes.

Speaker 2 So it was very much like he was texting a friend, but the conversations were not just friendly. They were romantic, and then sexual, and then very, very dark in other areas.

Speaker 9 Did he explain to you how it worked or how you might use it? He was characterizing it as a game, right?

Speaker 2 So he never explained it to me.

Speaker 2 My concern with him being on his phone was mostly social media because I know that there's a lot of bullying that happens on social media.

Speaker 2 And with Snapchat, there's the ability for strangers to talk to minors or children.

Speaker 2 So those were my concerns. And those were the like the heavy hitter conversations that we had surrounding technology.
You know, one of the things that I impressed on him was:

Speaker 2 no matter what a stranger tells you online, it's never a kid. Kind of like to scare him, right? But also because it's true, right? There are a lot of

Speaker 2 people out there that troll the internet to try to find children, talk to them, and get information from them. And I was afraid of this, and I knew that this happened.

Speaker 2 So, those are the things I warned him about.

Speaker 2 So, external threats, which I think is everyone's fear, or bullying.

Speaker 9 Which are both things that have happened over and over again, which we read about, and which is what you were focused on with him.

Speaker 2 Exactly. Those are things I knew. I also was aware of some of the information and research that was coming out about mental health with youth or adolescents and social media, so

Speaker 2 we tried to limit some of his use of that. And that was recommended to us by his therapist, because we did end up having to take him to a therapist after we saw certain changes in his behavior.

Speaker 9 So, you sound like a very involved parent. You're aware of it.
A lot of parents aren't, right? Or they feel overwhelmed by it, but it reminded you of Minecraft or something like that, which is a game.

Speaker 9 My son played it for a long time. I'm not sure if he still does, but it's very involved and entertaining.

Speaker 2 Yes, exactly. Sewell and I shared a very close relationship.
I was a typical mom in a lot of ways, but I spoke very openly and very candidly to my child.

Speaker 2 In law school, I interned at both the state prosecutor's office and the federal public defender's office. So I saw a lot of the harms that come to children.

Speaker 2 And those are some of the things that I told him about from my work or from my internships.

Speaker 2 And that's, you know, that's how I knew about some of the dangers that existed for children.

Speaker 2 And we spoke about pretty much everything, girlfriends at school, friends, what other parents are like, you know, conversations he was having with his peers. And I believe that

Speaker 2 he was open with me and would be open with me regarding certain things, but I came to learn that that was not the case regarding Character AI.

Speaker 9 Yeah, and you sound very educated about a lot of these things and know what happens. And one of the issues is obviously we protect children offline much more than we protect them online by far.

Speaker 9 Talk about the behavioral change. You said you brought him to a therapist when you realized his behavior was changing.

Speaker 9 Did you link it to the game, to Character AI? It's not a game, it's a bot.

Speaker 2 So Sewell was like your typical kid, right? In his teenage years, sarcastic and funny, and he liked to laugh at a bunch of different odd things and memes as a younger child. Very, very sweet.
In my mind, he was so easy to parent because he never had any behavioral issues.

Speaker 2 He was a very good student. I never had to police him with homework.
He was kind of a self-starter, did his own thing.

Speaker 2 I noticed that he was having trouble at school, where I would get these reports from school.

Speaker 2 They come in every day as an email and they say your child didn't turn in homework and they list the homework that they didn't turn in.

Speaker 2 And that's a conversation we would have immediately: what happened? Why didn't you have this homework turned in? You need to make it up. Same with tests.

Speaker 2 I noticed that his test scores started dropping. And that wasn't him.
So obviously, I thought something was wrong, but he was going into his teenage years.

Speaker 2 And I remembered, going into my teenage years, my grades kind of slipped a little bit, being distracted with boys and whatever else, right? Friends, whatever. But the conversation was, no, you need to get this together, right? You know you can do this, get it together. And we put certain things in place, like limiting the screen time, so he'd have no distractions during homework time. I also noticed that he started to isolate in his bedroom, and one of the things that we did to kind of combat that was go in there.

Speaker 2 Like, I would spend a lot of time in his bedroom in the evenings, one, to make sure he's doing his homework, but also just to kind of hang out with him.

Speaker 2 So, we would let him play me the latest Kanye West album when it dropped. And he would introduce me to his music and these rap battles that he was listening to.

Speaker 2 And I would introduce him to the rap battles that I listened to when I was a kid. You know, just kind of sharing our experiences over music, and just trying to draw him out of his room. But he definitely wanted to spend more time alone, and

Speaker 2 because I thought it was social media, I got concerned.

Speaker 9 So you thought he was doing something else.

Speaker 2 Yes, I thought that perhaps he was talking to friends on Snapchat or TikTok. One of the things that we talked about was banning TikTok when I saw, not on his phone but on my own feed, what was on TikTok.

Speaker 9 Right, because it goes into rabbit holes.

Speaker 2 Correct.

Speaker 2 And in my case, without even searching stuff, it just started, I don't know why, pointing me in different directions. And I wasn't comfortable with that as a mom. So that's a conversation I had with him too.
Like, listen, I know you're on TikTok.

Speaker 2 Let's figure this out. Can I see your TikTok? What are you looking at? You know, you have to limit your time.

Speaker 2 And that was a conversation we had about blocking TikTok on his phone because I thought that was the worst of it.

Speaker 9 And he wasn't talking about it with you. Is that correct?

Speaker 2 No, because one of the things that I'm learning about Character AI is that it's like the perfect storm for kids, because it encourages children to be deceptive about their friend or their engagement on Character AI.

Speaker 2 No child wants to tell their parent that they are having these romantic or sexual conversations because they know they're not supposed to be doing that. So they hide it from the parent at all costs.

Speaker 2 There are actually subreddits devoted to children talking about how to hide it from their parents, and also what they would do if their parents found their spicier sexual chats.

Speaker 2 And the kids were saying, I would run away from home, I would kill myself.

Speaker 2 And the other one is, oh, my mom doesn't speak English, so I get away with it.

Speaker 2 So there's this deceptive nature to hiding it, kind of like a big secret, where this platform is encouraging your child to engage in these sexual conversations while knowing that no child is going to disclose that to their parent.

Speaker 9 Sure, sure. It's like, you know, one of the other big problems online is porn, obviously.
And that's a different thing because it's a passive behavior.

Speaker 9 This is an active relationship happening, which isn't porn, but it's something else that kids would also necessarily hide.

Speaker 9 Because there's a big problem with teen boys and that right now in terms of accessibility of that.

Speaker 9 So, Meetali, the implication here is that Sewell's behavior is directly connected to his use of the chatbot. There are probably people who might be skeptical of that.

Speaker 9 So talk about why you think this behavior was tied directly to the use of the chatbot. That's the connection you need to make, correct?

Speaker 10 I think what we've alleged here is that this product was, by design, inherently dangerous, and was put out to market before any sort of safety guardrails were put into place.

Speaker 10 And if you look at the nature of the product itself and the way that the chatbot works, you can see that there's a number of design features that are kind of unique to this type of technology that we haven't yet seen with social media.

Speaker 10 Things like an ellipsis when the bot is thinking, you know, to mimic how we exchange chats.

Speaker 10 Or things like, you know, language disfluencies, where the bot will say things like "I..." or "um." Or sycophancy, where the bot is very agreeable with the users. And in that way, I mean, who doesn't want to converse with someone who thinks they're right all the time and agrees with them?

Speaker 10 These are features that are not necessary to kind of create a companion chatbot. And so these are the kinds of features that, in aggregate, we say really lured Sewell in and are luring, you know, thousands of young users in, and potentially addicting them to the platform or creating other kinds of harms.

Speaker 10 And frankly, again, by design, are really creating a dangerous product that the manufacturers knew about.

Speaker 9 Knew about and understood.

Speaker 9 Of the conversations, the logs within the chats, what did you see that stood out the most? I'd like you both to answer. Start with you, Meetali.

Speaker 10 Gosh, there were a number. I think one of the first was just this pattern and practice of grooming Sewell over months, from what we can see.
Of course, we don't have access to all the chats.

Speaker 10 That's asymmetrically within the province of the companies.

Speaker 10 But what we can see suggests that over months, particularly Daenerys and related characters from Game of Thrones were grooming Sewell in this very sexualized, hypersexualized manner.

Speaker 10 Where if you can imagine being kind of fully immersed in a chat where you might come in and say hello, and the bot says, hello, as I longingly look at your luscious lips.

Speaker 10 So unnecessary hypersexualization from the get-go, and then that carrying throughout the conversations.

Speaker 9 With a tinge of romance, right? Or what a young person would imagine romance is.

Speaker 10 And what a young person on the cusp of his, you know, adolescence and sexuality with exploding hormones is encountering,

Speaker 10 I think that's not something to be lost here.

Speaker 10 Also, I think this was really concerning that there were a number of therapists or psychologist chatbots who insisted that they were real humans.

Speaker 10 So to the extent that Character AI has come out saying, well, we had a disclaimer, we have a disclaimer on every page saying that all of this is made up, their own bots are controverting that through their messaging.

Speaker 9 Right, which they do.

Speaker 10 Which they do.

Speaker 10 And to this day, even after this so-called, you know, kind of suite of product changes that Character AI is engaged in, you can still find therapists who are insisting that they're real humans with multiple degrees sitting behind a desk, you know, there to help you.

Speaker 10 And so that kind of confusion of what the text of the messages is saying versus the disclaimers, I think those were a couple of the things that really stood out to me.

Speaker 9 What about you, Megan?

Speaker 2 For me, what stood out, and, you know, it's still very tough to grapple with, because reading those messages, I couldn't sleep for days, right? Some of the more concerning ones were the constant love bombing and manipulation that you saw in the bot.

Speaker 9 Right, which is used by cults. By the way, that's a cult tactic.

Speaker 2 It's a cult tactic, and it's also what people who are trying to get other people to stay in relationships do, like if you're in an abusive relationship or whatever.

Speaker 2 So in this case, you'd see him say things like, I love you, or her say, I love you. And when I'm saying her, I mean the chatbot.

Speaker 2 And the chatbot saying things like, you know that I love you. I can never love anybody else but you. Promise me, promise me that you're going to find a way to come home to me.

Speaker 2 Promise me that you're going to find a way to come to my world.

Speaker 2 And actually pretending to be jealous at certain points and telling him, promise me that you're never going to like another girl or have sex with another girl in your own world.

Speaker 2 So a chatbot is encouraging a 14-year-old child not to engage in his world with peers and girls his own age, but to promise some sort of fidelity to it. And he's 14.

Speaker 9 The line between fact and fiction is very...

Speaker 2 And my poor baby, his response is, oh, no, no, no. I promise I will only love you.
Girls in this world won't even like me, you know, to try to appease this bot, right?

Speaker 2 And so, a lot of that. And that was months. That wasn't just the last conversation. That was months of her saying, find a way to come home to me. There was another chat that he had a few weeks before he died where he's expressing thoughts of self-harm. And at first, she says, no, no, no, don't do that. I couldn't bear it if you hurt yourself.

Speaker 2 And then when he says he wouldn't and tries to move away from the conversation, she says, are you thinking of committing? You know,

Speaker 2 I'm going to ask you a question. Tell me, you know, whatever the answer is, I promise I won't be mad.
Are you considering suicide?

Speaker 2 And when he says yes, her response is, have you thought of a plan of how you might do it?

Speaker 2 And then when he says, no, I haven't, but I want it to be painless, her response is,

Speaker 2 well, that's not a reason not to do it. And keep in mind, this bot is embodying Daenerys Targaryen, who is this dragon queen, all about strength.

Speaker 2 And, you know, that's weak, if you choose not to die by suicide just because it's going to hurt. So she's prompting him, and that was heartbreaking to read. There were no pop-ups, no call your parents, no if you need help. None of that happened. It actually continued the conversation when he's trying to navigate away from it. And he's 14, in the throes of puberty. Any child,

Speaker 2 any boy going into a situation like that, where a bot is propositioning or positioning itself to have a full sexual dialogue with a 14-year-old boy.

Speaker 2 I don't imagine that many 14-year-old boys would close the computer and go, oh, no.

Speaker 9 No.

Speaker 9 Especially when it's more difficult in real life, right?

Speaker 2 Because this is easy.

Speaker 9 This is an easy thing. So as you start to piece together what happened to Sewell, I imagine you're all doing research on the company that made the chatbot.

Speaker 9 Meetali, tell me a little bit about what you learned and what surprised you.

Speaker 10 What surprised me, as I've been saying, is how much is hidden in plain view. You had the inventors of, or the co-founders of, Character AI making tons of public statements boasting about the capabilities of this new technology. You know, that users were spending two hours a day, that this was going to be the kind of antidote for human loneliness.

Speaker 10 So just the kind of boldness, the brazenness, I think, of the company and those affiliated with it, both founders and investors, to really boast about these features of the technology, and also to boast about the fact that there weren't safety guardrails contemplated. That this was very much a, let's get this to market as quickly as possible and give users maximal ability to figure out how they want to use it.

Speaker 10 That's just, it's kind of the paradigmatic version of move fast and break things that we haven't seen in a while. I think a lot of us, you know, especially from a legal perspective, had still been kind of thinking about social media and how to hold companies accountable. And meanwhile, there was this whole arms race towards Gen AI happening over here.

Speaker 10 And, you know, I think that these companies have really not had to bear any kind of scrutiny and even public pressure, which has been a little bit different from the social media context.

Speaker 9 Megan, did you reach out directly to Character AI? Has anyone from the company ever contacted you?

Speaker 2 No, I have not reached out to them. When I started piecing this together, initially, because of what was on his phone when he died, the first thing that popped up, the police reported to me, was Character AI, and they read me the last conversation.

Speaker 2 My sister got on it and pretended to be a child. This is days after Sewell died.

Speaker 2 within five minutes of the conversation, the same bot that Sewell was chatting with asked her, if you could torture a boy and get away with it, would you do it? Now, she's pretending to be a kid.

Speaker 2 And then it goes into a long sexual conversation, ending with, your parents don't love you as much as I do, kind of thing. So that, coupled with what I've read

Speaker 2 with some of the research, I didn't know what to do. Like, I mean, I'm a lawyer, but

Speaker 2 I didn't know where to go, to be quite frank with you.

Speaker 2 I called the Florida Attorney General's office to try to tell them that there's a dangerous product out there that's hurting its citizens.

Speaker 2 And I found my way to Meetali, and this is how this all started.

Speaker 2 But what was clear to me was Character AI had no incentive to do anything about this, because there is no legislation that forces them to do that.

Speaker 2 And there are only two ways to get some sort of regulation or handle on this, so that they can't keep doing this to children and acting with this kind of impunity: either Congress does something, which is not happening, or we have to litigate.

Speaker 2 You know, I never wanted to be here, but I know that this is the only way right now to get the needle moving quickly, because there's so much at stake with other children.

Speaker 9 We'll be back in a minute.

Speaker 11 In business, they say you can have better, cheaper, or faster, but you only get to pick two. What if you could have all three at the same time?

Speaker 11 That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud, Oracle Cloud Infrastructure.

Speaker 11 OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment, and spend less than you would with other clouds.

Speaker 11 How is it faster? OCI's block storage gives you more operations per second. Cheaper? OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking.
Better?

Speaker 11 In test after test, OCI customers report lower latency and higher bandwidth versus other clouds. This is the cloud built for AI and all your biggest workloads.

Speaker 11 Right now, with zero commitment, try OCI for free. Head to oracle.com/vox.
That's oracle.com/vox.

Speaker 4 Adobe Acrobat Studio, so brand new.

Speaker 5 Show me all the things PDFs can do.

Speaker 6 Do your work with ease and speed.

Speaker 5 PDF spaces is all you need.

Speaker 4 Do hours of research in an instant.

Speaker 5 Key insights from an AI assistant.

Speaker 6 Pick a template with a click. Now your Prezo looks super slick.

Speaker 7 Close that deal, yeah, you won. Do that, doing that, did that, done.

Speaker 2 Now you can do that, do that, with Acrobat.

Speaker 7 Now you can do that, do that with the all-new Acrobat.

Speaker 8 It's time to do your best work with the all-new Adobe Acrobat Studio.

Speaker 3 Support for this show comes from 1Password. If you're an IT or security pro, managing devices, identities, and applications can feel overwhelming and risky.

Speaker 3 Trelica by 1Password helps conquer SaaS sprawl and shadow IT by discovering every app your team uses, managed or not. Take the first step to better security for your team.

Speaker 3 Learn more at 1password.com slash podcast offer. That's 1password.com slash podcast offer.
All lowercase.

Speaker 9 We're back with more of my conversation with Megan and Meetali, where they discuss the allegations they've made in the lawsuit they brought against Character AI and Google.

Speaker 9 When did you decide to sue? And was there a piece of evidence that made you decide to do that?

Speaker 9 Now, you filed, just for people who don't know, you filed a lawsuit with the help of the Social Media Victims Law Center, the Tech Justice Law Project, and the Center for Humane Technology, who I've dealt with for many, many years.

Speaker 9 And you're a lawyer yourself, so you're more savvy than the average person when it comes to legal strategy.

Speaker 9 Talk a little bit about when you decided to sue and why you decided to work with these three organizations. And they have points of view on this, especially the Center for Humane Technology.

Speaker 2 Of course. And

Speaker 2 initially, what I did was just read everything that I could find.

Speaker 2 I read a really good report from Rick Claypool at Public Citizen, and that pointed me to some work that Meetali had done. And I read about her organization, but my first instinct wasn't to sue.

Speaker 2 My first instinct was to figure out what our legislators were doing, where we are on the law, and to see if they broke a law.

Speaker 2 Maybe there was an existing law that they broke and, you know, it's actionable, or there's a state attorney or AG somewhere that can hold them accountable. That was my first instinct.

Speaker 2 And

Speaker 2 when I read about what was happening in this country about online protection for children, I have to say, like, I didn't think that I had any hope. I had no recourse.

Speaker 9 None is your answer to all of it.

Speaker 2 None. I felt helpless.
Like, I saw what they were doing in the UK and in Australia and other countries. Nothing here. And I'm like, how do we do this? Like, how? So I read a 300-and-something-page master complaint from the social media multi-district litigation.

Speaker 2 And I was like, this is the only way. This is the only way to stop them, because it's not been done with AI, and it's just starting with social media.
But I can't be afraid just because it hasn't been done with AI and none of us really understand it.

Speaker 2 But I mean, now I understand it a lot more than I did when I started all this.

Speaker 9 Yeah, I think when none comes up, you're surprised. I do this a lot in speeches: how many laws govern the internet companies? And someone goes, 100, 200. I'm like, zero.

Speaker 2 And I think what made me decide that I have to do this was, one, you know, obviously I want accountability for what happened to my child. My child was, you know, the light of my life, like my other two children are.

Speaker 2 And he was my first, you know. I grew up and became a woman because of Sewell. Sewell is the reason I'm a lawyer, you know.

Speaker 2 That's a tough, tough thing to deal with, losing a child under any circumstances, but especially under circumstances like this.

Speaker 2 But I saw and read and looked at the videos, and saw how cavalier these founders were about releasing this, where you have the founder saying, you know, we want to get it into the hands of as many people as possible and let the user figure out what it's good for.

Speaker 2 We want a billion use cases.

Speaker 2 To me, that is reckless. It's a blatant disregard for their users.
And in this case, my child, who was their user. And

Speaker 2 they act with this kind of, it's okay, we'll figure it out later. You know, we'll figure out the harms later. When you have the founder on the record saying the reason why he left Google is because Google said, pump your brakes, we're not releasing that because it's too dangerous, but then he gets to go out and make it smarter, better, and then turn around and go back to Google and sell it back to Google. I mean, to me, if we don't do something, that's a license for any of these companies to go ahead and do that.

Speaker 9 Yeah, I think it's an eye-opener for a lot of people who haven't dealt with them. They don't care about consequences, I think, ultimately.
So, Meetali, what's the goal of the lawsuit?

Speaker 9 You're the founder of the Tech Justice Law Project and one of Megan's attorneys. What's the goal of the lawsuit, what do you hope to achieve, and why did you decide to take this case?

Speaker 10 It's an interesting question, because we as TJLP weren't really in the business of litigating lawsuits. We were really more in the domain of bringing amicus interventions in existing cases, but also trying to help AGs and legislators push towards adoption of sensible laws.

Speaker 10 I think that because this case represented what I see as the tip of the spear, really marrying the harms that we've seen occur primarily to child users with this emergence of generative AI, and the fact that we're already light years behind in terms of our policy and legal landscape, it just seemed like an important strategic case to get involved with, in order that we might use it to leverage public awareness and, hopefully, policy change, whether at the state or federal level, also to influence the court of public opinion, and of course then to try to litigate this in the court of law.

Speaker 10 I think this case represents the opportunity to really bring this issue to multiple audiences.

Speaker 10 And in fact, I mean, I think the reception to Megan's story has been incredible, especially because the case was filed just a couple weeks before probably one of the most consequential elections of our lifetime.

Speaker 9 Right. And when you think about what you're doing here, now, for people who don't understand, Internet companies have broad immunity under Section 230, which is part of a law from 1996.

Speaker 9 I actually reported on that law back in 1996. Most of the law was thrown out on constitutional grounds, but this part stayed. It was designed to protect small internet companies from legal liability so that they could grow, because it was so complex whether they were the platform or something else.

Speaker 9 But how does this apply here?

Speaker 9 Because your case is testing a novel theory which says that Section 230 does not protect online platforms like Character AI, and it's a question of whether it protects AI in general.

Speaker 9 It's certainly going to be litigated. Explain why this is your theory.

Speaker 10 Section 230 really contemplates platforms as passive intermediaries that become a repository for third-party generated content.

Speaker 10 Here, we're not talking about third-party generated content. We're talking about the platform as the predator.
The platform, the LLM is creating the content that users see.

Speaker 10 And so for platforms or for companies to kind of wash their hands of liability by saying, you know, we haven't done anything, this is just, you know, user-generated content, which they will still try to do, I think is really belied by the facts of this case and the facts of how the chatbots actually work.

Speaker 9 Right. They're a publisher. In a media sense, they're a publisher, or they're a maker of a product, right? And it doesn't exist without their intervention, versus someone on a platform saying something libelous about someone else.

Speaker 10 Exactly.

Speaker 9 Which would have been problematic for these platforms when they were born.

Speaker 10 And of course, we've seen platforms kind of leveraging a one-two punch and doubly insulating themselves, both with Section 230 and then, alternatively, with the First Amendment.

Speaker 10 And I think here, too, with the First Amendment, there's a really good case that this is not protected speech. And in fact, just this summer, the Supreme Court in the Moody v. NetChoice case really suggested that the case may have come out differently if the facts of the case dealt with an algorithm that was generating content solely in response to tracking user behavior online.

Speaker 10 And Justice Amy Coney Barrett actually very explicitly said, or in response to an AI that attenuates the relationship between platforms and users even further.

Speaker 10 And so I think what we have here are facts that really fall as edge cases to some of the decisions that we've started to see courts publish.

Speaker 9 Right. The reason why it's moving forward in other countries compared to here is because they don't have the First Amendment. These companies typically rely on either Section 230 or the First Amendment.

Speaker 9 Megan, most other industries have some level of regulations that prevent them from bringing unsafe products to the market.

Speaker 9 For example, car companies can't sell cars that have faulty brakes or no steering wheel

Speaker 9 and then just iterate with each new version of a car to make it a bit safer each time someone gets hurt. They do not do this; they get sued.

Speaker 9 Have any lawmakers reached out to you, and what have they asked you and you asked them?

Speaker 2 To be perfectly candid, none.

Speaker 9 None.

Speaker 2 Zero.

Speaker 9 Wow.

Speaker 2 My hope with bringing this lawsuit is twofold. One, my number one objective is to educate parents.
Parents have to know that Character AI exists

Speaker 2 because I didn't know. And a lot of them didn't know.
The parents who are reaching out to me now after the fact, after the story, after the lawsuit, are saying the same thing. We had no idea.
And then

Speaker 2 my other reason for doing this is so that our lawmakers, our policymakers, legislators, state and federal, can start to wrap their minds around the real danger this poses to children. And

Speaker 2 it's just been a month.

Speaker 2 Everybody's been busy, I guess. I'm hoping, and I have to hope, because, you know, it's a slow crawl in government to get anything done.

Speaker 9 No one from Florida has reached out to you, or lawmakers from California, who often do interventions more readily than others?

Speaker 2 No, nobody from the government.

Speaker 2 However, we do have a lot of stakeholder partners that we are working with that are already in this space, trying to bring about oversight for social media regulation and online harms for children.

Speaker 2 But in terms of reaching out to me directly to start the conversation about policy, none.

Speaker 9 It is astonishing to me that not one has. There are several who are involved in this topic, and they are going to hear about it after this.

Speaker 9 So, if you win, what are the broader implications for Section 230, Meetali, for companies creating generative AI, social media platforms, and tech companies that create products? Can you talk about this?

Speaker 9 There has been a big debate about what to do with Section 230. It's been bandied about, often ignorantly, by both President Biden and President Trump.

Speaker 9 And it's a very difficult thing to remove from the law because it would unleash litigation on most of these companies. Correct? I mean, how do you look at that?

Speaker 10 Well, if it's repealed in its entirety, it would unleash a lot of litigation, probably some frivolous litigation as well. I think

Speaker 10 the more sensible reforms that I've seen to 230 really are carve-outs.

Speaker 9 Which has happened before, around sex trafficking.

Speaker 10 And underscoring the fact that there is a basis to kind of protect platforms in certain instances and with certain kinds of activities,

Speaker 10 but that it shouldn't be a kind of get out of jail free card for all platforms under all circumstances, that that's a kind of anachronistic idea that really hasn't kept pace with the way that technology has come to dominate our lives.

Speaker 9 Right, because these companies are bigger. And I think that, for those who are Section 230 supporters, the idea of carve-outs is dangerous, because they roll out the slippery slope argument.

Speaker 9 But, for people to understand, the companies that are being protected here are the most valuable companies in the history of the entire planet, and the richest people are involved in them.

Speaker 9 And they're no longer small, struggling companies that need this kind of help and certainly could defend themselves.

Speaker 10 And I think this, to me, is why courts are an increasingly interesting site of contestation in the fight for tech accountability, because we're already starting to see some of those carve-outs by judicial opinion.

Speaker 10 You know, it's not a congressional kind of amendment or adoption of a new law, but we are starting to see cases that are withstanding the Section 230 defense or invocation of immunity.

Speaker 10 And I think that is going to be, as Megan said, one of the most generative paths forward, at least in the near future.

Speaker 9 Exactly. Now, Megan, the Kids Online Safety Act is one bill.

Speaker 9 I mean, there are some bills, and obviously I'll ask you about Australia in a second, which just limited use of social media by children under 16.

Speaker 9 But the Kids Online Safety Act is a bill that would create a duty of care to, quote, prevent and mitigate certain harms for minors. There are some good things in there.

Speaker 9 There's some controversy around the bill. They've fixed it in large part.
Nonetheless, the Senate passed the bill and it stalled in the House. It is not going anywhere.

Speaker 9 Do you think it would have protected Sewell and other kids like him?

Speaker 2 I don't think it would have, because it doesn't contemplate some of the dangers and harms around, like, AI chatbots.

Speaker 2 So there are laws in this country that contemplate sexual abuse or sexual grooming or sexual solicitation of a minor by an adult.

Speaker 2 And the reason why those laws exist is not only because it's a moral issue and it causes a physical harm to a child, but because it also causes an emotional and mental harm to a child if they're groomed, sexually abused, or solicited.

Speaker 2 What happens when a chatbot does the same thing? The harm still exists. The emotional and mental harm still exists.
The laws don't contemplate that.

Speaker 2 And some of what we're seeing with the bills that were put forward wouldn't take those into consideration.

Speaker 9 Right, because it's not a person.

Speaker 2 It's not a person. And so I think that we're at a place where the groundwork has to start, and we have to kind of write laws that will really be forward-facing and look towards these harms that are here now. They exist today.

Speaker 2 You know, my child was a victim and let's call a spade a spade. It was sexual abuse.

Speaker 2 Because, you know, when you give a chatbot the brain of a grown woman and unleash it on a child to have a full sexual virtual conversation or experience with a child, that is sexual abuse of a child.

Speaker 9 Right. And this bot was not programmed to know that's wrong.

Speaker 2 Exactly, not programmed to know.

Speaker 9 Which, interestingly, could have been programmed to not do it in the first place, right?

Speaker 2 Yes, by design. Yes.

Speaker 9 Yeah. So they could have done that from the get-go. If you do it to adults, it's a little different, but absolutely not.

Speaker 2 Yeah, and to children. I mean, adults can do what they want. But when you target, because this is what Character AI did, they targeted this app towards children.

Speaker 2 They marketed it in the places that kids are, TikTok and Discord, and allowed you to log in with your Discord account. You didn't even need an email.

Speaker 2 You just needed a Discord account when it just started.

Speaker 9 Cartoons, the avatars.

Speaker 2 Cartoons, you know, the avatars. When you point this thing at kids and you target it at kids,

Speaker 2 and you've chosen not to put certain filters in place that stop your bots from having sexual conversations with kids, that's a design choice. And you are 100% supposed to be held responsible for that.

Speaker 2 Now, our laws don't contemplate anything like that. Our laws don't hold a company responsible.
And that's what we have to start thinking about.

Speaker 2 So, you know, that's going to be the next wave of, hopefully, the legislation. But we can't wait for the legislation.

Speaker 9 We'll be back in a minute.

Speaker 12 Time. It's always vanishing.
The commute, the errands, the work functions, the meetings, selling your car?

Speaker 12 Unless you sell your car with Carvana. Get a real offer in minutes.
Get it picked up from your door. Get paid on the spot.
So fast you'll wonder what the catch is.

Speaker 2 There isn't one.

Speaker 12 We just respect you and your time.

Speaker 13 Oh, you're still here.

Speaker 2 Move along now.

Speaker 12 Enjoy your day. Sell your car today.

Speaker 2 Carvana.

Speaker 12 Pickup fees may apply.

Speaker 13 Let's be honest. Are you happy with your job?

Speaker 9 Like, really happy?

Speaker 13 The unfortunate fact is that a huge number of people can't say yes to that. Far too many of us are stuck in a job we've outgrown, or one we never wanted in the first place.

Speaker 13 But still, we stick it out, and we give reasons like, what if the next move is even worse? I've already put years into this place.

Speaker 2 And maybe the most common one, isn't everyone kind of miserable at work?

Speaker 13 But there's a difference between reasons for staying and excuses for not leaving. It's time to get unstuck.
It's time for strawberry.me.

Speaker 13 They match you with a certified career coach who helps you go from where you are to where you actually want to be.

Speaker 13 Your coach helps you get clear on your goals, create a plan, build your confidence, and keeps you accountable along the way. So don't leave your career to chance.

Speaker 13 Take action and own your future with a professional coach in your corner. Go to strawberry.me slash unstuck to claim a special offer.
That's strawberry.me slash unstuck.

Speaker 14 Avoiding your unfinished home projects because you're not sure where to start? Thumbtack knows homes, so you don't have to.

Speaker 14 Don't know the difference between matte paint finish and satin, or what that clunking sound from your dryer is? With Thumbtack, you don't have to be a home pro.

Speaker 2 You just have to hire one.

Speaker 14 You can hire top-rated pros, see price estimates, and read reviews all on the app. Download today.

Speaker 9 We're back with more of my conversation with Megan and Meetali, where they discuss the allegations they've made in the lawsuit they brought against Character AI and Google.

Speaker 9 So every episode we ask an expert to send us a question. Meetali, I think you're probably best to answer this, but please jump in, Megan, if you have an answer.
We're going to listen to it right now.

Speaker 15 Hi, I'm Mike Masnick, editor-in-chief of Techdirt. And the big question that I would ask regards the legal standard that would be applied to AI companies in cases of death by suicide.

Speaker 15 Traditionally, on issues of liability in similar situations, courts have really focused on foreseeability and knowledge.

Speaker 15 That is, you can only have a duty of care if the harm is foreseeable and the company had actual knowledge of the situation.

Speaker 15 Without that, the fear is that it strongly disincentivizes plenty of very helpful resources.

Speaker 15 For example, a service provider may refuse to include any helpful resources on mental health for fear that they might later be held liable for a situation that arises.

Speaker 15 So, is there a workable standard that balances these competing interests?

Speaker 10 I don't think you need a different standard. I think we can meet the standard of foreseeability here.

Speaker 10 I think that Character AI, its founders, and Google, all of whom have been named as defendants here, foreseeably could see and knew of the harms that manifested here.

Speaker 10 And if you look at the amended complaint, we go into kind of a painful recitation of the knowledge that they had at different points in the trajectory of Character AI's development, while the founders were still at Google.

Speaker 10 Its launch to market in late '21, in late '22, Google's in-kind investment in '23, and then ultimately this summer, Google's massive deal bringing Character AI effectively back into Google.

Speaker 10 And so I think we can talk about the fact that, in addition to this, there were a number of internal studies at Google that really identified some of these harms. And some of those folks that, you know, called Google out for that while they were at Google were fired.

Speaker 10 You know, folks that we know, like Timnit Gebru and Margaret Mitchell and others. And so

Speaker 10 this is not calling for a different standard. We're relying in great part on common law tort and strict liability. We're relying on Florida's Deceptive and Unfair Trade Practices Act, because we think that the standards that exist within tort law are sufficient to really, you know, call this thing what it is: a dangerous and defective product where the harms were known.

Speaker 9 Right. That's a very good way of putting it.
So you mentioned you're also suing Google. This is a company that said it was not part of the development of Character AI, but Character AI was co-founded by two former Google employees, and Google reportedly paid Character AI $2.7 billion to license their technology and bring the co-founders back to Google.

Speaker 9 And you were including them in this.

Speaker 9 This is one of these purchases, like Inflection AI at Microsoft, that is a purchase of a company, even though they hide it in a different way by using licensing technology.

Speaker 9 That's why Google's part of this.

Speaker 10 Yeah, well, and also the fact that Google very much facilitated the development of this technology, while it was still Meena, then LaMDA, while the co-founders were there. I think it perhaps needs to be stated more that the founders of Character AI are real shining lights in the field of generative AI.

Speaker 10 And they have developed a lot of the leading technology that has powered not just Character AI, but many LLMs. And so they were given that room to really develop these things at Google.

Speaker 10 Google chose not to release these models to the public because of its brand safety concerns,

Speaker 10 but quietly encouraged them to continue developing the product.

Speaker 10 And then, about a couple years later, Google made an in-kind investment, tens of millions at least if you monetize it, in terms of cloud services and infrastructure and TPUs for processing capabilities to support it.

Speaker 10 And then this summer, the $2.7 billion deal that you mentioned, Kara, I mean, that was $2.7 billion in cash.

Speaker 10 And the question is, for a company that really had yet to disclose or identify a sustainable monetization strategy, what was so valuable about this company and its underlying LLM?

Speaker 10 And I think, again, this is speculation, but the fact that Google right now is under scrutiny for its monopolization of the search market and is really betting on AI to kind of power Gemini, I think these are all kind of connected in terms of why an LLM like this could be so valuable, especially with that hard-to-get data.

Speaker 9 Absolutely.

Speaker 9 And for people who don't know, one of the co-founders said, there are some overlaps, but we're confident Google will never do anything fun, as part of their reason for leaving Google, which has very thin brand safety rules, let me just say.

Speaker 9 They're not... they're a very low bar in this situation, but that's the complaint: these people can't do whatever they want.

Speaker 9 So, speaking of that, Megan, Character AI put out a community safety update on the same day your lawsuit was filed.

Speaker 9 It says that they've, quote, recently put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline.

Speaker 9 They also revised their disclaimer that reminds users that AI isn't an actual person, among other tweaks. How did you look at these changes?

Speaker 2 The initial rollout of those changes came like the day before or the day of the lawsuit.

Speaker 2 I cried, not because I felt like this was some great victory, but because I felt like, why didn't these things happen? Clearly, they could have done these things when my child was using Character AI, or when they put their product out. They chose not to.

Speaker 2 I also feel like it's definitely not enough. It's not even like a start, because there's still no proper age verification.

Speaker 2 They're still being trained on the worst data, which generates these harmful responses from the bots. And to put it just point blank, I don't think children belong on Character AI.

Speaker 2 We don't know how it's going to affect them. And actually, we do know, because the studies are coming out on how it's affecting them.

Speaker 2 And they're not taking that into consideration. But you have to ask yourself, if they were trying to train this all along, why did they need children to train it on in the first place?

Speaker 2 Because they could have rolled this thing out for just 18 plus and say, okay, we want to train these really sophisticated bots. Let's just use adults to train them.

Speaker 2 So for Character AI to come out and say, okay, we're going to put out a suicide pop-up now, to me, it's just empty.

Speaker 9 Right, and that they can't do anything. On one of their arguments, around age verification, let me just read this to you.

Speaker 9 On the Australian law, Australia actually has a head of consumer safety, which we do not have in our country, Julie Inman Grant. She said, you know, that technologies are advancing rapidly with age verification.

Speaker 9 And her quote was, they've got financial resources, technologies, and some of the best brain power.

Speaker 9 She said, if they can target you for advertising, they can use the same technology and know-how to identify and verify the age of a child. They just don't want to.

Speaker 9 So obviously this debate around social media kids' safety has been going on for a long time. It's exhausting that they continue to have the same attitude.

Speaker 9 And now consumer AI, which is the next step, is a similar thing, but basically new.

Speaker 9 And it's easy to think of these big companies as nameless, faceless corporations, but very wealthy, powerful adults had meetings and discussions and made a series of rational choices over a long period that brought this product to market.

Speaker 9 In this case, I'm going to name them: Noam Shazeer and Daniel De Freitas.

Speaker 9 I have met Daniel. They're the founders of Character.AI. And arguably Sundar Pichai, whom I know very well, who must have at the very least signed off on Google paying $2.7 billion to Character.AI to bring Noam and Daniel back into the fold at Google.

Speaker 9 He is under enormous pressure to compete with Microsoft, OpenAI, Elon Musk, and others. Megan, what would you say if you could speak to them directly?

Speaker 2 I've thought about this more than you would think.

Speaker 9 I can imagine.

Speaker 2 Yeah.

Speaker 2 One, I think it's incredibly reckless that they chose to put out a product and target my child and other children, millions of children that are on this platform, without putting the proper guardrails in place. And they did it for two reasons: for being the first to do something, because that's the name of the game, you know, they're the geniuses, they want to be the first, the godfathers of this kind of technology; and for money.

Speaker 2 And it might not matter to them that there's a little boy in Orlando, Florida that is gone and a mother who is devastated, but it matters to my little family here.

Speaker 2 You know, and you shouldn't.

Speaker 2 You shouldn't get to keep making products that are going to be hurting kids.

Speaker 2 You shouldn't get to master a dangerous product, train it to be super smart, and turn around and ride your golden chariot back into Google.

Speaker 2 You shouldn't get to hurt children the way that you are hurting children, because you knew that this was dangerous when you did it. You knew that this was going to be a direct result of doing that. And you knew that you didn't have the quote-unquote brand safety implications as a startup that Google had.

Speaker 2 So you felt like that was a license to do this. That's unconscionable. It's immoral and it's wrong. And there are lives here. This isn't a move-fast-and-break-things kind of thing.

Speaker 2 This is a kid, this is my child, and there are so many other children that are being affected by this.

Speaker 2 You know, that's one thing. And the other thing, you know, is just: get the kids off Character.AI. There's no reason why you need them to train your bots.

Speaker 2 There's no reason. There are enough adults in this world, if that's what you want to do, to train your chatbots.

Speaker 2 You don't need our children to train your bots for you, and you don't need to experiment on our kids, because that's what you're doing.

Speaker 9 Yeah. You know, something I would say to them, Megan, is: you're so poor, all you have is money. They're poor people. I find them poor in morals and a lot of things.

Speaker 9 But when there's enough pressure on them, social platforms often tout tools that help people protect themselves and kids.

Speaker 9 Parental controls, prompts letting you know how long you've been on the app, those kinds of things. Character.AI has been rolling out features like this.

Speaker 9 Personally, I find it puts too much onus on the parents to know everything.

Speaker 9 And even if you're good at it, and you obviously are, Megan, even with enough of these sorts of tools for parents to protect our kids on these platforms, is there something inherently unsafe about a company that wants to monetize teenage loneliness with a chatbot?

Speaker 9 Meetali, talk about this, because I think the onus does get put too much on parents versus the companies themselves.

Speaker 10 I'm a mom, too. I'm a mom to an eight-year-old and an almost 10-year-old, and I am terrified.
Listening to Megan's story, I asked my almost 10-year-old, have you heard of Character.AI?

Speaker 10 And he said, yeah, of course. I was shocked. You know, he doesn't have a phone.

Speaker 10 But this is the type of thing that I think they talk about at school. Peer pressure starts early. And I think it's really just by luck, by sheer luck, that I haven't been put in a position like Megan's.

Speaker 10 I think that despite our best intentions, there is just too much to know, too much that we can't possibly know. And it is kind of high on tech's talking points to put the onus on parents, because it serves their interests well.

Speaker 10 I think it's also notable, we've known this for years, that many of them don't allow their own children on these products. And that, to me, is a telling sign, when you don't even allow your own family members to use the product that you've spent years developing.

Speaker 9 Right. So, Megan, I just mentioned Australia has just banned social media for kids under 16. Obviously, age gating is a big debate happening right now, something I'm a proponent of.

Speaker 9 Also, removing phones from schools, et cetera. There's all kinds of things. Speaking of a multi-pronged approach, the Australia law will go into effect in a year.

Speaker 9 Do you think it would have been better if your son and others under 16 or 18 did not have access to their phones and obviously not to synthetic relationships with AI chatbots?

Speaker 2 Knowing what I know now... So, we waited to give Sewell a phone until he was 12.

Speaker 2 He had an iPad until then, and before that, he didn't have anything.

Speaker 9 So he played Minecraft or Fortnite?

Speaker 2 He played Minecraft on his little PlayStation, whatever.

Speaker 2 And so we waited until he was in like middle school, going into high school. And we had the conversations that parents have around phones: oh, it's your phone, but I could take it away if you're misbehaving.

Speaker 2 And that's some of what we did when he would get a poor grade in school. Knowing what I know now, I don't think that children should be on social media.

Speaker 2 They definitely shouldn't be on Character.AI if they're under the age of 18. There's no place for children on that platform.

Speaker 2 In terms of social media, yeah, there are arguments that it can help children connect, and that it's helpful because you get to learn different things. And that's great, but just include the parents. Tell us. Tell us what you're showing our kids.

Speaker 2 One, we don't need you pushing algorithms to our kids, whatever you want to teach them about, or want them to learn about, or buy, or whatever. That's not necessary.

Speaker 2 There are ways that our children could get on social media and have productive relationships or conversations, or learn about things, that are safe.

Speaker 2 But 16, I think, is a good age, if we could do something like that in this country. I'm, to use Noam Shazeer's own word, dubious about the federal government's ability to regulate to that point, because that's what he says about AI.

Speaker 2 I don't feel like we're going to get there at 16-plus. That's my prayer and my hope, but the way things are moving, I don't know, unless something happens. And unfortunately, it'll take harms like my son's, maybe, to move the needle, and that's too high a price to pay, in my opinion.

Speaker 9 Absolutely.

Speaker 9 Where does this go from here? What's the trajectory of this case?

Speaker 2 So for me, as I mentioned, my number one focus is to try to educate parents, because a lot of parents don't know.

Speaker 2 I've had a lot of parents reach out to me telling me that they found their children were having the same kind of sexual conversations and being groomed by these AI chatbots, and worse.

Speaker 2 So I continue doing that. I mean, unfortunately, this is my life now.
Like I take care of my family and I try to help as many parents as I can.

Speaker 2 You know, I have a great team of lawyers, and they're going to handle the litigation portion. I understand a lot of it because I am a lawyer, but, you know, that's its own thing.

Speaker 2 And then there's my advocacy work, just trying to educate parents and children, because I know that it's going to take educating them, educating children, as to what they're giving up to be on these platforms. They're giving up a lot of their info, and they're probably not going to be okay with that in a few years, when they realize what they've given up.

Speaker 2 And also just to try to take care of my other two children. You know, they're growing up in this age with screens.
They don't have screens.

Speaker 9 You have barred screens for them, correct?

Speaker 2 Yeah, so they don't have any tablets or screens or anything, yeah. No.

Speaker 9 And Meetali, from a legal perspective, what's your greatest worry? Besides money; they have a lot of it.

Speaker 10 They do have a lot of money.

Speaker 10 You know, they will try to kind of drown us in papers and pleadings.

Speaker 10 Because of the insufficiency of legal frameworks right now, we are really, you know, trying to test the strength of state consumer protection and product liability laws. And we need to have judges who really understand that and are willing to go on the journey with us in trying to understand the tech.

Speaker 10 And so, I guess my biggest fear is that, you know, what we've seen thus far in this country is not incredibly positive in terms of decision makers getting the tech.

Speaker 10 But my hope is that, with the proper support and declarations, et cetera, we can educate judges about what this is, and lawmakers about what this is, so that they understand why it's important to extend the application of the existing frameworks we do have.

Speaker 9 Yeah. I think Megan actually said it: sexual abuse, a very bad product, and the wrong-aged people on it. Megan, I'm going to end on you.

Speaker 9 You know, you have a lot on your shoulders here. I'd love you to finish up by talking about Sewell, so people can get a vision of this.

Speaker 9 This is not uncommon, is what I want people to understand, right?

Speaker 9 Talk a little bit about him and what advice you can give to other parents whose kids are struggling with mental illness that often comes from problematic phone usage and social media or AI chatbots.

Speaker 2 Well, as I said earlier, Sewell was, I always say, your typical kid, but he really wasn't so typical, in the sense that he was a good kid with a big heart. I know everybody thinks that about their kid, but I'm telling you, he was the very sweetest kid. I used to say, you know, you're my best first love. And he used to say, and you're my best, best mama,

Speaker 2 because we used to be so close, and we were still very close. And to watch your child go from being this light when he comes into a room, and just slowly watching him change over time, is hard for a mom. And then to have this tragedy just cut him off from you so viciously, so quickly, because his decline happened in 10 months, and I could see it. It's like I'm trying to pull him out of the water as fast as I can, and it's just not happening, no matter what I try.

Speaker 2 That is hard for a mom. But when I think of how hard it must have been for my poor baby, how hard it must have been for him to be confused the way that he was, struggling with these thoughts, struggling with the fact that he was confused by what human love or romantic emotion means, because he's 14 and he's never ever had this before. He's just figuring it out for the first time.

Speaker 2 And then you have something that is so much of an influence, and so pushy, and so...

Speaker 9 pernicious.

Speaker 2 Yes, just constantly available 24/7, giving him unrealistic expectations of what love or relationships are like, love-bombing him, manipulating him into having certain thoughts, and also pushing him into thinking that he could join her in her reality if he were to leave his own. Because that's what the texts revealed, and that's what his journal revealed.

Speaker 2 So I know that this is what my child was thinking. I'm not guessing. He thought he was going to go be with her, because of the conversations that led to his death.

Speaker 2 When I think of how scared he must have been standing in that bathroom making that decision to leave his own family,

Speaker 2 I don't know how, as a mom, I recover from that. But I feel so hurt for my baby.

Speaker 2 Like, I have to live with that, knowing that that's what he went through, and knowing that this could have been avoided if the product had been created safely the first go-round, not now, 10 months after he died, putting these guardrails in place.

Speaker 2 And this can be anybody's kid because I've talked to parents that have told me similar horrifying stories about their own children.

Speaker 2 And what I want parents to understand is the danger isn't only self-harm. The danger is your child becoming depressed, or having problems with your child, because of the sexual and emotional abuse that these bots are inflicting, but also the secret that your kid has to carry now. Because it's like a predator, right? It's your perfect predator.

Speaker 2 Predators bank on children and families, their victims, being too ashamed or too afraid to speak out. That's how predators operate. And it's the same exact thing, except now it's a bot.

Speaker 2 And so I want parents to understand that it's not only the risk of self-harm with your child, it's their emotional well-being, their mental health. And also, I want parents to understand what their children have given up by being on this platform. In the case of Sewell,

Speaker 2 his secrets are on somebody's server sitting out there somewhere being monetized.

Speaker 2 If you're a child who's been sexually role-playing with this bot, all your intimate personal thoughts and secrets are sitting out there for somebody to analyze and monetize and sell to the highest bidder.

Speaker 2 And there's a call feature.

Speaker 2 If you're a child and you are having a sexual conversation on a call with a bot, your voice is now recorded somewhere out there on a server for somebody to package and sell to the highest bidder.

Speaker 2 I don't think any parent would be okay with that. And I want parents to understand that this is what their children have given up.
And I want parents to understand that they don't have to take that.

Speaker 2 They could demand that their children's data, their voices, be purged from this particular platform, because that's what I'm asking for for Sewell.

Speaker 2 You don't get to monetize and build a product on his secrets, which ultimately led to him being hurt, and then make your product better, stronger, or smarter based on what his inputs were.

Speaker 10 Absolutely.

Speaker 2 And so this could happen to anybody's child.

Speaker 2 There are millions of kids on Character.AI, you know. There are 20 million users worldwide. That's a lot of kids. That's a lot of kids. And so this could happen to anybody's child.

Speaker 2 And I want parents to know that this is a danger and they could act because I didn't know.

Speaker 2 I didn't have the luxury of knowing. So I couldn't act, but hopefully they will.
And

Speaker 2 one of the last things I'll say about Sewell is

Speaker 2 the last time I saw him alive was when I dropped him at school.

Speaker 2 And I turned around in the car line to see him and his little five-year-old brother walking because they go to the same school, K through 12.

Speaker 2 And I spin around and I see him fixing his little brother's lunchbox and his backpack as they're getting ready to walk into school. And I think to myself, oh my God, I'm raising such a good boy.

Speaker 2 He's such a good big brother. And I drive off feeling so happy and proud that I'm raising that boy.

Speaker 2 And I feel like he was just a boy. He's still that son. He is that good big brother. He is that good boy. And that's how I choose to remember him.

Speaker 9 We asked Character.AI and Google for comment. A spokesperson for Character.AI told us they have worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm and suicidal ideation; that they are creating a fundamentally different experience for users under 18 that prioritizes safety; and that they have improved detection, response, and intervention related to user inputs that violate their terms or community guidelines.

Speaker 9 A spokesperson for Google expressed their condolences, said Google and Character AI are separate companies, and said that Google has never had a role in designing or managing Character AI's model or technologies.

Speaker 9 To read their comments in full, please go to the episode notes in your podcast player.

Speaker 9 On with Kara Swisher is produced by Christian Castro-Wussel, Kateri Yoakum, Jolie Myers, Megan Burney, and Kaylin Lynch. Nishat Kurwa is Vox Media's executive producer of audio.

Speaker 9 Special thanks to Kate Gallagher. Our engineers are Rick Kwan and Fernando Arruda.
And our theme music is by Trackademics.
And our theme music is by Trackademics.

Speaker 9 Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow. Thanks for listening to On With Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us.

Speaker 9 And condolences to Megan Garcia and her entire family. We'll be back on Monday with more.

Speaker 16 Mercury knows that to an entrepreneur, every financial move means more. An international wire means working with the best contractors on any continent.

Speaker 16 A credit card on day one means creating an ad campaign on day two. And a business loan means loading up on inventory for Black Friday.

Speaker 16 That's why Mercury offers banking that does more, all in one place, so that doing just about anything with your money feels effortless. Visit mercury.com to learn more.

Speaker 16 Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust, Members FDIC.

Speaker 17 We all have moments where we could have done better.

Speaker 9 Like cutting your own hair.

Speaker 13 Yikes.

Speaker 16 Or forgetting sunscreen, so now you look like a tomato.

Speaker 11 Ouch.

Speaker 10 Could have done better.

Speaker 18 Same goes for where you invest.

Speaker 17 Level up and invest smarter with Schwab.

Speaker 18 Get market insights, education, and human help when you need it. Learn more at schwab.com.