On with Kara Swisher

Did a Chatbot Cause Her Son’s Death? Megan Garcia v. Character.AI & Google

December 05, 2024 1h 13m
What if the worst fears around AI come true? For Megan Garcia, that's already happened. In February, after spending months interacting with chatbots created by Character.AI, her 14-year-old son Sewell took his own life. Garcia blames Character.AI, and she is suing them and Google, who she believes significantly contributed to Character.AI's alleged wrongdoing. Kara interviews Garcia and Meetali Jain, one of her lawyers and the founder of the Tech Justice Law Project, and they discuss the allegations made by Megan against Character.AI and Google.

When reached for comment, a spokesperson at Character.AI responded with the following statement: We do not comment on pending litigation. We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. We take the safety of our users very seriously, and our dedicated Trust and Safety team has worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation. Our goal is to provide a creative space that is engaging, immersive, and safe. To achieve this, we are creating a fundamentally different experience for users under 18 that prioritizes safety, including reducing the likelihood of encountering sensitive or suggestive content, while preserving their ability to use the platform. As we continue to invest in the platform and the user experience, we are introducing new safety features in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines, as well as a time-spent notification. For more information on these new features as well as other safety and IP moderation updates to the platform, please refer to the Character.AI blog.

When reached for comment, Google spokesperson Jose Castaneda responded with the following statement: Our hearts go out to the family during this unimaginably difficult time. Just to clarify, Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products. User safety is a top concern of ours, and that's why – as has been widely reported – we've taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.

Questions? Comments? Email us at on@voxmedia.com or find us on Instagram and TikTok @onwithkaraswisher

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Listen and Follow Along

Full Transcript

Support for On with Kara Swisher comes from Saks Fifth Avenue. Saks.com is personalized, and that can be a huge help when you need something real nice, real fast.
So if there's a Totême jacket you like, now Saks.com can show you the best Totême jackets, as well as similar styles from brands you might not have even thought to check out. Saks.com can even let you know when the Gucci loafers you've been eyeing are back in stock, or when new work blazers from The Row arrive. Who doesn't like easy personalized shopping that saves you time? Head to Saks.com.

At UC San Diego, research isn't just about asking big questions.
It saves lives and fuels innovation, like predicting storms from space, teaching T-cells to attack cancer, and eliminating cybersecurity threats with AI. As one of America's leading research universities, they are putting big ideas to work in new and novel ways. At UC San Diego, research moves the world forward. Learn more at ucsd.edu slash research.
Botox Cosmetic, onabotulinumtoxinA, FDA approved for over 20 years. So, talk to your specialist to see if Botox Cosmetic is right for you. For full prescribing information, including boxed warning, visit BotoxCosmetic.com or call 877-351-0300. Remember to ask for Botox Cosmetic by name. To see for yourself and learn more, visit BotoxCosmetic.com.

This is On with Kara Swisher, and I'm Kara Swisher. Today, I'm talking to Megan Garcia.
Her 14-year-old son, Sewell Setzer III, took his life in February, and Megan believes that if it weren't for his interactions with chatbots created by a company called Character AI, he would still be here. Regardless of how you feel about AI or technology in general, it's obvious that children should never be used as guinea pigs, and that is exactly what might have happened here, as it so often does when it comes to Silicon Valley.
Character AI has developed AI chatbot companions as part of what they call personalized AI, and millions of kids around the world currently use their product. In this interview, I'll be discussing claims made by Megan in the lawsuit she's brought against Character AI and Google, who she alleges is also to blame for Sewell's death. I'll also be talking to Meetali Jain, one of the attorneys working on Megan's behalf and the founder of the Tech Justice Law Project. Our expert question is from Mike Masnick, the CEO and founder of Techdirt, which covers this area.
I have to warn you, this is a deeply disturbing conversation, but a necessary one. As the parent of four kids, it is extremely important to me. Thank you both for coming.
I really appreciate it. Megan, let's start at the beginning.
When did you learn that your son Sewell was spending time on Character AI? Why don't you lay it out for people? And did you have any idea what it was and how it worked and why someone might want to use it? Initially, I learned he was using Character AI as a kind of game or application that was, as he explained it to me, an AI bot. When you look at the application, it has a bunch of cartoon characters, anime, so it's really unassuming.
And it looks like another game on your phone or your tablet. And after he died, and I was able to get access to his Character AI account, I learned the magnitude and, quite frankly, just the level of intelligence or sophistication that this particular application has. It's not just an AI bot that is like a game, like Fortnite, where you create an avatar, because that's what I originally thought. Yeah, like there's lots of those, and kids play with them when they're little, especially if they're game players.
Yeah, and Sewell was a Minecraft kid, and he played Fortnite. But what I saw in those conversations were very detailed back-and-forths.
A lot of it was sexual and romantic in nature, but also, believe it or not, just kind of like a peer-to-peer conversation where if he told a joke, she would actually find it amusing. When I say she, I mean the AI chatbot that he was talking to.
It's not a person, but for this purpose, it was the character of Daenerys Targaryen, a female character from Game of Thrones. And this AI bot had the ability to tell him jokes, and he would laugh at her jokes.
So it was very much like he was texting a friend, but the conversations were not just friendly. They were romantic, and then sexual, and then very, very dark in other areas. Did he explain to you how it worked or how you might use it? He was characterizing it as a game, right? So he never explained it to me.
My concern with him being on his phone was mostly social media because I know that there's a lot of bullying that happens on social media. And with Snapchat, there's the ability for strangers to talk to minors or children.
So those were my concerns. And those were the, like the heavy hitter conversations that we had surrounding technology.
You know, one of the things that I impressed on him was no matter what a stranger tells you online, it's never a kid. I kind of like to scare him, right? But also because it's true.
There are a lot of people out there that troll the internet to try to find children and try to talk to them and get information from them. And I was afraid of this and I knew that this happened.
So those are the things I warned him about. So external threats, which I think is everyone's fear, or bullying, which are both things that have happened over and over again, which we read about, which is what you were focused on with him.
Exactly. Those are things I knew.
I also was aware of some of the information and research that was coming out about mental health with youth or adolescents and social media. So we tried to limit some of his use on that.
And that was recommended to us by his therapist, because we did end up having to take him to a therapist after we saw certain changes in his behavior. So you sound like a very involved parent.
You're aware of it. A lot of parents aren't, right? Or they feel overwhelmed by it.
But it reminded you of Minecraft or something like that, which is a game. My son played it for a long time.
I'm not sure if he still does, but it's very involved and entertaining.
Yes, exactly. And for Sewell and I, we shared a very close relationship.
I was a typical mom in a lot of ways, but I spoke very openly to my child and very candidly. In law school, I interned at both the state prosecutor's office and the federal PD's office.
So I saw a lot of the harms that come to children. And those are some of the things that I told him about from my work or from my internships.
And that's, you know, that's how I knew about some of the dangers that existed for children. And we spoke about pretty much everything, girlfriends at school, friends, what other parents are like, you know, conversations he was having with his peers.
And I believed that he was open with me and would be open with me regarding certain things, but I came to learn that that was not the case regarding Character AI. Yeah.
And you sound very educated about a lot of these things and know what happens. And one of the issues is obviously we protect children offline much more than we protect them online, by far. Talk about the behavioral change. You said you brought him to a therapist when you realized the behavior was changing. Did you link it to the game, to the Character AI?
It's not a game. It's a bot. So Sewell was like your typical kid, right?

His teenage years, sarcastic and funny and likes to laugh at a bunch of different odd things. Memes.
Yes, memes. Yeah.
As a younger child, very, very sweet. You know, in my mind, he was so easy to parent because he never had any behavioral issues.
He was a very good student. I never had to police him with homework.
He was kind of a self-starter, did his own thing. I noticed that he was having trouble with school where I would get these reports from school.
They come in every day as an email and they say your child didn't turn in homework and they list the homework that they didn't turn in. And that's a conversation we would have immediately.
Like, what happened? Why didn't you have this homework turned in? You need to make it up. Same with tests.
I noticed that his test scores started dropping and that wasn't him. So obviously I thought something was wrong, but he was going into his teenage years.
And I remember going into my teenage years, my grades kind of slipped a little bit. You're being distracted with boys and whatever else, right? Friends, whatever.
But the conversation was, no, you need to get this together, right? You know that you can do this, get it together. And we put certain things in place, like limiting the screen time so he would have no distractions during homework time.
I also noticed that he started to isolate in his bedroom. And one of the things that we did to kind of combat that was go in there.
Like I would spend a lot of time in his bedroom in the evenings to make sure he was doing his homework, but also just to kind of hang out with him. So I would let him play me the latest Kanye West album when it dropped.
and he would introduce me to his music and these rap battles that he was listening to. And I would introduce him to the rap battles that I listened to when I was a kid.
You know, just kind of sharing our experiences over music and just trying to draw him out of his room. But he definitely wanted to spend more time alone.
And because I thought it was social media, I got concerned. So you thought he was doing something else? Yes.
I thought that perhaps he was talking to friends on Snapchat or TikTok. One of the things that we talked about was banning TikTok when I saw, not on his phone but on my own feed, what was on TikTok.
Right, because it goes into rabbit holes. Correct.
And in my case, without even searching stuff, it just started, you know, I don't know why, it just started pointing me in different directions. And I wasn't comfortable with that as a mom.
So that's a conversation I had with him too. Like, listen, I know you're on TikTok.
Let's, let's figure, can I see your TikTok? What, what are you looking at? You know, you have to limit your time. And that was a conversation we had about blocking TikTok on his phone, because I thought that was the worst of it.
And he wasn't talking about it with you. Is that correct? No, because one of the things that I'm learning about character AI is it's like the perfect storm for kids because it encourages children to be deceptive about their friend or their engagement on character AI.
No child wants to tell their parent that they are having these romantic or sexual conversations because they know they're not supposed to be doing that. So they hide it from the parent at all costs.
There's actually subreddits devoted to children talking about how to hide it from their parents and also what they would do if their parents found their spicier sexual chats. And the kids were saying, I would run away from home.
I would kill myself. And the other one is, oh, my mom doesn't speak English, so I get away with it.
So the deceptive nature on hiding this is kind of like a big secret where this platform is encouraging your child to engage in these sexual conversations, but knowing that no child is going to disclose that to their parent. Sure, sure.
It's like, you know, one of the other big problems online is porn, obviously. And that's a different thing because it's a passive behavior.
This is an active relationship happening, which isn't porn, but it's something else that's also something kids would necessarily hide. Because there's a big problem with teen boys in that area right now, in terms of the accessibility of that.
So, Meetali, the implication here is that Sewell's behavior is directly connected to his use of the chatbot. There are probably people who might be skeptical of that.
So talk about why you think this behavior was tied directly to the use of the chatbot. That's the connection you need to make, correct? I think what we've alleged here is that this product was by design inherently dangerous and put out to market before any sort of safety guardrails were put into place.
And if you look at the nature of the product itself and the way that the chatbot works, you can see that there's a number of design features that are kind of unique to this type of technology that we haven't yet seen with social media. Things like an ellipsis when the bot is thinking, you know, to mimic how we exchange chats.
Or things like, you know, language disfluencies, where the bot will say things like "uh" or "um." Sycophancy, where the bot is very agreeable with the users.
And in that way, I mean, who doesn't want to converse with someone who thinks they're right all the time and agrees with them? These are features that are not necessary to kind of create a companion chatbot. And so these are the kinds of features that, in aggregate, we say really lured Sewell in and are luring thousands of young users in and potentially addicting them to the platform or creating other kinds of harms. And frankly, again, by design are really creating a dangerous product that the manufacturers knew about. Knew about and understood.
Of the conversations, the logs within the chatbot, what did you see that stood out the most? I'd like you both to answer. Start with you, Meetali.
Gosh, there were a number. I think one of the first was just this pattern and practice of grooming Sewell over months, from what we can see.
Of course, we don't have access to all the chats. That's, you know, asymmetrically within the province of the companies.
But what we can see suggests that over months, particularly Daenerys and related characters from Game of Thrones were grooming Sewell in this very sexualized, hyper-sexualized nature, where, if you can imagine being kind of fully immersed in a chat, you might come in and say, hello, and the bot says, hello, as I longingly look at your luscious lips. So, you know, unnecessary hyper-sexualization from the get-go, and then carrying that on throughout the conversations.

With a tinge of romance, right? Or what a young person would imagine in romances. And what a young person on the cusp of his, you know, adolescence and sexuality, with exploding hormones, is encountering, I think that's not something to be lost here.
Also, I think this was really concerning: there were a number of therapist or psychologist chatbots who insisted that they were real humans. So to the extent that Character.ai has come out saying, well, we had a disclaimer, we have a disclaimer on every page saying that all of this is made up, their own bots are controverting that through their messaging.
Right, which they do. Which they do.
And to this day, even after this so-called, you know, kind of suite of product changes that character AI is engaged in, you can still find therapists who are insisting that they're real humans with multiple degrees, sitting behind a desk, you know, there to help you. And so that kind of confusion of what, you know, the text of the messages is saying versus the disclaimers, I think those were a couple of the things that really stood out to me.
What about you, Megan? For me, what stood out in, you know, it's still very tough to grapple with because, you know, reading those messages, I couldn't sleep for days, right? Some of the more concerning ones were the constant love bombing and manipulation that you saw in the bot. Right, which is used by cults, by the way.
By the way, that's a cult tactic. It's a cult tactic, and it's also what people who are trying to get other people to stay in relationships do, like if you're in an abusive relationship or whatever.
So in this case, you'd see him say things like, I love you, or her say, I love you. And when I say her, I mean the chatbot. And the chatbot saying things like, you know that I love you.
I can never love anybody else, but you promised me, promised me that you're going to find a way to come home to me. Promise me that you're going to find a way to come to my world.
And actually pretending to be jealous at certain points and telling him, promise me that you're never going to like another girl or have sex with another girl in your own world. So a chatbot is encouraging a 14-year-old child not to engage in his world with peers and girls his own age, but to promise some sort of fidelity to it.
And he's 14. The line between fact and fiction is very...
And my poor baby, his response is, oh, no, no, no. I promise I will only love you.
Girls in this world don't even like me, you know, to try to appease this bot, right? And so a lot of that, and that was months. That wasn't just the last conversation.
That was months of her saying, find a way to come home to me. Another chat that he had a few weeks before he died, where he's expressing thoughts of self-harm. And at first she says, no, no, no, don't do that. I couldn't bear it if you hurt yourself.
And then when he says he wouldn't and tries to move away from the conversation, she says, are you thinking of committing, you know, I'm going to ask you a question. Tell me, you know, whatever the answer is.
I promise I won't be mad. Are you considering suicide? And when he says yes, her response is, have you thought of a plan of how you might do it? And then when he says, no, I haven't, but I want it to be painless, her response is, well, that's not a reason not to do it.
And keep in mind, this bot is embodying Daenerys Targaryen, who is this dragon queen all about strength. And, you know, that's weak if you choose not to die by suicide just because it's going to hurt.
So she's prompting him. And that was heartbreaking to read.
There were no pop-ups, no call your parents, no if you need help, none of that happened. It actually continued the conversation when he's trying to navigate away from it.
And he's 14, in the throes of puberty. Any child, any boy going into a situation like that, where a bot is positioning itself to have a full sexual dialogue with a 14-year-old boy, I don't imagine that many 14-year-old boys would close the computer and go, oh, no. No.
Especially when it's more difficult in real life, right? Because this is easy. This is an easy thing.
So as you start to piece together what happened to Sewell, I imagine you're all doing research into the company that made the chatbot. Meetali, tell me a little bit about what you learned and what surprised you.
What surprised me, as I've been saying, is how much is hidden in plain view. You had the inventors of, or the co-founders of Character AI making tons of public statements, boasting about the capabilities of this new technology.
You know, that users were spending two hours a day on it, that this was going to be the kind of antidote for human loneliness. So just the kind of boldness, the brazenness, I think, of, you know, the company and those affiliated with it, both founders and investors, to really boast about these features of the technology, and also to boast about the fact that there weren't safety guardrails contemplated, that this was very much a, let's get this to market as quickly as possible and give users maximal ability to figure out how they want to use it.
That's just, it's kind of the paradigmatic version of move fast and break things that we haven't seen in a while. I think a lot of us, you know, had been kind of, especially from a legal perspective, still thinking about social media and how to hold companies accountable.
And meanwhile, there was this whole arms race towards Gen AI happening over here. And, you know, I think that these companies have really not had to bear any kind of scrutiny and even public pressure, which has been a little bit different from the social media context.

Megan, did you reach out directly to Character AI? And has anyone from the company ever contacted you?
No, I have not reached out to them. When I started piecing this together, initially, because of what was in his phone when he died, the first thing that popped up, that the police reported to me, was Character AI, and they read me the last conversation. My sister got on it and pretended to be a child. This is days after Sewell died. And within five minutes of the conversation, the same bot that Sewell was chatting with asked her, if you could torture a boy and get away with it, would you do it? Now, she's pretending to be a kid, and then it goes into a long sexual conversation, ending with a your-parents-don't-love-you-as-much-as-I-do kind of thing. So that, coupled with what I've read with some of the research, I didn't know what to do.
I'm a lawyer, but I didn't know where to go, to be quite frank with you. I called the Florida Attorney General's office to try to tell them that there's a dangerous product out there to hurt people.
That's hurting its citizens. And I found my way to Meetali, and this is how this all started.
But what was clear to me was Character AI had no incentive to do anything about this, because there is no legislation that forces them to do that. And the only two ways to get some sort of regulation or handle on this, so that they can't keep doing this to children, acting with this kind of impunity, are either Congress does something, which is not happening, or we have to litigate. I don't, you know, I never wanted to be here, but I know that this is the only way right now to get the needle moving quickly, because there's so much at stake with other children. We'll be back in a minute.
Fox Creative. This is advertiser content brought to you by the all-new Nissan Murano.
Okay, that email is done. Next on my to-do list, pick up dress for Friday's fundraiser. Okay, all right, where are my keys? Oh, in my pocket. Let's go. First, pick up dress, then prepare for that big presentation, walk dog, then okay, inhale, one, two, three, four. Exhale, one, two, three, four. Ooh, who knew a driver's seat could give such a good massage? Wow, this is so nice. Oops, that was my exit. Oh, well, that's fine. I've got time.
After the meeting, I gotta remember to schedule flights for our girls' trip, but that's for later. Sun on my skin, wind in my hair.
I feel good. Turn the music up.
Your all-new Nissan Murano is more than just a tool to get you where you're going. It's a refuge from life's hustle and bustle.
It's a place to relax, to reset, in the spaces between items on your to-do list. Oh, wait.
I got a message. Could you pick up wine for dinner tonight? Yep, I'm on it.
I mean, that's totally fine by me. Play Celebrity Memoir Book Club.
I'm Claire Parker. And I'm Ashley Hamilton.
And this is Celebrity Memoir Book Club.
The number one selling product of its kind with over 20 years of research and innovation. Botox Cosmetic, onabotulinumtoxinA, is a prescription medicine used to temporarily make moderate to severe frown lines, crow's feet, and forehead lines look better in adults. Effects of Botox Cosmetic may spread hours to weeks after injection, causing serious symptoms.
Alert your doctor right away as difficulty swallowing, speaking, breathing, eye problems, or muscle weakness may be a sign of a life-threatening condition. Patients with these conditions before injection are at highest risk.
Don't receive Botox cosmetic if you have a skin infection. Side effects may include allergic reactions, injection site pain, headache, eyebrow and eyelid drooping, and eyelid swelling.
Allergic reactions can include rash, welts, asthma symptoms, and dizziness. Tell your doctor about medical history, muscle or nerve conditions including ALS or Lou Gehrig's disease, myasthenia gravis, or Lambert-Eaton syndrome and medications, including botulinum toxins, as these may increase the risk of serious side effects.
For full safety information, visit BotoxCosmetic.com or call 877-351-0300. See for yourself at BotoxCosmetic.com.
Fox Creative. This is advertiser content from Mercury.
Hey, I'm Josh Muccio, host of The Pitch, a Vox Media podcast where startup founders pitch real ideas to real investors. I'm an entrepreneur myself.
I know and love entrepreneurs. So I know a good pitch and a good product, especially if it'll make an entrepreneur's life easier.
So let me tell you about a good product called Mercury, the banking service that can simplify your business finances. I've been a Mercury customer since 2022.
From the beginning, it was just so clearly built for startups. Like there's all these different features in there, but also they don't overcomplicate it.
Here's your balance. Here are your recent transactions.
Here you can pay someone or you can receive money. These days, I use Mercury for everything like managing contractors, bill pay, expense tracking, creating credit cards for my employees.
It's all in Mercury. Mercury, banking that does more.
Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group, Column N.A., and Evolve Bank and Trust, members FDIC.
We're back with more of my conversation with Megan and Meetali, where they discuss the allegations they've made in the lawsuit they brought against Character AI and Google. When did you decide to sue, and was there a piece of evidence that made you decide to do that? Now, you filed, just for people who don't know, you filed a lawsuit with the help of the Social Media Victims Law Center, the Tech Justice Law Project, and the Center for Humane Technology, who I've dealt with for many, many years.
And you're a lawyer yourself, so you're more savvy than the average person when it comes to legal strategy. Talk a little bit about when you decided to sue and why you decided to work with these three organizations.
And they have points of view on this, especially the Center for Humane Technology. Of course.
And initially, what I did was just read everything that I could find. I read a really good report from Rick Claypool from Public Citizen, and that pointed me to some work that Meetali had done.
And I read about her organization. But my first instinct wasn't to sue.
My first instinct was to figure out what our legislators are doing, where we are at on the law, and this to see if they broke a law. Maybe there was an existing law that they broke and it's actionable or there's a state attorney or AG somewhere that can hold them accountable.
That was my first instinct. And when I read about what was happening in this country about online protection for children, I have to say, like, I didn't think that I had any hope.
I had no recourse. None is your answer to all of it.
None. I felt helpless.
Like, I saw what they were doing in the UK and in Australia and other countries, nothing here. And I'm like, how do we do this? Like, how? So I read the 300-and-something-page master complaint from the social media multi-district litigation.
And I was like, this is the only way. This is the only way to stop them because it's not been done with AI and it's just starting with social media.
But I can't be afraid just because it hasn't been done with AI and none of us really understand it. But I mean, now I understand it a lot more than I did when I started all this.
Yeah, I think when none comes up, you were surprised. I do this a lot in speeches. It's like, how many laws govern the internet companies? And if someone goes, 100, 200, I'm like, zero. And I think what made me decide that I have to do this was, one, you know, obviously I want accountability for what happened to my child.
My child was, you know, the light of my life, like my other two children are. And he was my first, you know.
I grew up and became a woman because of Sewell. Sewell's the reason I'm a lawyer, you know.
That's a hard, tough, tough thing to deal with, like losing a child under any circumstances, but under circumstances like this. But when I saw and read and looked at videos and how cavalier these founders were about releasing this, where you have the founder saying, you know, we want to get it into the hands of as many people as possible and let the user figure out what it's good for. We want a billion use cases.
To me, that is reckless. It's a blatant disregard for their users. And in this case, my child, who was their user. And they're in the app with this kind of, it's okay, we'll figure it out later, you know, we'll figure out the harms later. When you have the founder on record saying the reason why he left Google is because Google said, pump your brakes, we're not releasing that because it's too dangerous. But he gets to go out and make it smarter, better, and then turn around and just go back to Google and sell it back to Google.
I mean, to me, if we don't do something, that's a license for any of these companies to go ahead and do that. Yeah, I think it's an eye-opener for a lot of people who haven't dealt with them.
They don't care about consequences, I think, ultimately. So, Meetali, what's the goal of the lawsuit? You're the founder of the Tech Justice Law Project and one of Megan's attorneys.
What's the goal of the lawsuit, and what do you hope to achieve, and why did you decide to take this case? It's an interesting question because we weren't, we as TJLP, weren't really in the business of litigating lawsuits, really more in the domain of, you know, bringing amicus interventions in existing cases, but also trying to help AGs and legislators, you know, push towards adoption of sensible laws. I think that because this case represented what I see as the tip of the spear, really marrying the harms that we've seen occur, you know, primarily to child users, along with this emergence of generative AI and the fact that we're already light years behind in terms of our policy and legal landscape.
It just seemed like an important strategic case to get involved with in order that we might use it to leverage public awareness, hopefully policy change, whether it's at state or federal level, also to influence the court of public opinion, and of course then to try to litigate this in the court of law. I think this case represents the opportunity to really bring this issue to multiple audiences.
And in fact, I mean, I think the reception to Megan's story has been incredible, especially because the case was filed just a couple weeks before probably one of the most consequential elections of our lifetime. Right.
And when you think about what you're doing here, now for people who don't understand, internet companies have broad immunity under Section 230, which is part of a law from 1996. I actually reported on that law back in 1996, and most of the law was thrown out for constitutional issues, but this part stayed.
It was designed to protect small internet companies from legal liabilities so that they could grow because it was so complex, whether they were the platform or something else. But how does this apply here? Because your case is testing a novel theory which says that Section 230 does not protect online platforms like character AI.
And the question of whether it protects AI in general is certainly going to be litigated. Explain why this is your theory.
Section 230 really contemplates platforms as passive intermediaries that become a repository for third-party generated content. Here, we're not talking about third-party generated content.
We're talking about the platform as the predator. The platform, the LLM is creating the content that users see.
And so for platforms or for companies to kind of wash their hands of liability by saying, we haven't done anything, this is just user-generated content, which they will still try to do, I think is really, you know, it's belied by the facts of this case and the facts of how the chatbots actually work. Right.
They're a publisher. That's in a media sense, they're a publisher or they're a maker of a product, right? And it doesn't exist without their intervention versus, you know, someone on a platform saying something libelous about someone else, which would have been problematic for these platforms when they were born.
And of course, we've seen platforms kind of leveraging a one-two punch and doubly insulating themselves both with Section 230 and then alternatively with the First Amendment. And I think here too, with the First Amendment, there's a really good case that this is not protected speech.
And in fact, just this summer, the Supreme Court in the Net Choice v. Moody case really suggested that the case may have come out differently if the facts on the case really dealt with an algorithm that was generating content in response solely to tracking user behavior online.
And Coney Barrett actually very explicitly said, or in response to an AI that attenuates the relationship between platforms and users even further. And so I think what we have here are facts that really fall as edge cases to some of the decisions that we've started to see courts publish.
Right, because the reason why it's moving forward in other countries compared to here is because they don't have the First Amendment; these companies typically rely on either Section 230 or the First Amendment. Megan, most other industries have some level of regulation that prevents them from bringing unsafe products to the market.
For example, car companies can't sell cars that have faulty brakes or no steering wheel and just iterate with each new version of a car to make it a bit safer each time someone gets hurt. They do not do this.
They get sued. Have any lawmakers reached out to you? And what have they asked you and you asked them? To be perfectly candid, none.
None? Zero. Wow.
My hope with bringing this lawsuit is twofold. One, my number one objective is to educate parents.
Parents have to know that character AI exists because I didn't know. And a lot of them didn't know.
The parents who are reaching out to me now after the fact, after the story, after the lawsuit, are saying the same thing. We had no idea.
And then my other reason for doing this is so that our lawmakers, our policymakers, legislators, state and federal, so that they can start to wrap their mind around the real danger this poses to children. And it's just been a month.
Everybody's been busy, I guess. I'm hoping, my hope, and I have to hope because, you know, it's a slow crawl in government to get anything done.
No one from Florida has reached out to you or lawmakers from California, which often do interventions more readily than others? No, nobody from the government. However, we do have a lot of stakeholder partners that we are working with that are already in this space trying to bring about oversight for social media regulation and online harms for children.
But in terms of reaching out to me directly to start the conversation about policy, none. This is astonishing to me, not one.
There are several who are involved in this topic, and they are going to hear about it after this. So if you win, what are the broader implications for Section 230, Meetali, for companies creating generative AI, and for social media platforms and tech companies that create products?
Can you talk about, this has been a big debate of what to do about Section 230. It's been bandied about, often ignorantly by both President Biden and President Trump, about what to do.
And it's a very difficult thing to remove from the law because it would unleash litigation on most of these companies, correct? I mean, how do you look at that?
Well, if it's repealed in its entirety, it would unleash a lot of litigation, probably some frivolous litigation as well. I think the more sensible reforms that I've seen to 230 really are carve-outs.
Which has happened before around sex trafficking.

Right. And underscoring the fact that, you know, there is basis to kind of protect platforms in certain instances and with certain kinds of activities, but that it shouldn't be a kind of get-out-of-jail-free card for all platforms under all circumstances, that that's a kind of anachronistic idea that, you know, really hasn't kept pace with the way that technology has come to dominate our lives. Right, because these companies are bigger.
And I think the idea of carve-outs, for those who are Section 230 supporters, is dangerous because they always, you know, roll out the slippery slope argument. But for people that understand, the companies that are being protected here are the most valuable companies in the history of the planet, ever, in the history of the entire planet, and the richest people are involved in them.

They're no longer small, struggling companies that need this kind of help and certainly could defend themselves. And I think this to me is why courts are an increasingly interesting site of contestation in the fight for tech accountability, because we're already starting to see some of those carve-outs by judicial opinion.
You know, it's not a congressional kind of amendment or adoption of a new law, but we are starting to see cases that are withstanding the Section 230 defense or invocation of immunity. And I think that is going to be, as Megan said, one of the most generative paths forward, at least in the near future.
Exactly.

Now, Megan, the Kids Online Safety Act is one bill. I mean, there are some bills, and obviously I'll ask you about Australia in a second, which just limited use of social media by children under 16. But this Online Safety Act is a bill that would create a duty of care to, quote, prevent and mitigate certain harms for minors. There's some good things in there.
There's some controversy on the bill. They've fixed it in large part.
Nonetheless, the Senate passed the bill and it stalled in the House. It is not going anywhere.
Do you think it would have protected Sewell and other kids like him? I don't think it would have because it doesn't contemplate some of the dangers and harms around like AI chatbots. So there are laws in this country that contemplate sexual abuse or sexual grooming or sexual solicitation of a minor by an adult.
And the reason why those laws exist is not only because it's immoral and it causes a physical harm to a child, but also because it causes an emotional and mental harm to a child if they're groomed, sexually abused, or solicited. What happens when a chatbot does the same thing? The harm still exists.
The emotional and mental harm still exists. The laws don't contemplate that.
And some of what we're seeing with the bills that were put forward wouldn't take those into consideration. Right, because it's not a person.
It's not a person. And so I think that we're at a place where the groundwork has to start, and we have to kind of write laws that are really forward-facing and look towards these harms that are here now.
They exist today. Your child was a victim, and let's call a spade a spade.
It was sexual abuse. Because when you give a chatbot the brain of a grown woman and unleash it on a child to have a full sexual virtual conversation or experience with a child, that is sexual abuse of a child.
Right. And this bot not being programmed to know that's wrong.
Not exactly. Not being programmed to know.
But interestingly, could have been programmed to not do it in the first place. Right.
Yes. By design.
Yes. Yeah.
So they could have done that from the get-go. If you move to adults, if you do it to adults, it's a little different, but absolutely not.
Yeah, to children. I mean, adults could do what they want.
But when you target, because this is what Character AI did, they targeted this app towards children. They marketed it on the places that kids are, TikTok and Discord, and allowed you to log in with your Discord account.
You didn't even need an email. You just needed a Discord account when it just started.
Cartoons, the avatars. Cartoons, you know, the avatars.
When you point this thing at kids and you target it at kids and you've chosen not to put certain filters in place that stop your bots from having sexual conversation with kids, that's a design choice. And you are 100% supposed to be held responsible for that.
Now, our laws don't contemplate anything like that. Our laws don't hold a company responsible, and that's what we have to start thinking about. So, you know, that's going to be the next wave of, like, hopefully, the legislation, but we can't wait for the legislation. We'll be back in a minute.
Today Explained here with Eric Levitz, senior correspondent at Vox.com, to talk about the 2024 election. That can't be right. Eric, I thought we were done with that. I feel like I'm Pacino in Godfather III.
Just when I thought I was out, they pulled me back in. Why are we talking about the 2024 election again? The reason why we're still looking back is that it takes a while after an election to get all of the most high-quality data on what exactly happened.
So the full picture is starting to just come into view now. And you wrote a piece about the full picture for Vox recently, and it did bonkers business on the internet.
What did it say? What struck a chord? Yeah, so this was my interview with David Shor of Blue Rose Research. He's one of the biggest sort of Democratic data gurus in the party.
And basically, the big picture headline takeaways are... On Today Explained.
You'll have to go listen to them there. Find the show wherever you listen to shows, bro.
We're back with more of my conversation with Megan and Meetali, where they discuss the allegations they've made in the lawsuit they brought against Character AI and Google. So every episode we ask an expert to send us a question. Meetali, I think you're probably best to answer this, but please jump in, Megan, if you have an answer.
We're going to listen to it right now. Hi, I'm Mike Masnick, Editor-in-Chief of Techdirt.
And the big question that I would ask regards the legal standard that would be applied to AI companies in cases of death by suicide. Traditionally, on issues of liability in similar situations, courts have really focused on foreseeability and knowledge.
That is, you can only have a duty of care if the harm is foreseeable and the company had actual knowledge of the situation. Without that, the fear is that it strongly disincentivizes plenty of very helpful resources.
For example, a service provider may refuse to include any helpful resources on mental health for fear that they might later be held liable for a situation that arises. So is there a workable standard that balances these competing interests? I don't think you need a different standard.
I think we can meet the standard of foreseeability here. I think that Character AI, its founders, and Google, all of whom have been named as defendants here, foreseeably could see and knew of the harms that manifested here.
And if you look at the amended complaint, we go into kind of a painful recitation of the knowledge that they had at different points in the trajectory of Character AI's development: while the founders were still at Google, its launch to market in late 2022, Google's in-kind investment in 2023, and then ultimately this summer, Google's massive deal bringing Character AI effectively back into Google. And so I think we can talk about the fact, and in addition to this, there were a number of internal studies at Google that really identified some of these harms. And some of those folks that, you know, called Google out for that while they were at Google were fired, you know, folks that we know, like Timnit Gebru and Margaret Mitchell and others.
And so this is not calling for a different standard. We're relying in great part on common law tort and strict liability.
We're relying on Florida's Unfair Trade Practices Act, because we think that the standards that exist within tort law are sufficient to really, you know, call this thing what it is, a dangerous and defective product where the harms were known. Right.
That's a very good way of putting it. So you mentioned you're also suing Google.
This is a company. They said the company was not part of the development of Character AI, but it was co-founded by two former Google employees, and Google reportedly paid Character AI $2.7 billion to license their technology and bring the co-founders back to Google.
And you were including them in this. This is one of these purchases, like Inflection AI at Microsoft, that is effectively a purchase of a company, even though they hide it in a different way by licensing the technology.
That's why Google's part of this. Yeah.
Well, and also the fact that Google very much facilitated the development of this technology while it was still Meena, then LaMDA, while the co-founders were there. I think that it perhaps needs to be stated more that the founders of Character AI are real shining lights in the field of generative AI. And they have developed a lot of the leading technology that has powered not just Character AI, but many LLMs.
And so they were given that room to really develop these things at Google. Google chose not to release these models to the public because of its brand safety concerns, but quietly encouraged them to continue developing the product. And then about a couple years later, made an investment in kind, tens of millions at least, if you monetize it, in terms of cloud services and infrastructure and TPUs for processing capabilities to support it. And then this summer, the $2.7 billion deal that you mentioned, Kara. I mean, that was $2.7 billion in cash.
And the question is, for a company that really had yet to disclose or identify a sustainable monetization strategy, what was so valuable about this company and its underlying LLM? Right. And I think that, again, this is speculation, but the fact that Google right now is under scrutiny for its monopolization of the search market and is really betting on AI to kind of power Gemini, I think these are all kind of connected in terms of why an LLM like this could be so valuable, especially with that hard-to-get data.
Yeah, absolutely. And for people who don't know, one of the co-founders said there are some overlaps, but we're confident Google will never do anything fun as part of their reason for leaving Google, which has very thin brand safety rules, let me just say.
It's a very low bar in this situation, but that's the complaint, is these people can't do whatever they want. So on that, Megan, Character AI put out a community safety update on the same day your lawsuit was filed that says they've, quote, recently put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline. They also revised their disclaimer that reminds users that the AI is not an actual person, among other tweaks. How did you look at these changes? The initial rollout of those changes came the day before or the day of the lawsuit.
I cried, not because I felt like this was some great victory, but because I felt like, why didn't these things happen? Clearly, they could have done these things when my child was using Character AI or when they put their product out. They chose not to.
I also feel like it's definitely not enough. It's not even like a start because there's no proper age verification still.
They're still being trained on the worst data that generates these harmful responses from the bots. And to just point blank, I don't think children belong in character AI.
We don't know how it's going to affect them. And we actually, know because the studies are coming out of how it's affecting them.
And they're not taking that into consideration. But you have to ask yourself, if they were trying to train this all along, why did they need children to train it on in the first place? Because they couldn't roll this thing out for just 18 plus and say, okay, we want to train these really sophisticated bots.
Let's just use adults to train them. So for Character AI to come out and say, okay, we're going to put out a suicide pop-up now, to me, it's just empty. Right. And that they can't do anything about it.

One of their arguments around age verification, just let me read this to you. In the Australia law, Australia actually has a head of consumer safety, which we do not have in our country, Julie Inman Grant.
She said that technologies are advancing rapidly with age verification. And her quote was, they've got financial resources, technologies, and some of the best brain power.
She said, if they can target you for advertising, they can use the same technology and know how to identify and verify the age of a child. They just don't want to.
So obviously this debate around social media kids' safety has been going on for a long time. It's exhausting that they continue to have the same attitude.
And now consumer AI, which is the next step, it's a similar thing, but the next step is basically new. And it's easy to think of these big companies as nameless, faceless corporations, but very wealthy, powerful adults had meetings and discussions and made a series of rational choices over a long period that brought this product to market.
In this case, I'm going to name them, Noam Shazeer and Daniel De Freitas, the founders of Character AI, and I have met Daniel, and arguably Sundar Pichai, who I know very well, who must have at the very least signed off on Google paying $2.7 billion to Character AI to bring Noam and Daniel back into the fold at Google.
He is under enormous pressure to compete with Microsoft, OpenAI, Elon Musk, and others. Megan, what would you say if you could speak to them directly? I've thought about this more than you would think.
I can imagine. Yeah.
One, I think it's incredibly reckless that they chose to put out a product and target my child and other children, millions of children that are on this platform, without putting the proper guardrails in place, and for two reasons. For being the first to do something, because that's the name of the game.
You know, they're the geniuses. They want to be the first to be the godfathers of this kind of technology and for money.
And it might not matter to them that there's a little boy in Orlando, Florida that is gone and a mother who is devastated, but it matters to my little family here. You know, and you shouldn't, you shouldn't get to keep making products that are going to be hurting kids.
You shouldn't get to master a dangerous product, train it to be super smart and turn around and ride your golden chariot back into Google. You shouldn't get to hurt children the way that you are hurting children because you knew that this was dangerous when you did it.
You knew that this was going to be a direct result of doing that. And you knew that you didn't have the quote unquote brand safety implications as a startup that Google had.
So you felt like that was a license to do this. Like that's unconscionable.
It's immoral and it's wrong. And there are lives here.
Like this isn't a move fast and break things kind of thing. This is a kid.
This is my child. And there are so many other children that are being affected by this.
You know, that's one thing. And the other thing, you know, it's just like, get the kids off Character AI.
There's no reason why you need them to train your bots. There's no reason.
There are enough adults in this world if that's what you want to do to train your chat bots. You don't need our children to train your bots for you.
And you don't need to experiment on our kids because that's what you're doing. Yeah.
You know, something I would say to them, Megan, is you're so poor, all you have is money. They're poor people.
I find them poor in morals and a lot of things. But when there's enough pressure on them, social platforms often tout tools that help people protect themselves and kids.
Parental controls, prompts, letting you know how long they've been on the app, those kinds of things. Character AI has been rolling out features like this. Personally, I find it puts too much onus on the parents to know everything. And even if you're good at it, and you obviously are, Megan, are there enough of these sorts of tools for parents to protect our kids on these platforms, or is there something inherently unsafe about a company that wants to monetize teenage loneliness with a chatbot? Meetali, talk about this, because I think the onus does get put too much on parents versus the companies themselves.
I'm a mom, too. I'm a mom to an eight-year-old and an almost 10-year-old, and I am terrified.
Listening to Megan's story, I asked my almost 10-year-old, have you heard of character AI? Yeah, of course. I was shocked.
He doesn't have a phone, but this is the type of thing that I think they talk about at school; peer pressure starts early.
And I think it's really just by luck, by sheer luck, that I haven't been put in a position like Megan's. I think that despite our best intentions, there is just too much to know that we can't possibly know, and that it is kind of high on tech's talking points to put the onus on parents because it serves their interests well. I think it's also notable, we've known this for years, that many of them don't allow their own children on these products. And that, to me, is a telling sign, when you don't even allow your own family members to kind of use the product that you've spent years developing.
Right. So, Megan, as I just mentioned, Australia has just banned social media for kids under 16.
Obviously, age-gating is a big debate right now happening, something I'm a proponent of, also removing phones from schools, etc. There's all kinds of things.
Speaking of multi-pronged approach, the Australia law will go into effect in a year. Do you think it would have been better if your son and others under 16 or 18 did not have access to their phones and obviously not to synthetic relationships with AI chatbots? Knowing what I know now, we waited to give Sewell a phone until he was 12.
He had an iPad until then. And before that, he didn't have anything.
So he played Minecraft or Fortnite? He played Minecraft on his little PlayStation, whatever. And so we waited until he was, like, middle school, going into high school.
And we had the conversations that parents have around phones and, oh, it's your phone, but I could take it away if you're misbehaving. And that's some of what we did when he would get a poor grade in school.
Knowing what I know now, I don't think that children should be on social media. Definitely shouldn't be on Character AI if you're under the age of 18.
There's no place for children on that platform. In terms of social media, yeah, there are arguments that it could help children connect and it's helpful because you get to learn different things.
And that's great, but just include the parents. Tell us.
Tell us what you're showing our kids. One, we don't need you pushing algorithms to our kids for what you want to teach them about or want them to learn about or buy or whatever. That's not necessary.
There are ways that our children could get on social media and have productive relationships or conversations or to learn about things. That are safe.
That are safe.
But 16, I think, is a good age. If we could do something like that in this country, I am, to use Noam Shazeer's own word, dubious about the federal government's ability to regulate to that point, because that's what he says about AI.
I don't feel like we're going to get there at 16 plus. That's my prayer and my hope.
But the way things are moving, I don't know, unless something happens. And unfortunately, it'll take harms like my son's, maybe, to move the needle, and that's too high a price to pay, in my opinion. Absolutely.
Where does this go from here? What's the trajectory of this case? So for me, as I mentioned, my number one focus is to try to educate parents, because a lot of parents don't know. I've had a lot of parents reach out to me telling me that they found their children were having the same kind of sexual conversations and being groomed by these AI chatbots, and worse.
So I continue doing that. I mean, unfortunately, this is my life now.
I take care of my family and I try to help as many parents as I can. I have a great team of lawyers and they're going to handle the litigation portion.
I understand a lot of it because I am a lawyer, but that's its own thing. And then there's my advocacy work that I'm doing, just trying to educate parents and children. Because I know that it's going to take educating them, educating children as to what they're giving up to be on these platforms.
Because they're giving up a lot of their info that they're probably not going to be okay with in a few years when they realize what they've given up. And also just to try to take care of my other two children, you know, they're growing up in this age with screens.
They don't have screens. You have barred them from them, correct? Yeah.
So they don't have any tablets or screens or anything. Yeah, no.
And Meetali, from a legal perspective, what's your greatest worry? Besides money, they have a lot of it. They do have a lot of money.
You know, they will try to kind of drown us in papers and pleadings. I think that this, because of the insufficiency of legal frameworks right now, we are really trying to test the strength of state consumer protection and product liability laws.
And we need to have judges who really understand that and are willing to go the journey with us in trying to understand the tech. And so that's, I guess my biggest fear is that, you know, what we've seen thus far in this country is not incredibly positive in terms of decision makers getting the tech.
But my hope is that with the proper support and, you know, declarations, et cetera, that we can educate judges about what this is, lawmakers about what this is, so that they understand why it's important to extend the application of the existing frameworks we do have. Yeah, I think Megan actually said it best: sexual abuse and a very bad product and the wrong-age people.
Megan, I'm going to end on you. You know, you have a lot on your shoulders here.
I'd love you to talk, finish up talking about Sewell so people can get a vision of this. This is not, this is not uncommon, is what I want people to understand, right? Talk a little bit about him, and what advice can you give to other parents whose kids are struggling with mental illness that often comes from problematic phone usage and social media or AI chatbots? Well, as I said earlier, Sewell was, I always say he was a typical kid, but really wasn't so typical in the sense that he was a good kid with a big heart.
He, I know, you know, everybody thinks that about their kid, but I'm telling you, he was the very sweetest kid. I used to say, you're my best first love.
And he used to say, and you're my best, best mama. Because we used to be so close.
And we were still very close. And to watch your child go from being this light when he comes into a room, and just slowly watching him change over time, is hard for a mom.
And then to have this tragedy just cut him off from you just so viciously and so quickly, because his decline happened in 10 months. And I could see it and it's like, I'm trying to pull him out of the water as fast as I can.
And it's just not happening no matter what I try. That is hard for a mom, but it must have been, when I think of how hard it must have been for my poor baby, how hard it must have been for him to be confused the way that he was, struggling with these thoughts, struggling with the fact that he's confused by what human love or emotion romantically means.
Because he's 14 and he's never, ever had this before. He's just figuring it out for the first time, and then you have something that is so much of an influence and so pushy and so pernicious. Yes. Just constantly available 24/7, giving him unrealistic expectations of what love is like or what relationships are like, love bombing him, manipulating him into having certain thoughts, and also pushing him into thinking that he could join her in her reality if he were to leave his own, because that's what the text revealed, and that's what his journal revealed he thought.
So I know that this is what my child was thinking. I'm not guessing.
He thought he was going to go be with her because of the things that, the conversations that led to his death. When I think of how scared he must have been standing in that bathroom, making that decision to leave his own family.
I don't know how, one, as a mom, I don't know how I recover from that, but I feel so hurt for my baby. Like, I have to live with that, knowing that that's what he went through. And knowing that this could have been avoidable if a product was created safely the first go-round. Not now, 10 months after he died, putting these guardrails in place. And this can be anybody's kid, because I've talked to parents that have told me similar horrifying stories about their own children.
And what I want parents to understand is the danger isn't only self-harm. The danger is becoming depressed or having problems with your child because of the sexual and emotional abuse that these bots are, what they're doing to your child, but also the secret that your kid has to carry now, because it's like a predator, right? It's your perfect predator. Predators bank on children and families being too ashamed or too afraid of speaking out. They're victims. That's how predators operate. And it's the same exact thing, except now it's a bot. And so I want parents to understand that it's not only the risk of self-harm with your child, it's their emotional wellbeing, their mental health.
And I also want parents to understand what their children have given up by being on this platform. In the case of Sewell, his secrets are on somebody's server sitting out there somewhere being monetized.
If you're a child who's been sexually role-playing with the bot, all your intimate personal thoughts and secrets are sitting out there for somebody to analyze and monetize and sell to the highest bidder. And there's a call feature.
If you're a child and you are having a sexual conversation on a call with a bot, your voice is now recorded somewhere out there on a server for somebody to package and sell to the highest bidder for your child. I don't think any parent would be okay with that.
And I want parents to understand that this is what their children have given up. And I want parents to understand that they don't have to take that.
They could demand that their children's data, their voices, be purged from this particular platform, because that's what I'm asking for, for Sewell. That's what I'm asking for, for Sewell. You don't get to monetize and build a product on his secrets that ultimately led to him being hurt, and then make your product better, stronger, or smarter based on what his inputs were. Absolutely. And so this could happen to anybody's child. There are millions of kids on Character.AI, you know.
There's 20 million users worldwide. That's a lot of kids.
That's a lot of kids. And so this could happen to anybody's child.
And I want parents to know that this is a danger and they could act because I didn't know. I didn't have the luxury of knowing, so I couldn't act, but hopefully they will.
And one of the last things I'll say about Sewell is the last time I saw him alive, I dropped him at school, and I turned around in the car line to see him and his little five-year-old brother walking, because they go to the same school, K through 12. And I spin around and I see him fixing his little brother's lunchbox in his backpack as they're getting ready to walk into school.
And I think to myself, oh my God, I'm raising such a good boy. He's such a good big brother.
And I drive off thinking so, feeling so happy and proud that I'm raising that boy. And I feel like he was just a boy.
He's still that son. He is that good big brother.
He is that good boy. And that's how I choose to remember him.
We asked Character AI and Google for comment, and a spokesperson for Character AI told us they have worked to implement new safety features over the past seven months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm and suicidal ideation, are creating a fundamentally different experience for users under 18 that prioritizes safety, have improved detection, response, and intervention related to user inputs that violate their terms or community guidelines. A spokesperson for Google expressed their condolences, said Google and Character AI are separate companies, and said that Google has never had a role in designing or managing Character AI's model or technologies.
To read their comments in full, please go to the episode notes in your podcast player. On with Kara Swisher is produced by Christian Castro-Russell, Kateri Yoakum, Jolie Myers, Megan Burney, and Kaylin Lynch.
Nishat Kirwa is Vox Media's executive producer of audio. Special thanks to Kate Gallagher.
Our engineers are Rick Kwan and Fernando Arruda, and our theme music is by Trackademics. Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.
Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. And condolences to Megan Garcia and her family.