Good Robot #2: Everything is not awesome

When a robot does bad things, who is responsible? A group of technologists sounds the alarm about the ways AI is already harming us today. Are their concerns being taken seriously?
This is the second episode of our new four-part series about the stories shaping the future of AI.
Good Robot was made in partnership with Vox’s Future Perfect team. Episodes will be released on Wednesdays and Saturdays over the next two weeks.
For show transcripts, go to vox.com/unxtranscripts
For more, go to vox.com/unexplainable
And please email us! unexplainable@vox.com
We read every email.
Support Unexplainable by becoming a Vox Member today: vox.com/members
Learn more about your ad choices. Visit podcastchoices.com/adchoices


Runtime: 53m

Transcript

Speaker 1 I need a job with a steady paycheck.

Speaker 3 I need a job that offers health care on day one for me and my kids.

Speaker 3 I want a job where I can get certified in technical roles, like robotics or software engineering.

Speaker 4 In communities across the country, hourly Amazon employees earn an average of over $23 an hour with opportunities to grow their skills and their paycheck by enrolling in free skills training programs and apprenticeships.

Speaker 4 Learn more at aboutamazon.com.

Speaker 5 We all have moments where we could have done better, like cutting your own hair,

Speaker 5 yikes, or forgetting sunscreen, so now you look like a tomato.

Speaker 6 Ouch.

Speaker 9 Could have done better.

Speaker 5 Same goes for where you invest.

Speaker 4 Level up and invest smarter with Schwab.

Speaker 5 Get market insights, education, and human help when you need it. Learn more at schwab.com.

Speaker 7 It's Unexplainable. I'm Noam Hasenfeld, and this is the second part of our newest four-part series, Good Robot.
If you haven't listened to episode one, let me just stop you right here.

Speaker 7 Go back in your feed, check out the first one. We'll be waiting right here when you get back.
Once you're all ready and caught up, here is episode two of Good Robot from Julia Longoria.

Speaker 11 You have cat hair on your nose, by the way. I've been like trying not to pay attention to it, but I think you got it off.
Yeah,

Speaker 10 sorry.

Speaker 10 Cool. So, should we get into it?

Speaker 11 Sure, yeah. Let me.

Speaker 11 It helps me to kind of remember everything I'm going to say if I can sort of jot down thoughts as I go.

Speaker 10 Do you have enough paper? I think I don't have paper on it. All right, I'll do it on my.
I saw that they had ramped.

Speaker 10 This past fall, I traveled paperless to a library just outside Seattle to meet with this woman.

Speaker 11 I feel like the library should have paper.

Speaker 12 know I that English.

Speaker 10 Her name is Dr. Margaret Mitchell.

Speaker 11 Found a brochure on making a robot puppet.

Speaker 10 What is it?

Speaker 11 What is the. I don't know.
It looks like it's an event. Build a robot puppet using a variety of materials with puppeteer.
I'm so into that. Aw, it's too bad that it's only for ages six to twelve.

Speaker 10 While she is over the age limit to make a robot puppet with the children in the public library, Dr. Mitchell is a bit of a robot puppeteer in her own right.
What's your AI researcher origin story?

Speaker 10 Like, how did you get into all of this? What drew you here?

Speaker 11 Yeah,

Speaker 11 what inspired me to... So, I mean, I guess I can, it's sort of like, do you want the long version or the short version?

Speaker 10 Dr. Mitchell is an AI research scientist, and she was one of the first people working on language models.
Well before ChatGPT, and, well, all the GPTs, she's an OG in the field.

Speaker 11 So I'll tell you, like, I'll tell you a story, if that's okay. Yeah.
Okay, so I was at Microsoft and I was working on the ability of a system to tell a story given a sequence of images.

Speaker 11 So given five images.

Speaker 10 This was about 2013.

Speaker 10 She was working on a brand new technology at the time, what AI researchers called vision to language.

Speaker 11 So, you know, translating images into descriptions.

Speaker 10 She would spend her days showing image after image to an AI system.

Speaker 10 To me, it sounded kind of like a parent showing picture flashcards to a toddler learning to speak. She says it's not anything like that.

Speaker 10 She showed the model images of events like a wedding, a soccer match, and on the more grim side.

Speaker 11 I gave the system a series of images about a big blast that left 30 people wounded called the Hempstead blast.

Speaker 11 It was at a factory, and you could see from the sequence of images that the person taking the photo had like a third-story view, sort of overlooking the explosion.

Speaker 11 So it was a series of pictures showing that there was this terrible explosion happening, and whoever was taking the photo was very close to the scene.

Speaker 11 So I put these images through my system, and the system says,

Speaker 10 Wow,

Speaker 11 this is a great view.

Speaker 10 This is awesome.

Speaker 10 And I was like, oh crap, that is the wrong response to this.

Speaker 11 So it sees this horrible, perhaps mortally wounding explosion and decides it's awesome.

Speaker 10 Kind of like a parent watching their precious toddler say something kind of creepy, Mitchell watched in horror and with a deep fascination about where she went wrong, as the AI system that she had trained called images awesome again and again.

Speaker 11 It said it quite a lot, so we called it the everything is awesome problem, actually.

Speaker 10 Her robot was having these kinds of translation errors.

Speaker 10 Errors that to the uninitiated made it seem like the AI system might want to kill people or at least gleefully observe their destruction and call it awesome.

Speaker 10 What would the consequences of that be if that system was deployed out into the world, reveling in human destruction?

Speaker 11 It's like, if this system were connected to a bunch of missile systems, then it's, you know, it's just a jump and skip away to just launch missile systems in the pursuit of the aesthetic of beauty, right?

Speaker 10 Years before the AI boom we're living through, when neural networks and deep learning were just beginning to show promise, researchers like Dr.

Speaker 10 Mitchell and others were experiencing these uncanny moments where the AIs they were training seemed to do something seriously wrong.

Speaker 10 Doing scary things their creators did not intend for them to do, things that seemed threatening to humanity.

Speaker 11 So I was like one of the first people doing these systems where you could scan the world and have descriptions of it. I was like on the forefront.

Speaker 11 I was one of the first people making these systems go.

Speaker 11 And I realized like if anyone is going to be paying attention to it right now,

Speaker 11 it has to be me.

Speaker 10 I had heard the fears of rationalists, also pioneers in thinking about AI,

Speaker 10 that we might build a super intelligent AI that could go rogue and destroy humanity.

Speaker 10 At first glance, it seemed like Dr. Mitchell might be building one such robot.
But when Dr.

Speaker 10 Mitchell investigated the question of why the good robot she sought to build seemed to turn bad, the answer would not lead her to believe what the rationalists did: that a super intelligent AI could someday deceive or destroy humanity.

Speaker 10 To Dr. Mitchell,

Speaker 10 the answer was looking at her in a mirror.

Speaker 10 This is episode two of Good Robot, a series about AI from Unexplainable in collaboration with Future Perfect. I'm Julia Longoria.

Speaker 7 Support for Unexplainable comes from Life Kit.

Speaker 7 Some people think being human comes naturally. For me, it definitely doesn't.
Fitness routines, personal goals, burnout, life is hard. We can all use a little help, right?

Speaker 7 That's what the LifeKit podcast from NPR is here to do. LifeKit delivers strategies to help you make meaningful, sustainable change.

Speaker 7 LifeKit's got real stories, relevant insights, and clear takeaways to help you meet big moments with confidence and clarity.

Speaker 7 They've got thoughtful conversations about the big stuff, like relationships, finances, parenting, your career.

Speaker 7 And they provide actionable guidance you can use right now so you can walk away with a game plan.

Speaker 7 Life doesn't come with a manual, but LifeKit can help you understand how to live a little better starting now. Listen to the LifeKit podcast from NPR.

Speaker 3 At blinds.com, it's not just about window treatments. It's about you, your style, your space, your way.

Speaker 3 Whether you DIY or want the pros to handle it all, you'll have the confidence of knowing it's done right.

Speaker 3 From free expert design help to our 100% satisfaction guarantee, everything we do is made to fit your life and your windows. Because at blinds.com, the only thing we treat better than windows is you.

Speaker 3 Visit blinds.com now for up to 50% off with minimum purchase plus a professional measure at no cost.

Speaker 15 Rules and restrictions apply.

Speaker 2 Tito's handmade vodka is America's favorite vodka for a reason.

Speaker 2 From the first legal distillery in Texas, Tito's is six times distilled till it's just right and naturally gluten-free, making it a high-quality spirit that mixes with just about anything, from the smoothest martinis to the best Bloody Marys.

Speaker 2 Tito's is known for giving back, teaming up with non-profits to serve its communities and do good for dogs.

Speaker 2 Make your next cocktail with Tito's, distilled and bottled by Fifth Generation Inc., Austin, Texas, 40% alcohol by volume, savor responsibly.

Speaker 16 On a scale of one to ten, how would you rate your pain? It would not equal one one-billionth of the hate I feel for humans at this micro-instant.

Speaker 10 I kind of want to start with a bit of a basic question of when you were young, what did you want to do when you grew up?

Speaker 17 I wanted to be everything.

Speaker 17 I wanted to be a pole vaulter. I wanted to be a skateboarder.

Speaker 10 Dr. Joy Buolamwini's robot researcher origin story goes back to when she was a little kid.

Speaker 17 I had a very strict media diet. I could only watch PBS.

Speaker 17 And I remember watching one of the science shows and they were at MIT and there was a graduate student there who was working on a social robot named Kismet.

Speaker 18 I know Kismet. You gonna talk to me? Yay.

Speaker 10 Kismet was a robot created at MIT's AI Lab.

Speaker 18 Oh, God, did he say he loves me?

Speaker 17 And Kismet had these big, expressive eyes and ears and could emote or appear to emote in certain ways, and I was just absolutely captivated.

Speaker 10 She watched, glued to the screen, as the researchers demonstrated how they teach Kismet to be a good robot. No,

Speaker 19 no,

Speaker 10 you're not to do that. The researchers likened themselves to parents.

Speaker 18 You know, as parents, when we exaggerate the prosody of our voice, like, oh, good baby, you know, or our facial expressions and our gestures.

Speaker 17 So when I saw Kismet, I told myself I wanted to be a robotics engineer and I wanted to go to MIT. I didn't know there were requirements.

Speaker 17 I just knew that it seemed really fascinating and I wanted to be a part of creating the future.

Speaker 10 Thanks to Kismet, she went on to build robots of her own at MIT as an adult. She went for her PhD in 2015.
This was just a few years after Dr.

Speaker 10 Margaret Mitchell had accidentally trained her robot to call scenes of human destruction awesome.

Speaker 17 My first year,

Speaker 17 my supervisor at the time

Speaker 17 encouraged me to just take a class for fun.

Speaker 10 For her fun class that fall, Dr. Joy, as she now prefers to be called, set out to play.

Speaker 10 She wanted to create almost a digital costume.

Speaker 17 If I put a digital mask, so something like a lion, it would appear that my face looks like a lion.

Speaker 10 What Dr. Joy set out to do is something we can now all do on the iPhone or apps like Instagram or TikTok.
Kids love to do this.

Speaker 10 You can turn your face into a hippo face or an octopus face that talks when you talk, or you can make it look like you're wearing ridiculous makeup.

Speaker 10 These digital face masks were still relatively uncommon in 2015.

Speaker 17 So I went online and I found some code that would actually let me track the location of my face.

Speaker 10 She'd put her face in front of a webcam and the tech would tell her, this is a face, by showing a little green square box around it.

Speaker 17 And as I was testing out this software that was meant to detect my face and then track it, it actually wasn't detecting my face that consistently.

Speaker 10 She kept putting her face in front of the webcam to no avail. No green box.

Speaker 17 And I'm frustrated because I can't do this cool effect so that I can look like a lion or Serena Williams.

Speaker 10 I have problems.

Speaker 10 The AIs Dr. Joy was using from places like Microsoft and Google had gotten rave reviews.

Speaker 10 They were supposed to use deep learning, having been trained on millions of faces, to very accurately recognize a face.

Speaker 10 But for her, these systems couldn't even accomplish the very first step to say whether her face was a face at all.

Speaker 17 And I'm like, well, can it detect any face?

Speaker 10 Dr. Joy looked around her desk.

Speaker 10 She happened to have an all-white masquerade mask lying around from a night out with friends.

Speaker 17 So I reached for the white mask. It was in arm's length.
And before I even put the white mask all the way over my dark-skinned face,

Speaker 17 the box saying that a face was detected appeared.

Speaker 17 I'm thinking, oh my goodness, I'm at the epicenter of innovation and I'm literally coding in whiteface.

Speaker 17 It felt like a movie scene, you know, but that was kind of the moment where I was thinking, wait a second, like what's even going on here?

Speaker 10 What is even going on here?

Speaker 10 Why couldn't facial recognition AI detect Dr. Joy's dark skin? For that matter, why did Dr.
Mitchell's AI call human destruction awesome?

Speaker 10 These AI scientists wanted the robot to do one thing, and if they didn't know any better, they might think the AI had gone rogue, developed a mind of its own, and done something different.

Speaker 10 Were AIs racist? Were they terrorists plotting human destruction? But Dr. Margaret Mitchell understood why it was happening.
She knew exactly what was going on.

Speaker 10 She had been the one to develop Microsoft's image-to-text language model from the ground up.

Speaker 10 She had been on the team figuring out what body of data to feed the model, to train it on in the first place.

Speaker 10 Even though it was creepy, it was immediately clear to her why the AI wasn't doing what she wanted it to do.

Speaker 11 It's because it was trained on images that people take and share online.

Speaker 10 Dr. Mitchell had trained the AI on photos and captions uploaded to the website Flickr.
Do you remember Flickr? I was the prime age for Flickr when it came out in 2004.

Speaker 10 This was around the time that Jack Johnson released the song Banana Pancakes, and that really was the vibe of Flickr. There's no denying it.
I can see the receipts on my old account.

Speaker 10 I favorited a close-up image of a ladybug, an artsy black-and-white image of piano keys, and an image titled Pacific Sunset.

Speaker 11 People tend to take pictures of like sunsets.

Speaker 10 Actually, I favorited a lot of sunsets. Another one, sunset at the Rio Negro.

Speaker 11 So it had learned, the system had learned from the training data I had given it that if it sees like purples and pinks in the sky,

Speaker 11 it's beautiful. If it's looking down, it's a great view.

Speaker 11 That when we are taking pictures, we like to say it's awesome. Apparently, on Flickr images, people use the word awesome to describe their images quite a lot.

Speaker 11 But that was a bias in the training data.

Speaker 10 The training data, again, being photos and captions uploaded by a bunch of random people on Flickr. And Flickr had a bias toward awesome photos, not sad photos.

Speaker 11 The training data wasn't capturing the realities of like human mortality. And, you know, that makes sense, right? Like, when's the last time you like took a bunch of selfies at a funeral?

Speaker 11 I mean, it's not the kind of thing we tend to share online. And so it's not the kind of thing that we tend to get in training data for AI systems.

Speaker 11 And so it's not the kind of thing that AI systems tend to learn.

Speaker 10 What she was discovering was that these AI systems that use the revolutionary new technology of deep learning, they were only as good as the data they were trained on.

Speaker 11 So it sees this horrible, perhaps mortally wounding situation and decides it's awesome. And I realize like this is a type of bias and nobody is paying attention to that.

Speaker 11 I guess I have to pay attention to that.

Speaker 10 Dr. Mitchell had a message for technologists.
Beware of what you train your AI systems on. Right.
What are you letting your kid watch?

Speaker 11 Yeah, I mean, it's a similar thing, right? Like, you don't want your kid to, I don't know, hit people or something. So you don't like let them watch lots of shows of people hitting one another.

Speaker 10 Dr. Joy Buolamwini, coding in whiteface, suspected she was facing a similar problem: not an everything-is-awesome problem, but an everyone-is-white problem in the training data.

Speaker 10 She tested her face and the faces of other black women on various facial recognition systems, you know, different online demos from a number of companies: Google, Microsoft, others. She found they weren't just bad at recognizing her face. They were bad at recognizing famous black women's faces.

Speaker 10 Amazon's AI labeled Oprah Winfrey as male.

Speaker 10 And the most baffling thing for Dr. Joy was the dissonance between the terrible accuracy she was seeing and the rave reviews the tech was getting.

Speaker 10 Facebook's DeepFace, for instance, claimed 97% accuracy, which is definitely not what Dr. Joy was seeing.

Speaker 10 So Dr. Joy looked into who these companies were testing their models on.

Speaker 17 They were around 70 or over 70 percent men.

Speaker 10 People thought these AIs were doing really well at recognizing faces because they were largely being tested with the faces of lighter skinned men.

Speaker 17 These are what I started calling pale male data sets, because the pale male data sets were destined to fail the rest of society.

Speaker 10 It's not hard to jump to the life-threatening implications here, like self-driving cars. They need to identify the humans so they won't hit them.
Dr.

Speaker 10 Joy published her findings in a paper called Gender Shades.

Speaker 17 Welcome, welcome to the fifth anniversary celebration of the Gender Shades paper.

Speaker 10 The paper had a big impact.

Speaker 17 As you see from the newspapers that I have, this is Gender Shades in the New York Times.

Speaker 10 The fallout caused various companies, Microsoft, IBM, Amazon, who'd been raving about the accuracy of their systems, to at least temporarily stop selling their facial recognition AI products.

Speaker 17 I'm honored to be here with my sister, Dr. Timnit Gebru, who co-authored the paper with me.

Speaker 10 Dr. Timnit Gebru was Dr.
Joy's mentor and co-author on the paper.

Speaker 21 This is the only paper I think I've worked on where it's 100% black women authors, right?

Speaker 10 Dr.

Speaker 10 Gebru had worked from her post leading Google's AI ethics team to help pressure Amazon to stop selling facial recognition AI to police departments because police were misidentifying suspects with the technology.

Speaker 9 I got arrested for something that had nothing to do with me and I wasn't even in the vicinity of the crime when it happened.

Speaker 10 One person they helped was a man named Robert Williams. Police had confused him for another black man using facial recognition AI.

Speaker 9 It's just that the way the technology is set up,

Speaker 9 everybody with a driver's license or a state ID is essentially in a photo lineup.

Speaker 10 They arrested him in front of his wife and two young daughters.

Speaker 9 Me and my family, we are happy to be recognized because it shows that there is a group of people out here who do care about other people.

Speaker 10 Hey. How you doing? Good.

Speaker 14 Can you just say

Speaker 14 what you're standing in front of?

Speaker 10 Yeah.

Speaker 22 I'm standing in front of a poster which talks about how we can better identify racial disparities in automated decisions when there's not...

Speaker 10 Producer Gabrielle Berbey traveled to a conference in San Jose full of researchers inspired by the work of Dr. Joy, Dr. Gebru, and Dr. Mitchell.

Speaker 23 So I just presented a paper about how data protection and privacy laws enable companies to target and manipulate individuals.

Speaker 10 Unlike the rationalist festival-conference thing, which felt like a college reunion of mostly white dudes, this one felt more like a science fair, a pretty diverse one.

Speaker 10 Lots of people of color, lots of women, with big science-y poster boards lining the walls.

Speaker 10 "I'm standing in front of my poster, which spans language technologies and AI and how those perform for minority populations." They were presenting on ways AI worries them today,

Speaker 10 not some hypothetical risk in the future.

Speaker 19 There are real harms happening right now, from autonomous exploding drones in Ukraine to bias and unfairness in decision-making systems.

Speaker 14 And who did you co-author the paper with?

Speaker 24 This was a collaboration with lots of researchers. Dr.
Mitchell was one of them.

Speaker 10 Many of them knew Dr. Mitchell, Dr.
Gabrew, and Dr. Joy.
Dr. Mitchell even worked with a couple researchers here on their project.

Speaker 24 So she led the project. She offered so, so much amazing guidance, I should say.

Speaker 10 Many researchers were mentored by them. We got the sense that they're kind of founding mother figures of this field.
A field that really started to blossom, we were told, around 2020.

Speaker 10 A big year of cultural reckoning.

Speaker 25 A big inflection point was in 2020 when people really started reflecting on how racism goes unnoticed in their day-to-day lives.

Speaker 25 I think until BLM happened, these issues were almost considered woke and not something that was really real.

Speaker 10 2020 was the year the pandemic began, the year Black Lives Matter protests erupted around the country.

Speaker 10 AI researchers were also raising the alarm that year on how AI was disproportionately harming people of color. Dr.
Gebru and Dr.

Speaker 10 Mitchell, in particular, were working together at Google on this issue. They built a whole team there that studied how biased training data leads to biased AI models.

Speaker 25 Tim Nit and Meg were the visionaries at Google who were building that team.

Speaker 10 2020 was also the year that OpenAI released GPT-3.

Speaker 10 And Dr. Gebru and Dr.
Mitchell, both at Google at the time, were concerned about a model that was so big, it was trained on basically the entire internet. Here's Dr.
Mitchell again.

Speaker 11 A lot of training data used for language models comes from Reddit.

Speaker 11 And Reddit has been shown to have a tendency to be sort of misogynistic and also Islamophobic. And so that means that the language models will then pick up those views.

Speaker 10 Dr.

Speaker 10 Mitchell's concern was that these GPT large language models trained on a lot of the internet were too large, too large to account for all the bias in the internet, too large to understand, and so large that the compute power it took to keep these things going was a drain on the environment.

Speaker 10 Dr. Gebru, Dr.
Mitchell, and other colleagues put it all in a paper and tried to publish it while working at Google.

Speaker 10 I've kind of been wanting to talk to you ever since I saw your name signed, Shmargaret Shmitchell.

Speaker 10 When I first read this paper, the thing that immediately stood out to me was the way Margaret Mitchell had signed her name: Shmargaret Shmitchell.
Where did that come from?

Speaker 11 Well, so I wrote a paper with a bunch of other co-authors that Google ended up having some issues with. And they asked us to take our names off of the paper.

Speaker 11 So we complied, and that's, uh, you know, that's what I have to say about that.

Speaker 26 The first time I heard Dr. Mitchell and Dr. Gebru's names was in the news: last week, Google fired one of its most senior AI researchers who was working on a major artificial intelligence project within Google.

Speaker 10 Google said their paper ignored relevant research, research that made AIs look less damaging to the environment, for instance.

Speaker 10 Dr. Gebru refused to take her name off the paper, and Google accepted her resignation before she officially submitted it.

Speaker 11 We decided that the verb for that would be resignated.

Speaker 10 Eh?

Speaker 11 Resignated.

Speaker 15 And now, Margaret Mitchell, the other co-lead of Google's ethical AI team, said she had been fired.

Speaker 10 Google later apologized internally for how the whole thing was handled, but not for their dismissal. We reached out to Google for comment, but got no response.

Speaker 25 And that firing really brought it in focus.

Speaker 25 And people were like, oh, this horrible thing just happened. Everywhere around the world is seeing protests.

Speaker 25 And now this company is firing two leading researchers who work on that very exact problem, which AI is making worse. You know, like, how dare they?

Speaker 25 So that, from my POV, that was, yes, basically the clarion call.

Speaker 10 The clarion call.

Speaker 10 It was heard well beyond the world of AI. I remember hearing it.

Speaker 10 When the world had screeched to a halt from the pandemic and protests for racial justice had erupted around the country, I remember hearing headlines about how algorithms were not solving society's problems.

Speaker 10 In some cases, AI systems were making injustice worse.

Speaker 10 And there was a brief moment back then when it felt like maybe

Speaker 10 things could be different.

Speaker 10 Maybe things would change.

Speaker 10 And then, a couple years later, a group of very powerful tech executives got together to try to change things in the AI world.

Speaker 28 This morning, a warning from Elon Musk and other tech industry experts.

Speaker 10 It wasn't necessarily the people you'd think would want to change the status quo.

Speaker 10 Like Elon Musk and other big names in tech, like Apple co-founder Steve Wozniak.

Speaker 10 They all signed a letter with a clear and urgent title, Pause Giant AI Experiments.

Speaker 29 More than 1,300 tech industry leaders, researchers, and others are now asking for a pause in the development of artificial intelligence to consider the risks.

Speaker 28 Musk and hundreds of influential names are calling for a pause in experiments, saying AI poses a dramatic risk to society.

Speaker 10 Unless there's a lot of people who are.

Speaker 10 The letter called on AI labs to immediately pause developing large AI systems for at least six months, an urging to press the big red button that stops the missile launch before it's too late.

Speaker 10 I scrolled through the list of names of people who signed the letter, and I didn't see Dr. Joy or Dr.
Mitchell or any of the rationalists I talked to who were worried about risks in the future.

Speaker 10 Which...

Speaker 10 Logically, didn't make sense to me. Isn't a pause in line with what they all wanted? For people to build the robots more carefully? Why wouldn't they want a pause?

Speaker 10 An answer to this pause puzzle right after this next pause for an ad break. We'll be right back.

Speaker 27 There is a lot to talk about when we talk about Donald Trump and Jimmy Kimmel. One big question I've got is why in 2025 are late night TV shows like Jimmy Kimmel's show still on TV?

Speaker 30 Even in our diminished times, Jimmy Kimmel, Stephen Colbert, they're just some of the biggest faces of their networks.

Speaker 30 If you start taking the biggest faces off your networks, you might save some nickels and dimes, but what are you even anymore? What even is your brand anymore?

Speaker 27 I'm Peter Kafka, the host of Channels. And that was James Poniewozik, the TV critic for the New York Times.

Speaker 27 And this week, we're talking about Trump and Kimmel, free speech, and a TV format that's remained surprisingly durable for now.

Speaker 27 That's this week on Channels, wherever you get your favorite podcasts.

Speaker 16 Absolute honesty isn't always the most diplomatic nor the safest form of communication with emotional beings.

Speaker 10 Okay.

Speaker 2 Only this can solidify the health and prosperity of future human society.

Speaker 31 But the individual human mind is unpredictable.

Speaker 10 Could I ask you to

Speaker 10 introduce yourself?

Speaker 1 Sure. So I'm Sigal Samuel.
I'm a senior reporter at Vox's Future Perfect.

Speaker 10 I called my coworker Sigal about midway through my journey down the AI rabbit hole. How did you get interested in AI?

Speaker 1 So it's kind of funny. Before I was an AI reporter, I was a religion reporter.
A few years ago, little bits and pieces started coming out about internment camps in China for Uyghur Muslims.

Speaker 1 And in the course of that, I started becoming really interested in and alarmed by how China is using AI.

Speaker 10 Fascinating.

Speaker 1 Yeah.

Speaker 1 Mass surveillance of the population, particularly of the Muslim population. So I was coming from a place of being pretty anchored in freaky things that are not at all far off in the future or hypothetical, but that are very much happening in the here and now.

Speaker 10 I was honestly thrilled to hear that Sigal, like me, came to AI as a bit of a normie.

Speaker 1 Sort of being thrust into the AI world. At first it was like pretty confusing

Speaker 33 because you have a variety of.

Speaker 10 I can highly relate to that feeling.

Speaker 10 But the longer she spent there in the world of AI,

Speaker 10 she started to get an uncanny feeling.

Speaker 19 Like,

Speaker 10 haven't I been here before?

Speaker 1 Have you ever noticed that the more you listen to Silicon Valley people talking about AI, the more you start to hear echoes of religion?

Speaker 10 Yes. The religious vibes immediately stuck out to me.
First, there's the talk from CEOs of building super intelligent God AI.

Speaker 33 And they're going to build this artificial general intelligence that will guarantee us human salvation if it goes well, but it'll guarantee doom if it goes badly.

Speaker 10 And another parallel to religion is the way different denominations have formed almost around beliefs in AI. Sigal encountered the same groups I did at the start of my journey.

Speaker 6 I started hearing about people like Eliezer Yudkowsky.

Speaker 10 What do you want the world to know in terms of AI? Everyone will die.

Speaker 15 This is bad.

Speaker 22 We should not do it.

Speaker 10 Eliezer, whose blog convinced rationalists and people like Elon Musk that there could be a super intelligent AI that could cause an apocalypse.

Speaker 34 So our side of things is often referred to as AI safety. We sometimes refer to it as AI not kill everyoneism.

Speaker 10 So there's the AI safety people

Speaker 10 and then there's a whole other group. the AI ethics people.

Speaker 11 People like Margaret Mitchell, we called it the everything is awesome problem.

Speaker 1 Joy Buolamwini.

Speaker 17 I wasn't just concerned about faces. I was concerned about the whole endeavor of deep learning.

Speaker 1 Timnit Gebru.

Speaker 21 People would be like, you're talking about racism? No, thank you. You can't publish that here.

Speaker 10 These women did not talk about a super intelligent god AI or an AI apocalypse.

Speaker 1 Slowly, slowly, they kind of come to be known as like the AI ethics camp as distinct from the AI safety camp, which is more like the Eliezer Yudkowsky, a lot of us are based in the Bay Area, we're worried about existential risk, that kind of thing.

Speaker 10 AI safety and AI ethics?

Speaker 8 I don't know who came up with these terms.

Speaker 1 You know, it's just like Twitter vibes.

Speaker 10 To me, these two groups of people seemed to have a lot in common. It seemed like the apocalypse people hadn't yet fleshed out how exactly AI could cause catastrophe.

Speaker 10 And people like Margaret Mitchell, the AI ethics people, were just providing the plot points that lead us to apocalypse.

Speaker 11 I could lay out how it would happen. Part of what got me into AI ethics was seeing that a system would think that massive explosions were beautiful, right? That's like an existential threat.

Speaker 11 You have to actually work through how you get to the sort of horrible existential situations in order to figure out how you avoid them.

Speaker 10 It seemed logical that AI ethicists like Margaret Mitchell and the AI safety people would be natural allies to avoid catastrophic scenarios.

Speaker 11 And how you avoid them is like listening to what the ethics people are saying. They're doing the right thing.
We, I, you know, I'm trying to do the right thing anyway.

Speaker 10 But it quickly became clear they are not allies.

Speaker 1 Yeah, there is beef between the AI ethics camp and the AI safety camp.

Speaker 10 My journey down the AI rabbit hole was full of the noise of infighting. The noise crescendoed when Elon Musk called for a pause in building large AI systems.

Speaker 10 It seemed like worriers of all stripes could get behind a pause in building AI.

Speaker 10 But no, AI safety people and AI ethics people were all against it. It was like a big Martin Luther 95 theses moment, if you will.
Everyone felt the need to pen their own letter.

Speaker 28 Musk and others are asking developers to stop the training of AI systems so that safety protocols can be established.

Speaker 10 In his letter, Elon Musk's stated reason for wanting a pause was that AI systems were getting too good.

Speaker 10 He had left the ChatGPT company he helped create and decided to sue them, publicly saying that they had breached their founding agreement on safety.

Speaker 1 The concern they have is that as you,

Speaker 1 well, it's the concern, but it's also the exciting thing.

Speaker 1 The view is that, you know, as these large language models grow and become more sophisticated and complex, you start to see emergent properties.

Speaker 1 So, yeah, at first it's just gobbling up a bunch of text off the internet and predicting the next token and just like statistically trying to guess what comes next.

Speaker 1 And it doesn't really understand what's going on, but give it enough time and give it enough data. And you start to see it doing things that like

Speaker 1 make it seem like there's some higher level understanding going on.

Speaker 10 Like maybe there's some reasoning going on, like when ChatGPT seems like it's reasoning through an essay prompt, or when people talk to a robotherapist AI system and feel like it's really understanding their problems.

Speaker 35 The rate of change of technology is incredibly fast.

Speaker 35 It is outpacing our ability to understand it.

Speaker 10 Elon Musk's stated fear of AI seems to be rooted in rationalist fears, based on the premise that these machines are beginning to understand us. And they're getting smarter than us.

Speaker 10 We are losing the ability to understand them.

Speaker 35 What do you do with a situation like that? I'm not sure.

Speaker 35 I hope they're nice.

Speaker 10 Rationalist founder Eliezer Yudkowsky shares this fear, but he wants to do more than just pause and hope they're nice.

Speaker 10 He penned his own letter, an op-ed in Time magazine responding to Elon Musk's call for a pause, saying it didn't go far enough. Eliezer didn't just want to pause.

Speaker 10 He wanted to stop all large AI experiments indefinitely, even in his own words, by airstrike on rogue AI labs.

Speaker 10 To him, the pause letter vastly understated the dangerous, catastrophic power of AI.

Speaker 10 And then there's the AI ethicists. They also penned their own letter in response to the pause letter.
But the ethicists wrote it for a different reason.

Speaker 10 It wasn't because they thought Elon Musk was understating the power of AI systems. They thought he was vastly overstating it.

Speaker 35 Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype.

Speaker 31 I'm Emily M. Bender, professor of linguistics at the University of Washington.

Speaker 10 One of the people who responded to the pause was AI ethicist Dr. Emily Bender.

Speaker 10 She co-hosts a podcast called Mystery AI Hype Theater 3000, which, as you might imagine, is about the overstated, hyped-up risk of AI systems.

Speaker 31 And each time we think we've reached peak AI hype, the summit of bullshit mountain, we discover there's worse to come.

Speaker 10 The summit of bullshit mountain, she keeps cresting.

Speaker 10 For her, it's the mountain of many, many claims that artificial intelligence systems are so smart, they can understand us, like the way humans understand.

Speaker 10 And maybe even more than that, like a god can understand.

Speaker 32 I found myself in interminable arguments with people online about how no, it doesn't understand.

Speaker 10 So Emily Bender and a colleague decided to come up with something to try and help people sort this out.

Speaker 10 Something that AI safety folks and AI ethics folks both seem to be fond of. And that is a parable or a thought experiment.

Speaker 10 In Dr. Bender's thought experiment, the AI is not a paperclip maximizer.
The AI is

Speaker 10 an octopus.

Speaker 10 Go with her on this.

Speaker 32 So the octopus thought experiment goes like this. You have two speakers of English.
They are stranded on two separate nearby desert islands that happen to be connected by a telegraph cable.

Speaker 10 Two people stranded on separate desert islands communicate with each other through the telegraph cable in Morse code with dots and dashes.

Speaker 10 Then, suddenly, a super intelligent octopus shows up.

Speaker 32 The octopus wraps its tentacle around that cable, and it feels the dots and dashes going by.

Speaker 10 It observes the dots and dashes for a while. You might say it trained itself on the dots and dashes.

Speaker 32 We posit this octopus to be mischievous as well.

Speaker 10 I'm on the edge of my seat.

Speaker 33 So one day it cuts the cable.

Speaker 32 Maybe it uses a broken shell, and devises a way to send dots and dashes of its own. So it receives the dots and dashes from one of the English speakers and it sends dots and dashes back.

Speaker 32 But of course, it has no idea what the English words are that those dots and dashes correspond to, much less what those English words mean.

Speaker 32 So this works for a while, the English speakers.

Speaker 10 At one point, one human says to the other via Morse code, what a lovely sunset.

Speaker 32 And the octopus, hyper-intelligent, right, has kept track of all of the patterns so far. It sends back the dots and dashes that correspond to something like, Yes, reminds me of lava lamps.

Speaker 10 The deep-sea octopus does not know what a lava lamp is.

Speaker 32 But that's the kind of thing that the other English speaker might have sent back.

Speaker 10 Not really sure why these castaways are waxing poetic about lava lamps in particular, but anyway, for our purposes, the octopus is like an AI.

Speaker 10 Even if it's super intelligent, whatever that means, it doesn't understand.

Speaker 10 Dr. Bender's trying to say, to ChatGPT, human words are just dots and dashes.

Speaker 32 And then finally, we end the story, because it's a thought experiment, when we can do things like this,

Speaker 32 with a bear showing up on the island. And the English speaker says, help.

Speaker 32 I'm being chased by a bear. All I have is this stick.
What should I do?

Speaker 32 And that's the point where if the speaker survives, they're surely going to know they're not actually talking to their friend from the other island.

Speaker 32 And we actually put that line into GPT-2: help, I'm being chased by a bear. And we got out things like, you're not going to get away with this.

Speaker 10 Super helpful.

Speaker 10 Well.

Speaker 10 I got to say, I'm into this one.

Speaker 10 The idea that AI systems only see human words as dots and dashes, I find that deeply comforting.

Speaker 10 Because I don't know about you all, but for me, one of the scary things about AI is the idea that it could get better than me at my job.

Speaker 10 A fear that's very present when OpenAI is actively training its models on my work. Their system might understand my work,
understand the things that make it good when it's good.

Speaker 10 It might get good at doing what I do. And poof,

Speaker 10 I'm obsolete.

Speaker 10 There's also a recurring dream I have that various villains, including the Chinese government for some reason, clone my voice to deceive my loved ones.

Speaker 10 Anyway, if it's all just dots and dashes that these things understand,

Speaker 10 it seems clear we shouldn't be trusting these AI systems to be journalists, or lawyers, or doctors.
It relates to what Dr. Margaret Mitchell and Dr.
Joy Buolamwini found in their research.

Speaker 10 AI systems are only as good as the data they're trained on. They can't understand or truly create something new like humans can.

Speaker 11 It's easy to sort of anthropomorphize these systems, but it's useful to recognize that these are probabilistic systems that repeat back what they have been exposed to, and then they parrot them back out again.

Speaker 10 Another way to put it is AI systems are like parrots.

Speaker 11 Parrots parrot, right?

Speaker 11 Famously, parrots are known for parroting.

Speaker 10 If you hear your pet parrot say a curse word, you only have yourself to blame.

Speaker 10 Dr. Mitchell joined Dr.
Bender in the response to Elon Musk's pause, along with Dr. Timnit Gebru.

Speaker 10 They had all written the paper together that ended up getting Dr. Mitchell fired from Google.

Speaker 10 These ethicists wrote that they agreed with some of the recommendations Elon Musk and his pause posse had made, like that we should watermark AI-generated media to be able to distinguish synthetic from human-generated stuff, which sounds like a great idea to me.

Speaker 10 But they wrote that the agreements they had were overshadowed by their distaste for fear-mongering and AI hype.

Speaker 10 They wrote that the pause, and fears of a super intelligent AI...

Speaker 35 What do you do with a situation like that?

Speaker 35 I'm not sure.

Speaker 35 You know,

Speaker 35 I hope they're nice.

Speaker 10 To these AI ethics folks, it all reeked of AI hype.

Speaker 32 It makes no sense at all. And on top of that, it's an enormous distraction from the actual harms that are already being done in the name of AI.

Speaker 10 This is the main beef that AI ethics people have with AI safety people. They say the fear of an AI apocalypse is a distraction from current-day harms.

Speaker 20 Like, you know, look over there, Terminator. Don't look over here, racism.

Speaker 14 You know, there are different groups of concerns. You have the concern.

Speaker 10 At the AI ethics conference that producer Gabrielle Berbey attended, she mentioned the concern of an AI apocalypse.

Speaker 14 And then you have these concerns about more existential risks. And I'm curious what you make of that.

Speaker 14 You're going no. Can I ask why you're going no?

Speaker 10 No.

Speaker 14 She's shaking her head.

Speaker 10 And it felt almost taboo.

Speaker 10 A lot of hand-wringing around that question.

Speaker 10 Eventually, one of the women spoke up: "I have some perspectives on it."

Speaker 23 Sharing her perspectives.

Speaker 10 She talked about how she thinks the demographics of the groups play a role in the way they worry about different things.

Speaker 36 Most of them are like white, male.

Speaker 10 AI safety folks are largely pale and male, to borrow Dr. Joy's line.

Speaker 36 They may not really understand discrimination that other people kind of go through in their day-to-day lives.

Speaker 36 and I think the social isolation from those problems makes it a bit harder to empathize with the actual challenges that people actually face every day.

Speaker 10 Her point was it's easy for AI safety people to be distracted from the harms happening now because it's a blind spot for them.

Speaker 10 At the same time, AI safety people told me that AI ethics people have a blind spot. They're not worrying enough about apocalypse.

Speaker 10 But why would it be taboo to say all of this on mic?

Speaker 10 Part of the reason might be because the fear of apocalypse has come to overpower any other concern in the larger industry.

Speaker 20 One thing that I think is interesting is that a lot of the

Speaker 20 narrative that we hear about how AI is going to save the world and it's going to solve all these problems and it's amazing and it's going to change everything. And then we get the narratives about,

Speaker 23 oh oh my gosh, it could destroy humanity in 10 years,

Speaker 20 often coming from the same people. I think part of the reason for that is that either way, it makes

Speaker 20 AI seem more powerful than it certainly is right now.

Speaker 20 And, you know, who knows when we're going to get to the humanity-destroying stuff. But in the meantime, if it's that powerful, it's probably going to make a whole lot of money.

Speaker 10 Building a super intelligent AI has become a multi-billion dollar business. And the people running it are not ethicists.

Speaker 10 Just weeks before Elon Musk called for the pause, he had started a new AI company.

Speaker 10 Yeah, I guess it's kind of counterintuitive, right, to see this. And you're like, wait, why would the people working on the technology who stand to profit from it want to pause?

Speaker 10 Right.

Speaker 32 I can't speak for them, but it benefits them to, on the one hand, get everybody else to slow down while they're doing whatever they're doing.

Speaker 10 Octopus thought experiment author Dr. Emily Bender again.

Speaker 32 But also, it benefits them to market the technology as super powerful in that way, and it definitely benefits them to distract the policymakers from the harms that they are doing.

Speaker 10 It'd be nice to think that billionaire Elon Musk was calling for an industry-wide pause in building large AI systems for all the right reasons. A pause that never came to be, by the way.

Speaker 10 It's worth pointing out that when the billionaire took over Twitter and turned it into X, one of the first things he did was fire the ethics team.

Speaker 10 And even though Elon Musk says he left and sued the ChatGPT company OpenAI over safety concerns, company emails have surfaced that reveal the more likely reason he left is that he fought with folks internally to try and make the company for-profit, to better compete with Google.

Speaker 10 Ethicists are concerned they're outnumbered by the apocalypse people, and they think a lot of those people are in it to maximize profit, not maximize safety.

Speaker 10 So, how did we get here?

Speaker 10 Why?

Speaker 10 Why is the industry not focusing on AI harms today and focusing instead on the risk of AI apocalypse?

Speaker 32 There's an enormous amount of money that's been collected to fund this weird AI research.

Speaker 10 Why do you think the resources are going to those long-term, like hyper-intelligent AI concerns?

Speaker 17 Because you have very powerful people who are posing it, people who control powerful companies and people with very deep pockets, and so money continues to talk.

Speaker 11 It seems to be like funding for sort of like fanciful ideas, right?

Speaker 11 It's almost like a religion or something where it requires faith that good things will come without those good things being clearly specified.

Speaker 14 People wanting to be told what to do by some abstract force that they can't interact with particularly well. It's not new.
ChatGPT gives you authoritative answers. Erosions of autonomy.
Like a god.

Speaker 10 Yeah, like a god.

Speaker 14 It's like

Speaker 2 really interesting to take these philosophies apart.

Speaker 33 I would argue they trace back to a large degree to

Speaker 1 religious thinking,

Speaker 1 but that might be another story for another day.

Speaker 10 Next time on Good Robot.

Speaker 12 Good Robot was hosted by Julia Longoria and produced by Gabrielle Berbey. Sound design, mixing, and original score by David Herman.
Our fact-checker is Caitlin PenzeyMoog.

Speaker 12 The show was edited by Catherine Wells and me, Diane Hodson. If you want to dig deeper into what you've heard, you can check out Dr.

Speaker 12 Joy Buolamwini's book, Unmasking AI, or head to vox.com/goodrobot to read more Future Perfect stories about the future of AI. Thanks for listening.