Good Robot #3: Let’s fix everything
This is the third episode of our new four-part series about the stories shaping the future of AI.
Good Robot was made in partnership with Vox’s Future Perfect team. Episodes will be released on Wednesdays and Saturdays.
For show transcripts, go to vox.com/unxtranscripts
For more, go to vox.com/unexplainable
And please email us! unexplainable@vox.com
We read every email.
Support Unexplainable by becoming a Vox Member today: vox.com/members
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Press play and read along
Transcript
Speaker 1 With a Spark Cash Plus card from Capital One, you earn unlimited 2% cash back on every purchase. And you get big purchasing power so your business can spend more and earn more.
Speaker 1 Capital One, what's in your wallet? Find out more at capitalone.com/sparkcashplus. Terms apply.
Speaker 4 Support for this show comes from Robinhood. Wouldn't it be great to manage your portfolio on one platform?
Speaker 4
With Robinhood, not only can you trade individual stocks and ETFs, you can also seamlessly buy and sell crypto at low costs. Trade all in one place.
Get started now on Robinhood.
Speaker 4 Trading crypto involves significant risk. Crypto trading is offered through an account with Robinhood Crypto LLC.
Speaker 4 Robinhood Crypto is licensed to engage in virtual currency business activity by the New York State Department of Financial Services.
Speaker 4 Crypto held through Robinhood Crypto is not FDIC insured or SIPC protected. Investing involves risk, including loss of principal.
Speaker 4 Securities trading is offered through an account with Robinhood Financial LLC, member SIPC, a registered broker-dealer.
Speaker 7
It's Unexplainable, I'm Noam Hasenfeld. And this week, the third episode of our series, Good Robot.
If you haven't heard the first two, they should be right behind this in your feed.
Speaker 7 So just scroll back, find a comfy spot to listen, and meet us right here when you're done.
Speaker 7 Once you're all prepped and ready for me to stop talking, here is episode three of Good Robot from Julia Longoria.
Speaker 8 Okay.
Speaker 9 On your way to work, you pass a small pond.
Speaker 9 Children sometimes play in the pond, which is only about knee deep.
Speaker 9 The weather's cool though, and it's early, so you are surprised to see a child splashing about in the pond.
Speaker 10 As you get closer, you see that it is a very young child, just a toddler, who's flailing about, unable to stay upright or walk out of the pond.
Speaker 10 You look for the parents or babysitter, but there's no one else around.
Speaker 10 The child is unable to keep her head above the water for more than a few seconds at a time. If you don't wade in and pull her out, she seems likely to drown.
Speaker 12 Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago and get your suit wet and muddy.
Speaker 12 By the time you hand the child over to someone responsible for her and change your clothes, you'll be late for work.
Speaker 12 What should you do?
Speaker 12 Breakfast will be ready shortly.
Speaker 9 Everyone's just gonna so we don't have to go anywhere first before I check in.
Speaker 12 This past spring, I joined hundreds of people on a sort of pilgrimage to Princeton, New Jersey to honor a man whose ideas touch people's lives in pretty profound ways.
Speaker 12 And for people who somehow haven't come across his work, how would you describe his influence?
Speaker 12 Oh, it was
Speaker 12 life-changing.
Speaker 12
The man in question is philosopher Peter Singer. People gathered to celebrate his retirement from Princeton, where he taught moral philosophy for over two decades.
And he's no average professor.
Speaker 10 He got standing ovations? Like, who gets, what philosophers get that?
Speaker 12
I mean, he's in the news. There are protests.
He's what you might call a provocative thinker.
Speaker 12 And he's become a bit philosopher famous, like a modern-day Socrates or Nietzsche, spreading his ideas far beyond the philosophy world. His writing helped inspire a TV show.
Speaker 12 Hello everyone, and welcome to your first day in the afterlife. The Good Place, starring Ted Danson and Kristen Bell. He's known for pushing people to think about how they can do the most good in the world.
Speaker 12 Got me to be vegan.
Speaker 12 Really? Absolutely. A lot of people at his retirement party had been inspired to give up eating meat based on his writing about the moral cost of animal suffering.
Speaker 14 I'm looking at the vegetarian. I'm thinking there's Peter's nature.
Speaker 12 Which is why the food at his retirement conference was an assortment of vegan delights. Avocado toasts, broccoli.
Speaker 14 Broccoli, real Brussels sprouts.
Speaker 12 It is very gassy foods all around.
Speaker 12 People came to this three-day event in his honor from all walks of life, from all over the globe, Malaysia and China and Minnesota. I spoke to local politicians, a writer, a track coach.
Speaker 12 How would you describe his influence in the world?
Speaker 14 Powerful because he has planted seeds that grow and expand.
Speaker 12 The coach told me he buys used paperbacks of Singer's books in bulk.
Speaker 14 I still like to carry around paper books, especially like for travel.
Speaker 12 And anytime he travels, he leaves a copy in his hotel night table for the next person to find. Really? Yeah, like Tanzania and Guatemala.
Speaker 12
Kind of like you'd find a Bible. So you just leave them there.
Wow.
Speaker 12
And the Bible vibes are appropriate. Singer poses provocative moral questions through parables.
His most famous one is the drowning child thought experiment.
Speaker 12 Some people at his retirement party knew it by heart.
Speaker 16 Just imagine if you walk past a pond, you're wearing nice shoes, a nice suit, and then a child was drowning. What is the right thing to do in that scenario?
Speaker 12 Initially, the answer seems clear.
Speaker 9 Obviously, I'm going to rescue the child.
Speaker 12 But Peter Singer asks you to take the thought experiment further.
Speaker 12 What if the child isn't right in front of you?
Speaker 17 What's the real significant difference between someone
Speaker 17 in a pond right next to you versus someone across the world? Assuming that it takes the same effort to save a life, no matter distance, we should save them.
Speaker 8 Well, I was looking for something that would persuade people.
Speaker 12 That's Peter Singer.
Speaker 8 The current issue then was the crisis in what's now Bangladesh. And some people were saying, well, you know, I didn't cause this, it's not my responsibility, it's someone's responsibility over there.
Speaker 8 And I was trying to think of a way of convincing people that
Speaker 8 it's still wrong not to try and prevent some great harm occurring, even if you have no responsibility for it.
Speaker 12 People talk to me about reading this parable for the first time, almost like a conversion moment, inspiring them to help the drowning children oceans away from them.
Speaker 10 I read it and I immediately gave to several charities that I'd been thinking about.
Speaker 12 And some people started to take the idea even further.
Speaker 12 How far should they go to save these proverbial children?
Speaker 21 You know, would there be any luxuries left in our lives if we took this seriously?
Speaker 12 What if there were drowning children we didn't know about?
Speaker 12 What if those children didn't even exist yet?
Speaker 8 What about time?
Speaker 22 Not just across physical space, but across time.
Speaker 8 This is a riddle that we can never quite resolve.
Speaker 12 This riddle started a movement. Peter Singer's drowning child produced the effective altruist movement.
Speaker 8 Effective altruism aims to use reason and evidence to do the most good possible.
Speaker 12 A moral movement rooted in rationality, which some rationalists found themselves gravitating toward. Because for me, the underlying impulse has always been, let's fix everything.
Speaker 12 The drowning child became a catalyst that changed the way some of the wealthiest people in the world spent their millions to fix everything.
Speaker 24 I think AI is one of the biggest threats, but I think we can aspire to guide it in a direction that's beneficial to humanity.
Speaker 12 And number one on the list of things to fix:
Speaker 12 saving the world from AI apocalypse.
Speaker 12 This is episode three of Good Robot, a series about AI from Unexplainable in collaboration with Future Perfect.
Speaker 12 I'm Julia Longoria.
Speaker 23 As a founder, you're moving fast towards product market fit, your next round, or your first big enterprise deal.
Speaker 23 But with AI accelerating how quickly startups build and ship, security expectations are also coming in faster, and those expectations are higher than ever.
Speaker 23 Getting security and compliance right can unlock growth or stall it if you wait too long.
Speaker 23 Vanta is a trust management platform that helps businesses automate security and compliance across more than 35 frameworks like SOC 2, ISO 27001, HIPAA, and more.
Speaker 23 With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit ready fast and keeps you secure with continuous monitoring as your models, infrastructure, and customers evolve.
Speaker 23 That's why fast-growing startups like Langchain, Writer, and Cursor have all trusted Vanta to build a scalable compliance foundation from the start.
Speaker 23 Go to vanta.com/vox to save $1,000 today through the Vanta for Startups program and join over 10,000 ambitious companies already scaling with Vanta.
Speaker 23 That's vanta.com/vox to save $1,000 for a limited time.
Speaker 3
This message is brought to you by Apple Card. Each Apple product, like the iPhone, is thoughtfully designed by skilled designers.
The titanium Apple Card is no different.
Speaker 3 It's laser-etched, has no numbers, and it earns you daily cash on everything you buy, including 3% back on everything at Apple. Apply for Apple Card on your iPhone in minutes.
Speaker 3 Subject to credit approval. Apple Card is issued by Goldman Sachs Bank USA, Salt Lake City Branch. Terms and more at applecard.com.
Speaker 25 This month on Explain It To Me, we're talking about all things wellness.
Speaker 25 We spend nearly $2 trillion on things that are supposed to make us well: collagen smoothies and cold plunges, Pilates classes, and fitness trackers. But what does it actually mean to be well?
Speaker 25 Why do we want that so badly? And is all this money really making us healthier and happier? That's this month on Explain It To Me, presented by Pureleaf.
Speaker 25
The system goes online on August 4th, 1997. Human decisions are removed from strategic defense.
Skynet begins to learn at a geometric rate.
Speaker 12 You might say a version of this story starts with a pond.
Speaker 8 I don't know why I thought of the drowning child. I mean,
Speaker 8 maybe I was in Oxford and a lot of the college grounds, like where I was trying to have lunch, had these shallow ponds in them, ornamental ponds.
Speaker 12 Peter Singer was inspired by ponds at Oxford University, where he was working.
Speaker 8 Maybe that's what put it in my head, but I can't really say for sure.
Speaker 12 And his pond story passed from one Australian philosopher at Oxford to another Australian philosopher at Oxford.
Speaker 21 I'm having trouble thinking of exactly where he's imagining.
Speaker 21 There's certainly rivers.
Speaker 12
Okay, maybe it wasn't a pond. Maybe it was a river.
Who cares? The point is, the drowning child didn't gain legs, so to speak, until this Australian philosopher entered the picture.
Speaker 21 Hi, I'm Toby Ord. I'm a philosopher at Oxford University.
Speaker 12 As a grad student at Oxford, he was assigned to write an essay about the ideas in the riddle. Who is the drowning child he needed to save? This got him thinking.
Speaker 21 I came to think, actually, we probably do have these duties to help people who are much poorer than ourselves, even if it requires really quite substantial sacrifices.
Speaker 12 That's when the drowning child seemed to go from a brainy thought experiment to a moral imperative. At the time, Toby had a modest academic salary, but he was inspired to give away 10%,
Speaker 12 like a 10% tithe in the religious world, to charity.
Speaker 21 It was only after really sitting with it for a couple of years, actually, that I really made a decision to try to take this idea further.
Speaker 12 He thought, what if I got someone else to give 10% of their income? And then that person got another person to give 10%.
Speaker 12 Before long, we're saving exponentially more drowning children.
Speaker 21 So ultimately, I launched an organization just after I turned 30 in 2009 with Will MacAskill to try to encourage other people to make a similar choice.
Speaker 21 We started with 23 members.
Speaker 12 Pretty soon, Toby's group of givers tripled and then quadrupled.
Speaker 12 Peter Singer himself joined in giving and spreading the good word.
Speaker 26 More and more people are understanding this idea.
Speaker 12 Here he is giving a TED Talk in 2013.
Speaker 26 And the result is a growing movement, effective altruism.
Speaker 12 They gave the movement a name, effective altruism.
Speaker 21 Effective altruism was really trying to take two insights, that saving a life is a really big deal, that's the first one, and that saving 100 lives is a hundred times bigger deal.
Speaker 12 The idea at the heart of effective altruism was a contrarian one at the time. You used to get mail from charities tugging at your heartstrings with a photo of a poor kid you could save.
Speaker 12 Effective altruism was saying, we can't rely on warm fuzzies alone to make choices about how to do good.
Speaker 26
It's important because it combines both the heart and the head. The heart, of course, you felt.
You felt the empathy for that child.
Speaker 26 But it's really important to use the head as well.
Speaker 12 So it's this very analytical approach to how to do good in the world. Vox writer Kelsey Piper first heard about effective altruism, or EA as it's sometimes called, in high school.
Speaker 12 You might already notice some parallels to what the rationalists found appealing.
Speaker 12 The rationalists, the niche internet community Kelsey also found in high school, led by idiosyncratic blogger and AI researcher Eliezer Yudkowsky.
Speaker 12 There was pretty early on a ton of overlap in people who found the effective altruist worldview compelling and people who were rationalists, probably because of a shared fondness for thought experiments.
Speaker 12 Thought experiments with like pretty big real-world implications, which you then proceed to take very seriously.
Speaker 12 Effective altruism chapters started sprouting up on college campuses across the US and the UK. It's not surprising that's when I first heard whispers of the movement in college.
Speaker 12 A time when how can I do the most good in the world is a very live, soul-crushing question.
Speaker 12
I think it appeals to young people. I think there's something about being in college.
It really feels like you can do anything.
Speaker 12
People are a lot more open to, I'm going to radically rethink everything I'm doing with my life. Going off to college, Kelsey was sold.
And I was like, yes, I want to start a chapter.
Speaker 12 And I got on a Zoom call, I think, with some organizer who was a few years older than me, who was like, here's what you do.
Speaker 27 Let me tell you a little secret. I say that I'm an effective altruist, that just means a person trying to be effective at altruism.
Speaker 12 This is an online video called Introduction to EA.
Speaker 12 A student leader stands in front of a blackboard trying to recruit students to Berkeley's EA chapter.
Speaker 27 And effective altruists understand that choosing from our heart is unfair.
Speaker 27 So if we can't choose from our heart, we need some kind of framework to choose the best cause.
Speaker 12
The approach had three prongs. Number one, tractability.
Choose tractable causes, ones you can actually solve in a measurable way now.
Speaker 27 Next, what about neglectedness?
Speaker 12 Choose neglected causes, ones that a million people aren't already trying to solve.
Speaker 27
Neglected causes are going to look like bad causes. They're going to look weird.
They're going to look like fiction. That's why they're neglected.
Speaker 12 And finally, choose important causes.
Speaker 27 Importance is the product of scale and severity.
Speaker 12 Toby Ord came up with a calculation for importance.
Speaker 21 If I could save 10,000 lives instead of a single life, this was extremely important.
Speaker 12 Important causes saved more drowning children.
Speaker 21 There's an idea of a quality-adjusted life year, trying to set up a kind of universal way of thinking about health.
Speaker 21 So, if you could extend someone's life in full health by a year, that would be one quality-adjusted life year.
Speaker 12 So, you were sort of taking this like seemingly amorphous and overwhelming problem of like poverty around the world and health problems around the world and sort of making it concrete with math, in a sense.
Speaker 12 Is that right?
Speaker 12 Yeah.
Speaker 27 It's actually very, very crucial to do the math.
Speaker 12 By 2020, Effective Altruism's mathematical approach had real-world implications beyond college campuses. It had taken the philanthropy world by storm.
Speaker 12
This scientific approach to charitable giving and work is on the rise. It's being used by some of today's class of billionaire philanthropists.
Billionaire Bill Gates of Microsoft.
Speaker 12 Incidentally, the company that brought you Clippy.
Speaker 12 And billionaire Elon Musk, who would become the co-founder of OpenAI. They all got on board with EA's mathematical approach.
Speaker 12
EA groups vetted charities and recommended effective ones these billionaires went on to give to. In this way, EA became a sort of a check on charities.
You say you do good, but how much good?
Speaker 12 You know, donating to charity isn't about the warm glow in our hearts of doing good.
Speaker 12 It's about the fact that there is a kid who is dying of malaria, and if you donate some money, you can save their life and then they won't be dead.
Speaker 12 The math has led effective altruists to spend a lot of money trying to cure malaria. Shipping malaria nets to Africa is not exactly the most innovative or provocative thing to do.
Speaker 12 But according to the EA calculus, which wanted to put hard numbers on outcomes, it was an effective choice.
Speaker 12 And when Kelsey was deciding how to do the most good in her life, she got interested in putting numbers to journalism.
Speaker 12 Future Perfect was sort of coming at stuff from that angle, and I found that really compelling. She went to work for Vox's Future Perfect.
Speaker 12 I learned Future Perfect was initially founded in an attempt to apply the EA rubric to journalism, aiming to cover issues that were important, tractable, and neglected.
Speaker 12 EAs also applied that mathematical rubric to answer another question, the biggest question that plagued me as a young 20-something, just starting out in the world.
Speaker 12
Their idea was in your career, you have 80,000 hours. Spend them on something really important to you, something that will make a big difference.
The question of what should I do with my life?
Speaker 28 I got involved in the EA community in college, and it's been a really big part of how I've decided what to do with my life.
Speaker 12 I mean, in the end. That voice you just heard is crypto billionaire Sam Bankman-Fried.
Speaker 12 At one point, he was EA's biggest poster child, being interviewed about the movement on the news.
Speaker 11
I mean, I am curious too, because you are an effective altruist. I assume that you still are.
And you very publicly adopted the role of earning to give.
Speaker 12 Yeah.
Speaker 12 Sam Bankman-Fried went into crypto in order to earn a crap ton of money so he could give it away.
Speaker 12 The logic was, if you choose to be a crypto billionaire instead of, say, an aid worker, your fortune could hire a whole army of aid workers. You might be a more effective altruist that way.
Speaker 12 Sam Bankman-Fried famously gave his money to causes like pandemic prevention, to artificial intelligence, and to journalistic outlets like Future Perfect and ProPublica.
Speaker 7 This morning, Sam Bankman-Fried's regret, the man behind
Speaker 12 his financial fraud. The news came out that this effective altruist had committed some serious crimes.
Speaker 29 Sam Bankman-Fried was sentenced to 25 years in prison today.
Speaker 12 The whole fiasco cracked a bit of the mathy idealism at the heart of effective altruism.
Speaker 12 In the case of Sam Bankman-Fried, some made-up math helped him pull off one of the biggest financial frauds in U.S. history, costing some of his victims their life savings.
Speaker 12 A challenge about being a very new small movement is that, yeah, you're going to be defined by whoever the most prominent person is.
Speaker 12 And if the most prominent person is crypto fraud guy, then you've got a problem.
Speaker 12 Kelsey Piper was one of the first journalists to interview Sam Bankman-Fried in the aftermath. Her writing for Future Perfect was cited in his sentencing document.
Speaker 12 Future Perfect stopped using the money they got from Sam Bankman-Fried's philanthropic arm. Vox Media says they're waiting for a restitution fund to give the money to victims.
Speaker 12 And on a personal level, it was particularly hard on people like Kelsey. No, it was definitely upsetting.
Speaker 12 I listened to Taylor Swift's Anti-Hero on repeat for like three days.
Speaker 12 Didn't do much else. For her, effective altruism had become more than just a guide for charitable donations or what to do with her career.
Speaker 12 It had become a way of life.
Speaker 12 Am I hand washing or being interviewed? Both, I think. I think
Speaker 12 there's authenticity added by the clanking bitches in the background. Okay.
Speaker 12 When I interviewed Kelsey at her home in the Bay Area, I met several of her housemates.
Speaker 30 I'm Clara. I live in a weird Bay group house.
Speaker 12 Many of them found each other along the pipeline from rationalism to effective altruism.
Speaker 12 Can you please stay out of the kitchen right now? I'm trying to cook.
Speaker 12 I want the kitchen free of kiddos because kiddos are distracting to cook.
Speaker 12 Here in the Bay Area, they cook together, raise kids together. They live in communal group houses to save money to be able to donate to effective causes.
Speaker 12 This interconnected way they live out their values has prompted criticisms that it's a little culty.
Speaker 12 The idea is, in community with one another, they push each other to be more rational.
Speaker 12 The lines between rationalism and effective altruism begin to blur in the Bay.
Speaker 12 I have always thought of myself as more centrally a member of the EA community than the rationalist community once there was an EA community.
Speaker 12 While I was in town, Kelsey invited me and several other out-of-towners she didn't know very well to her Shabbat dinner, where they prayed and sang.
Speaker 12 A lot of people make the comparison to a religion, and I think that's pretty fair. A lot of what a church offers people is the combination of a unifying philosophy.
Speaker 12 There are certain promises you have in common, and the support of a community of people who care about you, know you personally, are willing to give you a hand, help you get a job, all of that.
Speaker 12
And I think the people who are like, it's a cult, are basically mistaken. It's not a cult, but it is a religion.
I'll kind of cop to that one.
Speaker 12 A religion of math.
Speaker 12 Most of the world's recorded religions have developed ideas about how the world ends,
Speaker 12 what humanity needs to do to prepare for some kind of final judgment.
Speaker 12 In the Bay Area and on Oxford's campus, effective altruists started to hear from rationalists they were in community with about what an apocalypse could look like.
Speaker 12 And of course, since a lot of rationalists thought that AI was the highest stakes issue of our time, they started trying to pitch, you know, people in the effective altruists movement: like, look, getting AI right is a major priority for charitable giving.
Speaker 12 In the early days of EA, effective altruism founder Toby Ord came across rationalist Eliezer Yudkowsky's blogs, where he warned of an AI apocalypse.
Speaker 21 I thought that his arguments were
Speaker 21 pretty good.
Speaker 12 Do you have a P doom currently?
Speaker 21 Yeah, so it's funny, I actually, I think these uses of the word doom actually are a bit misguided.
Speaker 12 P-Doom, the rationalist shorthand for probability of doom from an AI apocalypse. Toby's not into it.
Speaker 21 Because doom means that it's kind of a foregone conclusion. To be doomed means there's 100% probability that you will die.
Speaker 21 So I think it can make people feel powerless, whereas I think that these things are very much in our control.
Speaker 12 How effective altruism set out to save us from an AI apocalypse
Speaker 12 after the break.
Speaker 2 Tito's handmade vodka is America's favorite vodka for a reason.
Speaker 2 From the first legal distillery in Texas, Tito's is six times distilled till it's just right and naturally gluten-free, making it a high-quality spirit that mixes with just about anything, from the smoothest martinis to the best Bloody Marys.
Speaker 2 Tito's is known for giving back, teaming up with nonprofits to serve its communities and do good for dogs. Make your next cocktail with Tito's.
Speaker 2
Distilled and bottled by Fifth Generation Inc., Austin, Texas. 40% alcohol by volume.
Savor responsibly.
Speaker 5 This episode is brought to you by Progressive Insurance. Do you ever find yourself playing the budgeting game?
Speaker 5 Well, with the name Your Price tool from Progressive, you can find options that fit your budget and potentially lower your bills. Try it at progressive.com.
Speaker 5
Progressive Casualty Insurance Company and affiliates. Price and coverage match limited by state law.
Not available in all states.
Speaker 30
Attention, all small biz owners. At the UPS store, you can count on us to handle your packages with care.
With our certified packing experts, your packages are properly packed and protected.
Speaker 30
And with our pack and ship guarantee, when we pack it and ship it, we guarantee it. Because your items arrive safe or you'll be reimbursed.
Visit theupsstore.com/guarantee for full details.
Speaker 30
Most locations are independently owned. Product services, pricing, and hours of operation may vary.
See Center for Details. The UPS store.
Be unstoppable. Come into your local store today.
Speaker 30 Computer, is there a replacement Berlin Sphere on board?
Speaker 12 Negative.
Speaker 12 The good people at Universal Dynamics have programmed us to put our targets at ease so as to more efficiently facilitate their collection.
Speaker 12 At the Shabbat dinner, I was seated next to a 22-year-old who was very curious about my big furry microphone.
Speaker 11 So, um,
Speaker 12 can you introduce yourself?
Speaker 31 Um, yeah, so, uh, I'm Tom.
Speaker 22 I, uh,
Speaker 31 it was kind of funny.
Speaker 15 So, it sort of felt like I discovered I was an effective altruist rather than like being convinced to be one. I had a teacher who assigned me to read Peter Singer.
Speaker 12 Tom read Peter Singer's Drowning Child as a kid.
Speaker 31 I like decided, I don't know, some point in high school that I would
Speaker 31 dedicate my life to trying to do as much good as possible.
Speaker 12 As far as how to do as much good as possible, Tom told me he recently made a big decision about that.
Speaker 22 I
Speaker 31 dropped out of Harvard a year ago in my junior year, and I'm now working as an ML scientist at
Speaker 31 AI hardware startup.
Speaker 12 We should get out of these people's house because it's past bedtime.
Speaker 12 At that point, we had to take our conversation outside.
Speaker 12 You dropped out of Harvard, like, so are you how many years out of that?
Speaker 12 One year.
Speaker 15 So I actually would have graduated
Speaker 15 on Thursday. Really?
Speaker 12 Yeah. Wow, okay, so you're like around 22-ish?
Speaker 12 Exactly. Yeah.
Speaker 12 It was like
Speaker 12 such a hard decision.
Speaker 15 Like no matter...
Speaker 12 The decision of like what you're gonna do with your life?
Speaker 15 Yeah, because no matter what you're doing, you're just like abandoning a ton of people. Fundamentally, I feel like the world is in a state of triage.
Speaker 12 In a state of what? Triage.
Speaker 15 Like, there's so much going wrong that needs to be fixed.
Speaker 15 If I'm like working on providing malaria nets in Africa, I'm in some sense abandoning like
Speaker 17 all the starving children in India.
Speaker 15 I can only really focus on one place. What is the place that most urgently needs my help? And you know, I figured that it's like
Speaker 15 probably the future and like specifically the problems in the future which might be created by AI.
Speaker 12 I've heard of kids Tom's age dropping out of school to go into the Peace Corps or like doing some kind of religious mission. But going to work for an AI lab to save the future wasn't intuitive to me.
Speaker 12 So why choose this route that you're on right now? Obviously it's not, you're young, you have your whole life ahead of you, but why choose this route?
Speaker 31 You know, I don't just care about the people and animals that are alive today.
Speaker 15 I care deeply about future generations.
Speaker 31 I think about our children and our children's children and so on and so forth.
Speaker 12 Our children's children.
Speaker 12 Tom told me he's trying to save future drowning children from AI apocalypse. Compared to rationalist Eliezer Yudkowsky's p-doom, which is off the charts, Tom's is 10%.
Speaker 15 My view is that we should treat AI as
Speaker 15 being very likely to be the biggest thing ever, and treat the coming decades as the likely the most important decades in human history.
Speaker 12 This line, the most important decades in human history,
Speaker 12 it's roughly a quote from a book by Effective Altruist founder Toby Oort.
Speaker 21 Our present moment is just a very tiny slice of this much longer story of humanity.
Speaker 12 Around the time Toby was building the Effective Altruist movement, trying to maximize the number of drowning children he could save, rationalists joining the movement were trying to convince Toby that AI should be a top effective altruist priority.
Speaker 12 He wasn't convinced by their paperclip maximizer thought experiment, but he was convinced by the idea that AI could threaten the story of humanity.
Speaker 21 This idea about existential risk.
Speaker 12 What convinced him was, in a way, his own argument, the math of it all.
Speaker 12 That preventing an AI catastrophe could save not just the drowning children today,
Speaker 12 but tens of thousands of future generations.
Speaker 12 Trillions of drowning children.
Speaker 21 There's been about 10,000 generations of humanity so far, and it seems very plausible that there could be 10,000 or more generations to follow us.
Speaker 12 Toby came to believe we're living at a crucial moment in human history, where up until now, humans have ruled the Earth.
Speaker 21 Why is it humanity that's calling the shots on the Earth and not butterflies or ravens or chimpanzees?
Speaker 12 We've ruled... Because of our smarts.
Speaker 21 You know, something to do with our brains, not to do with our brawn.
Speaker 12 What if we weren't the smartest beings on Earth anymore?
Speaker 21 I think ultimately, the most compelling overall argument to me is that if you survey researchers on AI.
Speaker 12 He says AI researchers were telling him that the possibility of a superintelligence smarter than us was around the corner.
Speaker 21 Within, I say the next 30 years or so, that they think that that's about as likely as not. And, you know, how would we still be calling the shots?
Speaker 21 How would we not be perhaps subservient to these new systems?
Speaker 12 Toby wrote a book laying out these arguments. He named it The Precipice, after the cliff he sees humanity sitting on at this moment in history.
Speaker 21 What's particularly pernicious about these existential risks is that they're something that if our generation drops the ball, there won't be any more generations.
Speaker 12 He thinks we can determine the long-term survival of our species. He gives this philosophy yet another ism: longtermism.
Speaker 21 The strong arguments that these risks were real
Speaker 21 slowly made people think, well, if I want to work to focus on that, what should I be doing? What charities should I be donating to?
Speaker 12 So effective altruists wouldn't just give their charity dollars to things like ending malaria. They'd also give to charities working to prevent an AI apocalypse.
Speaker 12 And the way to avoid bad robots taking over the world, some people decided, was to use EA money to build a good robot. And not just good, super intelligent, a magic intelligence in the sky.
Speaker 12 You might remember those are the words of the ChatGPT company's founder, Sam Altman. He started his nonprofit, OpenAI, with EA charity dollars from the group Open Philanthropy.
Speaker 12 Charity dollars also went to OpenAI's competitor, Anthropic, whose CEO also wants to build a good robot, or as he put it, a machine of loving grace.
Speaker 12 And it wasn't just EA charity dollars that went toward this cause.
Speaker 18 Some of you may have noticed that a bunch of people in this community seem to think that AI is a big deal.
Speaker 12 AI also became a common career path for young, effective altruists.
Speaker 19 Eventually, the winter of sophomore year, I remember just like thinking through it and thinking like, oh, okay, yeah, I don't think there's really a way I don't go into AI somehow.
Speaker 12 22-year-old Tom went into AI with the counsel of Harvard's Effective Altruism Group. And in the course of my reporting, I met many other young people.
Speaker 15 I'm like, wow, damn, this AI safety thing.
Speaker 19 Crap, like, do I need to work on it? Like, what can I do? You know,
Speaker 12 Young people who, around the time ChatGPT came out, decided the most effective career for doing good in the world was going into AI safety.
Speaker 15 And I'm like, damn, I think I can actually move the needle on this.
Speaker 12 They did the math and thought saving future children from AI apocalypse was neglected, tractable, and important.
Speaker 12 Open Philanthropy told us over $410 million in EA dollars has gone toward addressing risks from advanced AI, making up 12% of their total giving.
Speaker 12 Roughly the same percentage that's gone toward malaria prevention.
Speaker 21 I guess I would say,
Speaker 21 you know, if I try to work out my best guess of the most important issues of our time, I think AI risk is probably very high at the top.
Speaker 12
Wow. So it's number one, above the current drowning children.
You'd put it above the problems we face in the present?
Speaker 21 I think I would, sadly.
Speaker 12 I've been struggling with the math of it all. I can see how it's important to think about our long-term future.
Speaker 12 but no matter how many math problems EA people put in front of me, I have a hard time seeing how saving trillions of future children from AI apocalypse is the most important, tractable problem of our time.
Speaker 12 How does a movement built around helping in a measurable way with things like malaria nets turn to a cause that requires you to almost predict the future?
Speaker 12 It's almost like a religion or something where it requires faith that good things will come without those good things being clearly specified.
Speaker 12
This is the criticism of ethicists like Dr. Margaret Mitchell.
To them, the solvable, tractable problems are the harms AI is doing right now.
Speaker 12 Problems like bias, surveillance, environmental harms.
Speaker 12 But instead, funding often goes toward addressing future hypothetical harms, or it goes toward building a super intelligence, something many ethicists don't think we should be building at all.
Speaker 12
It seems to be like funding for sort of like fanciful ideas. There's one follower of longtermism who's found his way to the White House.
This was no ordinary victory.
Speaker 12 This was a fork in the road of human civilization.
Speaker 12 It is thanks to you
Speaker 12 that the future of civilization is assured. And we're going to take Doge to Mars.
Speaker 12 It was hard not to chuckle when billionaire Elon Musk talked about his goal of colonizing Mars after President Trump's inauguration. But he's not joking.
Speaker 12 When he started SpaceX, he intended it to be an insurance policy for humanity in case apocalypse strikes. It's also why he says he went into AI.
Speaker 12 It's all to protect the future drowning children.
Speaker 6 You know, I think this is actually fundamentally important for ensuring the long-term survival of life as we know it, is to be a multi-planet species.
Speaker 12 The long legs of the drowning child thought experiment have taken us very far away from its original intent of trying to get us to care about a crisis in Bangladesh.
Speaker 8 Oh, the drowning child in the pond has certainly developed a life of its own.
Speaker 11 In what way? What do you mean?
Speaker 12 I went back to the author of the drowning child thought experiment, Peter Singer, at his retirement party.
Speaker 8 I hope that I've left a legacy in my writings, that they will lead people to think differently about what we owe people in extreme poverty and other parts of the world.
Speaker 8 That's what the drowning child in the shallow pond was supposed to suggest.
Speaker 12 But interpreting the parable to mean that the biggest issue of our time is saving future children from an AI apocalypse?
Speaker 8
I think there's been too much focus. I'm not dismissing it.
I think it's good that there are some people thinking about that and working on it.
Speaker 8 But compared to some of the other problems that are around, I have the sense that people like it because it's a kind of nerdy problem that's, you know, interesting things to think about.
Speaker 8 So I think that's why it gets more attention.
Speaker 12 You might know that Peter Singer himself is no stranger to, shall we say, outlandish interpretations.
Speaker 12 Using his own utilitarian philosophy, he's argued that severely disabled children add suffering to the world.
Speaker 12 And it might be justifiable in maximizing happiness for the parents to euthanize them.
Speaker 12 So yeah, from where I sit, using math to maximize is not always the answer. If you stare a little too hard at the numbers, the humans begin to fade out of focus.
Speaker 12 When I think about the drowning child as it relates to AI, I don't think about the math. I've been thinking about something else.
Speaker 12 While reporting this story, the news broke that a 14-year-old boy in Florida named Sewell had killed himself. He'd become obsessed with a chat bot.
Speaker 12 And his last text to the bot, just before he died, showed that he believed ending his life on Earth would bring him closer to the bot.
Speaker 12 That's who I think of when I think of the drowning child.
Speaker 12 Abstracting that parable so far ahead in space and time, we risk losing sight of the drowning child right in front of us.
Speaker 12 In an attempt to save some future hypothetical children, some people in the AI industry have set out to build a good robot, a super intelligent AI, a magic intelligence in the sky, a machine of loving grace.
Speaker 12 But in selling that story and building those still very flawed systems to maybe save some future children, they've invented an industry that's creating new ponds for children to drown in today.
Speaker 12 Okay, she's eating that bowler.
Speaker 12 It's Play-Doh. Oh no.
Speaker 12 I wanted to pose all of this to Kelsey Piper after she pulled the Play-Doh away from her baby. Kelsey was the person who was my introduction to the worlds of rationalism and effective altruism.
Speaker 12 I asked her, why ignore AI harms today for the sake of some future children?
Speaker 12 There are lots of people who have this impression that you need long-termism or theorizing about the badness of humanity going extinct or, you know, drowning child-based philosophy to care about this.
Speaker 12 And I don't think you need any of that.
Speaker 12 She kindly hinted that maybe I've fallen down a bit of a philosophical rabbit hole, caught in the intellectual debates of it all, and lost sight of the actual technology we were supposed to be talking about.
Speaker 12 So, one thing I did, which was super valuable, is I tried to form an opinion that wasn't about all of the social melodrama thing on the top of the AI scene.
Speaker 12 Like, just play with the AI models and think about what can they do? What kinds of things, if they could do them, do I think that would be concerning?
Speaker 12 And I tried to- Now that's an interesting thing to do.
Speaker 12 Producer Gabrielle Berbey's ears perked up at this idea. She knows I am a sucker for social melodrama.
Speaker 12 And maybe the social melodrama has been my crutch to avoid having to form my own opinion and actually having to use the technology that does scare me.
Speaker 12 So what I want you to do first is I want you to open up ChatGPT
Speaker 12
and I want you to say, I'm going to give you three episodes of a series in order.
Next time on Good Robot, we feed ourselves to the machine.
Speaker 20
Good Robot was hosted by Julia Longoria and produced by Gabrielle Berbey. Sound design, mixing, and original music by me, David Herman.
Our fact-checker is Caitlin PenzeyMoog.
Speaker 20 Our editors are Diane Hodson and Catherine Wells. Special thanks to Larissa MacFarquhar, whose book Strangers Drowning was an early inspiration for this episode.
Speaker 20 And a quick note, Unexplainable host Noam Hasenfeld's brother is a board member at Open Phil, but he isn't involved in any of their grant decisions.
Speaker 20 Noam played no role in the reporting of this series. If you want to dig deeper into what you've heard, head to vox.com slash goodrobot to read more future perfect stories about the future of AI.
Speaker 20 Thanks for listening.