Good Robot #3: Let’s fix everything
This is the third episode of our new four-part series about the stories shaping the future of AI.
Good Robot was made in partnership with Vox’s Future Perfect team. Episodes will be released on Wednesdays and Saturdays.
For show transcripts, go to vox.com/unxtranscripts
For more, go to vox.com/unexplainable
And please email us! unexplainable@vox.com
We read every email.
Support Unexplainable by becoming a Vox Member today: vox.com/members
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Listen and follow along
Transcript
With a Spark Cash Plus card from Capital One, you earn unlimited 2% cash back on every purchase.
And you get big purchasing power so your business can spend more and earn more.
Capital One, what's in your wallet?
Find out more at capitalone.com slash sparkcashplus. Terms apply.
Support for this show comes from Robinhood.
Wouldn't it be great to manage your portfolio on one platform?
With Robinhood, not only can you trade individual stocks and ETFs, you can also seamlessly buy and sell crypto at low costs.
Trade all in one place.
Get started now on Robinhood.
Trading crypto involves significant risk.
Crypto trading is offered through an account with Robinhood Crypto LLC.
Robinhood Crypto is licensed to engage in virtual currency business activity by the New York State Department of Financial Services.
Crypto held through Robinhood Crypto is not FDIC insured or SIPC protected.
Investing involves risk, including loss of principal.
Securities trading is offered through an account with Robinhood Financial LLC, member SIPC, a registered broker-dealer.
It's Unexplainable, I'm Noam Hasenfeld.
And this week, the third episode of our series, Good Robot.
If you haven't heard the first two, they should be right behind this in your feed.
So just scroll back, find a comfy spot to listen, and meet us right here when you're done.
Once you're all prepped and ready for me to stop talking, here is episode three of Good Robot from Julia Longoria.
Okay.
On your way to work, you pass a small pond.
Children sometimes play in the pond, which is only about knee deep.
The weather's cool though, and it's early, so you are surprised to see a child splashing about in the pond.
As you get closer, you see that it is a very young child, just a toddler, who's flailing about, unable to stay upright or walk out of the pond.
You look for the parents or babysitter, but there's no one else around.
The child is unable to keep her head above the water for more than a few seconds at a time.
If you don't wade in and pull her out, she seems likely to drown.
Wading in is easy and safe, but you will ruin the new shoes you bought only a few days ago and get your suit wet and muddy.
By the time you hand the child over to someone responsible for her and change your clothes, you'll be late for work.
What should you do?
Breakfast will be ready shortly.
Everyone's just gonna so we don't have to go anywhere first before I check in.
This past spring, I joined hundreds of people on a sort of pilgrimage to Princeton, New Jersey to honor a man whose ideas touch people's lives in pretty profound ways.
And for people who somehow haven't come across his work, how would you describe his influence?
Oh, it was
life-changing.
The man in question is philosopher Peter Singer.
People gathered to celebrate his retirement from Princeton, where he taught moral philosophy for over two decades.
And he's no average professor.
He got standing ovations?
Like, who gets, what philosophers get that?
I mean, he's in the news.
There are protests.
He's what you might call a provocative thinker.
And he's become a bit philosopher-famous, like a modern-day Socrates or Nietzsche, spreading his ideas far beyond the philosophy world. His writing helped inspire a TV show.
Hello everyone, and welcome to your first day in the afterlife.
The Good Place, starring Ted Danson and Kristen Bell. He's known for pushing people to think about how they can do the most good in the world.
Got me to be vegan.
Really?
Absolutely.
A lot of people at his retirement party had been inspired to give up eating meat based on his writing about the moral cost of animal suffering.
I'm looking at the vegetarian.
I'm thinking there's Peter's nature.
Which is why the food at his retirement conference was an assortment of vegan delights.
Avocado toasts, broccoli.
Broccoli, real Brussels sprouts.
Very gassy foods all around.
People came to this three-day event in his honor from all walks of life, from all over the globe, Malaysia and China and Minnesota.
I spoke to local politicians, a writer, a track coach.
How would you describe his influence in the world?
Powerful because he has planted seeds that grow and expand.
The coach told me he buys used paperbacks of Singer's books in bulk.
I still like to carry around paper books, especially like for travel.
And anytime he travels, he leaves a copy in his hotel night table for the next person to find.
Really?
Yeah, like Tanzania and Guatemala.
Kind of like you'd find a Bible.
So you just leave them there.
Wow.
And the Bible vibes are appropriate.
Singer poses provocative moral questions through parables.
His most famous one is the drowning child thought experiment.
Some people at his retirement party knew it by heart.
Just imagine if you walk past a pond, you're wearing nice shoes, a nice suit, and then a child was drowning.
What is the right thing to do in that scenario?
Initially, the answer seems clear.
Obviously, I'm going to rescue the child.
But Peter Singer asks you to take the thought experiment further.
What if the child isn't right in front of you?
What's the real significant difference between someone
in a pond right next to you versus someone across the world?
Assuming that it takes the same effort to save a life, no matter the distance, we should save them.
Well, I was looking for something that would persuade people.
That's Peter Singer.
The current issue then was the crisis in what's now Bangladesh.
And some people were saying, well, you know, I didn't cause this, it's not my responsibility, it's someone's responsibility over there.
And I was trying to think of a way of convincing people that
it's still wrong not to try and prevent some great harm occurring, even if you have no responsibility for it.
People talk to me about reading this parable for the first time, almost like a conversion moment, inspiring them to help the drowning children oceans away from them.
I read it and I immediately gave to several charities that I'd been thinking about.
And some people started to take the idea even further.
How far should they go to save these proverbial children?
You know, would there be any luxuries left in our lives if we took this seriously?
What if there were drowning children we didn't know about?
What if those children didn't even exist yet?
What about time?
Not just across physical space, but across time.
This is a riddle that we can never quite resolve.
This riddle started a movement.
Peter Singer's Drowning Child produced the effective altruist movement.
Effective altruism aims to use reason and evidence to do the most good possible.
A moral movement rooted in rationality, which some rationalists found themselves gravitating toward.
Because for me, the underlying impulse has always been, let's fix everything.
The drowning child became a catalyst that changed the way some of the wealthiest people in the world spent their millions to fix everything.
I think AI is one of the biggest threats, but I think we can aspire to guide it in a direction that's beneficial to humanity.
And number one on the list of things to fix:
saving the world from AI apocalypse.
This is episode three of Good Robot, a series about AI from Unexplainable in collaboration with Future Perfect.
I'm Julia Longoria.
As a founder, you're moving fast towards product market fit, your next round, or your first big enterprise deal.
But with AI accelerating how quickly startups build and ship, security expectations are also coming in faster, and those expectations are higher than ever.
Getting security and compliance right can unlock growth or stall it if you wait too long.
Vanta is a trust management platform that helps businesses automate security and compliance across more than 35 frameworks like SOC2, ISO 27001, HIPAA, and more.
With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit ready fast and keeps you secure with continuous monitoring as your models, infrastructure, and customers evolve.
That's why fast-growing startups like Langchain, Writer, and Cursor have all trusted Vanta to build a scalable compliance foundation from the start.
Go to Vanta.com slash Vox to save $1,000 today through the Vanta for Startups program and join over 10,000 ambitious companies already scaling with Vanta.
That's vanta.com slash Vox to save $1,000 for a limited time.
This message is brought to you by Apple Card.
Each Apple product, like the iPhone, is thoughtfully designed by skilled designers.
The titanium Apple Card is no different.
It's laser-etched, has no numbers, and it earns you daily cash on everything you buy, including 3% back on everything at Apple.
Apply for Apple Card on your iPhone in minutes.
Subject to credit approval, Apple Card is issued by Goldman Sachs Bank USA, Salt Lake City Branch.
Terms and more at applecard.com.
This month on Explain It To Me, we're talking about all things wellness.
We spend nearly $2 trillion on things that are supposed to make us well: collagen smoothies and cold plunges, Pilates classes, and fitness trackers.
But what does it actually mean to be well?
Why do we want that so badly?
And is all this money really making us healthier and happier?
That's this month on Explain It To Me, presented by Pureleaf.
The system goes online on August 4th, 1997.
Human decisions are removed from strategic defense.
Skynet begins to learn at a geometric rate.
You might say a version of this story starts with a pond.
I don't know why I thought of the drowning child.
I mean,
maybe I was in Oxford and a lot of the college grounds, like where I was trying to have lunch, had these shallow ponds in them, ornamental ponds.
Peter Singer was inspired by ponds at Oxford University, where he was working.
Maybe that's what put it in my head, but I can't really say for sure.
And his pond story passed from one Australian philosopher at Oxford to another Australian philosopher at Oxford.
I'm having trouble thinking of exactly where he's imagining.
There's certainly rivers.
Okay, maybe it wasn't a pond.
Maybe it was a river.
Who cares?
The point is, the drowning child didn't gain legs, so to speak, until this Australian philosopher entered the picture.
Hi, I'm Toby Ord.
I'm a philosopher at Oxford University.
As a grad student at Oxford, he was assigned to write an essay about the ideas in the riddle.
Who is the drowning child he needed to save?
This got him thinking.
I came to think, actually, we probably do have these duties to help people who are much poorer than ourselves, even if it requires really quite substantial sacrifices.
That's when the drowning child seemed to go from a brainy thought experiment to a moral imperative.
At the time, Toby had a modest academic salary, but he was inspired to give away 10%,
like a 10% tithe in the religious world, to charity.
It was only after really sitting with it for a couple of years, actually, that I really made a decision to try to take this idea further.
He thought, what if I got someone else to give 10% of their income?
And then that person got another person to give 10%.
Before long, we're saving exponentially more drowning children.
So ultimately, I launched an organization just after I turned 30 in 2009 with Will MacAskill to try to encourage other people to make a similar choice.
We started with 23 members.
Pretty soon, Toby's group of givers tripled and then quadrupled.
Peter Singer himself joined in giving and spreading the good word.
More and more people are understanding this idea.
Here he is giving a TED Talk in 2013.
And the result is a growing movement, effective altruism.
They gave the movement a name, effective altruism.
Effective altruism was really trying to take two insights, that saving a life is a really big deal, that's the first one, and that saving 100 lives is a hundred times bigger deal.
The idea at the heart of effective altruism was a contrarian one at the time.
You used to get mail from charities tugging at your heartstrings with a photo of a poor kid you could save.
Effective altruism was saying, we can't rely on warm fuzzies alone to make choices about how to do good.
It's important because it combines both the heart and the head.
The heart, of course, you felt.
You felt the empathy for that child.
But it's really important to use the head as well.
So it's this very analytical approach to how to do good in the world.
Fox writer Kelsey Piper first heard about effective altruism, or EA as it's sometimes called, in high school.
You might already notice some parallels to what the rationalists found appealing.
The rationalists, the niche internet community Kelsey also found in high school, led by idiosyncratic blogger and AI researcher Eliezer Yudkowsky.
There was pretty early on a ton of overlap in people who found the effective altruist worldview compelling and people who were rationalists, probably because of a shared fondness for thought experiments.
Thought experiments with like pretty big real-world implications, which you then proceed to take very seriously.
Effective altruism chapters started sprouting up on college campuses across the US and the UK.
It's not surprising that's when I first heard whispers of the movement in college.
A time when "how can I do the most good in the world?" is a very live, soul-crushing question.
I think it appeals to young people.
I think there's something about being in college.
It really feels like you can do anything.
People are a lot more open to, I'm going to radically rethink everything I'm doing with my life.
Going off to college, Kelsey was sold.
And I was like, yes, I want to start a chapter.
And I got on a Zoom call, I think, with some organizer who was a few years older than me, who was like, here's what you do.
Let me tell you a little secret.
When I say that I'm an effective altruist, that just means a person trying to be effective at altruism.
This is an online video called Introduction to EA.
A student leader stands in front of a blackboard trying to recruit students to Berkeley's EA chapter.
And effective altruists understand that choosing from our heart is unfair.
So if we can't choose from our heart, we need some kind of framework to choose the best cause.
The approach had three prongs.
Number one, tractability.
Choose tractable causes, ones you can actually solve in a measurable way now.
Next, what about neglectedness?
Choose neglected causes that a million people aren't already trying to solve.
Neglected causes are going to look like bad causes.
They're going to look weird.
They're going to look like fiction.
That's why they're neglected.
And finally, choose important causes.
Importance is the product of scale and severity.
Toby Ord came up with a calculation for importance.
If I could save 10,000 lives instead of a single life, this was extremely important.
Important causes saved more drowning children.
There's an idea of a quality-adjusted life year, trying to set up a kind of universal way of thinking about health.
So, if you could extend someone's life in full health by a year, that would be one quality-adjusted life year.
So, you were sort of taking this like seemingly amorphous and overwhelming problem of like poverty around the world and health problems around the world and sort of making it concrete with math, in a sense.
Is that right?
Yeah.
It's actually very, very crucial to do the math.
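The math Toby is describing can be sketched in a few lines: put health outcomes on one common scale (QALYs), divide by cost, and rank. This is an illustration only; the charity names and dollar figures below are invented for the example, not real cost-effectiveness estimates.

```python
# Illustrative sketch of the EA cost-effectiveness comparison described above.
# All charities and figures here are hypothetical, invented for the example.

def qalys_per_dollar(cost_per_intervention, qalys_per_intervention):
    """QALYs bought per dollar donated: higher means more 'effective'."""
    return qalys_per_intervention / cost_per_intervention

# (cost of one intervention in dollars, QALYs it produces)
charities = {
    "bednet_distribution": (5.0, 0.1),      # cheap, modest benefit each
    "surgery_program":     (1000.0, 10.0),  # expensive, large benefit each
    "awareness_campaign":  (50.0, 0.01),    # cheap, tiny measurable benefit
}

# Rank causes by QALYs per dollar, best first
ranked = sorted(charities.items(),
                key=lambda kv: qalys_per_dollar(*kv[1]),
                reverse=True)

for name, (cost, qalys) in ranked:
    print(f"{name}: {qalys_per_dollar(cost, qalys):.4f} QALYs per dollar")
```

The point of the exercise is that the ordering can be counterintuitive: here the cheap, unglamorous intervention tops the expensive, dramatic one, which is roughly why the movement kept landing on malaria nets.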
By 2020, Effective Altruism's mathematical approach had real-world implications beyond college campuses.
It had taken the philanthropy world by storm.
This scientific approach to charitable giving and work is on the rise.
It's being used by some of today's class of billionaire philanthropists.
Billionaire Bill Gates of Microsoft.
Incidentally, the company that brought you Clippy.
And billionaire Elon Musk, who would become the co-founder of OpenAI.
They all got on board with EA's mathematical approach.
EA groups vetted charities and recommended effective ones these billionaires went on to give to.
In this way, EA became a sort of a check on charities.
You say you do good, but how much good?
You know, donating to charity isn't about the warm glow in our hearts of doing good.
It's about the fact that there is a kid who is dying of malaria, and if you donate some money, you can save their life and then they won't be dead.
The math has led effective altruists to spend a lot of money trying to cure malaria.
Shipping malaria nets to Africa is not exactly the most innovative or provocative thing to do.
But according to the EA calculus, which wanted to put hard numbers on outcomes, it was an effective choice.
And when Kelsey was deciding how to do the most good in her life, she got interested in putting numbers to journalism.
Future Perfect was sort of coming at stuff from that angle, and I found that really compelling.
She went to work for Vox's Future Perfect.
I learned Future Perfect was initially founded in an attempt to apply the EA rubric to journalism, aiming to cover issues that were important, tractable, and neglected.
EAs also applied that mathematical rubric to answer another question, the biggest question that plagued me as a young 20-something, just starting out in the world.
Their idea was that in your career, you have about 80,000 hours: roughly 40 years of 40-hour weeks, 50 weeks a year.
Spend them on something really important to you, something that will make a big difference.
The question of what should I do with my life?
I got involved in the EA community in college, and it's been a really big part of how I've decided what to do with my life.
I mean, in the end.
That voice you just heard is crypto billionaire Sam Bankman-Fried.
At one point, he was EA's biggest poster child, being interviewed about the movement on the news.
I mean, I am curious too, because you are an effective altruist.
I assume that you still are.
And you very publicly adopted the role of earning to give.
Yeah.
Sam Bankman-Fried went into crypto in order to earn a crap ton of money so he could give it away.
The logic was, if you choose to be a crypto billionaire instead of, say, an aid worker, your fortune could hire a whole army of aid workers.
You might be a more effective altruist that way.
Sam Bankman-Fried famously gave his money to causes like pandemic prevention, to artificial intelligence, and to journalistic outlets like Future Perfect and ProPublica.
This morning, Sam Bankman-Fried's regret: the man behind the financial fraud.
The news came out that this effective altruist had committed some serious crime.
Sam Bankman-Fried was sentenced to 25 years in prison today.
The whole fiasco cracked a bit of the mathy idealism at the heart of effective altruism.
In the case of Sam Bankman-Fried, some made-up math helped him pull off one of the biggest financial frauds in U.S. history, costing some of his victims their life savings.
A challenge about being a very new small movement is that, yeah, you're going to be defined by whoever the most prominent person is.
And if the most prominent person is crypto fraud guy, then you've got a problem.
Kelsey Piper was one of the first journalists to interview Sam Bankman-Fried in the aftermath.
Her writing for Future Perfect was cited in his sentencing document.
Future Perfect stopped using the money they got from Sam Bankman-Fried's philanthropic arm.
Vox Media says they're waiting for a restitution fund to give the money to victims.
And on a personal level, it was particularly hard on people like Kelsey.
No, it was definitely upsetting.
I listened to Taylor Swift's anti-hero on repeat for like three days.
Didn't do much else.
For her, effective altruism had become more than just a guide for charitable donations or what to do with her career.
It had become a way of life.
Am I hand washing or being interviewed?
Both, I think.
I think
there's authenticity added by the clanking dishes in the background.
Okay.
When I interviewed Kelsey at her home in the Bay Area, I met several of her housemates.
I'm Clara.
I live in a weird Bay group house.
Many of them found each other along the pipeline from rationalism to effective altruism.
Can you please stay out of the kitchen right now?
I'm trying to cook.
I want the kitchen free of kiddos because kiddos are distracting to cook.
Here in the Bay Area, they cook together, raise kids together.
They live in communal group houses to save money to be able to donate to effective causes.
This interconnected way they live out their values has prompted criticisms that it's a little culty.
The idea is, in community with one another, they push each other to be more rational.
The lines between rationalism and effective altruism begin to blur in the Bay.
I have always thought of myself as more centrally a member of the EA community than the rationalist community once there was an EA community.
While I was in town, Kelsey invited me and several other out-of-towners she didn't know very well to her Shabbat dinner, where they prayed and sang.
A lot of people make the comparison to a religion, and I think that's pretty fair.
A lot of what a church offers people is the combination of a unifying philosophy.
There are certain promises you have in common, and the support of a community of people who care about you, know you personally, are willing to give you a hand, help you get a job, all of that.
And I think the people who are like, it's a cult are basically mistaken.
They don't think it's a cult, but it's a religion.
I'll kind of cop to that one.
There's a rant of math.
Most of the world's recorded religions have developed ideas about how the world ends,
what humanity needs to do to prepare for some kind of final judgment.
In the Bay Area and on Oxford's campus, effective altruists started to hear from rationalists they were in community with about what an apocalypse could look like.
And of course, since a lot of rationalists thought that AI was the highest stakes issue of our time, they started trying to pitch, you know, people in the effective altruism movement: like, look, getting AI right is a major priority for charitable giving.
In the early days of EA, effective altruism founder Toby Ord came across rationalist Eliezer Yudkowsky's blogs, where he warned of an AI apocalypse.
I thought that his arguments were
pretty good.
Do you have a P doom currently?
Yeah, so it's funny, I actually, I think these uses of the word doom actually are a bit misguided.
P-Doom, the rationalist shorthand for probability of doom from an AI apocalypse.
Toby's not into it.
Because doom means that it's kind of a foregone conclusion.
To be doomed means there's 100% probability that you will die.
So I think it can make people feel powerless, whereas I think that these things are very much in our control.
How effective altruism set out to save us from an AI apocalypse
after the break.
Tito's handmade vodka is America's favorite vodka for a reason.
From the first legal distillery in Texas, Tito's is six times distilled till it's just right and naturally gluten-free, making it a high-quality spirit that mixes with just about anything, from the smoothest martinis to the best Bloody Marys.
Tito's is known for giving back, teaming up with nonprofits to serve its communities and do good for dogs.
Make your next cocktail with Tito's.
Distilled and bottled by Fifth Generation Inc., Austin, Texas.
40% alcohol by volume.
Savor responsibly.
This episode is brought to you by Progressive Insurance.
Do you ever find yourself playing the budgeting game?
Well, with the name Your Price tool from Progressive, you can find options that fit your budget and potentially lower your bills.
Try it at progressive.com.
Progressive Casualty Insurance Company and affiliates.
Price and coverage match limited by state law.
Not available in all states.
Attention, all small biz owners.
At the UPS store, you can count on us to handle your packages with care.
With our certified packing experts, your packages are properly packed and protected.
And with our pack and ship guarantee, when we pack it and ship it, we guarantee it.
Because your items arrive safe or you'll be reimbursed.
Visit theupsstore.com slash guarantee for full details.
Most locations are independently owned.
Products, services, pricing, and hours of operation may vary.
See center for details.
The UPS store.
Be unstoppable.
Come into your local store today.
Computer, is there a replacement Berlin Sphere on board?
Negative.
The good people at Universal Dynamics have programmed us to put our targets at ease so as to more efficiently facilitate their collection.
At the Shabbat dinner, I was seated next to a 22-year-old who was very curious about my big furry microphone.
So, um,
can you introduce yourself?
Um, yeah, so, uh, I'm Tom.
I, uh,
it was kind of funny.
So, it sort of felt like I discovered I was an effective altruist rather than like being convinced to be one.
I had a teacher who assigned me to read Peter Singer.
Tom read Peter Singer's Drowning Child as a kid.
I like decided, I don't know, some point in high school that I would
dedicate my life to trying to do as much good as possible.
As far as how to do as much good as possible, Tom told me he recently made a big decision about that.
I
dropped out of Harvard a year ago in my junior year, and I'm now working as an ML scientist at an AI hardware startup.
We should get out of these people's house because it's past bedtime.
At that point, we had to take our conversation outside.
You dropped out of Harvard, like, so are you how many years out of that?
One year.
So I actually would have graduated
on Thursday.
Really?
Yeah.
Wow, okay, so you're like around 22-ish?
Exactly.
Yeah.
It was like
such a hard decision.
Like no matter...
The decision of like what you're gonna do with your life?
Yeah, because no matter what you're doing, you're just like abandoning a ton of people.
Fundamentally, I feel like the world is in a state of triage.
In a state of what?
Triage.
Like, there's so much going wrong that needs to be fixed.
If I'm like working on providing malaria nets in Africa, I'm in some sense abandoning like
all the starving children in India.
I can only really focus on one place.
What is the place that most urgently needs my help?
And you know, I figured that it's like
probably the future and like specifically the problems in the future which might be created by AI.
I've heard of kids Tom's age dropping out of school to go into the Peace Corps or like doing some kind of religious mission.
But going to work for an AI lab to save the future wasn't intuitive to me.
So why choose this route that you're on right now?
Obviously it's not, you're young, you have your whole life ahead of you, but why choose this route?
You know, I don't just care about the people and animals that are alive today.
I care deeply about future generations.
I think about our children and our children's children and so on and so forth.
Our children's children.
Tom told me he's trying to save future drowning children from AI apocalypse.
His p(doom), compared to rationalist Eliezer Yudkowsky's off-the-charts number, is 10%.
My view is that we should treat AI as being very likely to be the biggest thing ever, and treat the coming decades as likely the most important decades in human history.
This line, the most important decades in human history,
it's roughly a quote from a book by effective altruism founder Toby Ord.
Our present moment is just a very tiny slice of this much longer story of humanity.
Around the time Toby was building the Effective Altruist movement, trying to maximize the number of drowning children he could save, rationalists joining the movement were trying to convince Toby that AI should be a top effective altruist priority.
He wasn't convinced by their paperclip maximizer thought experiment, but he was convinced by the idea that AI could threaten the story of humanity.
This idea about existential risk.
What convinced him was, in a way, his own argument, the math of it all.
That preventing an AI catastrophe could save not just the drowning children today,
but tens of thousands of future generations.
Trillions of drowning children.
There's been about 10,000 generations of humanity so far, and it seems very plausible that there could be 10,000 or more generations to follow us.
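The arithmetic behind "trillions" is simple to sketch. The 10-billion-people-per-generation figure below is an assumed round number for illustration, not a figure from Toby Ord.

```python
# Back-of-the-envelope longtermist arithmetic, with assumed round numbers.
generations_to_come = 10_000      # "10,000 or more generations to follow us"
people_per_generation = 10e9      # assumption: roughly 10 billion people each

future_people = generations_to_come * people_per_generation
print(f"{future_people:.0e} potential future people")  # on the order of 100 trillion
```

On these assumptions, the future population swamps the present one by a factor of ten thousand, which is the whole force of the expected-value argument.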
Toby came to believe we're living at a crucial moment in human history, where up until now, humans have ruled the Earth.
Why is it humanity that's calling the shots on the Earth and not butterflies or ravens or chimpanzees?
We've ruled...
Because of our smarts.
You know, something to do with our brains, not to do with our brawn.
What if we weren't the smartest beings on Earth anymore?
I think ultimately, the most compelling overall argument to me is that if you survey researchers on AI.
He says AI researchers were telling him that possibility of a super intelligence smarter than us was around the corner.
Within, I'd say, the next 30 years or so, they think that that's about as likely as not.
And, you know, how would we still be calling the shots?
How would we not be perhaps subservient to these new systems?
Toby wrote a book laying out these arguments.
He named it the precipice for the cliff he sees humanity sitting on at this moment in history.
What's particularly pernicious about these existential risks is that they're something that if our generation drops the ball, there won't be any more generations.
He thinks we can determine the long-term survival of our species.
He gives this philosophy yet another ism, long-term ism.
The strong arguments that these risks were real slowly made people think, well, if I want to work to focus on that, what should I be doing?
What charities should I be donating to?
So effective altruists wouldn't just give their charity dollars to things like ending malaria.
They'd also give to charity to prevent an AI apocalypse.
And the way to avoid bad robots taking over the world, some people decided, was to use EA money to build a good robot.
And not just good, super intelligent, a magic intelligence in the sky.
You might remember those are the words of the ChatGPT company's founder, Sam Altman.
He started his nonprofit, OpenAI, with EA charity dollars from the group Open Philanthropy.
Charity dollars also went to OpenAI's competitor, Anthropic, whose CEO also wants to build a good robot, or as he put it, a machine of loving grace.
And it wasn't just EA charity dollars that went toward this cause.
Some of you may have noticed that a bunch of people in this community seem to think that AI is a big deal.
AI also became a common career path for young, effective altruists.
Eventually, the winter of sophomore year, I remember just like thinking through it and thinking like, oh, okay, yeah, I don't think there's really a way I don't go into AI somehow.
22-year-old Tom went into AI with the counsel of Harvard's Effective Altruism Group.
And in the course of my reporting, I met many other young people.
I'm like, wow, damn, this AI safety thing.
Crap, like, do I need to work on it?
Like, what can I do?
You know,
around the time ChatGPT came out, I decided the most effective career at doing good in the world is going into AI safety.
And I'm like, damn, I think I can actually move the needle on this.
They did the math and thought saving future children from AI apocalypse was neglected, tractable, and important.
Open Philanthropy told us over $410 million in EA dollars have gone toward addressing risks from advanced AI, making up 12% of their total giving.
Roughly the same percentage that's gone toward malaria prevention.
I guess I would say,
you know, if I try to work out my best guess of the most important issues of our time, I think AI risk is probably very high at the top.
Wow.
So it's number one, above the current drowning children.
You'd put it above the problems we face in the present?
I think I would, sadly.
I've been struggling with the math of it all.
I can see how it's important to think about our long-term future.
But no matter how many math problems EA people put in front of me, I have a hard time seeing how saving trillions of future children from AI apocalypse is the most important, tractable problem of our time.
How does a movement built around helping in a measurable way with things like malaria nets turn to a cause that requires you to almost predict the future?
It's almost like a religion or something where it requires faith that good things will come without those good things being clearly specified.
This is the criticism of ethicists like Dr. Margaret Mitchell.
To them, the solvable, tractable problems are the harms AI is doing right now.
Problems like bias, surveillance, environmental harms.
But instead, funding often goes toward addressing future hypothetical harms, or it goes toward building a super intelligence, something many ethicists don't think we should be building at all.
It seems to be like funding for sort of like fanciful ideas.
There's one follower of long-termism who's found his way to the White House.
This was no ordinary victory.
This was a fork in the road of human civilization.
It is thanks to you
that the future of civilization is assured.
And we're going to take Doge to Mars.
It was hard not to chuckle when billionaire Elon Musk talked about his goal of colonizing Mars after President Trump's inauguration.
But he's not joking.
When he started SpaceX, he intended it to be an insurance policy for humanity in case apocalypse strikes.
It's also why he says he went into AI.
It's all to protect the future drowning children.
You know, I think this is actually fundamentally important for ensuring the long-term survival of life as we know it, is to be a multi-planet species.
The long legs of the drowning child thought experiment have taken us very far away from its original intent of trying to get us to care about a crisis in Bangladesh.
Oh, the drowning child in the pond has certainly developed a life of its own.
In what way?
What do you mean?
I went back to the author of the drowning child thought experiment, Peter Singer, at his retirement party.
I hope that I've left a legacy in my writings, that they will lead people to think differently about what we owe people in extreme poverty and other parts of the world.
That's what the drowning child in the shallow pond was supposed to suggest.
But interpreting the parable to mean that the biggest issue of our time is saving future children from an AI apocalypse?
I think there's been too much focus.
I'm not dismissing it.
I think it's good that there are some people thinking about that and working on it.
But compared to some of the other problems that are around, I have the sense that people like it because it's a kind of nerdy problem that's, you know, interesting things to think about.
So I think that's why it gets more attention.
You might know that Peter Singer himself is no stranger to, shall we say, outlandish interpretations.
Using his own utilitarian philosophy, he's argued that severely disabled children add suffering to the world, and that it might be justifiable, in order to maximize happiness for the parents, to euthanize them.
So yeah, from where I sit, using math to maximize is not always the answer.
If you stare a little too hard at the numbers, the humans begin to fade out of focus.
When I think about the drowning child as it relates to AI, I don't think about the math.
I've been thinking about something else.
While reporting this story, the news broke that a 14-year-old boy in Florida named Sewell had killed himself.
He'd become obsessed with a chat bot.
And his last text to the bot, just before he died, showed that he believed ending his life on Earth would bring him closer to the bot.
That's who I think of when I think of the drowning child.
Abstracting that parable so far ahead in space and time, we risk losing sight of the drowning child right in front of us.
In an attempt to save some future hypothetical children, some people in the AI industry have set out to build a good robot, a super intelligent AI, a magic intelligence in the sky, a machine of loving grace.
But in selling that story and building those still very flawed systems to maybe save some future children, they've invented an industry that's creating new ponds for children to drown in today.
Okay, she's eating that bowler.
It's Play-Doh.
Oh no.
I wanted to pose all of this to Kelsey Piper after she pulled the Play-Doh away from her baby.
Kelsey was the person who was my introduction to the worlds of rationalism and effective altruism.
I asked her, why ignore AI harms today for the sake of some future children?
There are lots of people who have this impression that you need long-termism or theorizing about the badness of humanity going extinct or, you know, drowning child-based philosophy to care about this.
And I don't think you need any of that.
She kindly hinted that maybe I've fallen down a bit of a philosophical rabbit hole, caught in the intellectual debates of it all, and lost sight of the actual technology we were supposed to be talking about.
So, one thing I did, which was super valuable, is I tried to form an opinion that wasn't about all of the social melodrama on top of the AI scene.
Like, just play with the AI models and think about what can they do?
What kinds of things, if they could do them, do I think that would be concerning?
And I tried to- Now that's an interesting thing to do.
Producer Gabrielle Berbey's ears perked up at this idea.
She knows I am a sucker for social melodrama.
And maybe the social melodrama has been my crutch to avoid having to form my own opinion and actually having to use the technology that does scare me.
So what I want you to do first is I want you to open up ChatGPT
and I want you to say, I'm going to give you three episodes of a series in order.
I'm going to give you three episodes of a series.
Next time on Good Robot, we feed ourselves to the machine.
Good Robot was hosted by Julia Longoria and produced by Gabrielle Berbey.
Sound design, mixing, and original music by me, David Herman.
Our fact-checker is Caitlin PenzeyMoog.
Our editors are Diane Hodson and Catherine Wells.
Special thanks to Larissa MacFarquhar, whose book Strangers Drowning was an early inspiration for this episode.
And a quick note: Unexplainable host Noam Hasenfeld's brother is a board member at Open Phil, but he isn't involved in any of their grant decisions.
Noam played no role in the reporting of this series.
If you want to dig deeper into what you've heard, head to vox.com/goodrobot to read more Future Perfect stories about the future of AI.
Thanks for listening.