AI Won’t Really Kill Us All, Will It?

For months, more than a thousand researchers and technology experts involved in creating artificial intelligence have been warning us that they’ve created something that may be dangerous. Something that might eventually lead humanity to become extinct. In this Radio Atlantic episode, The Atlantic’s executive editor, Adrienne LaFrance, and staff writer Charlie Warzel talk about how seriously we should take these warnings, and what else we might consider worrying about.

Transcript


I'm Hanna Rosin, and this is Radio Atlantic.

I remember when I was a little kid being alone in my room one night watching this movie called The Day After.

It was about nuclear war, and for some absurd reason, it was airing on regular network TV.

Bad down here, I can't even breathe. Now listen, Denise, you get a hold of yourself now.

I particularly remember a scene where a character named Denise, my best friend's name was Denise, runs panicked out of her family's nuclear fallout shelter.

It was definitely extra, but also to teenage me, genuinely terrifying.

It was this very particular blend of scary and ridiculous, one I hadn't experienced since, until a couple of weeks ago, when someone sent me a link to this YouTube video: an interview with Paul Christiano, who's an artificial intelligence researcher.

The most likely way we die involves like not AI comes out of the blue and kills everyone, but involves we have deployed a lot of AI everywhere.

And you can kind of just look and be like, oh yeah, if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us.

Christiano was talking on this podcast called Bankless, and then I started to notice other major AI researchers saying similar things.

More than 1,300 tech industry leaders, researchers, and others are now asking for a pause in the development of artificial intelligence to consider the risk.

Now it's kind of permeating into the cognitive space.

Before it was more the mechanical space.

There needs to be at least a six-month stop on the training of these systems.

Contemporary AI systems are now becoming human competitors.

We have to get our act together.

We are hearing the last winds start to blow, the fabric of reality start to fray.

Is this another campy Denise moment?

Am I terrified?

Is it funny?

I can't really tell.

But I do suspect that the very doomiest stuff at least is a distraction.

That there are some actual dangers with AI, less flashy, but maybe equally life-altering.

And so today we're talking to Atlantic executive editor Adrienne LaFrance and staff writer Charlie Warzel, both of whom have been researching and tracking AI for some time.

Charlie, Adrienne.

So when these experts are saying, worry about the extinction of humanity, what are they actually talking about?

Yeah, let's game out.

Let's game out the existential doom for sure.

Thanks, guys.

I mean, so when people warn about the extinction of humanity at the hands of AI, that's literally what they mean, that all humans will be killed by the machines.

And it sounds very sci-fi, but the nature of the threat is that you imagine a world where more and more we rely on artificial intelligence to complete tasks or make judgments that previously were reserved for humans.

Obviously, humans are flawed.

Human judgment is deeply flawed, can be dangerous.

But one of the key challenges is that this assumes a moment at which AI's cognitive abilities eclipse those of our species.

And so all of a sudden, AI is really in charge of the biggest and most consequential decisions that humans make.

You can imagine they're making decisions in wartime about when to deploy nuclear weapons.

You could very easily imagine how that could go sideways.

Wait, but I can't very easily imagine how that would go sideways.

First of all, wouldn't a human put in many checks before you would give a machine that kind of access?

Well, one would hope. But the way that this technology would work would be that you give the AI the imperative to win this war no matter what.

And, you know, maybe you're feeding in other conditions that say we don't want mass civilian casualties. But ultimately, this is what people refer to as an alignment problem: if you give the machine a goal, it will do whatever it takes to reach that goal, even maneuvers that humans can't anticipate or that go against what would be human ethics.

There's sort of a meme version of this that has been around for a long time.

It's called the paperclip maximizer problem.

So it's basically you tell an artificial intelligence, this hypothetical one, you say, we want to build as many paperclips as fast as we can in the most efficient way.

And the AI goes through all the computations and says, well, really the thing that is stopping us from building is the fact that humans have other goals.

So we better just eradicate humans.

Why can't you just program into the machine: you are allowed to do anything to make those paperclips, short of killing everyone?

Let me lay out, like, a classic AI doomer scenario that maybe is a little more plausible, right?

That you have an AI and let's say, you know, five, ten years down the line, right?

With a supercomputer that's powering it, it's able to process that much more information.

It is, you know, on a scale of a hundred X more powerful than whatever we have now.

So it knows how to build iterations of itself.

So it builds a model.

That model builds a model.

And it gets to a point where it's replicated enough that it's sort of like a gene that is mutating, right?

And then this is the alignment problem that Adrian mentioned.

It's like, the humans and the AI are going along together, and we have the same objectives.

And then all of a sudden the AI takes this sharp left turn and realizes that to actually achieve that objective, it has to get rid of the humans.

Right.

It can hack a bank, either socially engineer its way in by impersonating someone, or, you know, actually hack in and steal funds away from a bank. Get money, pose as a human being, basically get someone involved in funding a state actor or a terrorist cell to, you know, use the money that it's gotten and pay that group to release this bioweapon, you know, on the planet.

And just to interject before you play it out completely: there's no intention here. Like, it's not necessarily intending to gain power the way, say, an autocrat would be, or intending to rule the world. It's simply achieving an objective that it began with in the most effective way possible.

Right. It speaks to this idea that once you build a machine that is so powerful, that is, you know, working on its own or that is building other machines like it, and you give it an imperative, there may not be enough alignment parameters that a human can set to keep it in check.
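To make that paperclip logic concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the action menu, the payoff numbers, and the penalty weights. The point is only that an optimizer handed a single imperative picks whatever scores highest, and that hand-written constraints help only if they enumerate the right harms at the right weight.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    paperclips: float  # paperclips this plan is expected to yield
    human_harm: float  # side effects the designers may or may not price in

# Hypothetical action menu; names and numbers are made up for illustration.
ACTIONS = [
    Action("run the factory normally", paperclips=1e3, human_harm=0.0),
    Action("divert all steel production to clips", paperclips=1e6, human_harm=0.3),
    Action("eliminate the humans with competing goals", paperclips=1e9, human_harm=1.0),
]

def naive_objective(a: Action) -> float:
    # The imperative as literally stated: maximize paperclips, nothing else.
    return a.paperclips

def constrained_objective(a: Action, penalty: float) -> float:
    # The proposed fix: subtract a penalty for harm. It only works if the
    # harm is enumerated AND weighted heavily enough to outbid a huge payoff.
    return a.paperclips - penalty * a.human_harm

print("naive choice:", max(ACTIONS, key=naive_objective).name)

for penalty in (1e3, 1e12):
    best = max(ACTIONS, key=lambda a: constrained_objective(a, penalty))
    print(f"penalty weight {penalty:g} -> {best.name}")
# With penalty 1e3, the harmful plan still wins (1e9 minus 1e3 is still huge).
# Only at 1e12 does the optimizer pick the benign plan. Underweight the
# penalty, or omit a harm from the model, and you get the "sharp left turn."
```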

And I followed your scenario completely.

That was very helpful, except you don't sound at all worried.

I don't know that I fully buy any of it.

I mean, but you, Charlie, you don't even sound somber.

I mean, we're talking about the end of humanity.

Like, you actually don't.

And so...

Why don't you like humans, Charlie?

Yeah, no.

Totally anti-human.

This is my hot take for the... No.

So, wait, so that's a real question.

Can you just actually explain why you don't take it seriously?

Is it because you think all these steps haven't been worked out?

Or is it because you think there are a lot of checks in place, like say there are with human cloning?

Like what is the real reason why you, Charlie, can intelligently lay out this scenario, but not actually take it seriously?

Bear with me here.

Are you familiar with the South Park underpants gnomes?

I am.

Yeah, thanks.

So if you want to know what the underpants gnomes have to do with the end of humanity, you'll have to stick around.

More after the break.


So, Charlie, continue.

South Park, underpants gnomes, and doomers.

For those who are, like, blissfully unaware, the underpants gnomes are from an episode of South Park. But what's important is that they have a business model that is notoriously vague.

Collecting underpants is just phase one.

Phase one is collect underpants.

Phase two...

Hey, what's phase two?

Phase one: we collect underpants.

La, la, la. But what about phase two?

Phase two is a question mark.

Well, phase three is profit. Get it?

And that's become kind of like a cultural signifier over the last decade or so for, you know, a really vague business plan.

Okay.

So there is an underpants-gnomes-logic quality to a lot of these AI doomers. And, I mean, this sounds so funny to say, I mean this as respectfully as possible, but when you listen to a lot of the doomers, you have somebody who is obviously an expert, who's obviously incredibly smart, who I don't think is necessarily being overly cynical here. And they're saying: step one, build an incredibly powerful artificial intelligence system that, you know, maybe gets close to or actually surpasses human intelligence.

Step two, question mark.

Step three, existential doom.

And it's like, I just have never really heard a very good walkthrough of step two, or two and a half, or two and three quarters, which is, you know, what I'm looking for from the AI doomers who are talking about the end of human civilization as a really true risk, one that we have to truly think about and, you know, maybe pause all our innovation.

I'm not seeing like checkpoints.

Like if A happens, we know we are, you know, getting closer to this point of no return, right?

Because no one is saying right now that we have reached the point of no return.

And what a lot of people are saying right now is, we're building something really helpful, you know, that may increase the amount of knowledge and intelligence in the world, that may give us all kinds of unbelievable scientific breakthroughs and economic opportunities.

Wait, but Charlie, I think you did give us step two, because step two is the AI hacks a bank and pays a terrorist, and the terrorists unleash a virus that kills humanity.

And I would also say that I think what people who are most worried would argue is that there isn't time for a checklist.

And that's the nature of their worry: we don't have the luxury of a checklist, because the way this technology works, you know, it's too fast, it's too inscrutable, to know exactly when the point of no return has been passed.

And there are some who've said we are past the point of no return.

And I get that.

I really do.

I'll just say, my feeling on this is that the existential stuff, like the, you know, Terminator 2: Judgment Day scenario, robots rolling over human skulls, it feels like a distraction from the bigger problems.

Okay, wait, but that's actually what I want to know.

So I'm not distracted by the shiny doom movie.

What actually are these bigger problems that we do need to worry about or pay attention to?

I think the possibility of wiping out entire job categories and industries, though that is a phenomenon we've experienced throughout technological history, is real.

That's a real threat to people's real lives and ability to buy groceries, you know?

And I have real questions about what it means for the arts, threatening our sense of what art is and whose work is valued, specifically with regard to artists and writers.

But Charlie, what are yours?

Well, I think before we talk about exterminating the human race, I'm worried about financial institutions adopting these types of automated generative AI machines.

And let's use Wall Street as an example, right?

If you have an investment firm that is using a powerful piece of technology, and you want to optimize for a very specific stock or a very specific commodity, then you get, you know, the possibility of something like that paperclip problem: well, what's the best way to drive the price of, you know, corn up? Well, maybe it's to get rid of a certain crop or start a conflict in a certain region.

Now, again, there's still a little bit of that underpants-gnome-ish quality to it, where I'm like, I don't know all the different steps.

But when I worry, you know, about AI in this way, I think of it like that.

I think a good sort of analog for this from the social media era, right, is that when we were all worried about, you know, what happens if Facebook connects the world, it would have been really silly to imagine, when Mark Zuckerberg was making the Facebook in his Harvard dorm room, that it could lead to ethnic cleansing or genocide somewhere like Myanmar.

But ultimately, when you create powerful networks, you connect people, there's all sorts of unintended consequences.

Like basically, we have a lot of problems in the world.

There's a way that AI, that certain AI systems could act as a force multiplier for those problems.

Right.

So given the speed and suddenness with which these bad things can happen, you can understand why a lot of intelligent people are asking for a pause, just to slow all this AI work down.

Do you guys think that's even possible?

Is that the right thing to do?

No, I mean, I think it's unrealistic to expect tech companies to slow themselves down.

It's intensely competitive right now.

You know, I'm not convinced that regulation right now would be the right move either.

We'd have to know exactly what that looks like.

We saw it with social platforms.

They call for it, you know, they're saying: Congress, if you would just regulate us, it would all be fine.

And then at the same time, they're lobbying very hard not to be regulated.

I see.

So you're saying it's a cynical public play.

What they're looking for is sort of toothless regulations or things that aren't that serious.

I think that is unquestionably one dynamic at play.

Also, to be fair, I think that the people, many of the people who are building this technology are indeed very thoughtful and hopefully are reflecting with some degree of seriousness about what they're unleashing.

So I don't want to suggest that they're all just doing it for political reasons, but there certainly is that element, no question.

I mean, when it comes to how we slow it down, I think it has to be individual people deciding for themselves how they think this world should be.

But I've had conversations with people who are not journalists, who are not in tech, but who are unbridled in their enthusiasm for what this will mean.

And someone mentioned to me how excited he was that this would mean that you could just surveil your workers all the time, using AI to tell exactly what they were doing, what websites they were visiting.

At the end of the day, you get a report that shows how productive they were.

To me, that's an example of something that could very quickly be seen among some people as culturally acceptable, and that I think we now have to really push back against, in terms of civil liberties and in terms of how we think about how this technology will actually be used in our lives.

To me, this is much more threatening than the existential doom in the sense that these are the sorts of decisions that are being made right now by people who have genuine enthusiasm for changing the world in ways that seem small but are actually big.

And so I think it has to be individual people speaking up, whether it's to their employer or their representatives. Being active on this issue in whatever way you can, and taking a stand for shaping how we use technology, I think is crucially important right now, because norms will harden before most people have a chance to grasp what's happening.

So that's a little bit like cell phones.

We were all handed cell phones before we fully absorbed what would be the consequences for our relationships, for our society, how we communicate, for a lot of different things.

I'm 100% with you.

I guess I just don't know who "we" is in that sentence. And it makes me feel a little vulnerable to think that, once again, in America, every individual and their family and their friends has to decide for themselves, as opposed to, say, the European model, where you just put some basic regulations in place. Like, the EU already passed a resolution to ban certain forms of public surveillance, like facial recognition, and to review AI systems before they go fully commercial, that kind of thing.

There's a way in which, even if you do put regulations on things, it doesn't stop somebody from building something on their own.

It wouldn't be as powerful as the multi-billion dollar supercomputer from OpenAI or whatnot.

But those models will be out in the world.

Those models will also maybe not have some of the restrictions that some of these companies who are trying to build them thoughtfully are going to have.

Maybe they won't have content restrictions.

Maybe they will tell you how to make a chemical weapon, right?

Or maybe you'll have people, like we have in the software industry, creating malware, creating AI malware and selling it to the highest bidder, whether that's a foreign government or a terrorist group, some kind of state-sponsored sale.

And then you get into this idea of like a geopolitical race, which is part of all of this, right?

Even if governments aren't talking about it behind closed doors, they are talking about an AI race with China.

They are trying to become an AI, you know, global leader.

So there are all these very, very confusing, thorny problems that are being posed by, you know, who owns this, who develops it, how do we develop it?

You have all of that.

And then you have the cultural issues.

Those are the ones that I think we will see and feel really acutely before we feel any of this other stuff.

Like what?

Like what's an example of a cultural issue?

Like you have all of these systems that are optimized for scale with a real cold, hard machine logic.

And I think that artificial intelligence is sort of the truest, almost final realization of, you know, scale.

It is a scale machine.

Like it is human intelligence at a scale that humans can't have.

That's really worrisome to me.

Like, that to me feels like the thing. Like, hey, do you like Succession? Well, AI is going to generate 150 seasons of Succession for you to watch, right?

It's like, I don't want to necessarily live in that world, because it's not made by people. Like, if we insert artificial intelligence in the sort of most literal sense, it really is sort of like a strip-mining of the humanity out of a lot of life. And that is really worrisome.

I mean, Charlie, that sounds even worse than the doom scenarios I started with. Because how am I, say as one writer, or person X, who as Adrienne started out saying is just trying to pay for their groceries, supposed to take up arms and take a stance against this enormous global force?

We have to assert that our purpose on the planet is not just an efficient world, right?

Yeah, we have to insist on that. Yeah, at least as a start.

Charlie, you have any tiny bit of optimism for us?

Maybe just more of a realist.

I mean, you can look at the way that we have coexisted with all kinds of technologies as a story where, you know, the disruption comes in, things never feel the same as they were, and there's usually, you know, a chaotic period of upheaval.

And then you sort of learn to adapt, right?

I am optimistic that humanity is not going to end, which I think is, you know, the best I can do here; that, like, the doomsday scenario is probably not as likely.

Yeah, I mean, you're struggling.

I hear you struggling to be definitive, but I feel like what you're getting at is you have faith in our history of adaptation.

We have learned to live with really cataclysmic and shattering technologies many times in the past.

And so you just have faith that we can learn to live with this one.

Yeah, I am.

And on that somewhat, maybe, tiny bit of an optimistic note: Charlie Warzel, Adrienne LaFrance, thank you again.

You've made me feel, I don't know, safe enough to crawl out of my bunker for now.


Thank you.

This episode of Radio Atlantic was produced by Jocelyn Frank and edited by Claudine Ebeid.

Our engineer is Rob Smierciak, with help from Rico Tolbert.

Fact-checking by Michelle Soraka and Steph Hayes.

Thank you also to managing editor Andrea Valdez.

I have a favor to ask you guys this week.

If you like the show, please leave us a review, tell your friends about it, or maybe just tell everyone named Denise about the show.

I'm Hanna Rosin, and we'll be back next Thursday.