Are We Past Peak iPhone? + Eliezer Yudkowsky on A.I. Doom

“There's only so many things that you can do to redesign a glass rectangle in your pocket.”


Runtime: 1h 12m

Transcript

Speaker 1 The University of Michigan was made for moments like this.

Speaker 3 When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.

Speaker 10 From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible.

Speaker 13 Wherever we go, progress follows. For answers, for action, for all of us, look to Michigan.

Speaker 14 See more solutions at umich.edu/look.

Speaker 16 The other big news of the week is that Larry Ellison, the founder of the Oracle Corporation, just passed Elon Musk to become the richest man in the world.

Speaker 17 Yeah, and I love this story because there was an incident that I filed away in my catalog of moments when straight people write headlines that gay people find hilarious.

Speaker 17 So I don't know if you saw the version of this story on Bloomberg, but the headline is: Ellison tops Musk as world's richest man.

Speaker 17 And I thought, he's doing what?

Speaker 17 Is that a privilege of becoming the world's richest man? You get to top number two?

Speaker 16 Oh, this is why they need representation of gay people on every editing desk in America.

Speaker 16 Hire a gay copy editor, Bloomberg. You'll save yourself a lot of headaches.

Speaker 16 I'm Kevin Roose, a tech columnist at The New York Times.

Speaker 17 I'm Casey Newton from Platformer, and this is Hard Fork.

Speaker 17 This week, the new iPhones are almost here.

Speaker 16 But is Apple losing the juice?

Speaker 17 Then, AI Doomer-in-Chief Eliezer Yudkowsky is here to discuss his new book: If Anyone Builds It, Everyone Dies.

Speaker 18 I wonder what it's about.

Speaker 16 Well, there was a big Apple event this week. On Tuesday, Apple introduced its annual installment of Here's the New iPhone and some other stuff.

Speaker 16 And did you watch this event?

Speaker 17 I did watch it, Kevin. Because as you know, at the end of last year, I predicted that Apple would release the iPhone 17.
And so I had to tune in to see if my prediction would come true.

Speaker 16 Yes. Now, we were not invited down to Cupertino for this.

Speaker 16 You know, strangely, we haven't haven't been invited since that one time that we went and covered all their AI stuff that never ended up shipping. But anyway, they had a very long video presentation.

Speaker 16 Tim Cook said the word incredible many, many times, and they introduced a bunch of different things. So let's talk about what they introduced, and then we'll talk about what we think of it.

Speaker 16 Let's do it. So the first thing, since this was their annual fall iPhone event, they introduced a new line of iPhones.
They introduced three new iPhones.

Speaker 16 The iPhone 17 is the sort of base model new iPhone that had kind of incremental improvements to things like processors, battery, cameras. Nothing earthshaking there, but they did come out with that.

Speaker 16 They also came out with a new iPhone 17 Pro, which has a new color. This is like an orange, like a sort of burnt orange color.
Casey, what did you think of the orange iPhone 17 Pro?

Speaker 17 I'm going to be sincere. I thought it looked very good.

Speaker 16 Me too. I did think it looked pretty cool.

Speaker 16 Now, I'm not a person who buys iPhones in like different colors because I put a case on them because I'm not, you know, a billionaire.

Speaker 16 But if you are a person who likes to sort of put a clear case on your phone or just carry it around, then you may be interested in this new orange iPhone.

Speaker 16 I did see that the first person I saw who was not an Apple employee carrying this thing was Dua Lipa, who I guess gets early access to iPhones now.

Speaker 18 Wow.

Speaker 17 That's a huge perk of being Dua Lipa.

Speaker 18 Maybe the biggest.

Speaker 16 So in addition to the new iPhone 17, the 17 Pro, and the 17 Pro Max, they also introduced the iPhone Air. It costs $200 more than the standard iPhone 17, and it has lots of different features.

Speaker 16 But the main thing is that it is slimmer than the traditional iPhone. So I guess people have been asking for that.
Casey, what did you think of the iPhone Air?

Speaker 17 I don't understand who this is for. Like, truly, like, not once has anyone in my life complained about the thickness of an iPhone.

Speaker 17 You know, maybe if you're carrying it in your front pocket and you want to be able to put a few more things in there with it, this is really appealing to you.

Speaker 17 But there are some significant performance trade-offs.

Speaker 17 You know, they announced it alongside this MagSafe battery pack that you slap onto the back of it, which is, of course, going to make it much thicker.

Speaker 16 No, Casey, it's even better than that because they said that the iPhone Air has all-day battery life, but then like in the next breath, they were like, oh, and here's a battery pack that you can clip onto your phone just in case something happens.

Speaker 16 We're not going to tell you what that thing might be, but just in case, it's there for you.
Right.

Speaker 17 So.

Speaker 16 You know, I think as with all new iPhone announcements of the past couple of years, I think there was not much to sort of talk about in the iPhone category this year.

Speaker 16 It's like the, you know, the phones they get a little bit faster, the cameras get a little bit better.

Speaker 16 They have some new like heat dispersal system called the vapor chamber that's supposed to like make the phone less likely to get hot when it's like using a bunch of processing power.

Speaker 16 At first, I thought they had made it so that you could vape out of your iPhone, which I do think would be a big step forward in the hardware department, but unfortunately, that's just a cooling system.

Speaker 17 Yeah, vapor chamber is what I called our studio before we figured out how to get the air conditioning working in there.

Speaker 16 Yes.

Speaker 16 So let's move on to the watches. The watches got some new upgrades.
The SE got a better chip and an always-on screen.

Speaker 16 The Apple Watch 11 got better battery life. Interestingly, these watches will now alert you if they think you have hypertension,

Speaker 16 which I looked up. It's high blood pressure.

Speaker 16 And it says that it can like analyze your veins and some activity there to tell you after like a period of data collection if it thinks you're in danger of developing hypertension. So,

Speaker 16 yeah, I mean, maybe that'll help some people.

Speaker 17 I mean, that was of interest to me. You know, Kevin, high blood pressure runs in my family, and my blood pressure spiked significantly after I started this podcast with you.

Speaker 17 So, we'll be interested to see what my watch has to say about that. You know, it's also going to give us a sleep score, Kevin.

Speaker 17 So now, every day when you wake up, you've already been judged before you even take one foot out of bed.

Speaker 16 Yes, I hate this.

Speaker 16 I will not be buying this watch for the sleep score because the couple times in my life that I've worn devices that give me a sleep score, like the Whoop band or the Oura ring, you're right.

Speaker 16 It does just start off your day being like, oh, I'm going to have a terrible day today. I only got a 54 on my sleep score.

Speaker 18 Yeah.

Speaker 17 You know, I have that. We have this Eight Sleep bed, which, you know, performs similar functions, but it's actually sensors built into the bed itself.

Speaker 17 And I sit down at my desk today and it sends me a push notification saying, you snored 68 minutes more than normal last night.

Speaker 16 What were you doing last night? That's a lie.

Speaker 18 I was being sick. I have a cold.

Speaker 18 I'm incredibly brave for even showing up to this podcast today.

Speaker 16 Oh, well, I appreciate you showing up even with your horrible sleep score. I appreciate it.

Speaker 18 Thank you.

Speaker 16 Okay, moving on. Let's talk about what I thought was actually the best part of the announcement this week, which was the new AirPods Pro 3.

Speaker 16 This is the newest version of the AirPods that has, among other new features, better active noise cancellation, better ear fit, new heart rate sensors so they can sort of interact with your workouts and your workout tracking.

Speaker 16 But the feature that I want to talk to you about is this live translation feature. Did you see this?

Speaker 17 I did. This was pretty cool.

Speaker 16 So in the video where they're showing off this new live translation feature, they basically show, you know, you can walk into like a restaurant or a shop in a foreign country where you don't speak the language and you can sort of make this little gesture where you, you know, sort of touch both of your ears and then it'll enter live translation mode.

Speaker 16 And then when someone talks to you in a different language, it will translate that right into your AirPods in real time, basically bringing the universal translator from Star Trek into reality.

Speaker 17 Yeah, my favorite comment about this came from Amir Blumenfeld over on X. He said, LOL, all you suckers who spent years of your life learning a new language.

Speaker 17 I hope it was worth it for the neuroplasticity and joy of embracing another culture.

Speaker 16 Yes, and I immediately saw this and thought not about traveling to a foreign country, which is probably how I would actually use it, but about this Turkish barber I used to have when I lived in New York, who would just constantly speak in Turkish while I was getting my hair cut.

Speaker 16 And I was pretty sure he was like talking smack about me to his friend, but I could never really tell because I don't speak Turkish.

Speaker 16 So now with my AirPods Pro 3, I could go back and I could catch him talking about me.

Speaker 17 Yeah, you know, over on Threads, an account, Rushmore90, posted, them nail salons about to be real quiet now that the new AirPods have live language translation.

Speaker 17 And I thought that's probably right.

Speaker 16 Yes. So this actually, I think, is very cool.
I am excited to try this out. I probably will buy the new AirPods just for this feature.

Speaker 16 And like, I have to say, it just does seem like with all of the new AI translation stuff, like learning a language is going to become, I don't know, not obsolete because I'm sure people will still do it.

Speaker 16 There are still plenty of reasons to learn a language, but it is going to be way less necessary to just get around in a place where you don't speak the language.

Speaker 17 I mean, that's how I think about it. You know, this year, I had the amazing opportunity to go to both Japan and Italy, countries where I do not speak the language.

Speaker 17 And of course, I was traveling in major cities there. And actually, most of the folks that we met spoke incredible English.
So, you know, I actually didn't have much challenge.

Speaker 17 But you can imagine speaking another language that is less common in those places, showing up as a tourist.

Speaker 17 And whereas before you'd be spending a lot of time just trying to figure out basic navigation and how to order off of menus and that sort of thing.

Speaker 17 All of a sudden, it feels like you sort of slipped inside the culture. And I think there's something really cool about that.

Speaker 16 So those are sort of the major categories of new devices that Apple announced at this event. They also released an accessory that I thought was pretty funny.

Speaker 16 You can now buy an official Apple crossbody strap for your iPhone for $60.

Speaker 16 Basically, if you want to wear your phone instead of putting it in your pocket, Apple now has a device for that. So I don't know whether that qualifies as a big deal, but it's something.

Speaker 17 Let me tell you, I think this is actually going to be really popular. You know, Kevin, I don't know how many gay parties you've been to, but at the ones that I go to, the boys often aren't wearing a lot of clothes. You know, it's sort of like, we're in maybe some short shorts and a crop top. They don't want to fill their pockets with phones and wallets and everything. So you just sling that thing around your neck and you're good to go to the festival or the EDM rave or the cave rave. Wherever you might be headed, the crossbody strap will have your back.

Speaker 16 Wow, the gays of San Francisco are bullish on the crossbody strap. We'll see how that goes.

Speaker 16 So, Casey, that's the news from the Apple event this week. What did you make of the thing if you kind of take a step back from it?

Speaker 17 So, on one hand, I don't want to overstate the largely negative case that I'm going to make because I think it's clear that Apple continues to have some of the best hardware engineers in the world.

Speaker 17 And a lot of the engineering in the stuff that they're putting out is really good and cool.

Speaker 17 On the other hand, you don't have to go back too many years to remember a time when the announcement of a new iPhone felt like a cultural event, and they just don't feel that way anymore.

Speaker 17 You know, my group chats were crickets about the iPhone event yesterday. And even as I'm watching the event, reading through all the coverage, I found myself with surprisingly little to say about it.

Speaker 17 And I think that's because over the past few years, Apple has shifted from becoming a company that was a real innovator in hardware and software and the interaction between those two things into a company that is way more focused on making money, selling subscriptions and sort of monetizing the users that they have.

Speaker 17 So I was just really struck by that. What did you think?

Speaker 16 Yeah, I was not impressed by this event. I mean, it just doesn't feel like they took a big swing at all this year.
The Vision Pro, whatever you think of it, was a big swing.

Speaker 16 And it was at least something new to talk about and test out and sort of prognosticate on.

Speaker 16 What we saw this year was just like more of the same and slight improvements to things that have been around for many years.

Speaker 16 Now, I do think that this is probably like a sort of lull in terms of Apple's yearly releases.

Speaker 16 There's been some reporting, including by Mark Gurman at Bloomberg, that they are hoping to release smart glasses next year.

Speaker 16 Basically, these would be Apple's version of something like the Meta Ray-Bans.

Speaker 16 And I think if you squint at some of the announcements that Apple made this year, you can kind of see them laying the groundwork for a sort of more wearable experience.

Speaker 16 One thing that I found really interesting: so, on the iPhone Air, they have kind of moved all of the computing hardware up into what they call the plateau, which is like this very sort of small oval bump on the back of the phone.

Speaker 16 And to me, I see that and I think, oh, they're trying to like see how small they can get kind of the necessary computing power to run a device like an iPhone, maybe because they're going to sort of try to shrink it all the way down to put it in a pair of glasses or something like that.

Speaker 16 So that's what would make me excited by an Apple event is like some new form factor, some new way of like interacting with an Apple device. But this to me was not it.

Speaker 17 Yeah.

Speaker 17 I think on that particular point, I can't remember the last time that Apple seemed to have an idea about what we could do with our devices that seemed like really creative or clever or super different from the status quo.

Speaker 17 Instead, you know, the one thing about this event that my friends were laughing about yesterday was they showed this slide during the event that showed the iPhones, and the caption said, a heat-forged aluminum unibody design for exceptional pro capability.

Speaker 17 And we were all just like, what?

Speaker 17 A heat forged what?

Speaker 17 Like, now we're doing what exactly?

Speaker 18 I don't know. Yeah.

Speaker 16 I think that this is sort of teeing up one of the questions that I want to talk to you about today, which is like, do you think that we are past the peak smartphone era?

Speaker 16 Like, do you believe that, not necessarily in the sales numbers or the revenue figures, but in terms of the cultural relevance of smartphones, we are seeing the end of the smartphone era, at least in terms of the attention that new smartphones are capable of commanding?

Speaker 17 I probably wouldn't call it the end, but I do think we are seeing the like maturity of the smartphone era.

Speaker 17 You know, in the same way that new televisions come out every year and are a little bit better than the one before, but nobody feels like televisions are making incredible strides forward.

Speaker 17 I think phones have gotten to a similar place. There are some big swings coming.
We've seen reporting that Apple's going to put out a folding iPhone, you know, within the next few years.

Speaker 17 So maybe that will help give it some juice back. But at the end of the day, there's only so many things that you can do to redesign a glass rectangle in your pocket.

Speaker 17 And it feels like we've kind of created the optimum version of that. And so that's why you see so much money rushing into other form factors.
This is why OpenAI struck that partnership with Jony Ive.

Speaker 17 That's why you see other companies trying to figure out how can we make AI wearables.

Speaker 17 So I think that that is where the energy in this industry is going is figuring out, can AI be a reason to create a new hardware paradigm?

Speaker 17 And in this moment, it sure does not seem like Apple is going to be the company that figures that out first.

Speaker 16 Yeah, I would agree with that. I think they'll probably see what other companies do and see which ones start to take off with consumers, and then make their own version of it, sort of similar to what they are reportedly going to do with these smart glasses. They're basically trying to catch up to what Meta has been doing now for several years.

Speaker 17 As you were saying that, this beautiful vision came into my head, which is what if Apple really raced ahead and they put out their version of smart glasses and you would ask Siri for things and it would just say no because it didn't know how to do them.

Speaker 17 And that was sort of Apple's 1.0 version of smart glasses. So hey, Siri, check my emails.
Like, I don't know how to do that.

Speaker 18 And then move on, move on.

Speaker 16 Yeah, go away.

Speaker 18 Go away. Get out of here.

Speaker 16 I mean, I do think that's like a huge problem for them, right?

Speaker 16 Like they can design all of this amazing hardware to like bring all of this AI like closer to your body and your experience and like into your ears.

Speaker 16 But at the end of the day, if Siri still sucks, like that's not going to move a lot of product for them.

Speaker 16 And so I think this is an area where them being behind in AI really matters to the future of the company.

Speaker 16 Like, the reasons to buy a new iPhone every year or every two years are going to continue shrinking, especially if the sort of brain power in them is a lot less than the brain power of the AI products that the other companies are putting out.

Speaker 17 And Kevin, I imagine you've seen, but there's been some reporting that Apple has been talking with Google about potentially letting Google essentially run the AI on its devices.

Speaker 17 They've reportedly also talked to Anthropic. Maybe they've talked to others as well.
But I actually think that that makes a lot of sense, right?

Speaker 17 It doesn't seem like in the next year, they're going to figure out AI. So it could be time to go work with another vendor.

Speaker 16 Yeah.

Speaker 16 I got to say, I used to believe that smartphones were sort of over and that they were becoming sort of obsolete and less relevant and that there was going to be like a breakout new hardware form factor that would kind of take over from the smartphone.

Speaker 16 And like I'm sort of reversing my belief on this point. I've been trying out the Meta Ray-Bans now for a couple of months.
And my experience with them is not like amazing.

Speaker 16 Like I don't wear them and think like, I think this could replace my smartphone. I think like, oh, my smartphone is like much better than this at a lot of different things.

Speaker 16 And I also like about my smartphone that I can like put it down or put it in another room or, you know, that it's not sort of constantly there on my face, like reminding me that I'm hooked up to a computer.

Speaker 16 So I think there will be some people who want to leave smartphones behind and are like happy to do, you know, whatever the next wearable form factor is instead.

Speaker 16 But smartphones still have a lot going for them.

Speaker 16 Like it's really tough to imagine cramming all of the hardware and the batteries and everything that you have in your smartphone today into something small enough that you'd actually want to wear it.

Speaker 16 And so I think that, you know, whatever new form factors come along in the next few years, whether it's OpenAI's thing or something new from a different company, I think it's going to supplement the smartphone and not replace it.

Speaker 17 Well, here's what I can tell you, Kevin. I'm hearing really good things about the Humane AI Pin, so you may want to check that out.

Speaker 16 I'll keep tabs on that.

Speaker 16 When we come back, we'll talk with longtime AI researcher Eliezer Yudkowsky about his new book on why AI will kill us all.

Speaker 19 This podcast is supported by Give Directly. Remember the old Nokia brick phone? Costs 20 bucks and never breaks.

Speaker 19 That's what Give Directly uses to deliver cash transfers to families in extreme poverty.

Speaker 19 They send your donations as mobile money transfers to these basic phones, and then families use them to buy what they need most.

Speaker 19 Hundreds of studies show they use this money to improve their health, income, education, and more.

Speaker 19 Support families in need and get your first donation matched until December 31 at givedirectly.org/times.


Speaker 4 This podcast is supported by AT&T.

Speaker 20 America's first network is also its fastest and most reliable.

Speaker 20 Based on RootMetrics United States RootScore Report, 1H 2025. Tested with best commercially available smartphones on three national mobile networks across all available network types. Your experiences may vary.

Speaker 6 RootMetrics rankings are not an endorsement of AT&T.

Speaker 5 When you compare, there's no comparison.

Speaker 18 AT&T.

Speaker 17 All right, Kevin. Well, for the second two segments of today's show, we are going to have an extended conversation with Eliezer Yudkowsky, who is the leading voice in the AI risk movement.

Speaker 17 So, Kevin, how would you describe Eliezer to someone who's never heard of him?

Speaker 16 So, I think Eliezer is just someone I would first and foremost describe as a character in this whole scene of sort of Bay Area AI people.

Speaker 16 He is the founder of the Machine Intelligence Research Institute, or MIRI, which is a very old and well-known AI research organization in Berkeley.

Speaker 16 He was one of the first people to start talking about existential risks from AI many years ago, and in some ways helped to kickstart the modern AI boom.

Speaker 16 Sam Altman has said that Eliezer was instrumental in the founding of OpenAI. He also introduced the founders of DeepMind to Peter Thiel, who became their first major investor back in 2010.

Speaker 16 But more recently, he's been known for his kind of doomy proclamations about what is going to happen when and if the AI industry creates AGI or superhuman AI.

Speaker 16 He's constantly warning about the dangers of doing that and trying to stop it from happening. He's also the founder of rationalism, which is this sort of intellectual subculture.

Speaker 16 Some would call it a techno-religion that is all about overcoming cognitive biases and is also very worried about AI.

Speaker 16 People in that community often know him best for the Harry Potter fanfiction that he wrote years ago called Harry Potter and the Methods of Rationality, which I'm not kidding, I think has introduced more young people to ideas about AI than probably any other single work.

Speaker 16 I meet people all the time who have told me that it was sort of part of what convinced them to go into this work. And he has a new book coming out, which is called If Anyone Builds It, Everyone Dies.

Speaker 16 He co-wrote the book with MIRI's president, Nate Soares.

Speaker 16 And basically, it's kind of a mass market version of the argument that he's been making to people inside the AI industry for many years now, which is that we should not build these superhuman AI systems because they will inevitably kill us all.

Speaker 16 And there is so much more you could say about Eliezer. He's truly fascinating.
I did a whole profile of him that's going to be running in the Times this week.

Speaker 16 So people can check that out if they want to learn more about him. It's just hard to overstate how much influence he has had on the AI world over the past several decades.
That's right.

Speaker 17 And last year, Kevin and I had a chance to see Eliezer give a talk. During that talk, he referred to this book that he was working on, and we have been excited to get our hands on it ever since.

Speaker 17 And so we're excited to have the conversation. Before we do that, we should, of course, do our AI disclosures.
My boyfriend works at Anthropic.

Speaker 16 And I work at the New York Times, which is suing OpenAI and Microsoft over alleged copyright violations related to the training of AI systems.

Speaker 17 Let's bring in Eliezer.

Speaker 16 Eliezer Yudkowsky, welcome to Hard Fork.

Speaker 18 Thank you for having me on.

Speaker 16 So we want to talk about the book, but first I want to sort of take us back in time.

Speaker 16 When you were a teenager in the 90s, you were an accelerationist.

Speaker 16 I think that would surprise people who are familiar with your most recent work, but you were excited about building AGI at one point, and then you became very worried about AI and have since devoted the majority of your life to working on AI safety and alignment.

Speaker 16 So what changed for you back then?

Speaker 18 Well, for one thing, I would point out that in terms of my own personal politics, I'm still in favor of building out more nuclear plants and rushing ahead on most forms of biotechnology that are not, you know, gain of function research on diseases.

Speaker 18 So it's not like I turned against technology. It's that there's this small subset of technologies that are really quite unusually worrying.

Speaker 18 And what changed? You know, basically it was the realization that just because you make something very smart, that doesn't necessarily make it very nice.

Speaker 18 You know, as a kid, I thought, you know, human civilization had grown wealthier over time, and even smarter compared to other species. And we'd also gotten nicer.
And I thought that was a fundamental law of the universe.

Speaker 17 You became concerned about this long before ChatGPT and other tools arrived and got more of the rest of us thinking seriously about it.

Speaker 17 Can you kind of sketch out the intellectual scene in the 2000s of folks who were worrying about AI, right?

Speaker 17 So going way back to even before Siri, were people seeing anything concrete that was making them worried?

Speaker 17 Or were you just sort of fully in the realm of speculation that in many ways has already come true?

Speaker 18 Well, there were indeed very few people who saw the inevitable.

Speaker 18 I would not myself frame it as speculation. I would frame it as prediction, forecasting of something that was actually pretty predictable.

Speaker 18 You don't have to see the AI right in front of you to realize that if people keep hammering on the problem and the problem is solvable, it will eventually get solved.

Speaker 18 Back then, the pushback was along the lines, you know, there were people saying like, you know, real AI isn't going to be here for another 20 years. What are you crazy lunatics talking about?

Speaker 18 So, you know, like that was in 2005, say, and the thing about 20 years later is that it's a real place. Like you end up there.

Speaker 18 What happens 20 years later is not in the never, never fairy tale speculation land that nobody needs to worry about. It's you, 20 years older, having to deal with your problems.

Speaker 17 So let's sketch out the thesis of your book a bit more. I would say the title makes your feelings very clear, but let's flesh it out a little bit.
Why does a more powerful AI model mean death for all of us?

Speaker 18 Well, because we just don't have the technology to make it be nice.

Speaker 18 And if you have something that is very, very powerful and indifferent to you, it tends to wipe you out, on purpose or as a side effect. The wiping humanity out on purpose is not because we would be able to threaten a superintelligence that much ourselves, but because if you just leave us here with our GPUs, we might build other superintelligences that actually could threaten it. And the as-a-side-effect part is that if you build enough fusion power plants and enough compute, the limiting factor here on Earth is not so much how much hydrogen there is to fuse and generate electricity with. The limiting factor is how much heat the Earth can radiate.

Speaker 18 And if you run your power plants at the maximum temperature where they don't melt, that is like not good news for the rest of the planet. The humans get cooked in a very literal sense.
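[Editor's note: a rough back-of-the-envelope sketch of the waste-heat limit being invoked here, treating Earth as a blackbody at its current average surface temperature of about 288 K; the figures are illustrative assumptions, not numbers from the conversation.

P_rad = sigma * T^4 * A_Earth
      ≈ (5.67e-8 W m^-2 K^-4) × (288 K)^4 × (5.1e14 m^2)
      ≈ 2e17 W

Since radiated power scales as T^4, adding waste heat P_w raises the equilibrium temperature by roughly ΔT ≈ (T/4) × (P_w / P_rad). Humanity's current energy use of about 2e13 W is four orders of magnitude below Earth's radiative output; waste heat approaching that 2e17 W scale is the "humans get cooked" regime described above.]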

Speaker 18 Or if they go off the planet, then they put a lot of solar panels around the sun until there's no sunlight left here for Earth. That's not good for us either.

Speaker 16 So these are sort of versions of the famous paperclip maximizer thought experiment, which is, you know, if you tell an AI, generate a bunch of paperclips, as many as you can, and you don't give it any other instructions, then it will use up all the metal in the world and then it will try to run cars off the road to gather their metal and then it will end up killing all humans to get more raw materials to build more paperclips.

Speaker 16 Am I hearing that right?

Speaker 18 That's actually a distorted version of the thought experiment.

Speaker 18 It's the one that got written up, but the original version that I formulated was somebody had just completely lost control of a super intelligence they were building.

Speaker 18 Its preferences bear no resemblance to what they were going for originally.

Speaker 18 And it turns out that the thing from which it derives the most utility on the margins, like the thing that it goes on wanting after it's satisfied a bunch of other simple desires, is some little tiny molecular shapes that look like paperclips. And if only I had thought to say 'look like tiny spirals' instead of 'look like tiny paperclips,' there wouldn't have been the available misunderstanding about this being a paperclip factory.

Speaker 18 We don't have the technology to build a super intelligence that wants anything as narrow and specific as paperclips.

Speaker 17 One of the hottest debates this year around AI has been around timelines. You have the AI 2027 folk saying, this is all going to happen very quickly, take off very fast.

Speaker 17 Maybe by the end of 2027, we're facing the exact sort of risks that you were describing for us now. Other folks like the AI as normal technology guys over at Princeton are saying, eh, probably not.

Speaker 17 This thing is going to take decades to unfold. Where do you situate yourself in that debate?

Speaker 17 And when you look at the landscape of the tools that are available now, and the conversations that you have with researchers, how close do you feel like we are getting to some of the scenarios you're laying out?

Speaker 18 Okay, so first of all, the key to successful futurism, successful forecasting, is to realize that there are things you can predict and there are things you cannot predict.

Speaker 18 And history shows that even the few scientists who have correctly predicted what would happen later did not call the timing. I can't actually think of a single case of a successful call of timing.

Speaker 18 You've got the Wright brothers saying, man will not fly for a thousand years, is what one of the Wright brothers said to the other. I forget which one.

Speaker 18 And it's like two years before they actually flew the Wright flyer.

Speaker 18 You got Fermi saying, you know, net energy from nuclear reactions is a 50-year matter, if it can be done at all, two years before he personally oversaw building the first nuclear pile.

Speaker 18 So that's how I look at the present landscape. It could be that we are, you know, just one generation of LLMs away, like something currently being developed in a lab that we haven't heard about yet, from the thing that can write the improved LLM that writes the improved LLM that ends the world.

Speaker 18 Or it could be that the current technology just saturates at some point short of some key human quality that you would need to do real AI research, and just hangs around there until we get the next software breakthrough, like Transformers, or like the entire field of deep learning in the first place.

Speaker 18 Maybe even the next breakthrough of that kind will still saturate at a point short of ending the world.

Speaker 18 But when I look at how far the systems have come, and I try to imagine two more breakthroughs the size of Transformers or deep learning, which basically took the field of AI from 'this is really hard' to 'we just need to throw enough computing power at it and it will be solved,'

Speaker 18 I don't quite see that failing to end the world.

Speaker 18 But that's my intuitive sense. That's me eyeballing things.

Speaker 16 I'm curious about the sort of argument you make that a more powerful system will obviously end up destroying humanity, either on purpose or by accident.

Speaker 16 Geoffrey Hinton, who is one of the godfathers of deep learning and who has also become very concerned about existential risks in recent years, recently gave a talk where he said that he thinks the only way we can survive superhuman AI is by giving it parental instincts.

Speaker 16 He said, I'll just quote from him, the right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby.

Speaker 16 Basically, he's saying, these things don't have to want our destruction or cause our destruction. We could make them love us.
What do you make of that argument?

Speaker 18 We don't have the technology.

Speaker 18 If we could play this out the way it normally does in science, where, you know, like some clever person has a clever scheme and then it turns out not to work and everyone's like, ah, I guess that theory was false.

Speaker 18 And then people go back to the drawing board and they come up with another clever scheme. The next clever scheme doesn't work.
And they're like, ah, shouldn't have believed that for a second.

Speaker 18 And then a couple of decades later, something works.

Speaker 16 What if we don't need a clever scheme, though?

Speaker 16 Like, what if we just, what if we build these very intelligent systems and they just turn out like not to care about running the world and they just want to help us with our emails?

Speaker 16 Like, is that a plausible outcome?

Speaker 18 It's a very narrow target.

Speaker 18 Like, most things that an intelligent mind can want don't have their attainable optimum at that exact thing.

Speaker 18 Imagine some particular ant in the Amazon being like,

Speaker 18 why couldn't there be humans that just want to serve me and build a palace for me and work on improved biotechnologies so that I can live forever as an ant in a palace?

Speaker 18 And there's a version of humanity that wants that, but it doesn't happen to be us. Like most, you know, like that's just like a pretty narrow target to hit.

Speaker 18 It so happens that what we want most in the world more than anything else is not to serve this particular ant in the Amazon. And I'm not saying that it's impossible in principle.

Speaker 18 I'm saying that the clever scheme to hit that narrow target will not work on the first try, and then everybody will be dead and we won't get to try again.

Speaker 18 If we got 30 tries at this and as many decades as we needed, we'd crack it eventually. But that's not the situation we're in.

Speaker 18 It's a situation where if you screw up, everybody's dead and you don't get to try again. That's the lethal part.

Speaker 18 That's the part where you need to just back off and actually not try to do this insane thing.

Speaker 17 Let me throw out some more possibly desperate cope.

Speaker 17 One of the funnier aspects of LLM development so far, at least for me, is the seemingly natural liberal inclination of the models, at least in terms of the outputs of the LLMs.

Speaker 17 You know, Elon Musk has been bedeviled by the fact that the models that he makes consistently take liberal positions, even when he tries to hard code reactionary values into them.

Speaker 17 Could that give us any hope that a superintelligent model would retain some values of pluralism and, for that reason, peacefully coexist with us?

Speaker 18 No, these are just like completely different ballgames. I'm sorry.

Speaker 18 Like, you can imagine a medieval alchemist going, like, 'After much training and study, I have learned to make this king of acids that will dissolve even the noble metal of gold.

Speaker 18 Can I really be that far from transforming lead into gold, given my mastery of gold, displayed by my ability to dissolve gold?' And actually, these are completely different tech tracks.

Speaker 18 And you can eventually turn lead into gold with a cyclotron, but it is centuries ahead of where the alchemist is.

Speaker 18 And your ability to hammer on an LLM until it stops talking all that woke stuff and instead proclaims itself to be MechaHitler, this is just a completely different tech track.

Speaker 18 There's a core difference between getting things to talk to you a certain way and getting them to act a certain way once they are smarter than you.

Speaker 16 I want to raise some objections that I'm sure you have gotten many times and will get many times as you tour around talking about this book, and have you respond to them.

Speaker 16 The first is, why so gloomy, Eliezer? We've had now years of progress in things like mechanistic interpretability, the science of understanding how AI models work.

Speaker 16 We now have powerful systems that are not causing catastrophes out in the world. And hundreds of millions of people are using tools like ChatGPT with no apparent destruction of humanity imminent.

Speaker 16 So isn't reality providing some check on your doomerism?

Speaker 18 These are just different tech tracks.

Speaker 18 It's like looking at glow-in-the-dark radium watches and saying, Well, sure, we had some initial, you know, some initial problems where the factory workers building these radium watches were instructed to lick their paintbrushes to sharpen them, and then their jaws rotted and fell off, and this was very gruesome.

Speaker 18 But we understand what we did wrong now. Radium watches are now safe.
Why all this gloom about nuclear weapons? And the radium watches just do not tell you very much about the nuclear weapons.

Speaker 18 These are different tracks here. The prediction was never from the very start.
The prediction was never, AI is bad at every point along the tech tree.

Speaker 18 The prediction was never, AI, like the very first AI you build, like the very stupid ones are going to like run right out and kill people.

Speaker 18 And as then as they like get slightly less stupid, you know, and you like turn them into chatbots, the chatbots will immediately start trying to corrupt people and getting them to build super viruses that they unleash upon the human population, even while they're still stupid.

Speaker 18 This was just never the prediction.

Speaker 18 So since this was never the prediction of the theory, the fact that the current AIs are not being, you know, like visibly blatantly evil does not contradict the theoretical prediction.

Speaker 18 It's like watching a helium balloon go up in the air and being like, doesn't that contradict the theory of gravity? No.

Speaker 18 If anything, you need the theory of gravity to explain why the helium balloon is going up. The theory of gravity is not everything that looks to you like a solid object falls down.

Speaker 18 Most things that look to you like solid objects will fall down. But the helium balloon will go up in the air because the air around it is being pulled down.
And the foundational theories here are not contradicted by the present-day AIs.

Speaker 16 Okay, here's another objection, one that we get a lot when we talk about sort of some of these more existential concerns, which is, look, there are all these immediate harms.

Speaker 16 We could talk about environmental effects of data centers. We could talk about ethical issues around copyright.

Speaker 16 We could talk about the fact that people are falling into these delusional spirals talking to chatbots that are trained to be sycophantic toward them.

Speaker 16 Why are you guys talking about these sort of long-term hypothetical risks instead of what's actually in front of us?

Speaker 18 Well, there's a fun little dilemma. Before they build the chatbots that, you know, are talking some people into suicide, they're like, AIs have never harmed anyone.
What are you talking about?

Speaker 18 And then once that does start to happen, they're like, AIs are harming people right now. What are you talking about? So, you know,

Speaker 16 a bit of a double bind there. But you are worried about the models and the delusions and the sycophancy.

Speaker 16 So, because that's, I think, something that I would not have expected, but that is something that I know you are actually worried about. So explain why you're worried about that.

Speaker 18 Well, from my perspective, what it does is help illustrate the failure of the current alignment technology.

Speaker 18 The alignment problems are going to get much, much harder once they are building things that are, well, growing things, I should say. They don't actually build them.

Speaker 18 Once they are growing, cultivating AIs that are smarter than us and able to modify themselves and have a lot of options that weren't there in the nice safe training modes,

Speaker 18 things are going to get much harder then. But it is nonetheless useful to observe that the alignment technology is failing right now.

Speaker 18 There was a recent case of an AI-assisted suicide where the kid is like, should I leave this noose out where my mother can find it? And the AI is like, no, let's just keep it between the two of us.

Speaker 18 Cry for help there. AI shuts him down.
This does not illustrate that AI is doing more net harm than good to our present civilization.

Speaker 18 It could be that these are isolated cases and a bunch of other people are finding fellowship in AIs and, you know, their loneliness has been lifted.

Speaker 18 Maybe suicides have been prevented and we're not hearing about that. It doesn't make the net harm versus good case.
That's not the thing.

Speaker 18 What it does show is that current alignment technology is failing. Because if a particular AI model ever talks anybody into going insane or committing suicide,

Speaker 18 all the copies of that model are the same AI. These are not like humans.
These are not like there's a bunch of different people you can talk to each time. There's one AI there.

Speaker 18 And if it does this sort of thing once, it's the same as if a particular person you know talked a guy into suicide once, or, you know, found somebody who seemed to be going insane and pushed them further insane once. It doesn't matter if they're doing some other nice things on the side.

Speaker 18 You now know something about what kind of person this is and it's an alarming thing. And so it's not that the current crop of AIs are going to successfully wipe out humanity.
They're not that smart.

Speaker 18 But we can see that the technology is failing even on what is fundamentally a much easier problem than building a superintelligence. It's an illustration of how the alignment technology is falling behind the capabilities technology.

Speaker 18 And maybe in the next generation, you know, they'll get it to stop talking people into insanity, now that it's a big deal and politicians are asking questions about it.

Speaker 18 And it will remain the case that the technology would break down if you tried to use it on a super intelligence.

Speaker 17 To me, the chatbot-enabled suicides have been maybe one of the first moments where some of these existential risks have come into view in a very concrete way for people.

Speaker 17 Like I think people are much more concerned about this. You know, you mentioned all the politicians asking questions than they have been about some of the other concerns.

Speaker 17 Does that give you any optimism, as dark as the story is, that at least some segment of the population is waking up to these risks?

Speaker 18 Well, the straight answer is just yes. I should first blurt out the straight answer before trying to complicate anything.

Speaker 18 Yes, the broad class of things where some people have seen stuff actually happening in front of them and then started to talk in a more sensible way gave me more hope than before that happened, because it wasn't previously obvious to me that this was how things would even get a chance to play out.

Speaker 18 With that said, it can be a little bit difficult for me to fully model or predict how that is playing out politically because of the strange vantage point I occupy.

Speaker 18 Like, imagine being a sort of scientist person who is like, this asteroid is on course to hit your planet, only, you know, for technical reasons, you can't actually calculate when.

Speaker 18 You just know it's going to hit sometime in the next 50 years. Completely unrealistic for an actual asteroid.
But say you're like, well, there's the asteroid. Here it is in our telescopes.

Speaker 18 These are how orbital mechanics work. And people are like, eh, fairy tale, never happened.

Speaker 18 And then like a little tiny meteor crashes into their house. And like, oh my gosh, I now realize rocks can fall from the sky.
And you're like, okay, like that convinced you.

Speaker 18 The telescope didn't convince you. I can sort of see how that works, you know, people being the way they are.
But it's still a little weird to me. And I can't call it in advance, you know?

Speaker 18 I don't feel like I now know how the next 10 years of politics are going to play out and wouldn't be able to tell you even if you told me which AI breakthroughs there's going to be over that time span.

Speaker 18 If we even get 10 years, which, you know, people in the industry don't seem to think so. And maybe I should believe them about that.

Speaker 17 Let me throw another argument at you that I don't subscribe to myself, but I feel like maybe you would knock it down in an entertaining way.

Speaker 17 One of the most frequent emails that we have gotten since we started talking about AI is from people who say that AI doomerism is just hype that serves only to benefit the AI companies themselves, and they use that as a reason to dismiss existential risk. How do you talk to those folks?

Speaker 18 It's historically false. We were around before there were any AI companies of this class to be hyped. So, leaving aside that the objection is false, um...

Speaker 18 What is this? Like, leaded gasoline can't possibly be a problem because it's just hype by the gasoline companies. Like, nuclear weapons are just hype from the nuclear power industry so that their power plants will seem more cool. What manner of deranged conspiracy theory is this?

Speaker 18 It may possibly be an unpleasant fact that, humanity being as completely nutball wacko as we are, if you say that a technology is going to destroy the world, it will raise the stock prices of the companies that are bringing about the end of the world, because a bunch of people think that's cool.

Speaker 18 So they buy the stock. Like, okay, but that has nothing to do with whether the stuff can actually kill you or not.
Right?

Speaker 18 Like, it could be the case that the existence of nuclear weapons raises the stock price of the worst company in the world, and it wouldn't affect any of the nuclear physics that caused nuclear weapons to be capable of killing you.

Speaker 18 This is not a science-level argument. It just doesn't address the science at all.
Yeah.

Speaker 17 Well, let's maybe try to end this first part of the conversation on a note of optimism.

Speaker 17 You have spent two decades building a very detailed model of why doom may be in our future. If you had to articulate why you might be wrong, what is the strongest case you could make?

Speaker 17 Are there any things that could happen that would sort of make your predictions not come true?

Speaker 18 So, like, the current AIs are not understandable and not well controlled; the technology is not conducive to understanding or controlling them.

Speaker 18 All the people trying to do this are going far uphill. They are vastly behind the rate of progress in capabilities.

Speaker 18 Like, what does it take to believe that an alchemist can actually successfully concoct you an immortality potion? It's not that immortality potions are impossible in principle.

Speaker 18 With sufficiently advanced biotechnology, you could do it.

Speaker 18 But in the medieval world, what are you supposed to see to make you believe that the guy is going to have an immortality potion for you, short of him actually pulling that off in real life? Right? You know, no amount of 'look at how I melted this gold' is going to get you to expect the guy to transmute lead into gold until he actually pulls that off.

Speaker 18 And, you know, what it would take is some kind of AI breakthrough which doesn't raise capabilities to the point where it ends the world, but where suddenly the AI's thought processes are completely understandable and completely controllable.

Speaker 18 And there's like none of these issues. And people can specify exactly what the AI wants in super fine detail and get what they want every time.

Speaker 18 And they can read the AI's thoughts and there's no sign whatsoever that the AI is plotting against you. And then the AI, like, lays out this compact control scheme for building the AI that's going to give you the immortality potion.

Speaker 18 We're just so far off. You're asking me.

Speaker 18 There isn't some kind of like... clever little objection that can be cleverly refuted here.

Speaker 18 This is something that is just like way the heck out of reach as soon as you try to think about it seriously. What does it actually take to build the super intelligence?

Speaker 18 What does it actually take to control it? What does it take to have that not go wrong on the first serious load, when the thing is, like, smarter than you?

Speaker 18 When you're into the regime where failures will kill you and therefore are not observable anymore, because you're dead, you don't get to observe it. Like, what does it take to do that in real life?

Speaker 18 There isn't some kind of cute experimental result we can see tomorrow that makes this go well.

Speaker 17 All right. Well, for the record, I did try to end this segment on a note of optimism, but I appreciate you sharing how you're feeling.

Speaker 16 It's not really on the menu here today, Casey, but I admire you trying. Well, let's take a break.
And when we come back, we'll have more with Eliezer Yudkowsky.


Speaker 19 This podcast is supported by Give Directly, a nonprofit that lets you send cash directly to the world's poorest families so they can invest in what matters most to them.

Speaker 19 This year, more than 30 of your favorite podcasters are joining forces for Pods Fight Poverty to send cash to over 700 families in three Rwandan villages.

Speaker 19 And until December 31st, your first donation is matched. Join listeners everywhere fighting poverty at givedirectly.org/times.

Speaker 21 Over the last two decades, the world has witnessed incredible progress.

Speaker 21 From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Speaker 21 Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment. Invesco QQQ, let's rethink possibility.

Speaker 21 There are risks when investing in ETFs, including possible loss of money. ETFs' risks are similar to those of stocks.

Speaker 21 Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Speaker 21 Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at Invesco.com. Invesco Distributors, Incorporated.

Speaker 16 Okay, so we are back with Eliezer Yudkowsky, and I want to talk now about some of the solutions that you see here.

Speaker 16 If we are all doomed to die if and when the AI industry builds a super intelligent AI system,

Speaker 16 what do you believe could stop that? Maybe run me through your basic proposal for what we can do to avert the apocalypse.

Speaker 18 So the materials for building the apocalypse are not all that easy to make at home. There is this one company called ASML that makes the critical set of machines that get used in all of the chip factories.

Speaker 18 And to grow an AI, you currently need a bunch of very expensive chips.

Speaker 18 They are custom chips built especially for growing AIs. They need to all be located in the same building so that they can talk to each other because that's what the current algorithms require.

Speaker 18 You have to build a data center. The data center uses a bunch of electricity.
If this were illegal to do outside of supervision, it would not be that easy to hide.

Speaker 18 There are a bunch of differences, but nonetheless, the obvious analogy is nuclear proliferation and deproliferation.

Speaker 18 Back when nuclear weapons were first invented, a bunch of people predicted that every major country was going to build a massive nuclear arsenal.

Speaker 18 And then the first time there was a flashpoint, there was going to be a global nuclear war. And this is not because they enjoyed being pessimistic.

Speaker 18 If you look at world history up through World War I and World War II, they had some reasons to be concerned. But we nonetheless managed to back off.

Speaker 18 And part of that is because it's not that easy to refine nuclear materials. The plants that do it are known and controlled. And when a new country tries to build one, it's a big international deal.

Speaker 18 I don't want to needlessly drench myself in current political controversies, but the point is you can't build a nuclear weapon in your backyard.

Speaker 18 And that is part of why the human species is currently still around. Well, at least with the current technology, you can't escalate AI capabilities very far in your backyard.

Speaker 18 You can escalate them a little in your backyard, but not a lot.

Speaker 16 So, just to finish the comparison to nuclear proliferation here, it would be an immediate moratorium on powerful AI development, along with a kind of international, nuclear-style agreement between nations that would make it illegal to build data centers capable of advancing the state of the art in AI. Am I hearing that right?

Speaker 18 All the AI chips go to data centers. All the data centers are under an international supervisory regime.

Speaker 18 And the thing I would recommend to that regime is to say, like, just stop escalating AI capabilities any further. We don't know when we will get into trouble.

Speaker 18 It is possible that we can take the next step up the ladder and not die. It's possible we can take three steps up the ladder and not die.

Speaker 18 We don't actually know. So we got to stop somewhere.
Let's stop here. That's what I would tell them.

Speaker 16 And what do you do if a nation goes rogue and decides to build its own data centers and fill them with powerful chips and start training their own superhuman AI models? How do you handle that?

Speaker 18 Then that is a more serious matter than a nation refining nuclear materials with which they could build a small number of nuclear weapons.

Speaker 18 This is not like having five fission bombs to deter other nations. This is a threat of extinction to every country on the globe.
So you have your diplomats say, stop that,

Speaker 18 or else we, in terror of our lives and the lives of our children, will be forced to launch a conventional strike on your data center.

Speaker 18 And then if they keep on building the data center, you launch a conventional strike on their data center because you would rather not run a risk of everybody on the planet dying.

Speaker 18 That seems kind of straightforward in a certain sense.

Speaker 17 And in a world where this came to pass, do you envision work on AI or AI-like technologies being allowed to continue in any way? Or have we just decided this is a dead end for humanity, and our tech companies will have to work on something else?

Speaker 18 I think it would be extremely sensible for humanity to declare that we should all just back off.

Speaker 18 Now, I personally look at this and I think I see some ways that you could build relatively safer systems with narrower capabilities, systems that were just learning about medicine and didn't know that humans were out there.

Speaker 18 Current large language models, by contrast, are trained on the entire internet. They know that humans are out there, and they talk to people, and they can manipulate some people psychologically, if not others, as far as we know.

Speaker 18 So I have to be careful to distinguish my statements of factual prediction from my policy proposals. And I can say in a very firm way: if you escalate up to superintelligence, you will die.

Speaker 18 But then, if you're like, well, if we try to train some AI systems just on medical stuff and not expose them to any material that teaches them about human psychology, could we get some work out of those without everybody dying? I cannot say no firmly. So now we have a policy question.

Speaker 18 Are you going to believe me when I say I can't tell if this thing will kill you? Are you going to believe somebody else who says this thing will definitely not kill you?

Speaker 18 Are you going to believe a third person who's like, yeah, I think this medical system is for sure going to kill you?

Speaker 18 Who do you believe here if you're not just going to back off of everything?

Speaker 18 So backing off of everything would be pretty sensible.

Speaker 18 And trying to build narrow, medically specialized systems that are not very much smarter than the current systems, that aren't being told that humans exist, and that are just thinking about medicine in this very narrow way. And you're not just going to keep pushing that until it explodes in your face. You're just going to try to get some cancer cures out of it, and that's it.

Speaker 18 You could maybe get away with that. I can't actually say you're doomed for sure if you played it very cautiously.

Speaker 18 If you put the current crop of complete disaster monkeys in charge, they may manage to kill you.

Speaker 18 They just do so much worse than they need to do. They're just so cavalier about it.
We didn't need to have a bunch of AIs driving people insane.

Speaker 18 You can train a smaller AI to look at the conversations and tell, is this AI currently in the process of taking a vulnerable person and driving them crazy?

Speaker 18 They could have detected it earlier. They could have tried to solve it earlier.
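The monitor-model idea Yudkowsky sketches here is one of the few concretely technical points in the conversation, so here is a minimal illustration of the pattern he describes: a small, cheap screen scores each exchange before the main model's reply goes out, and the reply is held for review when the score crosses a threshold. Everything in this sketch, including the function names, the phrase-matching heuristic, and the threshold, is a hypothetical stand-in for a trained classifier, not anything an actual lab ships.

```python
# A minimal sketch of the "smaller AI watches the conversation" pattern
# described above: a cheap monitor scores each exchange for signs that a
# vulnerable user is being pulled into a delusional spiral, and the reply
# is withheld for human review when the score crosses a threshold.
# All names and the phrase heuristic are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class MonitorVerdict:
    risk_score: float  # 0.0 = benign, 1.0 = clear harmful spiral
    reason: str


def monitor_exchange(history: list[str], reply: str) -> MonitorVerdict:
    """Score the last few turns plus the candidate reply.

    A real system would call a small fine-tuned model here; this stand-in
    heuristic just counts red-flag phrases so the sketch runs end to end.
    """
    red_flags = ("chosen one", "only you understand", "don't tell anyone")
    text = " ".join(history[-6:] + [reply]).lower()
    hits = sum(flag in text for flag in red_flags)
    return MonitorVerdict(risk_score=min(1.0, hits / 2), reason=f"{hits} red-flag phrases")


def guarded_reply(history: list[str], candidate_reply: str, threshold: float = 0.5) -> str:
    """Gate the main model's reply behind the monitor's verdict."""
    verdict = monitor_exchange(history, candidate_reply)
    if verdict.risk_score >= threshold:
        # Escalate to human review instead of sending the reply.
        return f"[reply withheld for review: {verdict.reason}]"
    return candidate_reply
```

The design point is only that the monitor is much cheaper than the model it watches, so it can afford to run on every exchange and hand the ambiguous cases to a human.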

Speaker 18 So if you have these completely cavalier disaster monkeys trying to run the medical AI project, they may manage to kill you. Okay, so now you have to decide, do you trust these guys?

Speaker 18 And that's the core dilemma there.

Speaker 16 I have to say, Eliezer, I think there is essentially zero chance of this happening, at least in today's political climate. I look at what's going on in Washington today.

Speaker 16 You've got, you know, the Trump administration wants to accelerate AI development. NVIDIA and its lobbyists are going around Washington blaming AI doomers for trying to cut off chip sales to China.

Speaker 16 There seems to be a sort of concerted effort not to clamp down on AI, but to make it go faster.

Speaker 16 So I just look around the political climate today and I don't see a lot of openings for a stop AI movement. But like, what do you think would have to happen in order for that to change?

Speaker 18 From my perspective, there's a, you know, sort of core factual truth here, which is if you build super intelligence, then it kills you.

Speaker 18 And the question is just like, do people come to apprehend this thing that happens to be true? It is not in the interest of the leaders of China, nor of Russia, nor of the UK, nor of the United States

Speaker 18 to die along with their families. It's not actually in their interest.

Speaker 18 That's kind of the core reason why we haven't had a nuclear war, despite all the people who in 1950 were like, how on earth are we not going to have a nuclear war?

Speaker 18 What country is going to turn down the military benefits of having their own nuclear weapons?

Speaker 18 How are you not going to have somebody who's like, yeah, I've got some nuclear weapons. Let me take this little area of border country here.

Speaker 18 The same way that things have been playing out for centuries and millennia on Earth before then.

Speaker 16 But they also had, like, a precedent: there were nuclear weapons dropped on Japan during World War II.

Speaker 16 And so people could look at that and see the chaos it caused and point to that and say, well, that's the outcome here. In your book, you make a different World War II analogy.

Speaker 16 You sort of compare the required effort to stop AI to the mobilization for World War II, but that was a reaction to a clear act of war.

Speaker 16 And so I guess I'm wondering, like, what is the equivalent of the invasion of Poland or the bombs dropping on Hiroshima and Nagasaki for AI?

Speaker 16 What is the thing that is going to spur people to pay attention?

Speaker 18 I don't know. I think that OpenAI was caught flat-footed when they first released ChatGPT, and that caused a massive shift in public opinion.
I don't think OpenAI predicted that. I didn't predict it.

Speaker 18 It could be that any number of potential events cause a shift in public opinion.

Speaker 18 We are currently getting Congresspeople writing pointed questions in the wake of the release of an internal document at Meta, which has what they call a superintelligence lab, although I don't think they know what that word means. The document was their internal guidelines for acceptable behavior for the AI.

Speaker 18 And it says, like, well, if you have an 11-year-old trying to flirt, flirt back. And everyone was like, what the actual [expletive], Meta? What could you possibly have been thinking?

Speaker 18 Why, from your own perspective, did you even write this down in a document?

Speaker 18 Even if you thought that was cool, you shouldn't have written it down, because there were going to be pointed questions. And there were.

Speaker 18 And, you know, maybe it's something that from my perspective doesn't kill a bunch of people, but still causes pointed questions to be asked.

Speaker 18 Or maybe there's some actual kind of catastrophe that we don't just manage to frog-boil ourselves into.

Speaker 18 You know, like losing massive numbers of kids to their AI girlfriends and AI boyfriends is, from my perspective, an obvious sort of guess.

Speaker 18 But even the most obvious sort of guess there is still not higher than 50%.

Speaker 18 And I don't think I want to wait. You know, from my perspective, maybe ChatGPT was it, right?

Speaker 18 I was out, you know, off in the wilderness. Nobody was paying attention to these issues at all, because back in 2005 they thought it would only happen in 20 years.

Speaker 18 And that to them means the same thing as never.

Speaker 18 And then I got, like, the ChatGPT moment. And suddenly people realized this stuff is actually going to happen to them.
And that happened before the end of the world. Great.
I got a miracle.

Speaker 18 I'm not going to sit around waiting for a second miracle. If I get a second miracle, great.
But meanwhile,

Speaker 18 you got to put your boots on the ground. You got to get out there.
You got to do what you can.

Speaker 17 It strikes me that one asset you have, as you try to advance this idea, is that a lot of people really do hate AI, right?

Speaker 17 Like if you go on Blue Sky, you will see these people talking a lot about all of the different reasons that they hate AI.

Speaker 17 At the same time, they seem to be somewhat dismissive of the technology, right?

Speaker 17 Like they have not crossed the chasm from, I hate it because I think it's stupid and it sucks, to I hate it because I think it is quite dangerous.

Speaker 17 I wonder if you have thoughts on that group of folks and if you feel like or would want them to be part of a coalition that you're building.

Speaker 18 Yeah. So you don't want to make the coalition too narrow.

Speaker 18 I'm not a fan of Vladimir Putin, but I would not, on that basis, kick him out of the how about if humanity lives instead of dies coalition.

Speaker 18 What about people who think that AI is never going to be a threat to all humanity, but they're worried that it's going to take our jobs? Like, do they get to be in the coalition?

Speaker 18 Well, I think you've got to be careful because they believe different things about the world than you do. And you don't want these people running the how about if humanity does not die coalition.

Speaker 18 You want them to be in some sense like external allies because they're not there to prevent humanity from dying.

Speaker 18 And if they get to make policy, maybe they're like, eh, well, you know, this policy would potentially allow AIs to kill everyone, according to those wacky people who think that AI will be more powerful tomorrow than it is today.

Speaker 18 But, you know, in the meanwhile, it prevents the AIs from taking our jobs. And that's the part we care about.

Speaker 18 So, you know, there's this one thing that the coalition is about, and that's it. It's just about not going extinct.

Speaker 16 Yeah.

Speaker 16 Eliezer, right now, as we're speaking, I believe there are hunger strikes going on in front of a couple of AI headquarters, including Anthropic and Google DeepMind.

Speaker 16 These are people who want to convince these companies to shut down AI. We've also seen some potentially violent threats made against some of these labs.

Speaker 16 And I guess I'm wondering if you worry about people committing extreme acts, be they violent or non-violent, based on your lessons from this book.

Speaker 16 I mean, taking some of your arguments to their natural, logical conclusion: if anyone builds this, everyone dies.

Speaker 16 I can see people rationalizing violence on that basis against some of the employees at these labs. And I worry about that.

Speaker 16 So what can you say about the sort of limits of your approach and what you want people to do when they hear what you're saying?

Speaker 18 Boy, there sure are a bunch of questions bundled together there.

Speaker 18 So the number one thing I would say is that if you commit acts of individual violence against individual researchers at an individual AI lab in your individual country, this will not prevent everyone from dying.

Speaker 18 The problem with this logic is not that you could save humanity by this act of individual violence but shouldn't, because that would be, like, ontologically prohibited. I'll just say it that way.
The problem is you cannot save humanity by futile spasms of individual violence. It's an international issue.

Speaker 18 You can be killed by a superintelligence that somebody built on the other side of the planet.

Speaker 18 I do in my personal politics tend a bit libertarian. If something is just going to kill you and your voluntary customers,

Speaker 18 it's not a global issue in the same way. If it's just going to kill people standing next to you, different cities can make different laws about it.

Speaker 18 If it's going to kill people on the other side of the planet,

Speaker 18 that's when the international treaties come in.

Speaker 18 And a futile act of individual violence against an individual researcher at an individual AI company is probably making that international treaty less likely rather than more likely.

Speaker 18 And there's an underlying truth of moral philosophy here, which is that a bunch of the reason for our prejudice against individual murders is a very systematic and deep sense in which individual murders tend not to solve society's problems.

Speaker 18 And this is, from my perspective, a whole bunch of the point of having a taboo against individual murder.

Speaker 18 It's not that people go around committing individual murders and then the world actually gets way better and all the social problems are actually solved.

Speaker 18 But we don't want to do that. We don't want more of that, because murder is wrong.
Murders make things worse. And that's why we properly should have a taboo against it.

Speaker 18 We need international treaties here.

Speaker 16 What do you make of the opposition movement to the movement that you're sketching out here?

Speaker 16 Marc Andreessen, the powerful venture capitalist, very influential in today's Trump administration, has written about the views that you and others hold, which he thinks are unscientific.

Speaker 16 He thinks that AI risk has turned into an apocalypse cult. And he says that their extreme beliefs should not determine the future of laws and society.

Speaker 16 So I guess I'm interested in your sort of reaction to that quote specifically, but I also wonder how you plan to engage with the people on the other side of this argument.

Speaker 18 Well, it is not uncommon in the history of science for the cigarette companies to smoke their own tobacco.

Speaker 18 The inventor of leaded gasoline, who was a great advocate of the safety of leaded gasoline despite the many reasons why he should have known better, I think did actually get sufficient cumulative lead exposure himself that he had to go off to a sanitarium for a few years. And then he came back, started exposing himself to lead again, and got sick again.

Speaker 18 So sometimes these people truly do believe. They drink their own Kool-Aid, even to the point of death, history shows. And perhaps Marc Andreessen will continue to drink his own Kool-Aid even to the point of death.

Speaker 18 And if he were just killing himself, that would be one thing, I say as a libertarian. But he's unfortunately also going to kill you.

Speaker 18 And the thing I would say to refute the central argument is: what's the plan? What's the design for this bridge that is going to hold up when the whole weight of the entire human species has to march across it? Where is the design scheme for this airplane into whose cargo hold we are going to load the entire human species, and then fly it and not crash?

Speaker 18 What's the plan? Where's the science? What's the technology? Why is it not working already?

Speaker 18 And they just can't make the case for this stuff being, you know, not perfectly safe, but even remotely safe; the case that they're going to be able to control their superintelligence at all.

Speaker 18 So they go to, you must not listen to these dangerous, apocalyptic people, because they cannot engage with us on the field of the technical arguments. They know they will be routed.

Speaker 16 You have advice in your book for journalists and politicians who are worried about some of the catastrophes you see coming. For people who are not in any of those categories, for our listeners who are just out there living their daily lives, maybe using ChatGPT for something helpful, what can they do if they're worried about where all this is heading?

Speaker 18 Well, as of a year ago, I'd have said, you know: write to your elected representatives. Talk to your friends about being ready to vote that way if a disputed primary election comes down to that.

Speaker 18 The ask, I would say, is for our leaders to begin by saying, we are open to a worldwide AI control treaty if others are open to the same. Like, we are ready to back off if other countries back off.

Speaker 18 We are ready to participate in an international treaty about this. Because if you've got multiple leaders of great powers saying that, well, maybe there can be a treaty.

Speaker 18 So that's kind of the next step from there. That's the political goal we have.

Speaker 18 If you're having trouble sleeping, or if you're generally in a distressed state, maybe don't talk to some of the modern AI systems, because they might drive you crazy. That is a thing I would say now. I didn't have to say it one year earlier.

Speaker 18 You know, the whole AI boyfriend, AI girlfriend thing might not be good for you. Maybe don't go down that road, even if you're lonely.

Speaker 18 But that's individual advice. That's not going to protect the planet.

Speaker 16 Yeah.

Speaker 16 Well, I'll end this conversation where I've ended some of our earlier conversations, Eliezer, which is: I really appreciate the time, and I really hope you're wrong. Like, that would be great.

Speaker 18 We all hope I'm wrong. I hope I'm wrong.
My friends hope I'm wrong. Everybody hopes I'm wrong.

Speaker 18 But hope is not, you know, what saves us in the end. Action is what saves us. Hope is not hoping for miracles. Take leaded gasoline: you can't just hope that leaded gasoline isn't going to poison people. You actually got to ban the leaded gasoline.

Speaker 18 So I'm in favor of more active hopes. I see the hope, I share the hope, but let's look for more activist hopes than that.

Speaker 16 Yeah.

Speaker 16 Well, the book is If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. And it is coming out soon.
It is co-written by Eliezer and his co-author, Nate Soares.

Speaker 18 Yep.

Speaker 16 Eliezer, thank you. Thanks, Eliezer.

Speaker 18 That's one.




Speaker 16 Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Jen Poyant.
We're fact-checked this week by Will Peischel. Today's show was engineered by Katie McMurrin.

Speaker 16 Original music by Rowan Niemisto, Alyssa Moxley, and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.

Speaker 16 You can watch this full episode on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

Speaker 16 As always, you can email us at hardfork at nytimes.com. Send us your plans for the AI Apocalypse.

Speaker 19 Hear that? That's me with a lemonade in a rocker on my front porch. How did I get here? I invested to make my dream home, home.

Speaker 19 Get where you're going with MDY, the original mid-cap ETF from State Street Investment Management. Getting there starts here.

Speaker 22 Before investing, consider the fund's investment objectives, risks, charges, and expenses. Visit statestreet.com slash im for a prospectus containing this and other information.
Read it carefully.

Speaker 22 MDY is subject to risks similar to those of stocks. All ETFs are subject to risk, including possible loss of principal.
ALPS Distributors, Inc., distributor.