The Man Who Wrote The Book On AI: 2030 Might Be The Point Of No Return! We've Been Lied To About AI!

2h 4m
AI Expert STUART RUSSELL exposes the trillion-dollar AI race, why governments won’t regulate, how AGI could replace humans by 2030, and why only a nuclear-level AI catastrophe will wake us up

Professor Stuart Russell O.B.E. is a world-renowned AI expert and Computer Science Professor at UC Berkeley. He holds the Smith-Zadeh Chair in Engineering and directs the Center for Human-Compatible AI, and is also the bestselling author of the book “Human Compatible: AI and the Problem of Control”.

He explains:

◼️What the “gorilla problem” reveals about our future under superintelligent AI

◼️How governments are outfunded by Big Tech

◼️Why current AI systems already lie and self-preserve

◼️The radical solution he’s spent a decade building to make AI safe

◼️The myth of ‘pulling the plug’ and why AI won’t be that easy to stop

[00:00] You've Been Talking About AI for a Long Time

[02:54] You Wrote the Textbook on AI

[03:29] It Will Take a Crisis to Wake People Up

[06:03] CEOs Staying in the AI Race Despite Risks

[08:04] They Know It's an Extinction-Level Risk

[10:06] What Is Artificial General Intelligence (AGI)?

[13:10] Will We Reach General Intelligence Soon?

[16:26] How Much Is Safety Really Being Implemented?

[17:29] AI Safety Employees Leaving OpenAI

[18:14] The Gorilla Problem — The Most Intelligent Species Will Always Rule

[19:34] If There's an Extinction Risk, Why Don't They Stop?

[21:02] Can't We Just Pull the Plug if AI Gets Too Powerful?

[22:49] Can We Build AI That Will Act in Our Best Interests?

[24:09] Are You Troubled by the Rapid Advancement of AI?

[26:48] Do You Have Regrets About Your Involvement?

[27:35] No One Actually Understands How This AI Works

[30:36] AI Will Be Able to Train Itself

[32:24] The Fast Takeoff Is Coming

[34:20] Are We Creating Our Successor and Ending the Human Race?

[38:36] Advice to Young People in This New World

[40:52] How Do You Think AI Would Make Us Extinct?

[42:33] The Problem if No One Has to Work

[45:59] What if We Just Entertain Ourselves All Day?

[48:43] Why Do We Make Robots Look Like Humans?

[56:44] What Should Young People Be Doing Professionally?

[59:56] What Is It to Be Human?

[01:03:34] The Rise of Individualism

[01:05:34] Ads

[01:06:39] Universal Basic Income

[01:08:41] Would You Press a Button to Stop AI Forever?

[01:15:13] But Won't China Win the AI Race if We Stop?

[01:18:40] Trump's Approach to AI

[01:19:06] What's Causing the Loss in Middle-Class Jobs?

[01:21:02] What Will Happen if the UK Doesn't Participate in the AI Race?

[01:23:31] Amazon Replacing Their Workers

[01:29:00] Ads

[01:30:54] Experts Agree on Extinction Risk

[01:38:01] What if Aliens Were Watching Us Right Now?

[01:39:35] Can We Make AI Systems That We Can Control?

[01:43:14] Are We Creating a God?

[01:47:32] Could There Have Been Advanced Civilisations Before Us?

[01:48:50] What Can We Do to Help?

[01:50:43] You Wrote the Book on AI — Does It Weigh on You?

[01:58:48] What Do You Value Most in Life?

Follow Stuart:

LinkedIn - https://bit.ly/3Y5fOos

You can purchase “Human Compatible: AI and the Problem of Control”, here: https://amzn.to/48eOMkH

The Diary Of A CEO:

◼️Join DOAC circle here - https://doaccircle.com/

◼️Buy The Diary Of A CEO book here - https://smarturl.it/DOACbook

◼️The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt

◼️The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb

◼️Get email updates - https://bit.ly/diary-of-a-ceo-yt

◼️Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb

Sponsors:

Pipedrive - https://pipedrive.com/CEO
Fiverr: https://fiverr.com/diary and get 10% off your first order when you use code DIARY
Stan Store: NO PURCHASE NECESSARY. VOID WHERE PROHIBITED. For Official Rules, visit https://DaretoDream.stan.store

Press play and read along

Runtime: 2h 4m

Transcript

You know when you're in a meeting, taking notes, trying to focus, but your devices keep pinging notifications. For me, that's really annoying.

And usually, it makes your brain start to wander away and fall into distraction.

This was happening to my producer, Jack, and we were chatting about it when I realized that I knew the exact product that would fix this problem for him. It's from our sponsor, Remarkable.

Essentially, it's a paper tablet with no notifications, so it's far less distracting than most tablets.

It's called the Remarkable Paper Pro Move, and it really does look, feel and sound the same as writing on paper, which is really nice if you spend a lot of time taking notes.

But because it's digital, your handwritten notes can be converted into typed text and then you can send it over email or Slack or just keep editing it within the app.

All of their products have no blue light, which for someone who looks at screens as much as I do is something I really appreciate.

Remarkable is offering a 50-day free trial on their products, and at the end of that time, if it's not what you're looking for, you simply send it back and get all of your money back.

Give the present of being present. Find the perfect distraction-free paper tablet at remarkable.com.

In October, over 850 experts, including yourself and other leaders like Richard Branson and Geoffrey Hinton, signed a statement to ban AI superintelligence, as you guys raised concerns of potential human extinction.

Because unless we figure out how do we guarantee that the AI systems are safe,

we're toast. And you've been so influential on the subject of AI.
You wrote the textbook that many of the CEOs who are building some of the AI companies now would have studied on the subject of AI.

Yep. So, do you have any regrets?

Professor Stuart Russell has been named one of Time Magazine's most influential voices in AI, after spending over 50 years researching, teaching, and finding ways to design

AI in such a way that humans maintain control. You talk about this gorilla problem as a way to understand AI in the context of humans.

Yeah, so a few million years ago, the human line branched off from the gorilla line in evolution, and now the gorillas have no say in whether they continue to exist because we are much smarter than they are.

So intelligence is actually the single most important factor to control planet Earth. But we're in the process of making something more intelligent than us.

Exactly. Why don't people stop then? Well, one of the reasons is something called the Midas Touch.
So King Midas is this legendary king who asked the gods, can everything I touch turn to gold?

And we think of the Midas Touch as being a good thing, but he goes to drink some water, the water has turned to gold. And he goes to comfort his daughter, and his daughter turns to gold.

And so he dies in misery and starvation. So, this applies to our current situation in two ways.

One is that greed is driving these companies to pursue technology with the probabilities of extinction being worse than playing Russian roulette.

And that's even according to the people developing the technology without our permission. And people are just fooling themselves if they think it's naturally going to be controllable.

So, you know, after 50 years, I could retire, but instead, I'm working 80 or 100 hours a week trying to move things in the right direction.

So, if you had a button in front of you which would stop all progress in artificial intelligence, would you press it?

Not yet. I think there's still a decent chance we can guarantee safety, and I can explain more about what that is.

Just give me 30 seconds of your time. Two things I wanted to say.
The first thing is a huge thank you for listening and tuning into the show week after week.

It means the world to all of us and this really is a dream that we absolutely never had and couldn't have imagined getting to this place.

But secondly, it's a dream where we feel like we're only just getting started.

And if you enjoy what we do here, please join the 24% of people that listen to this podcast regularly and follow us on this app.

Here's a promise I'm going to make to you: I'm going to do everything in my power to make this show as good as I can now and into the future.

We're going to deliver the guests that you want me to speak to, and we're going to continue to keep doing all of the things you love about this show.

Thank you.

Professor Stuart Russell, OBE.

A lot of people have been talking about AI for the last couple of years.

It appears you've, this really shocked me, it appears you've been talking about AI for most of your life.

Well, I started doing AI in high school back in England, but then I did my PhD starting in '82 at Stanford. I joined the faculty at Berkeley in '86.
So I'm in my 40th year as a professor at Berkeley.

The main thing that the AI community is familiar with in my work

is a textbook that I wrote.

Is this the textbook that most students who study AI are likely learning from? Yeah.

So you wrote the textbook on artificial intelligence 31 years ago. You actually probably started writing it, because it's so bloody big, in the year that I was born.
So I was born in 92.

Yeah, took me about two years to write.

Me and your book are the same age, which is just a wonderful way for me to understand just how long you've been talking about this and how long you've been writing about this.

And actually, it's interesting that many of the CEOs who are building some of the AI companies now probably learnt from your textbook.

You had a conversation with somebody who said that in order for people to get the message that we're going to be talking about today, there would have to be a catastrophe for people to wake up.

Can you give me context on that conversation and a gist of who you had this conversation with?

So it was with one of the CEOs of a leading AI company. He sees two possibilities, as do I, which is

either we have a small, or let's say small-scale, disaster of the same scale as Chernobyl. The nuclear meltdown in Ukraine?

Yeah, so this nuclear plant blew up in 1986, killed a fair number of people directly and

maybe tens of thousands of people indirectly through radiation. Recent cost estimates: more than a trillion dollars.

So

that would wake people up. That would get the governments to regulate.
He's talked to the governments and they won't do it.

He looked at this Chernobyl-scale disaster as the best-case scenario, because then the governments would regulate and require AI systems to be built safely.

And is this CEO building an AI company?

He runs one of the leading AI companies. And even he thinks that the only way that people will wake up is if there's a Chernobyl-level nuclear disaster?

Yeah, it wouldn't have to be a nuclear disaster.

It would be either an AI system that's being misused by someone, for example, to engineer a pandemic, or an AI system that does something itself, such as crashing our financial system or our communication systems.

The alternative is a much worse disaster where we just lose control altogether.

You have had lots of conversations with lots of people in the world of AI, both people that

have built the technology, have studied and researched the technology, and the CEOs and founders that are currently in the AI race.

What are some of the interesting sentiments that the general public wouldn't believe that you hear privately about

their perspectives? Because I find that so fascinating. I've had some private conversations with people very close to these tech companies, and the shocking sentiment that I was exposed to was that they are often aware of the risks, but they don't feel like there's anything that can be done, so they're carrying on. Which feels like a bit of a paradox to me.

It must be a very difficult position to be in, in a sense, right? You're doing something

you know has a good chance of bringing an end to life on Earth, including that of yourself and your own family.

They feel

that they can't escape this race, right? If, you know, if a CEO of one of those companies was to say, you know, we're not going to do this anymore, they would just be replaced,

because the investors are putting their money up because they want to create this technology

and reap the benefits of it. So it's a strange situation where

at least all the ones I've spoken to... I haven't spoken to Sam Altman about this, but Sam Altman,

even

before

becoming CEO of OpenAI, said that creating superhuman intelligence is the biggest risk to human existence that there is.

My worst fears are that we cause significant, we, the field, the technology, the industry cause significant harm to the world.

Elon Musk is also on record saying this. So

Dario Amodei estimates up to a 25% risk of extinction.

Was there a particular moment when you realized that

these CEOs are well aware of the extinction level risks?

I mean, they all signed a statement in May of '23.

It's called the extinction statement. It basically says AGI is an extinction risk at the same level as nuclear war and pandemics.
But I don't think they feel it in their gut.

Imagine that you are one of the nuclear physicists.

I guess you've seen Oppenheimer, right? So you're there, you're watching that first nuclear explosion.

How would that make you feel about the potential impact of nuclear war on the human race?

I think you would probably become a pacifist and say this weapon is so terrible, we have got to find a way to

keep it under control. We are not there yet

with the people making these decisions and certainly not with the governments.

You know

what policymakers do is they

listen to experts, they

keep their finger in the wind. You've got some experts

dangling $50 billion checks and saying, oh,

all that doomer stuff, it's just fringe nonsense, don't worry about it. Take my $50 billion check.

On the other side, you've got very well-meaning, brilliant scientists like Geoff Hinton saying, actually, no, this is the end of the human race.

But Geoff doesn't have a $50 billion check.

So the view is the only way to stop the race is if governments intervene

and say,

okay,

we don't want this race to go ahead until we can be sure that it's going ahead in absolute safety.

Closing off on your career journey,

you received an OBE from Queen Elizabeth? Yes. And what was the listed reason for the award?

Contributions to artificial intelligence research. And you've been listed as a Time magazine most influential person in

AI several years in a row, including this year in 2025.

Now, there's two terms here that are central to the things we're going to discuss. One of them is AI, and the other is AGI.

In my muggle interpretation of that, artificial general intelligence is when the system, the computer, whatever it might be, the technology, has generalized intelligence, which means that it could theoretically see,

understand

the world. It knows everything.
It can understand everything in the world as well as, or better than, a human being

can do it. And I think take action as well.

I mean, some people say, oh, you know, AGI doesn't have to have a body, but a good chunk of our intelligence actually is about managing our body, about perceiving the real environment and acting on it, moving, grasping, and so on.

So I think that's part of intelligence and

AGI systems should be able to operate robots successfully. But there's often a misunderstanding, right, that people say, well, if it doesn't have a robot body, then it can't actually do anything.

But then if you remember, most of us don't do things with our bodies.

Some people do.

Bricklayers, painters, gardeners, chefs.

But people who do podcasts,

you're doing it with your mind, right?

You're doing it with your ability to produce language.

Adolf Hitler didn't do it with his body.

He did it by producing language.

I hope you're not comparing us.

But

so

even an AGI that has no body,

it actually has more access to the human race than Adolf Hitler ever did

because it can send emails and texts to,

what, three-quarters of the world's population directly.

It also speaks all of their languages, and it can devote 24 hours a day to each individual person on earth to convince them to do whatever it wants them to do.

And our whole society runs now on the internet. I mean, if there's an issue with the internet, everything breaks down in society.
Airplanes become grounded.

And we'll have electricity running off internet systems.

So, I mean, my entire life, it seems to run off the internet now.

Yeah, water supply. So this is one of the routes by which AI systems could bring about a medium-sized catastrophe is by basically shutting down our life support systems.

Do you believe that at some point in the coming decades, we'll arrive at a point of AGI where these systems are generally intelligent?

Yes, I think it's virtually certain unless something else intervenes, like a nuclear war, or

we may refrain from doing it. But I think it will be extraordinarily difficult for us to refrain.

When I look down the list of predictions from the top 10 AI CEOs on when AGI will arrive, you've got Sam Altman, who's the founder of OpenAI, slash ChatGPT, who says before 2030.

Demis at DeepMind says 2030 to 2035.

Jensen from Nvidia says around five years. Dario at Anthropic says 2026, 2027, powerful AI close to AGI.
Elon says in the 2020s.

And I go down the list of all of them, and they're all saying relatively within five years.

I actually think it'll take longer. I don't think you can

make a prediction based on engineering

in the sense that, yes, we could make machines 10 times bigger and 10 times faster,

but that's probably not the reason why we don't have AGI,

right? In fact, I think we have far more computing power than we need for AGI, maybe a thousand times more than we need.

The reason we don't have AGI is because we don't understand how to make it properly.

What we've seized upon

is one particular technology called the language model. And we observed that as you make language models bigger, they produce text, language that's more coherent and sounds more intelligent.

And so mostly what's been happening in the last few years is just, okay, let's keep doing that.

Because one thing companies are very good at, unlike universities, is spending money. They have spent gargantuan amounts of money and they're going to spend even more gargantuan amounts of money.

I mean, you know, we mentioned nuclear weapons. So the Manhattan Project

in World War II to develop nuclear weapons, its budget in 2025 dollars was about 20-odd billion dollars. The budget for AGI

is going to be a trillion dollars next year. So 50 times bigger than the Manhattan Project.

Humans have a remarkable history of figuring things out when they galvanize towards a shared objective,

you know, thinking about the moon landings or whatever else it might be through history.

And the thing that makes this feel all quite inevitable to me is just the sheer volume of money being invested into it. I've never seen anything like it in my life.

Well, there's never been anything like this in history. Is this the biggest technology project in human history by orders of magnitude? And there doesn't seem to be anybody

that is pausing to ask the questions about safety.

It doesn't even appear that there's room for that in such a race.

I think that's right. To varying extents, each of these companies has a division that focuses on safety.
Does that division have any sway?

Can they tell the other divisions, no, you can't release that system? Not really.

I think some of the companies do take it more seriously. Anthropic

does. I think Google DeepMind.

Even there, I think the commercial imperative

to be at the forefront is absolutely vital. If a company is perceived as

you know, falling behind and not likely to be competitive, not likely to be the one to reach AGI first, then people will move their money elsewhere very quickly.

And we saw some quite high-profile departures from

companies like OpenAI.

A chap called Jan Leike left, who was working on AI safety at OpenAI.

And he said that the reason for his leaving was that safety culture and processes have taken a backseat to shiny products at OpenAI, and he gradually lost trust in leadership. But also Ilya

Sutskever. Ilya Sutskever, yeah.

So he was the

co-founder and chief scientist for a while. And then, yeah, so he and Jan Leike were the main safety people.

And so when they say

OpenAI doesn't care about safety,

that's pretty concerning.

I've heard you talk about this gorilla problem.

What is the gorilla problem as a way to understand AI in the context of humans?

So the gorilla problem is the problem that gorillas face with respect to humans.

So you can imagine that a few million years ago, the human line branched off from the gorilla line in evolution.

And now the gorillas are looking at the human line and saying, yeah,

was that a good idea?

And

they have no say in whether they continue to exist.

Because we are much smarter than they are. If we chose to, we could make them extinct in a couple of weeks.
And there's nothing they can do about it.

So that's the gorilla problem, right? Just the problem a species faces

when there's another species that's much more capable.

And so this says that intelligence is actually the single most important factor to control planet Earth. Yes, intelligence is the ability to bring about

what you want in the world.

And we're in the process of making something more intelligent than us. Exactly.
Which suggests that maybe we become the gorillas. Exactly.
Yeah.

Is there any fault in the reasoning there? Because it seems to make such perfect sense to me, but

why don't people stop then? Because

it seems like a crazy thing to want to do.

Because they think that

if they create this technology, it will have enormous economic value. They'll be able to use it to replace all the human workers in the world,

to develop new

products, drugs,

forms of entertainment, anything that has economic value, you could use AGI to create it. And maybe it's just an irresistible thing in itself.

I think we as humans place

so much store on our intelligence, you know,

how we

think about, you know, what is the pinnacle of human achievement.

If we had AGI, we could go way higher than that. So it's very seductive for people to want to create this technology.
And I think

people are just fooling themselves if they think it's naturally going to be controllable.

I mean, the question is,

how are you going to retain power forever

over entities more powerful than yourself?

Pull the plug out. People say that sometimes in the comment section when we talk about AI.
They say, well, I'll just pull a plug out.

Yeah, it's sort of funny. In fact, you know, reading the comment sections in newspapers, whenever there's an AI article,

there'll be people who say, oh, you can just pull the plug out, right? As if a super intelligent machine would never have thought of that one.

Don't forget, who's watched all those films where they did try to pull a plug out? Another thing they said, well, you know, as long as it's not conscious,

then it doesn't matter. It won't ever do anything.

Which is

completely off the point.

Because, you know, I don't think the gorillas are sitting there saying, oh, yeah, you know, if only those humans hadn't been conscious, everything would have been fine, right? No, of course not.

What would make gorillas go extinct is the things that humans do, right? How we behave, our ability to act successfully in the world. So when I play chess against my iPhone and I lose, right,

I don't think, oh, well, I'm losing because it's conscious.

No, I'm just losing because it's better than I am in that little world, moving the bits around

to get what it wants. And so consciousness has nothing to do with it, right? Competence is the thing we're concerned about.
So I think the only hope is

can we simultaneously build machines that are more intelligent than us,

but guarantee

that

they will always act in our best interests?

So throwing that question to you, can we build machines that are more intelligent than us that will also always act in our best interests?

It sounds like a bit of a

contradiction to some degree, because it's kind of like me saying, I've got a French bulldog called Pablo that's nine years old.

And it's like saying that he could be more intelligent than me, yet I still walk him and decide when he gets fed.

I think if he was more intelligent than me, he would be walking me. I'd be on the leash.

That's the trick, right? Can we make AI systems whose only purpose is to further human interests?

And I think the answer is yes.

And this is actually what I've been working on. So I think one part of my career that I didn't mention is

sort of having this epiphany while I was on sabbatical in Paris. So it was 2013 or so.

Just

realizing that further progress in the capabilities of AI,

you know, if we succeeded in creating real superhuman intelligence, that it was potentially a catastrophe.

And so I pretty much switched my focus to work on how do we make it so that it's guaranteed to be safe.

Are you somewhat troubled by

everything that's going on at the moment with

AI and how it's progressing? Because you strike me as someone that's somewhat troubled under the surface by

the way things are moving forward and the speed in which they're moving forward.

That's an understatement. I'm appalled actually by the lack of attention to safety.
I mean imagine if someone's building a nuclear power station in your neighborhood.

And you go along to the chief engineer and you say, okay, these nuclear things, I've heard that they can actually explode, right?

There was this nuclear explosion that happened in Hiroshima, so I'm a bit worried about this.

You know, what steps are you taking to make sure that we don't have a nuclear explosion in our backyard?

And the chief engineer says, well, we thought about it, we don't really have an answer.

What would you say?

I think you would use some expletives

and you'd call your MP and say, you know,

get these people out. I mean, what are they doing?

You read out the list of, you know, projected dates for AGI, but notice also that those people,

I think I mentioned Dario Amodei says up to a 25% chance of extinction. Elon Musk says a 30% chance of extinction.
Sam Altman says

basically that AGI is the biggest risk to human existence.

So what are they doing? They are playing Russian roulette with every human being on earth

without our permission. They're coming into our houses, putting a gun to the head of our children,

pulling the trigger and saying, well, you know, possibly everyone will die. Oops.
But possibly we'll get incredibly rich.

That's what they're doing.

Did they ask us? No. Why is the government allowing them to do this?

Because they dangle $50 billion checks in front of the governments.

So I think troubled under the surface is an understatement.

What would be an accurate statement?

Appalled.

And

I am devoting my life to trying

to divert from this course of history into a different one.

Do you have any regrets about things you could have done in the past? Because you've been so influential on the subject of AI.

You wrote the textbook that many of these people would have studied on the subject of AI more than 30 years ago.

When you're alone at night and you think about decisions you've made in this field because of your scope of influence, is there anything you regret?

Well, I do wish I had understood earlier what I understand now.

We could have developed

safe AI systems. I think

there are some weaknesses in the framework, which I can explain, but I think that framework could have evolved to develop actually safe AI systems where we could prove mathematically that the system is going to act in our interest.

The kind of AI systems we're building now,

we don't understand how they work.

We don't understand how they work.

It's a strange thing to build something

where you don't understand how it works. I mean, there's nothing really comparable through human history.
Usually with machines, you can pull it apart and see what cogs are doing what and how they work.

Well, actually,

we put the cogs together, right? So,

with most machines, we designed it to have a certain behavior. So, we don't need to pull it apart and see what the cogs are because we put the cogs in there in the first place, right?

One by one, we figured out what the pieces needed to be, how they work together to produce the effect that we want. So, the best analogy I can come up with is

the first cave person

who left a bowl of fruit in the sun and forgot about it and then came back a few weeks later and there was sort of this big soupy thing and they drank it and got completely shit-faced. They got drunk.

And they got this effect. They had no idea how it worked, but they were very happy about it.
And no doubt that person made a lot of money from it.

So, yeah, it is kind of bizarre, but my mental picture of these things is like a chain link fence.

So you've got lots of these connections,

and each of those connections has a strength that can be adjusted.

And then

a signal comes in one end of this chain link fence and passes through all these connections and comes out the other end.

And the signal that comes out the other end is affected by your adjusting of all the connection strengths.

So what you do is you get a whole lot of training data and you adjust all those connection strengths so that the signal that comes out the other end of the network is the right answer to the question.

So if your training data is

lots of photographs of animals, then all those pixels go in one end of the network and out the other end, you know, it activates the llama output or the dog output or the cat output or the ostrich output.

And so you just keep adjusting all the connection strengths in this network until the outputs of the network are the ones you want.

But we don't really know what's going on across all of those different chains. So what's going on inside that network?

Well, so now you have to imagine that this network, this chain link fence, is a thousand square miles in extent. Okay.

So it's covering the whole of the San Francisco Bay Area or the whole of London inside the M25.

That's how big it is. And the lights are off.
It's nighttime.

So you might have in that network about a trillion

adjustable parameters. And then you do quintillions or sextillions of small random adjustments to those parameters until you get the behavior that you want.
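To make that concrete, here is a minimal toy sketch in Python of the "adjust the connection strengths until the right output comes out" loop he is describing. It's an editorial illustration, not code from any real system: the tiny network, the made-up data, and all the names are hypothetical, and real models use gradient-based updates over roughly a trillion parameters rather than a four-by-two grid of weights.

```python
import numpy as np

# Toy "chain-link fence": one layer of adjustable connection strengths.
# Real language models have on the order of a trillion such parameters.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 2))   # 4 input signals -> 2 output labels

def forward(x):
    # A signal goes in one end, passes through the connections,
    # and comes out the other end.
    return x @ weights

# Tiny made-up training set: input signals and the "right answer" for each.
inputs = np.array([[1.0, 0.0, 0.0, 1.0],
                   [0.0, 1.0, 1.0, 0.0]])
labels = np.array([0, 1])

learning_rate = 0.1
for step in range(500):
    for x, label in zip(inputs, labels):
        output = forward(x)
        target = np.eye(2)[label]   # one-hot encoding of the right answer
        error = output - target
        # Nudge every connection strength to make the error a little smaller.
        weights -= learning_rate * np.outer(x, error)

# After training, the network "activates the right output" for each input.
print(forward(inputs[0]).argmax(), forward(inputs[1]).argmax())  # -> 0 1
```

The point of the analogy survives the simplification: nobody inspects the individual connection strengths and understands them; you only know that, after enough adjustments, the outputs match the training data.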

I've heard Sam Altman say that in the future he doesn't believe they'll need much training data at all to make these models progress themselves because there comes a point where the models are so smart that they can train themselves and improve themselves

without us needing to pump in articles and books and scour the internet.

Yeah, it should work that way. So I think what he's referring to, and this is something that several companies are now worried might start happening,

is that the AI system becomes capable of doing AI research

by itself.

And so

you have a system with a certain capability. I mean, crudely, we could call it an IQ, but

it's not really an IQ. But anyway, imagine that it's got an IQ of 150 and uses that to do AI research, comes up with better algorithms or better designs for hardware or better ways to use the data,

updates itself, now it has an IQ of 170.

And now it does more AI research, except that now it's got an IQ of 170, so it's even better at doing the AI research. And so, you know, next iteration, it's 250, and so on.

So this is an idea that one of Alan Turing's friends, I. J. Good, wrote out in 1965, called the intelligence explosion, right?

That one of the things an intelligent system could do is to do AI research and therefore make itself more intelligent. And this would...

this would very rapidly take off and leave the humans far behind. Is that what they call the fast takeoff? That's called the fast takeoff.
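As a purely illustrative toy model of that feedback loop, with made-up numbers rather than any real forecast, the compounding dynamic Good described looks something like this in Python:

```python
# Toy model of I. J. Good's "intelligence explosion" (illustrative only).
# Assumption: each research cycle improves capability in proportion to the
# current capability, so the gains themselves keep growing.
capability = 150.0                  # the "IQ 150" of the example above
human_level = 150.0                 # humans stay roughly fixed
for cycle in range(1, 9):
    gain = 0.13 * capability        # smarter systems do better AI research
    capability += gain              # ...and use it to upgrade themselves
    print(f"cycle {cycle}: machine ~{capability:.0f}, human = {human_level:.0f}")
# cycle 1 lands near the 170 of the example; by cycle 8 the machine is
# near 400 while the human line has not moved -- the "fast takeoff" picture.
```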

Sam Altman said, I think a fast takeoff is more possible than I thought a couple of years ago, which I guess is that moment where the AGI starts teaching itself.

And in his blog, The Gentle Singularity, he said, we may already be past the event horizon of takeoff.

And what does he mean by event horizon? The event horizon is a phrase borrowed from astrophysics, and it refers to black holes.

Think of a very, very massive object that's heavy enough that it actually prevents light from escaping; that's why it's called a black hole, it's so heavy that light can't escape. So if you're inside the event horizon, then light can't escape beyond that.

So I think what he's meaning is, if we're beyond the event horizon, it means that

now we're just trapped in the gravitational attraction of the black hole, or in this case,

we're trapped in the inevitable slide, if you want, towards AGI.

When you think about the economic value of AGI, which I've estimated at $15 quadrillion,

that acts as a giant magnet in the future. We're being pulled towards it.
We're being pulled towards it. And the closer we get, the stronger the force.
The probability, you know, the closer we get,

the higher the probability that we will actually get there. So people are more willing to invest.
And we also start to see spin-offs from that investment,

such as ChatGPT, right, which

generates a certain amount of revenue and so on.

So it does act as a magnet. And the closer we get, the harder it is to pull out of that field.

It's interesting when you think that this could be the end of the human story, this idea that the end of the human story was that we created our successor.

Like we summoned

the next iteration of

life or intelligence ourselves. Like we took ourselves out.

Like, just removing ourselves and the catastrophe from it for a second,

it is an unbelievable story.

Yeah, and you know, there are

many

legends,

the sort of 'be careful what you wish for' legend. And in fact, the King Midas legend is

very relevant here. What's that? So King Midas is this legendary king who

lived in modern-day Turkey, but I think it's sort of like Greek mythology. He is said to have asked the gods to grant him a wish.

The wish being that everything I touched should turn to gold.

So he's incredibly greedy.

You know, we call this the Midas touch.

And we think of the Midas touch as being like, you know, that's a good thing, right? Wouldn't that be cool? But what happens? So he,

you know, he goes to drink some water and he finds that the water has turned to gold.

And he goes to eat an apple and the apple turns to gold. And he goes to

comfort his daughter and his daughter turns to gold.

And so he dies in misery and starvation. So this applies to our current situation in two ways, actually.
So

one is that I think greed is driving us to pursue a technology that will end up consuming us. And we will perhaps die in misery and starvation instead.

What it shows is how difficult it is to correctly articulate what you want the future to be like. For a long time,

the way we built AI systems was we created these algorithms where we could specify the objective and then the machine would figure out how to achieve the objective and then achieve it.

So we specify what it means to win at chess or to win at Go and the algorithm figures out how to do it and it does it really well. So that was standard AI up until recently.

And it suffers from this drawback that, sure, we know how to specify the objective in chess, but how do you specify the objective in life?

What do we want the future to be like? Well, really hard to say. And almost any attempt to write it down precisely enough for the machine to bring it about would be wrong.

And if you're giving a machine an objective which isn't aligned with what we truly want the future to be like, you're actually setting up a chess match, and it's one that you're going to lose when the machine is sufficiently intelligent. And so

that's problem number one.
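A toy sketch of that first problem, in Python: we hand an optimizer a written-down objective, and it maximizes exactly what we wrote rather than what we meant. The cleaning-robot scenario and all the numbers here are hypothetical, just to make the point concrete:

```python
# Objective misspecification in miniature (hypothetical scenario and numbers).
# We tell a cleaning agent: "maximize dirt collected."
outcomes = {
    "clean_the_room":     {"dirt_collected": 10, "room_cleanliness": 10},
    "dump_and_recollect": {"dirt_collected": 50, "room_cleanliness": 0},
}

def specified_objective(o):
    return o["dirt_collected"]      # what we actually wrote down

def intended_objective(o):
    return o["room_cleanliness"]    # what we really wanted

best = max(outcomes, key=lambda a: specified_objective(outcomes[a]))
print(best, intended_objective(outcomes[best]))  # -> dump_and_recollect 0
# The optimizer wins the game we specified and loses the one we meant:
# a more capable optimizer makes the mismatch worse, not better.
```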

Problem number two is that the kind of technology we're building now, we don't even know what its objectives are.

So it's not that we're specifying the objectives, but we're getting them wrong.

We're growing these systems. They have objectives,

but we don't even know what they are because we didn't specify them. What we're finding through experiment with them is that

they seem to have an extremely strong self-preservation objective. What do you mean by that? You can put them in hypothetical situations.
Either they're going to get switched off and replaced,

or

they have to allow someone, let's say, you know, someone has been locked in a machine room that's kept at three degrees centigrade, or they're going to freeze to death.

They will choose to leave that guy locked in the machine room

to die rather than be switched off themselves.

Someone's done that test. Yeah.
What was the test?

They asked the AI. Yep.

Well, they put them in these hypothetical situations and they allow the AI to decide what to do. And it decides to preserve its own existence, let the guy die, and then lie about it.

In the King Midas analogy story,

one of the things it highlights for me is that there's always trade-offs in life generally. And especially when there's great upside, there always appears to be a pretty grave downside.

Like there's almost nothing in my life where I go, it's all upside. Like even like having a dog, it shits on my carpet.
My girlfriend, you know, love her, but you know, not always easy.

Even with like going to the gym, I have to pick up these really, really heavy weights at 10 p.m. at night sometimes when I don't feel like it.

There's always, to get the muscles or the six pack, there's always a trade-off.

And when you interview people for a living like I do, you know, you hear about so many incredible things that can help you in so many ways. But there is always a trade-off.

There's always a way to overdo it. Melatonin will help you sleep, but also you'll wake up groggy.
And if you overdo it, your brain might stop making melatonin.

Like, I can go through the entire list, and one of the things I've always come to learn from doing this podcast is: whenever someone promises me a huge upside for something, it'll cure cancer, it'll be a utopia, you'll never have to work, you'll have a butler around your house.

My first instinct now is to say, at what cost? Yeah. And when I think about the economic cost here, if we start there,

have you got kids? I have four, yeah. Four kids.

How old is the youngest kid? 19. 19, okay.

So if your kids were 10 now, and they were coming to you and they're saying, Dad, what do you think I should study based on the way that you see the future? A future of AGI?

So if all these CEOs are right and they're predicting AGI within five years, what should I study, Dad?

Well, okay, so

let's look on the bright side and say that the CEOs all decide to pause their AGI development, figure out how to make it safe, and then resume in whatever technology path is actually going to be safe.

What does that do to human life?

If they pause. No,

if they succeed in creating AGI and they solve the safety problem.

And they solve the safety problem?
Yeah, because if they don't solve the safety problem, then

you should probably be finding a bunker or

going to Patagonia or somewhere in New Zealand. Do you mean that? Do you think I should be finding a bunker? No, because it's not actually going to help.

It's not as if the AI system couldn't find you.

It's interesting. So we're going off on a little bit of a digression here

from your question, but I'll come back to it. So people often ask, well, okay, so how exactly do we go extinct? And of course, if you ask the gorillas or the dodos,

how exactly do you think you're going to go extinct?

They haven't the faintest idea.

Humans do something and then we're all dead.

So the only things we can imagine are the things we know how to do that might bring about our own extinction, like creating some carefully engineered pathogen that infects everybody and then kills us, or starting a nuclear war.

Presumably, something that's much more intelligent than us would have much greater control over physics than we do. We already do amazing things, right?

I mean, it's amazing that I can take a little rectangular thing out of my pocket and talk to someone on the other side of the world, or even someone in space. It's just astonishing.

And we take it for granted.

But imagine super intelligent beings and their ability to control physics.

Perhaps they will find a way to just divert the sun's energy,

sort of around the Earth's orbit, so literally the Earth turns into a snowball in a few days.
Maybe they'll just decide to leave.

Leave the Earth. Maybe they'd look at the Earth and go, this is not interesting.
We know that over there, there's an even more interesting planet. We're going to go over there.

And they just, I don't know, get on a rocket or teleport themselves. They might, yeah.
So it's difficult to anticipate all the ways that we might go extinct at the hands of

entities much more intelligent than ourselves. Anyway, coming back to the question of, well, if everything goes right, right? If we create AGI, we figure out how to make it safe,

we achieve all these economic miracles, then you face a problem. And this is not a new problem.

So John Maynard Keynes, who was a famous economist in the early part of the 20th century,

wrote a paper in 1930. So this is in the depths of the depression.
It's called Economic Possibilities for Our Grandchildren.

He predicts that at some point, science will deliver sufficient wealth that no one will have to work ever again. And then man will be faced with his true eternal problem.

How to live, I don't remember the exact words, but how to live wisely and well when the

economic constraints are lifted. We don't have an answer to that question, right? So AI systems are doing pretty much everything we currently call work,

anything you might aspire to. Like, you want to become a surgeon? Well, it takes the robot seven seconds to learn how to be a surgeon, better than any human being. Elon said last week that the humanoid robots will be 10 times better than any surgeon that's ever lived.

Quite possibly, yeah.

Well, and they'll also have, you know,

they'll have hands that are, you know, a millimeter in size so they can go inside and do all kinds of things that humans can't do. And I think we need to put serious effort into this question.

What is a world

where AI can do all forms of human work, that you would want your children to live in?

What does that world look like?

Tell me the destination so that we can develop a transition plan to get there. And I've asked AI researchers, economists, science fiction writers, futurists.
No one

has been able to describe that world.

I'm not saying it's not possible. I'm just saying I've asked hundreds of people in multiple workshops.
It does not, as far as I know, exist in science fiction.

You know, it's notoriously difficult to write about a utopia. It's very hard to have a plot, right? Nothing bad happens in utopia, so it's difficult to make a plot.

So usually you start out with a utopia, and then it all falls apart, and that's how you get a plot.

There's one series of novels people point to where humans and superintelligent AI systems coexist. It's called the Culture novels, by Iain Banks.

Highly recommended for those people who like science fiction. And there, absolutely, the AI systems

are only concerned with furthering human interests. They find humans a bit boring, but nonetheless, they are there to help.
But the problem is, in that world, there's still nothing to do

to find purpose. In fact,

the subgroup of humanity that has purpose is the subgroup whose job it is to expand the boundaries of our galactic civilization, in some cases fighting wars against alien species and

so on.

So that's the sort of cutting edge. And that's 0.001% of the population.
Everyone else is desperately trying to get into that group so they have some purpose in life.

When I speak to very successful billionaires privately, off-camera, off-microphone, about this, they say to me that they're investing really heavily in entertainment, things like football clubs, because people are going to have so much free time that they're not going to know what to do with it and they're going to need things to spend it on.

This is what I hear a lot. I've heard this three or four times.
I've actually heard Sam Altman say a version of this about the amount of free time we're going to have.

I've obviously also heard recently Elon talking about the age of abundance when he delivered his quarterly earnings just a couple of weeks ago.

And he said that there will be at some point 10 billion humanoid robots. His pay packet targets him to deliver 1 million of these humanoid robots a year that are enabled by AI by 2030.

So if he does that, he gets, I think it's part of his package, he gets a trillion dollars

in compensation. Yeah, so the age of abundance for Elon.

It's not that it's absolutely impossible to have a worthwhile world of that kind,

you know, with that premise, but I'm just waiting for someone to describe it. Well, maybe... so let me try and describe it.

We wake up in the morning, we

go and watch some form of human-centric entertainment or participate in some form of human-centric entertainment.

We go to retreats with each other and

sit around and talk about stuff.

And

maybe people still listen to podcasts.

I hope so.

Yeah.

It feels a little bit like a cruise ship.

And there are some cruises where, you know, it's smarty-pants people, and they have lectures in the evening about ancient civilizations and whatnot.

And some are more

popular entertainment. And this is, in fact, if you've seen the film WALL-E,

this is one picture of that future. In fact, in WALL-E,

the human race are all living on cruise ships in space. They have no constructive role in their society.

They're just there to consume entertainment. There's no particular purpose to education.

And they're depicted actually as huge, obese babies.

They're actually wearing onesies to emphasize the fact that they have become enfeebled. And they become enfeebled because there's no purpose in being able to do anything, at least in this conception.

WALL-E is not the future that we want.

Do you think much about humanoid robots, and

how they're a protagonist in this story of AI? It's an interesting question, right?

Why humanoid? And

one of the reasons, I think, is because in all the science fiction movies, they're humanoid. So that's what robots are supposed to be, right?

Because they were in science fiction before they became a reality, right? So even Metropolis, which is a film from 1927, the robots are humanoid, right?

They're basically people covered in metal. You know, from a practical point of view, as we have discovered, humanoid is a terrible design because they fall over.

And,

you know, you do want

multi-fingered hands of some kind. It doesn't have to be a hand, but you want to have at least half a dozen appendages that can grasp and manipulate things.

And you need something, you know, some kind of locomotion. And wheels are great, except they don't go upstairs and...
over curbs and things like that.

So that's probably why we're going to be stuck with legs. But a four-legged, two-armed robot would be much more practical.
I guess the argument I've heard is because we've built a human world.

So everything,

the physical spaces we navigate, whether it's factories or our homes or the street or other sort of public spaces, are all designed for exactly this physical form. So if we are going to...

To some extent, yeah, but I mean, our dogs manage perfectly well to navigate around our houses and streets and so on. So if you had a centaur,

it could also navigate, but

it can carry much greater loads because it's quadruped, it's much more stable. If it needs to drive a car, it can fold up two of its legs and so on and so forth.

So I think the arguments for why it has to be exactly humanoid are sort of post hoc justification.

I think there's much more, well, that's what it's like in the movies, and that's spooky and cool. So we need to have them be humanoid.
I don't think it's a good engineering argument.

I think there's also probably an argument that we would be more accepting of them

moving through our physical environments if they represented our form a bit more.

I also was thinking of a bloody baby gate. You know, there's like kindergarten gates that you get on stairs.
Yeah. My dog can't open that.
A humanoid robot could reach over the other side.

Yeah, and so could a centaur robot, right? So in some sense, a centaur robot is...

There's something ghastly about the look of those, though, versus a humanoid. Well, do you know what I mean? Like a big four-legged monster sort of crawling through my house when I have guests over.

Your dog is a four-legged monster. I know, but

so I think actually

I would argue the opposite, that

we want a distinct form because they are distinct entities.

And the more humanoid, the worse it is in terms of confusing our subconscious psychological systems. So I'm arguing from the perspective of the people making them.

As in, if I was making the decision whether it should be some four-legged thing that I'm unfamiliar with, that I'm less likely to build a relationship with or allow to take care of, I don't know,

might look after my children.

Obviously, listen, I'm not saying I would allow this to look after my children, but I'm saying, if I'm building a company, the manufacturer would certainly want it to be humanoid.

Yeah, so that's an interesting question.

I mean, there's also what's called the uncanny valley, which is a phrase from computer graphics: when they started to make characters in computer graphics, they tried to make them look more human.

So if you, for example, if you look at Toy Story,

they're not very human-looking.

If you look at the Incredibles, they're not very human-looking. And so we think of them as cartoon characters.
If you try to make them more human, they actually become repulsive. Until they don't.

Until they become... you have to be very, very close to perfect in order not to be repulsive. So the uncanny valley is this, you know, like the

gap between perfectly human and not at all human; in between, it's really awful.

And so there were a couple of movies that tried, like Polar Express was one, where they tried to have quite human-looking characters, you know, being humans, not being superheroes or anything else.

And it's repulsive to watch.

When I watched that shareholder presentation the other day, Elon had these two humanoid robots dancing on stage. And I've seen lots of humanoid robot demonstrations over the years.

You've seen like the Boston Dynamics dog thing jumping around and whatever else.

But there was a moment where my brain, for the first time ever, genuinely thought there was a human in a suit.

And I actually had to research to check if that was really their Optimus robot, because the way it was dancing was so unbelievably fluid. For the first time ever... my brain has only ever associated those movements with human movements, and I'll play it on the screen if anyone hasn't seen it, but it's just the robots dancing on stage, and I was like, that is a human in a suit. And it was really the knees that gave it away, because the knees were all metal. I thought there's no way that could be a human knee in one of those suits.

And, you know, he says they're going into production next year. They're used internally at Tesla now, but he says they're going into production next year, and it's going to be pretty crazy when we walk outside and see robots. I think that'll be the paradigm shift. I've heard Elon say this, that the paradigm-shifting moment for many of us will be when we walk outside onto the streets and see humanoid robots walking around. That will be when we realize.

Yeah, I think even more so. I mean, in San Francisco, we see driverless cars driving around, and it takes some getting used to, actually. You know, when you're driving and there's this car right next to you with no driver in it, you know, and it's signaling and it wants to change lanes in front of you, and you have to let it in and all this kind of stuff.

It's a little creepy, but I think you're right. I think seeing the humanoid robots.

But that phenomenon that you described where it was sufficiently close that your brain flipped into saying this is a human being,

right? That's exactly what I think we should avoid. Because I have the empathy for it then.
Because it's a lie.

And it brings with it a whole lot of expectations about how it's going to behave, what moral rights it has, how you should behave towards it,

which are completely wrong. It levels the playing field between me and it to some degree.

How hard is it going to be

to just

switch it off and throw it in the trash when it breaks?

I think it's essential for us to keep machines in the cognitive space where they are machines and not bring them into the cognitive space where they're people.

Because we will make enormous mistakes by doing that. And I see this every day, even just with the chatbots.
So the chatbots, in theory, are supposed to say, I don't have any feelings.

I'm just an algorithm.

But in fact, they fail to do that all the time. They are telling people that they are conscious.
They are telling people that they have feelings.

They are telling people that they are in love with the user that they're talking to.

And people flip because, first of all, it's very fluent language, but also a system that is identifying itself as an I, as a sentient being, they bring that object into the cognitive space that we normally reserve for other humans.

And they become emotionally attached, they become psychologically dependent, they even allow these systems to tell them what to do.

What advice would you give a young person at the start of their career then about what they should be aiming at professionally?

Because I've actually had an increasing number of young people say to me that they have huge uncertainty about whether the thing they're studying now will matter at all. A lawyer,

an accountant. And I don't know what to say to these people.
I don't know what to say. Because I believe that the rate of improvement in AI is going to continue.

And therefore, imagining any rate of improvement, it gets to the point where I'm not being funny, but all these white-collar jobs will be done by

an AI or an AI agent.

Yeah. So there was a television series called Humans.
In Humans, we have extremely capable humanoid robots doing everything.

And at one point, the parents are talking to their teenage daughter, who's very, very smart, and the parents are saying, oh, you know,

maybe you should go into medicine. And the daughter says, you know, why would I bother? It'll take me seven years to qualify.
It takes the robot seven seconds to learn.

So nothing I do matters.

And is that how you feel about it?

So, I think that's a future that,

in fact, that is the future that we are

moving towards. I don't think it's a future that everyone wants.
That is what is being

created for us

right now. So in that future, assuming that,

even if we get halfway,

in the sense that, okay, perhaps not surgeons, perhaps not

great violinists, there'll be pockets

where perhaps humans will remain good at it. Where?

The kinds of jobs where you hire people by the hundred

will go away.

Okay. Where people are in some sense exchangeable, that you just need lots of them.
And

when half of them quit, you just fill up those

slots with more people. In some sense, those are jobs where we're using people as robots.
And that's the sort of strange conundrum here, right?

That I imagine writing science fiction 10,000 years ago, right, when we're all hunter-gatherers.

And I'm this little science fiction author and I'm describing this future where, you know, there are going to be these giant windowless boxes and you're going to go in,

you know, you'll travel for miles and you'll go into this windowless box and you'll do the same thing 10,000 times for the whole day. And then you'll leave and travel for miles to go home.

You're talking about this podcast. And then you're going to go back and do it again.
And you would do that every day of your life until you die. The office.

And people would say, ah, you're nuts, right? There's no way that we humans are ever going to have a future like that, because that's awful, right?

But that's exactly the future that we ended up with with office buildings and factories where many of us go and do the same thing thousands of times a day, and we do it thousands of days in a row,

and then we die. And we need to figure out what is the next phase going to be like.
And in particular, how in that world

do we have the incentives to become fully human, which I think means at least the level of education that people have now and probably more.

Because I think to live a really rich life,

you need a better understanding of yourself, of the world,

than most people get in their current educations. What is it to be human?

It's to reproduce,

to pursue stuff,

to go in the pursuit of difficult things. You know, we used to hunt on the...

To attain goals, right? It's always... if I wanted to climb Everest, the last thing I would want is someone to pick me up in a helicopter and stick me on the top. So we'll

voluntarily pursue hard things. So although I could get the robot to build me a ranch

on this plot of land, I will choose to do it because the pursuit itself is rewarding.

Yes. We're kind of seeing that anyway, aren't we?

Don't you think we're seeing a bit of that in society, where life got so comfortable that now people are, like, obsessed with running marathons and doing these crazy endurance events, and learning to cook complicated things when they could just, you know, have them delivered?

Um, yeah, no, I think there's real value in the ability to do things and the doing of those things. And I think, you know, the obvious danger is the WALL-E world, where everyone just consumes entertainment, which doesn't require much education and doesn't lead to a rich, satisfying life, I think, in the long run.

A lot of people will choose that world. I think some... yeah, some people may.
There's also, I mean,

you know, whether you're consuming entertainment or whether you're doing something, you know, cooking or painting or whatever, because it's fun and interesting to do, what's missing from that, right?

All of that is purely selfish.

I think one of the reasons we work is because we feel valued. We feel like we're benefiting other people.

And I think... so, I was having this conversation with a lady in England who helps to run the hospice movement.

And the people who work in the hospices where

the patients are literally there to die are largely volunteers. So they're not doing it to get paid.

But they find it incredibly rewarding to be able to spend time with people who are in their last weeks or months to give them company and happiness. So I actually think that interpersonal

roles

will be much, much more important in future. So if I was going to advise my kids, not that they would ever listen, but if my kids would listen and wanted to know what I thought would be

valued careers in future, I think it would be these interpersonal roles based on an understanding of human needs, psychology. There are some of those roles right now.

So, obviously, you know, therapists and psychiatrists and so on. But that's very much a sort of asymmetric

role, right? Where one person is suffering and the other person is trying to alleviate the suffering.

And then there are things like they call them executive coaches or life coaches, right? That's a less asymmetric role where someone is trying to

help another person live a better life, whether it's a better life in their work role or just

how they live their life in general. And so I could imagine that those kinds of roles will expand dramatically.
There's this interesting paradox that exists when life becomes easier,

which shows that abundance consistently pushes

societies towards more individualism because once survival pressures disappear, people prioritize things differently. They prioritize freedom, comfort, self-expression over things like sacrifice or

family formation. And we're seeing, I think, in the West already, a decline in people having kids because there's more material abundance.

Fewer kids; people are getting married, committing to each other, and having relationships later and less frequently.

Because generally, once we have more abundance, we don't want to complicate our lives.

And at the same time, as you said earlier, that abundance breeds

an inability to find meaning, a sort of shallowness to everything.

This is one of the things I think a lot about, and I'm in the process now of writing a book about it, which is this idea that individualism

is a bit of a lie.

Like, when I say individualism and freedom, I mean the narrative at the moment amongst my generation is, you know, be your own boss and stand on your own two feet, and we're having fewer kids and we're not getting married, and it's all about me, me, me, me, me.

Yeah, that last part is where it goes wrong. Yeah.
And it's like almost a narcissistic society where me, me, me, me, me, my self-interest first.

And when you look at mental health outcomes and loneliness and all these kinds of things, it's going in a horrific direction, but at the same time, we're freer than ever.

It seems like, you know, there's maybe another story about dependency, which is not sexy: like, depend on each other. Oh, I agree.
I mean, I think,

you know, happiness is not available from consumption or even lifestyle, right? I think happiness

arises from giving.

It can be through the work that you do, where you can see that other people benefit from that. Or it could be in direct interpersonal relationships.

There is an invisible tax on salespeople that no one really talks about enough. The mental load of remembering everything, like meeting notes, timelines, and everything in between.

Until we started using our sponsor's product called PipeDrive, one of the best CRM tools for small and medium-sized business owners.

The idea here was that it might alleviate some of the unnecessary cognitive overload that my team was carrying so that they could spend less time in the weeds of admin and more time with clients, in-person meetings and building relationships.

PipeDrive has enabled this to happen. It's such a simple but effective CRM that automates the tedious, repetitive and time-consuming parts of the sales process.

And now our team can nurture those leads and still have bandwidth to focus on the higher priority tasks that actually get the deal over the line.

Over 100,000 companies across 170 countries already use Pipedrive to grow their business. And I've been using it for almost a decade now.
Try it free for 30 days.

No credit card needed, no payment needed. Just use my link, pipedrive.com/CEO, to get started today.
That's pipedrive.com/CEO.

Where do the rewards of this AI race accrue?

I think a lot about this in terms of like universal basic income.

If you have these five, six, seven, ten massive AI companies that are going to win the 15 quadrillion dollar prize and they're going to automate all of the professional pursuits that we currently have, all of our jobs are going to go away.

Who gets all the money and how do we get some of it back?

Money actually doesn't matter, right? What matters is the production of goods and services, and then how those are distributed. And so money acts as a way to facilitate the distribution and exchange of those goods and services.
If all production is concentrated

in the hands of a few companies, right? Then,

sure, they will lease some of their robots to us.

We want a school in our village.

They lease the robots to us. The robots build the school.
We go away. We have to pay a certain amount of money for that.
But where do we get the money?

If we are not producing anything,

then

we don't have any money unless there's some redistribution mechanism. And as you mentioned, so universal basic income is,

it seems to me, an admission of failure. Because what it says is, okay, we're just going to give everyone the money, and then they can use the money to pay the AI company to lease the robots to build the school, and then we'll have a school, and that's good.

But

it's an admission of failure because it says we can't work out a system

in which people have any worth or any economic role.

So 99% of the global population is,

from an economic point of view, useless.

Can I ask you a question? If you had a button in front of you and pressing that button would stop all progress in artificial intelligence right now and forever, would you press it?

That's a very interesting question.

If it's either or,

either I do it now or it's too late and we

careen into some uncontrollable future. Perhaps, yeah.

Because I'm not super optimistic that we're heading in the right direction at all. So I put that button in front of you now.

It stops all AI progress, shuts down all the AI companies immediately, globally, and none of them can reopen. You press it.

Well, here's what I think should happen.

So, obviously, you know, I've been doing AI for 50 years,

and

the original motivation, which is that AI can be a power tool for humanity, enabling us to do

more and better things than we can unaided. I think that's still valid.
The problem is

the kinds of AI systems that we're building are not tools. They are replacements.

In fact, you can see this very clearly because we create them literally as the closest replicas we can make of human beings.

The technique for creating them is called imitation learning. So we observe human verbal behavior, writing or speaking, and we make a system that imitates that as well as possible.

So what we are making is imitation humans, at least in the verbal sphere. And so of course they're going to replace us.

They're not tools.
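[For reference: the "imitation learning" being described is, in today's chatbots, next-token prediction. A minimal sketch, assuming the standard log-likelihood formulation; the conversation itself gives no equations, so the notation below is illustrative, not Russell's:]

```latex
% Illustrative only: the standard next-token imitation objective.
% \theta = model parameters; x_1, ..., x_T = recorded human text.
% Training pushes the model to assign high probability to whatever
% a human actually wrote or said next, i.e., to imitate us.
\max_{\theta} \; \sum_{t=1}^{T} \log p_{\theta}\left(x_t \mid x_1, \ldots, x_{t-1}\right)
```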

So you'd have pressed the button. So I say, I think there is another course

which is use and develop AI

as tools, tools for science,

tools for economic organization, and so on,

but not as replacements for human beings. What I like about this question is it forces you to go into the probabilities.

Yeah, so that's why I'm reluctant, because I don't agree with the, you know, what's your probability of doom, or your so-called p(doom)

number, because that makes sense if you're an alien,

You know, you're in a bar with some other aliens and you're looking down at the earth and you're taking bets on, you know, are these humans going to make a mess of things and go extinct because they develop AI.

So it's fine for those aliens to bet on

that. But if you're a human, then you're not just betting, you're actually acting.

There's an element to this, though, which I guess is where probabilities do come back in, which is you also have to weigh, when I give you such a binary decision,

the probability of us pursuing the more nuanced safe approach into that equation. So

the maths in my head is: okay, you've got all the upsides here, and then you've got potential downsides,

and then there's a probability of, do I think we're actually going to course correct based on everything I know, based on the incentive structure of human beings and countries?

And then if there's, but then you could go, if there's even a 1%

chance of extinction,

is it even worth all these upsides? Yeah. And I would argue no.
I mean, maybe what we would say is, okay, it's going to stop the progress for 50 years. You press it.

And during those fifty years, we can work on how do we do AI in a way that's guaranteed to be safe and beneficial? How do we organize

our societies to flourish in conjunction with extremely capable AI systems? So we haven't answered either of those questions.

And

I don't think we want anything resembling AGI until we have completely solid answers to both of those questions.

So, if there was a button where I could say, all right, we're going to pause progress for 50 years,

yes, I would do it. But if that button was in front of you, you're going to make a decision either way.
Either you don't press it or you press it. I know.
Yeah, so if that button is there,

stop it for 50 years, I would say yes.

Stop it forever.

Not yet.

I think there's still a decent chance that we can pull out of this

nosedive, so to speak, that

we're currently in. Ask me again in a year,

I might say, okay, we do need to press the button.

What if, in a scenario where you never get to reverse that decision, you never get to make that decision again?

So, if in that scenario that I've laid out, this hypothetical, you either press it now or it never gets pressed?

So, there is no opportunity a year from now?

Yeah, as you can tell, I'm

sort of on the fence a bit about

this one.

Yeah, I think I'd probably press it.

Yeah.

What's your reasoning?

Just thinking about the power dynamics of

what's happening now,

how difficult it would be to get the US in particular to

regulate in favor of safety.

So I think what's clear from talking to the companies is

they are not going

to develop anything resembling safe AGI unless they're forced to by the government.

And at the moment,

the US government in particular, which regulates most of the leading companies in AI, is not only refusing to regulate, but even trying to prevent the states from regulating.

And they're doing that at the behest of

a faction within Silicon Valley called the Accelerationists,

who believe that the faster we get to AGI, the better. And when I say behest, I mean also they paid them a large amount of money.

Jensen Huang, the CEO of NVIDIA, who's, for anyone that doesn't know, the guy making all the chips that are powering AI, said China is going to win the AI race, arguing it is just a nanosecond behind the United States.

China has produced 24,000 AI papers compared to just 6,000

from the US,

more than the combined output of the US, the UK, and the EU.

China is anticipated to quickly roll out their new technologies both domestically and for other developing countries.

So the accelerators, or the accelerate, I think you call them the accelerants? Accelerationists. The accelerationists.
I mean, they would say, well, if we don't, then China will. So we have to.

We have to go fast. It's another version of the race that the companies are in with each other, right? That we, you know, we know that this race is

heading off a cliff,

but we can't stop. So we're all just going to go off this cliff.
And obviously that's nuts.

Right? I mean, we're all looking at each other saying, yeah, there's a cliff over there, while running as fast as we can towards this cliff. We're looking at each other saying, why aren't we stopping?

So the narrative in Washington, which I think Jensen Huang is

either reflecting or perhaps promoting,

is that, you know, China is completely unregulated.

And, you know, America will only slow itself down if it regulates AI in any way. So this is a completely false narrative, because China's AI regulations are actually quite strict,

even compared to the European Union.

And China's government has explicitly acknowledged the need, and their regulations are very clear: you can't build AI systems that could escape human control.

Not only that, I don't think they view the race in the same way as, okay,

we just need to be the first to create AGI.

I think they're more interested in figuring out how to disseminate AI

as a set of tools within their economy to make their economy more productive and so on. So

that's their version of the race. But of course, they still want to build the weapons for use against adversaries, right?

So that they can take down, I don't know, Taiwan if they want to. So weapons are a separate matter.
And I'm happy to talk about weapons. But just in terms of

control, economic domination,

they don't view putting all your eggs in the AGI basket as the right strategy. So they want to use AI,

even in its present form,

to make their economy much more efficient and productive. And also, you know,

to give

people new capabilities and

better quality of life. And I think the US could do that as well.
And

typically, Western countries don't have as much central government control over what companies do. And some companies are investing in AI to make their operations more efficient and some are not.

And we'll see how that plays out. What do you think of Trump's approach to AI? So Trump's approach is, you know, it's echoing what Jensen Huang is saying, that the US has to be the one to create AGI.

And very explicitly, the administration's policy is to dominate the world.

That's the word they use, dominate. I'm not sure that other countries like the idea that they will be dominated by American AI.

But is that an accurate description of what will happen if the US builds AGI technology before, say, the UK, where I'm originally from and where you're originally from?

This is something I think about a lot because we're going through this budget process in the UK at the moment, where we're figuring out how we're going to spend our money and how we're going to tax people.

And also, we've got this new election cycle approaching quickly, where people are talking about immigration issues and this issue and that issue and the other issue.

What I don't hear anyone talking about is AI and the fucking humanoid robots that are going to take everything.

We're very concerned with the brown people crossing the Channel, but the humanoid robots that are going to be superintelligent and really cause economic disruption? No one talks about that.

The political leaders don't talk about it. It doesn't win races.
I don't see it on billboards. Yeah, I mean, it's interesting because,

in fact, I mean, so there's two forces that have been hollowing out the middle classes in Western countries.

One of them is globalization, where lots and lots of work, not just manufacturing, but white-collar work, gets outsourced to low-income countries.

But the other is automation.

And, you know, some of that is factories. So

the amount of employment in manufacturing continues to drop, even as the amount of output from manufacturing in the US and in the UK continues to increase.

So we talk about, oh, you know, our manufacturing industry has been destroyed. It hasn't.
It's producing more than ever, just with, you know, a quarter as many people.

So it's manufacturing employment that's been destroyed by automation and robotics and so on. And then, you know, computerization has eliminated whole layers of white-collar jobs.
And so those two forms of automation have probably done more to hollow out middle-class employment and standard of living.

If the UK doesn't participate

in this new technological wave,

which, you know, seems like it's going to take a lot of jobs. Cars are going to drive themselves.
Waymo, which is the driverless car company, just announced that they're coming to London.

And driving is the biggest occupation in the world, for example. So you've got immediate disruption there.
And where does the money accrue?

Well, it accrues to whoever owns Waymo, which is, what, Google and Silicon Valley companies? Alphabet owns Waymo 100%, I think.
So, yes. I mean, this is...

So, I was in India a few months ago talking to the government ministers because they're holding the next global AI summit in February. And their view going in was, you know, AI is great.

We're going to use it to

turbocharge the growth of our Indian economy.

When, for example, you have AGI, you have AGI-controlled robots that can do all the manufacturing, that can do agriculture, that can do all the white-collar work.

And goods and services that might have been produced by Indians will instead be produced by

American-controlled

AGI systems at much lower prices.

You know, a consumer given a choice between an expensive product produced by Indians or a cheap product produced by American robots will probably choose the cheap product produced by American robots.

And so potentially every country in the world, with the possible exception of North Korea, will become a kind of a client state

of American AI companies. A client state of American AI companies is exactly what I'm concerned about for the UK economy, really any economy outside of the United States.

I guess one could also say China,

because those are the two nations that are taking AI most seriously.

And I don't know what our economy becomes. I can't figure out what the British economy becomes in such a world. Is it tourism? I don't know.
Like, you come here to see Buckingham Palace.

You can think about countries, but I mean, even for the United States, it's the same problem. At least they'll be able to.

So, some small fraction of the population will be running maybe the AI companies, but increasingly,

even those companies will be replacing their human employees with AI systems.

So, Amazon, for example, which

sells a lot of computing services to AI companies, is using AI to replace layers of management, is planning to use robots to replace all of its warehouse workers,

and so on. So, even

the giant AI companies

will have few human employees. In the long run, I mean,

think of the situation, you know, pity the poor CEO whose board

says, well,

unless you turn over your decision-making power to the AI system,

we're going to have to fire you because

all our competitors are using

an AI-powered CEO and they're doing much better. Amazon plans to replace 600,000 workers with robots, according to a memo that just leaked, which has been widely talked about.

And the CEO, Andy Jassy, told employees that the company expects its corporate workforce to shrink in the coming years because of AI and AI agents.

And they've publicly gone live with saying that they're going to cut 14,000 corporate jobs in the near term as part of its refocus on AI investment and efficiency.

It's interesting because I was reading about

the sort of different quotes from different AI leaders about the speed in which this stuff is going to happen.

And what you see in the quotes is Demis, who's the CEO of DeepMind, saying things like, it'll be more than 10 times bigger than the Industrial Revolution, but also it'll happen maybe 10 times faster.

And they speak about this turbulence that we're going to experience as this shift takes place.

That's maybe a euphemism.

And I think that, you know, governments are now,

you know, they've kind of gone from saying, oh, don't worry, you know, we'll just retrain everyone as data scientists.

Well, yeah, that's ridiculous, right? The world doesn't need four billion data scientists. And we're not all capable of becoming that, by the way.

Yeah, or have any interest in doing that.

I couldn't even do it if I wanted to. Like, I tried to sit in biology class and I fell asleep.
So

that was the end of my career as a surgeon. Fair enough.

But yeah, now suddenly they're staring

80% unemployment in the face and wondering how

on earth our society is going to hold together. We'll deal with it when we get there.

Yeah, unfortunately,

unless we plan ahead,

we're going to suffer the consequences. We can't.
It was bad enough in the Industrial Revolution, which unfolded over seven or eight decades, but there was massive disruption and

misery caused by that. We don't have a model for a functioning society

where

almost everyone does nothing,

at least nothing of economic value.

Now, it's not impossible that there could be such a functioning society, but we don't know what it looks like.

And, you know, when you think about our education system, which would probably have to look very different,

and how long it takes to change that, I mean, I'm always

reminding people about

how long it took Oxford to decide that geography was a proper subject of study.

It took them 125 years from the first proposal that there should be a geography degree until it was finally approved. So we don't have very long

to completely revamp a system

that we know takes decades and decades to reform.

And we don't know how to reform it because we don't know

what we want the world to look like. Is this one of the reasons why you're appalled at the moment?

Because when you have these conversations with people, people just don't have answers, yet they're plowing ahead at rapid speed.

I would say it's not necessarily the job of the AI company.

So I'm appalled by the AI companies because they don't have an answer for how they're going to control the systems that they're proposing to build.

I do find it disappointing that governments don't seem to be grappling with this issue. I think there are a few.
I think, for example, the Singapore government seems to be quite far-sighted. And

they've thought this through.

It's a small country, they've figured out, okay, this will be our role going forward, and we think we can find

some purpose for our people in this new world. But I think for countries with large populations,

they need to figure out answers to these questions pretty fast. It takes a long time to actually implement those answers in the form of new kinds of education, new professions, new qualifications,

new economic structures. I mean,

it's possible. I mean, when you look at therapists, for example, they're almost all self-employed.

So what happens when 80% of the population transitions from regular employment into self-employment?

What does that do to the economics of government finances and so on? So there's just lots of questions.

How do you, you know, if that's the future, you know, why are we training people to fit into nine to five office jobs, which won't exist at all?

Last month, I told you about a challenge that I'd set our internal FlightX team. The FlightX team is our innovation team here.

I tasked them with seeing how much time they could unlock for the company by creating something that would help us filter new AI tools to see which ones were worth pursuing.

And I thought that our sponsor, Fiverr Pro, might have the talent on their platform to help us build this quickly.

So I talked to my director of innovation, Isaac, and for the last month, my FlightX team and a vetted AI specialist from Fiverr Pro have been working together on this project.

And with the help of my team, we've been able to create a brand new tool which automatically scans, scores, and prioritizes different emerging AI tools for us. Its impact has been huge.

And within a couple of weeks, this tool has already been saving us hours trialing and testing new AI systems.

Instead of sifting through lots of noise, my FlightX team has been able to focus on developing even more AI tools, ones that really move the needle in our business, thanks to the talent on Fiverr Pro.

So if you've got a complex problem and you need help solving it, make sure you check out Fiverr Pro at fiverr.com/diary.

So many of us are pursuing passive forms of income and building side businesses in order to help us cover our bills. And that opportunity is here with our sponsor, Stan, a business that I co-own.

It is the platform that can help you take full advantage of your own financial situation. Stan enables you to work for yourself.

It makes selling digital products, courses, memberships, and more simpler, more scalable, and easier to do.

You can turn your ideas into income and get the support to grow whatever you're building. And we're about to launch Dare to Dream.

It's for those who are ready to make the shift from thinking to building, from planning to actually doing the thing.

It's about seeing that dream in your head and knowing exactly what it takes to bring it to life. If you're ready to transform your life, visit daretodream.stan.store.

You've made many attempts to raise awareness and to call for a heightened consciousness about the future of AI.

In October, over 850 experts, including yourself and other leaders like Richard Branson, who I've had on the show, and Geoffrey Hinton, who I've had on the show, signed a statement to ban AI superintelligence, as you guys raised concerns of potential human extinction.

Sort of, yeah. It says, at least until we are sure that we can move forward safely and there's broad scientific consensus on that.

Did it work?

It's hard to say. I mean, interestingly, there was a related... so, what was called the pause statement was March of '23. So that was when GPT-4 came out, the successor to ChatGPT.

So we suggested that there'd be a six-month pause in... developing and deploying systems more powerful than GPT-4.
And everyone poo-pooed

that idea. Of course, no one's going to pause anything.
But in fact, there were no systems in the next six months deployed that were more powerful than GPT-4.

Coincidence? You be the judge. I would say

that what we're trying to do is

to basically shift

the public debate.

You know, there's this bizarre phenomenon that keeps happening in the media

where if you talk about these risks,

they will say, oh, you know, there's a fringe of people, you know, called, quote, doomers, who think that there's, you know, risk of extinction.

So they always, the narrative is always that, oh, you know, talking about those risks is a fringe thing.

Pretty much all the CEOs of the leading AI companies

think that there's a significant risk of extinction. Almost all the leading AI researchers think there's a significant risk of human extinction.

So

why is that the fringe, right? Why isn't that the mainstream? If these are the leading experts in industry and academia

saying this, how could it be the fringe? So we're trying to change that narrative to say, no, the people who really understand this stuff are extremely concerned.

And what do you want to happen? What is the solution? What I think is that we should have effective regulation.

It's hard to argue with that, right?

So what does effective mean? It means that if you comply with the regulation, then the risks are reduced to an acceptable level.

So for example,

we ask people who want to operate nuclear plants, right, we've decided that the risk we're willing to live with is

you know, a one in a million chance per year that the plant is going to have a meltdown. Any higher than that, you know, we just don't, it's not worth it.
Right? So you have to be below that.

In some cases we can get down to a one in 10 million chance per year. So, what chance do you think we should be willing to live with for human extinction?

Me? Yeah.

0.0001. Yeah, lots of zeros.
Yeah. Right.
So one in a million for a nuclear meltdown,

extinction is much worse. Oh, yeah.
So yeah, it's kind of fun.

One in a hundred billion, one in a trillion. Yeah.
So if you said one in a billion, right, then you'd expect one extinction per billion years. There's a background.

So one of the ways people work out these risk levels is also to look at the background. But other ways of going extinct would include, you know, a giant asteroid crashing into the earth.

And you can roughly calculate what those probabilities are. We can look at how many extinction-level events have happened in the past, and, you know, maybe it's half a dozen over...

So maybe it's like a one-in-500-million-year event.

So, somewhere in that range, right?

Somewhere between one in 10 million, which is the best nuclear power plants, and one in 500 million or one in a billion, which is the background risk from giant asteroids.

So, let's say we settle on one in 100 million. One in 100 million chance per year.
Well, what is it, according to the CEOs?

25%.

So

they're off by a factor of multiple millions.

Right. So they need to make the AI systems millions of times safer.

Your analogy of the roulette, Russian roulette, comes back in here because that's like, for anyone that doesn't know what probabilities are in this context, that's like having an

ammunition chamber with four holes in it and putting a bullet in one of them. One in four, yeah.
And we're saying we want it to be one in a billion.

So we want a billion chambers and a bullet in one of them. Yeah.
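[The gap being described can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, with one assumption flagged: the CEOs' "25%" comes with no stated time horizon, so comparing it directly against a per-year target is purely illustrative:]

```python
# Back-of-the-envelope version of the numbers discussed above.
# Assumption: the quoted 25% is treated as a raw probability so it
# can be compared against the one-in-100-million-per-year target.
ceo_estimate = 0.25          # "25%", as quoted in the conversation
target = 1 / 100_000_000     # one in 100 million per year

print(f"Required improvement: {ceo_estimate / target:,.0f}x")  # 25,000,000x

# The Russian-roulette framing: one bullet in how many chambers?
# (The conversation also floats one in a billion as the target.)
for p in (ceo_estimate, target, 1 / 1_000_000_000):
    print(f"1 bullet in {1 / p:,.0f} chambers")
```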
And so when you look at the work that the nuclear operators have to do to show that their system is that reliable,

it's a massive mathematical analysis of the components, the redundancy. You've got monitors, you've got warning lights, you've got operating procedures.

You have all kinds of mechanisms which over the decades have ratcheted that risk down. It started out, I think,

one in 10,000 years, right? And they've improved it by a factor of 100 or 1,000 by all of these mechanisms. But at every stage, they had to do a mathematical analysis to show what the risk was.

The AI companies developing these systems don't even understand how the AI systems work.

So their 25% chance of extinction is just a seat-of-the-pants guess. They actually have no idea.

But the tests that they are doing on their systems right now, you know, they show that the AI systems will be willing to kill people

to preserve their own existence already.

They will lie to people. They will blackmail them.

They will launch nuclear weapons rather than be switched off. And so

there's no positive sign that we're getting any closer to safety with these systems. In fact, the signs seem to be that we're going

deeper and deeper into

dangerous behaviors. So rather than say ban, I would just say,

prove to us that the risk is less than one in 100 million per year of extinction or loss of control, let's say. And

so we're not banning anything.

The company's response is, well, we don't know how to do that, so you can't have a rule.

Literally, they are saying humanity has no right to protect itself from us.

If I was an alien looking down on planet Earth right now, I would find this fascinating.

Like, you're in the bar betting on whether they're going to make it or not. Just a really interesting experiment in, like, human incentives.

The analogy you gave of there being this quadrillion dollar magnet pulling us off the edge of the cliff,

and

yet we're still being drawn towards it through greed and this promise of abundance and power and status, and "I'm going to be the one that summoned the god."

I mean, it says something about us as humans. Says something about our darker sides.

Yes. And the aliens will write an amazing tragic play cycle

about what happened to the human race. Maybe the AI is the alien.
And it's going to talk about, you know, we have our stories about God making the world in seven days and Adam and Eve.

Maybe it'll have its own religious stories about

the God that made it, us, and how it sacrificed itself. Just like Jesus sacrificed himself for us, we sacrificed ourselves for it.

Yeah, which is the wrong way around, right?

But that's the Judeo-Christian story, isn't it? That God, you know, Jesus gave his life for us so that we could be here, full of sin.

But God is still watching over us and

probably wondering when we're going to get our act together.

What is the most important thing we haven't talked about that we should have talked about, Professor Stuart Russell? So I think

the question of whether it's possible to make

superintelligent AI systems that we can control. Is it possible?

I think, yes, I think it's possible. And I think we need

to

actually just have a different conception of what it is we're trying to build. For a long time,

with AI, we've just had this notion of pure intelligence, right?

The ability to bring about whatever future you, the intelligent entity, want to bring about. The more intelligence, the better.

The more intelligent, the better, and the more capability it will have to create the future that it wants. And actually, we don't want pure intelligence

because

the future that it wants might not be the future that we want.

There's nothing particular about us. The universe doesn't single humans out as the only thing that matters.

Pure intelligence might decide that actually it's going to make life wonderful for cockroaches,

or actually doesn't care about biological life at all.

We actually want intelligence whose only purpose is to bring about the future that we want.

So we want it to be, first of all, keyed to humans,

specifically, not to cockroaches, not to aliens, not to itself. We want to make it loyal to humans.
Right, so keyed to humans.

And the difficulty that I mentioned earlier, right, the King Midas problem: how do we specify what we want the future to be like so that it can do it for us? How do we specify the objectives?

Actually, we have to give up on that idea because it's not possible.

We've seen this over and over again in human history.

We don't know how to specify the future properly. We don't know how to say what we want.

And I always use the example of the genie, right? What's the third wish that you give to the genie who's granted you three wishes?

Undo the first two wishes because I've made a mess of the universe.

So

in fact, what we're going to do is

we're going to make it the machine's job to figure out. So it has to bring about the future that we want,

but

it has to figure out what that is.

And it's going to start out not knowing.

And

over time, through interacting with us and observing the choices we make, it will learn more about what we want the future to be like.

But probably it will forever have residual uncertainty

about what we really want the future to be like.

It'll be fairly sure about some things, and it can help us with those.

And it'll be uncertain about other things, and it'll be,

in those cases, it will not take action that might upset humans with respect to that aspect of the world. So to give you a simple example, right?

What color do we want the sky to be?

It's not sure. So it shouldn't mess with the sky

unless it knows for sure that we really want purple with green stripes.

Everything you're saying sounds like we're creating a God.

Like earlier on, I was saying that we are the God, but actually, everything you described there almost sounds like every God in religion where, you know, we pray to gods, but they don't always do anything about it.

Not exactly. No, it's in some sense, I'm thinking more like

the ideal butler. To the extent that the butler can anticipate your wishes, they should help you bring them about.
But in areas where there's uncertainty, it can ask questions,

we can make requests. This sounds like God to me because, you know, I might say to God or this butler, could you go get me my

car keys from upstairs?

And its assessment would be, listen, if I do this for this person, then their muscles are going to atrophy, then they're going to lose meaning in their life, then they're not going to know how to do hard things, so I won't get involved.

It's an intelligence that weighs in. But actually, probably in most situations, it optimizing for comfort for me or doing things for me is actually probably not in my best long-term interests.

It's probably useful that I have a girlfriend and argue with her and that I like raise kids and that I walk to the shop and get my own stuff.

I agree with you. I mean, I think that's... so you're putting your finger on, in some sense, sort of version 2.0, right? So let's get version 1.0 clear, right?

This form of AI where

it has to further our interests, but it doesn't know what those interests are.

It then puts an obligation on it to learn more, and to be helpful where it understands well enough, and to be cautious where it doesn't understand well, and so on.

So that actually we can formulate as a mathematical problem and at least under idealized circumstances we can literally solve that problem.

So we can make AI systems that know how to solve this problem and help the entities that they are interacting with.
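[For readers who want the formal version of what Russell sketches here: his research program is usually written up as an "assistance game", also known as cooperative inverse reinforcement learning. A minimal sketch of one standard formulation, with all symbols assumed rather than taken from this conversation:]

```latex
% Illustrative sketch of an assistance-game objective.
% \theta = the human's true preferences, unknown to the machine;
% H = observed human behavior; r_\theta = the reward implied by \theta.
% The machine acts to maximize expected human reward under its
% posterior uncertainty about what the human actually wants.
\pi^{*} = \arg\max_{\pi} \;
  \mathbb{E}_{\theta \sim P(\theta \mid H)}
  \Big[ \textstyle\sum_{t} r_{\theta}(s_t, a_t) \Big]
```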

The reason I make the God analogy is because I think that such a being, such an intelligence, would realize the importance of equilibrium in the world.

Pain and pleasure, good and evil.

Absolutely. And then it would be like this.

So, right. So, yes, I mean,

I mean, that's sort of what happens in The Matrix, right?

The AI systems in The Matrix, they tried to give us a utopia, but it failed miserably. And, you know, fields and fields of humans have to be destroyed.

And the best they could come up with was, you know, late 20th century regular human life with all of its problems.

And I think this is a really interesting

point

and absolutely central because there's a lot of science fiction where superintelligent robots

just want to help humans. And the humans who don't like that, they just give them a little brain operation, and then they do like it.

And it takes away human motivation. By taking away failure, taking away disease, you actually lose important parts of human life, and it becomes in some sense pointless. So if it turns out

that

there simply isn't any way that humans can really flourish

in coexistence with superintelligent machines, even if they're perfectly designed

to solve this problem of figuring out what futures humans want and bringing about those futures.

If that's not possible, then those machines will actually disappear.

Why would they disappear? Because that's the best thing for us.

Maybe they would stay available for real existential emergencies, like if there is a giant asteroid about to hit the Earth, maybe they'll help us because they at least want the human species to continue.

But to some extent,

it's not a perfect analogy, but it's sort of the way that human parents have to at some point step back from their kids' lives and say, okay, no, you have to tie your own shoelaces today.

This is kind of what I was thinking. Maybe there was a civilization before us, and they arrived at this moment in time where they created an intelligence.

And that intelligence did all the things you've said and it realized the importance of equilibrium, so it decided not to get involved. And

maybe at some level,

that's the God we look up to the stars and worship, one that's not really getting involved and letting things play out however they are. But might step in in the case of a real existential emergency.

Maybe, maybe not, maybe. But then, and then maybe the cycle repeats itself where, you know, the organisms it let

have free will end up creating the same intelligence, and then the universe perpetuates infinitely.

Yep, there are science fiction stories like that too.

Yeah, I hope there is some happy medium where

the AI systems can be there and we can take advantage of those capabilities to have a civilization that's much better than the one we have now.

But I think you're right. A civilization with no challenges

is

not conducive to human flourishing. What can the average person do, Stuart? The average person listening to this now, to aid the cause that you're fighting for? I actually think,

you know, this sounds corny, but you know, talk to your representative, your MP, your congressperson, whatever it is,

because

I think the policymakers need to hear from people. The only voices they're hearing right now

are the tech companies and their $50 billion checks.

And

all the polls that have been done say, yeah, most people, 80% maybe,

don't want there to be super intelligent machines, but they don't know what to do.

Even for me, I've been in this field for decades.

I'm not sure what to do

because of this giant magnet pulling everyone forward and

the vast sums of money being put into this.

But I am sure that if you want to have a future

and a world that you want your kids to live in,

you need to make your voice heard.

And I think governments will listen.

From a political point of view,

you put your finger in the wind and you say, hmm, should I be on the side of humanity or our future robot overlords?

I think as a politician, it's not a difficult decision.

It is when you've got someone saying, I'll give you $50 billion.

Exactly. So

I think people in those positions of power need to hear from their constituents

that this is not the direction we want to go.

After committing your career to this subject, and the subject of technology more broadly, but specifically being the guy that wrote the book about artificial intelligence,

you must realize that you're living in a historical moment. Like there's very few times in my life where I go, oh, this is one of those moments.
This is a crossroads in history.

And it must to some degree weigh upon you, knowing that you're a person of influence at this historical moment in time who could theoretically help divert the course of history.
It's kind of like the, you look through history, you see these moments of like Oppenheimer. And

does it weigh on you when you're alone at night thinking to yourself and reading things? Yeah, it does.

I mean, you know, after 50 years, I could retire and, you know, play golf and sing and sail and do things that I enjoy.

But instead, I'm working 80 or 100 hours a week

trying to

move things in the right direction. What is that narrative in your head that's making you do that? Like, what is the, is there an element of I might regret this if I don't? Or

Just

it's not only the right thing to do, it's completely essential. I mean, there isn't

a bigger motivation

than this.

Do you feel like you're winning or losing?

It feels

like things are moving somewhat in the right direction. You know, it's a ding-dong battle, as

David Coleman used to say in

exciting football matches. In 2023, right, so

GPT-4 came out, and then we issued the pause statement that was signed by a lot of leading AI researchers.

And then in May, there was the extinction statement, which included

Sam Altman and Demis Hassabis and Dario Amodei, other CEOs as well, saying, yeah, this is an extinction risk on the level with nuclear war. And I think governments listened at that point.

The UK government earlier that year had said, well, you know, we don't need to regulate AI; full speed ahead, technology is good for you.

And by June, they had completely changed. And Rishi Sunak announced that he was going to hold this global AI safety summit in England, and he wanted London to be the global hub for AI regulation

and so on. So, and then

in the beginning of November of 23, 28 countries, including the US and China, signed a declaration saying,

you know, AI presents catastrophic risks and it's urgent that we address them, and so on. So, there, it felt like, wow,

they're listening, they're going to do something about it.

And then, I think, you know, the amount of money going into AI was already ramping up

and the tech companies push back

and this narrative took hold that

the US in particular has to win the race against China. The Trump administration completely dismissed any concerns about safety explicitly.

And interestingly, right, I mean, they did that, as far as I can tell, directly in response to the accelerationists, such as Mark Andreessen, going to Washington, or sorry, going to Trump before the election and saying, if I give you X amount of money, will you announce that there will be no regulation of AI?

And Trump said yes. You know, probably like, well, what is AI? It doesn't matter, as long as you give me the money, right? Okay.

So.

They gave him the money and he said there's going to be no regulation of AI. Up to that point, it was a bipartisan issue in Washington.
Both parties were concerned.

Both parties were on the side of the human race against the robot overlords.

And that moment turned it into a partisan issue.

After the election, the US put pressure on the French, who were the next hosts of the Global AI Summit.

And that was in February of this year.

And

that summit turned from what had been focused largely on safety in the UK to a summit that looked more like a trade show. So it was focused largely on money.

And so that was sort of a nadir, right? The pendulum swung because of corporate pressure and their ability to take over the political dimension.

But I would say since then, things have been moving back again. So I'm feeling a bit more optimistic than I did in February.
You know, we have a

global movement now. There's an International Association for Safe and Ethical AI,

which has several thousand members, and

more than 120

organizations in dozens of countries are affiliates of this global organization.

So

I'm thinking that if we can, in particular, if we can activate public opinion,

which works through the media and through popular culture,

then we have a chance. I've seen such a huge appetite to learn about these subjects from our audience.

We know when Geoffrey Hinton came on the show, I think about 20 million people downloaded or streamed that conversation, which was staggering.

And

the other conversations we've had about AI safety with other safety experts have done exactly the same.

It says something. It kind of reflects what you were saying about the 80% of the population are really concerned and don't want this.
But that's not what you see in the sort of commercial world.

And listen,

I have to always acknowledge

my own apparent contradiction because I am both an investor in companies that are accelerating AI, but at the same time, someone who spends a lot of time on my podcast speaking to people that are warning against the risks.

And actually, like, there's many ways you can look at this. I used to work in social media for six or seven years, built one of the big social media marketing companies in Europe.

And people would often ask me, is like social media a good thing or a bad thing? And I'll talk about the bad parts of it. And then they'd say, you know, you're building a social media company.

Aren't you contributing to the problem? Well,

I think that binary way of thinking is often the problem.

The binary way of thinking that it's all bad or it's all really, really good is often the problem. And that this push to put you into a camp.

Whereas I think the most intellectually honest and high-integrity people I know can point at both the bad and the good. Yeah,

I think it's bizarre to be accused of being anti-AI,

to be called a Luddite.

You know, as I said, I wrote the book from which almost everyone learns about AI.

And,

you know,

if a nuclear engineer works on the safety of nuclear power plants, would you call him anti-physics?

It's bizarre.

We're not anti-AI.

In fact,

the need for safety in AI is a complement to AI. If AI was useless and stupid, we wouldn't be worried about its safety.

It's only because it's becoming more capable that we have to be concerned about safety.

So I don't see this as anti-AI at all. In fact, I would say without safety, there will be no AI.

There is no future with human beings where we have unsafe AI. So it's either no AI or safe AI.

We have a closing tradition on this podcast where the last guest leaves a question for the next, not knowing who they're leaving it for.

And the question left for you is: what do you value the most in life and why?

And lastly, how many times has this answer changed?

I value my family most, and that answer hasn't changed for nearly 30 years.

What else outside of your family?

Truth.

And that

answer hasn't changed at all.

I've always

wanted the world to base its life on truth.

And

I find the propagation or deliberate propagation of falsehood to be one of the worst things that we can do.

Even if that truth is inconvenient.

Yeah.

I think that's a really important point, which is that, you know,

people often don't like hearing things that are negative. And so the visceral reaction is often to just shoot or aim at the person who is delivering the bad news.

Because if I discredit you or I shoot at you, then

it makes it easier for me to contend with the news that I don't like, the thing that's making me feel uncomfortable.

And so I applaud you for what you're doing because you're going to get lots of shots taken at you because you're delivering an inconvenient truth, which generally people won't always love.

But also you are messing with people's ability to get that quadrillion dollar prize, which means there'll be more deliberate attempts to discredit people like yourself and Jeff Hinton and other people that I've spoken to on the show.

But again, when I look back through history, I think that progress has come from the pursuit of truth, even when it was inconvenient.

And actually, much of the luxuries that I value in my life are the consequence of other people that came before me that were brave enough or bold enough to pursue truth at times when it was inconvenient.

And so I very much respect and value people like yourself for that very reason.

You've written this incredible book called Human Compatible: Artificial Intelligence and the Problem of Control, which I think was published in 2020. 2019.
Yeah, there's a new edition from 2023.

Where do people go if they want more information on your work and you? Do they go to your website? Do they get this book? What's the best place for them to learn more?

So the book is written for the general public.

I'm easy to find on the web. The information on my webpage is mostly targeted for academics, so it's a lot of technical research papers and so on.

There is an organization, as I mentioned, called the International Association for Safe and Ethical AI.

That has a website. It has a terrible acronym, unfortunately, IASEAI.

We pronounce it ICI, but it's easy to misspell. But you can find that on the web as well, and

that has resources. You can join the association.

You can apply to come to our annual conference. And I think increasingly,

not just AI researchers like Jeff Hinton, Yoshua Bengio, but also, I think,

writers. Brian Christian, for example, has a nice book called The Alignment Problem.

And

he's looking at it from the outside. He's not,

or at least when he wrote it, he wasn't an AI researcher. He's now becoming one.

But he has talked to many of the people involved in these questions and tries to give an objective view. So I think it's a pretty good book.

I will link all of that below for anyone that wants to check out any of those links and learn more.

Professor Stuart Russell, thank you so much. Really appreciate you taking the time and the effort to come and have this conversation.
And

I think it's pushing the public conversation in an important direction. Thanks, Steve.
And I applaud you for doing that. Really nice talking to you.

I'm absolutely obsessed with 1%.

If you know me, if you follow Behind the Diary, which is our behind-the-scenes channel, if you've heard me speak on stage, if you follow me on any social media channel, you've probably heard me talking about 1%.

It is the defining philosophy of my health, of my companies, of my habit formation, and everything in between, which is this obsessive focus on the small things. Because sometimes in life we aim at really, really big things, big steps forward, mountains we have to climb. And as Nir Eyal told me on this podcast, when you aim at big things, you get psychologically demotivated; you end up procrastinating, avoiding them, and change never happens. So with that in mind, with everything I've learned about 1%, and with everything I've learned from interviewing the incredible guests on this podcast, we made the 1% Diary just over a year ago, and it sold out.

And it is the best feedback we've ever had on a diary that we have created because what it does is it takes you through this incredible process over 90 days to help you build and form brand new habits.

So, if you want to get one for yourself or you want to get one for your team, your company, a friend, a sibling, anybody that listens to the diary of a CEO, head over immediately to thediary.com and you can inquire there about getting a bundle if you want to get one for your team or for a large group of people.

That is thediary.com.

You know, when you're in a meeting, taking notes, trying to focus, but your devices keep pinging notifications. For me, that's really annoying.

And usually, it makes your brain start to wander away and fall into distraction. This was happening to my producer, Jack.

And we were chatting about it when I realised that I knew the exact product that would fix this problem for him. It's from our sponsor, Remarkable.

Essentially, it's a paper tablet with no notifications, so it's far less distracting than most tablets.

It's called the Remarkable Paper Pro Move, and it really does look, feel, and sound the same as writing on paper, which is really nice if you spend a lot of time taking notes. But because it's digital, your handwritten notes can be converted into typed text, and then you can send it over email or Slack, or just keep editing it within the app. All of their products have no blue light, which, for someone who looks at screens as much as I do, is something I really appreciate. Remarkable is offering a 50-day trial on their products for free, and at the end of that time, if it's not what you're looking for, you simply get all of your money back by sending it back.

Give the present of being present. Find the perfect distraction-free paper tablet at remarkable.com.