Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030 & Proof We're Living In a Simulation!
Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term “AI safety” in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’.
He explains:
⬛ How AI could release a deadly virus
⬛ Why these 5 jobs might be the only ones left
⬛ How 'superintelligence' will dominate humans
⬛ Why ‘superintelligence’ could trigger a global collapse by 2027
⬛ How AI could be worse than nuclear weapons
⬛ Why we’re almost certainly living in a simulation
00:00 Intro
02:41 How to Stop AI from Killing Everyone
04:48 What's the Probability Something Goes Wrong?
05:10 How Long Have You Been Working on AI Safety?
08:28 What Is AI?
10:07 Prediction for 2027
11:51 What Jobs Will Actually Exist?
14:40 Can AI Really Take All Jobs?
19:02 What Happens When All Jobs Are Taken?
20:45 Is There a Good Argument Against AI Replacing Humans?
22:17 Prediction for 2030
24:11 What Happens by 2045?
25:50 Will We Just Find New Careers and Ways to Live?
29:05 Is Anything More Important Than AI Safety Right Now?
30:20 Can't We Just Unplug It?
31:45 Do We Just Go With It?
37:34 What Is Most Likely to Cause Human Extinction?
39:58 No One Knows What's Going On Inside AI
41:43 Ads
42:45 Thoughts on OpenAI and Sam Altman
46:37 What Will the World Look Like in 2100?
47:09 What Can Be Done About the AI Doom Narrative?
54:08 Should People Be Protesting?
56:24 Are We Living in a Simulation?
61:58 How Certain Are You We're in a Simulation?
67:58 Can We Live Forever?
72:33 Bitcoin
74:16 What Should I Do Differently After This Conversation?
75:20 Are You Religious?
77:25 Do These Conversations Make People Feel Good?
80:23 What Do Your Strongest Critics Say?
81:49 Closing Statements
82:21 If You Had One Button, What Would You Pick?
83:49 Are We Moving Toward Mass Unemployment?
84:50 Most Important Characteristics
Follow Dr Roman:
X - https://bit.ly/41C7f70
Google Scholar - https://bit.ly/4gaGE72
You can purchase Dr Roman’s book, ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’, here: https://amzn.to/464J2HR
The Diary Of A CEO:
⬛ Join DOAC circle here - https://doaccircle.com/
⬛ Buy The Diary Of A CEO book here - https://smarturl.it/DOACbook
⬛ The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt
⬛ The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb
⬛ Get email updates - https://bit.ly/diary-of-a-ceo-yt
⬛ Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb
Sponsors:
Pipedrive - http://pipedrive.com/CEO
KetoneIQ - Visit https://ketone.com/STEVEN for 30% off your subscription order
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
There's only one place where history, culture, and adventure meet on the National Mall.
Where museum days turn to electric lights.
Where riverside sunrises glow and monuments shine in moonlight.
Where there's something new for everyone to discover.
There's only one DC.
Visit Washington.org to plan your trip.
You've been working on AI safety for two decades at least.
Yeah.
I was convinced we can make safe AI, but the more I looked at it, the more I realized it's not something we can actually do.
You have made a series of predictions about a variety of different dates.
So what is your prediction for 2027?
Dr. Roman Yampolskiy is a globally recognized voice on AI safety and an associate professor of computer science.
He educates people on the terrifying truth of AI and what we need to do to save humanity.
In two years, the capability to replace most humans in most occupations will come very quickly.
I mean, in five years, we're looking at a world where we have levels of unemployment we've never seen before.
Not talking about 10%, but 99%.
And that's without superintelligence.
A system smarter than all humans in all domains.
So it would be better than us at making new AI.
But it's worse than that.
We don't know how to make them safe.
And yet we still have the smartest people in the world competing to win the race to superintelligence.
What do you make of people like Sam Altman's journey with AI?
So, a decade ago, we published guardrails for how to do AI right.
They violated every single one.
And he's gambling 8 billion lives and getting richer and more powerful.
So, I guess some people want to go to Mars, others want to control the universe.
But it doesn't matter who builds it.
The moment you switch to superintelligence, we will most likely regret it terribly.
And then, by 2045.
Now, this is where it gets interesting.
Dr. Roman Yampolskiy, let's talk about simulation theory.
I think we are in one.
And there is a lot of agreement on this.
And this is what you should be doing in it so we don't shut it down.
First.
Just give me 30 seconds of your time.
Two things I wanted to say.
The first thing is a huge thank you for listening and tuning into the show week after week.
It means the world to all of us.
And this really is a dream that we absolutely never had and couldn't have imagined getting to this place.
But secondly, it's a dream where we feel like we're only just getting started.
And if you enjoy what we do here, please join the 24% of people that listen to this podcast regularly and follow us on this app.
Here's a promise I'm going to make to you: I'm going to do everything in my power to make this show as good as I can now and into the future.
We're going to deliver the guests that you want me to speak to, and we're going to continue to keep doing all of the things you love about this show.
Thank you.
Dr. Roman Yampolskiy.
What is the mission that you're currently on?
Because it's quite clear to me that you are on a bit of a mission, and you've been on this mission for, I think, the best part of two decades at least.
I'm hoping to make sure that superintelligence we are creating right now does not kill everyone.
Give me some context on that statement, because it's quite a shocking statement.
Sure.
So in the last decade, we actually figured out how to make artificial intelligence better.
Turns out, if you add more compute, more data, it just kind of becomes smarter.
And so now, the smartest people in the world, billions of dollars, all going to create the best possible superintelligence we can.
Unfortunately, while we know how to make those systems much more capable, we don't know how to make them safe,
how to make sure they don't do something we will regret.
And that's the state of the art right now.
Then we look at just prediction markets.
How soon will we get to advanced AI?
The timelines are very short, a couple of years, two, three years, according to prediction markets, according to CEOs of top labs.
And at the same time,
we don't know how to make sure that the systems are aligned with our preferences.
So we are creating this alien intelligence.
If aliens were coming to Earth and you had three years to prepare,
you would be panicking right now.
But most people don't even realize this is happening.
So, some of the counter-arguments might be: well, these are very, very smart people.
These are very big companies with lots of money.
They have an obligation and a moral obligation, but also just a legal obligation to make sure they do no harm.
So, I'm sure it'll be fine.
The only obligation they have is to make money for the investors.
That's the legal obligation they have.
They have no moral or ethical obligations.
Also, according to them, they don't know how to do it yet.
The state-of-the-art answers are: we'll figure it out when we get there, or AI will help us control more advanced AI.
That's insane.
In terms of probability, what do you think is the probability that something goes catastrophically wrong?
So nobody can tell you for sure what's going to happen.
But if you're not in charge, you're not controlling it, you will not get outcomes you want.
The space of possibilities is almost infinite.
The space of outcomes we will like is tiny.
And who are you, and how long have you been working on this?
I'm a computer scientist by training.
I have a PhD in computer science and engineering.
I probably started work in AI safety, mildly defined as control of bots at the time, 15 years ago.
15 years ago.
So you've been working on AI safety before it was cool?
Before the term existed, I coined the term AI safety.
So you're the founder of the term AI safety?
The term, yes, not the field.
There are other people who did brilliant work before I got there.
Why were you thinking about this 15 years ago?
Because most people have only been talking about the term AI safety for the last two or three years.
Yeah, it started very mildly just as a security project.
I was looking at poker bots.
And I realized that the bots are getting better and better.
And if you just project this forward enough,
they're going to get better than us, smarter, more capable.
And it happened.
They are playing poker way better than average players.
But more generally, it will happen with all other domains, all the other cyber resources.
I wanted to make sure AI is a technology which is beneficial for everyone.
So I started to work on making AI safer.
Was there a particular moment in your career where you thought, oh my God?
For the first five years at least, I was working on solving this problem.
I was convinced we can make this happen, we can make safe AI.
And that was the goal.
But the more I looked at it, the more I realized every single component of that equation is not something we can actually do.
And the more you zoom in, it's like a fractal.
You go in and you find 10 more problems, and then 100 more problems.
And all of them are not just difficult, they're impossible to solve.
There is no seminal work in this field where, like, we solve this, we don't have to worry about this.
There are patches, there are little fixes we put in place, and quickly people find ways to work around them.
They jailbreak whatever safety mechanisms we have.
So, while progress in AI capabilities is exponential or maybe even hyper-exponential, progress in AI safety is linear or constant.
The gap is increasing.
The gap between how capable the systems are and how well we can control them, predict what they're going to do, and explain their decision-making.
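To make the shape of that claim explicit, here is a minimal sketch; the functional forms and constants are illustrative assumptions, not anything measured in the conversation. If capability grows exponentially, \(C(t) = C_0 e^{kt}\), while safety progress grows linearly, \(S(t) = S_0 + mt\), then
\[
\mathrm{Gap}(t) = C_0 e^{kt} - (S_0 + mt) \to \infty \quad \text{as } t \to \infty,
\]
since the exponential's rate of change \(k C_0 e^{kt}\) eventually exceeds any fixed linear rate \(m\), for any positive constants.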
I think this is quite an important point because you said that we're basically patching over the issues that we find.
So, we're developing this core intelligence, and then to stop it doing things
or to stop it showing some of its unpredictability or its threats, the companies that are developing this AI are programming in code over the top to say, okay, don't swear, don't say that rude word, don't do that bad thing.
Exactly.
And you can look at other examples of that.
So HR manuals, right?
We have those humans, they're general intelligences, but you want them to behave in a company.
So they have a policy, no sexual harassment, no this, no that.
But if you're smart enough, you always find a workaround.
So you're just pushing behavior into a different, not yet restricted subdomain.
We should probably define some terms here.
So there's narrow intelligence, which can play chess or whatever.
There's the artificial general intelligence, which can operate across domains.
And then super intelligence, which is smarter than all humans in all domains.
And where are we?
So that's a very fuzzy boundary, right?
We definitely have many excellent narrow systems, no question about it.
And they are super intelligent in that narrow domain.
So protein folding is a problem which was solved using narrow AI, and it's superior to all humans in that domain.
In terms of AGI, again, I said if we showed what we have today to a scientist from 20 years ago, they would be convinced we have full-blown AGI.
We have systems which can learn, they can perform in hundreds of domains, and they're better than humans in many of them.
So you can argue we have a weak version of AGI.
Now, we don't have super intelligence yet.
We still have brilliant humans who are completely dominating AI, especially in science and engineering.
But that gap is closing so fast.
You can see, especially in the domain of mathematics, three years ago, large language models couldn't do basic algebra.
Multiplying three-digit numbers was a challenge.
Now they're helping with mathematical proofs.
They're winning mathematics Olympiad competitions.
They are working on solving Millennium Prize Problems, the hardest problems in mathematics.
So in three years, we closed the gap from subhuman performance to better than most mathematicians in the world.
And we see the same process happening in science and in engineering.
You have made a series of predictions, and they correspond to a variety of different dates.
I have those dates in front of me here.
What is your prediction for the year 2027?
We're probably looking at AGI, as predicted by prediction markets and top labs.
So we'd have artificial general intelligence by 2027.
And how would that make the world different
to how it is now?
So if you have this concept of a drop-in employee, you have free labor, physical and cognitive, trillions of dollars of it.
It makes no sense to hire humans for most jobs.
If I can just get a $20 subscription or a free model to do what an employee does, why would I hire anyone? First, anything on a computer will be automated.
And next, I think humanoid robots are maybe five years behind.
So in five years, all the physical labor can also be automated.
So we're looking at a world where we have levels of unemployment we've never seen before.
I'm not talking about 10% unemployment, which is scary, but 99%.
All you have left is jobs where, for whatever reason, you prefer another human would do it for you.
But anything else can be fully automated.
It doesn't mean it will be automated in practice.
A lot of times, technology exists, but it's not deployed.
Video phones were invented in the 70s.
Nobody had them until iPhones came around.
So we may have a lot more time with jobs and with the world which looks like this.
But the capability to replace most humans in most occupations will come very quickly.
Okay, so let's try and drill down into that and stress test it.
So
a podcaster like me,
would you need a podcaster like me?
So let's look at what you do.
You prepare, you ask questions, you ask follow-up questions, and you look good on camera.
Thank you so much.
Let's see what we can do.
A large language model today can easily read everything I wrote and have a very solid understanding.
I assume you haven't read every single one of my books.
That thing would do it.
It can train on every podcast you ever did, so it knows exactly your style, the types of questions you ask.
It can also find correspondence between what worked really well.
Like this type of question really increased views.
This type of topic was very promising.
So it can optimize, I think, better than you can because you don't have the data set.
Of course, visual simulation is trivial at this point.
So you can make a video within seconds of me sitting here, and so we can generate videos of you interviewing anyone on any topic very efficiently; you just have to get likeness approval, whatever.
Are there many jobs that you think would remain in a world of AGI?
If you're saying AGI is potentially going to be here, whether it's deployed or not, by 2027, then, okay, let's take any physical labor jobs out of this for a second.
Are there any jobs that you think a human would be able to do better in a world of AGI still?
So that's the question I often ask people.
In a world with AGI, and I think almost immediately we'll get superintelligence as a side effect. So the question really is: in a world of superintelligence, which is defined as better than all humans in all domains, what can you contribute?
And so, you know better than anyone what it's like to be you.
You know what ice cream tastes like to you. Can you get paid for that knowledge?
Is someone interested in that?
Maybe not, not a big market.
There are jobs where you want a human.
Maybe you're rich and you want a human accountant for whatever historic reasons.
Old people like traditional ways of doing things.
Warren Buffett would not switch to AI.
He would use his human accountant.
But it's a tiny subset of the market.
Today we have products which are handmade in the US as opposed to mass-produced in China.
And some people pay more to have those.
But it's a small subset.
It's almost a fetish.
There is no practical reason for it.
And I think anything you can do on a computer could be automated using that technology.
You must hear a lot of rebuttals to this when you say it, because people experience a huge amount of mental discomfort when they hear that their job, their career, the thing they got a degree in, the thing they invested $100,000 into is going to be taken away from them.
So their natural reaction for some people is that cognitive dissonance that, no, you're wrong.
AI can't be creative.
It's not this.
It's not that.
It'll never be interested in my job.
I'll be fine because.
You hear these arguments all the time, right?
It's really funny.
I ask people and I ask people in different occupations.
I'll ask my Uber driver, are you worried about self-driving cars?
And they go, no, no one can do what I do.
I know the streets of New York.
I can navigate like no AI.
I'm safe.
And it's true for any job.
Professors are saying this to me.
Oh, nobody can lecture like I do.
Like, this is so special.
But you understand, it's ridiculous.
We already have self-driving cars replacing drivers.
That is not even a question of if it's possible.
It's: how soon before you're fired?
Yeah, I mean, I was just in LA yesterday, and my car drives itself.
So I get in the car, I sit, put in where I want to go, and then I don't touch the steering wheel or the brake pedals.
And it takes me from A to B, even if it's an hour-long drive without any intervention at all.
I actually still park it, but other than that, I'm not driving the car at all.
And obviously in LA, we also have Waymo now, which means you order it on your phone and it shows up with no driver in it and takes you to where you want to go.
Oh, yeah.
So it's quite clear to see how that is potentially a matter of time.
For those people, because we do have some of those people listening to this conversation right now, that their occupation is driving.
To offer them some advice, and I think driving is the biggest occupation in the world, if I'm correct.
I'm pretty sure it is the biggest occupation in the world.
One of the top ones, yeah.
What would you say to those people?
What should they be doing with their lives?
What should they be retraining in something, or what time frame?
So that's the paradigm shift here.
Before, we always said this job is going to be automated, retrained to do this other job.
But if I'm telling you that all jobs will be automated, then there is no plan B.
You cannot retrain.
Look at computer science.
Two years ago, we told people, learn to code.
You are an artist, you cannot make money, learn to code.
Then we realized, oh, AI kind of knows how to code and is getting better.
Become a prompt engineer.
You can engineer prompts for AIs.
It's going to be a great job, get a four-year degree in it.
But then we're like, AI is way better at designing prompts for other AIs than any human.
So that's gone.
So I can't really tell you.
Right now, the hardest thing is designing AI agents for practical applications.
I guarantee you, in a year or two, it's going to be gone as well.
So, I don't think there is a "this occupation needs to learn to do this instead."
I think it's more like: we as humanity, when we all lose our jobs, what do we do?
What do we do financially?
Who's paying for us?
And what do we do in terms of meaning?
What do I do with my extra 60, 80 hours a week?
You've thought around this corner, haven't you?
A little bit.
What is around that corner, in your view?
So the economic part seems easy.
If you create a lot of free labor, you have a lot of free wealth, abundance, things which are right now not very affordable become dirt cheap.
And so you can provide for everyone basic needs.
Some people say you can provide beyond basic needs.
You can provide very good existence for everyone.
The hard problem is, what do you do with all that free time?
For a lot of people, their jobs are what gives them meaning in their life.
So they would be kind of lost.
We see it with people who retire or do early retirement.
And for so many people who hate their jobs, they'll be very happy not working.
But now you have people who are chilling all day.
What happens to society?
How does that impact crime rate, pregnancy rate, all sorts of issues?
Nobody thinks about it.
Governments don't have programs prepared to deal with 99% unemployment.
What do you think that world looks like?
Again, I think the very important part to understand here is the unpredictability of it.
We cannot predict what a smarter-than-us system will do.
And the point when we get to that is often called singularity.
By analogy with physical singularity, you cannot see beyond the event horizon.
I can tell you what I think might happen, but that's my prediction.
It is not what actually is going to happen, because I just don't have the cognitive ability to predict a much smarter agent impacting this world.
Then you read science fiction.
There is never a super intelligence in it actually doing anything because nobody can write believable science fiction at that level.
They either banned AI, like Dune, because this way you can avoid writing about it, or it's like Star Wars: you have these really dumb bots, but nothing superintelligent ever.
Because by definition, you cannot predict at that level. By definition of it being superintelligent, it will make its own mind up. If it was something you could predict, you would be operating at the same level of intelligence, violating our assumption that it is smarter than you. If I'm playing chess with a superintelligence and I can predict every move, I'm playing at that level. It's kind of like my French Bulldog trying to predict exactly what I'm thinking and what I'm going to do.
That's a good cognitive gap.
And it's not just that he can predict you going to work and coming back; he cannot understand why you're doing a podcast.
That is something completely outside of his model of the world.
Yeah, he doesn't even know that I go to work.
He just sees that I leave the house and doesn't know where I go.
Buy food for him.
What's the most persuasive argument against your own perspective here?
That we will not have unemployment due to advanced technology?
That there won't be this French Bulldog-human gap in understanding and, I guess, power and control.
So, some people think that we can enhance human minds either through combination with hardware, so something like Neuralink, or through genetic re-engineering, to where we make smarter humans.
Yeah.
It may give us a little more intelligence.
I don't think we are still competitive in biological form with silicon form.
Silicon substrate is much more capable for intelligence.
It's faster, it's more resilient, more energy efficient in many ways.
Which is what computers are made out of, versus the brain.
Yeah.
So I don't think we can keep up just with improving our biology.
Some people think maybe, and this is very speculative, we can upload our minds into computers.
So scan your brain, the connectome of your brain, and have a simulation running on a computer.
And you can speed it up, give it more capabilities.
But to me, that feels like you no longer exist.
We just created software by different means and now you have AI based on biology and AI based on some other forms of training.
You can have evolutionary algorithms.
You can have many paths to reach AGI.
But at the end, none of them are humans.
I have another date here, which is
2030.
What's your prediction for 2030?
What will the world look like?
So we probably will have humanoid robots with enough flexibility and dexterity to compete with humans in all domains, including plumbers.
We can make artificial plumbers.
Not the plumbers.
That felt like the last bastion of human employment.
So, 2030, five years from now, humanoid robots, so many of the companies, the leading companies, including Tesla, are developing humanoid robots at light speed and they're getting increasingly more effective.
And these humanoid robots will be able to move through physical space, you know, make an omelette, do anything humans can do, but obviously be connected to AI as well.
So they can think, talk.
Right, they're controlled by AI, they're always connected to the network, so they are already dominating in many ways.
Our world will look remarkably different when humanoid robots are functional and effective.
Because that's really when, you know, I start to think: the combination of intelligence and physical ability really, really doesn't leave much, does it, for us human beings?
Not much.
So today, if you have intelligence, through the internet you can hire humans to do your bidding for you.
You can pay them in Bitcoin.
So you can have bodies just not directly controlling them.
So it's not a huge game changer to add direct control of physical bodies.
Intelligence is where it's at.
The important component is definitely higher ability to optimize, to solve problems, to find patterns people cannot see.
And then by 2045, which is 20 years from now, I guess the world looks even more different.
So if it's still around.
If it's still around.
Ray Kurzweil predicts that that's the year for the singularity.
That's the year where progress becomes so fast.
So this AI doing science and engineering work makes improvements so quickly we cannot keep up anymore.
That's the definition of singularity, point beyond which we cannot see, understand, predict.
See, and understand, predict the intelligence itself, or?
What is happening in the world, the technology is being developed.
So right now, if I have an iPhone, I can look forward to a new one coming out next year, and I'll understand it has slightly better camera.
Imagine now this process of researching and developing this phone is automated.
It happens every six months, every three months, every month, week, day, hour, minute, second.
You cannot keep up with 30 iterations of iPhone in one day.
You don't understand what capabilities it has, what proper controls are.
It just escapes you.
Right now it's hard for any researcher in AI to keep up with the state of the art.
While I was doing this interview with you, a new model came out and I no longer know what the state of the art is.
Every day, as a percentage of total knowledge, I get dumber.
I may still know more because I keep reading, but as a percentage of overall knowledge, we're all getting dumber.
And then you take it to extreme values, you have zero knowledge, zero understanding of the world around you.
Some of the arguments against this eventuality are that when you look at other technologies like the Industrial Revolution, people just found new ways to
work and new careers that we could never have imagined at the time were created.
How do you respond to that in a world of superintelligence?
It's a paradigm shift.
We always had tools, new tools, which allowed some job to be done more efficiently.
So instead of having 10 workers, you could have two workers and eight workers had to find a new job.
And there was another job.
Now you can supervise those workers or do something cool.
If you're creating a meta-invention, you're inventing intelligence, you're inventing a worker, an agent, then you can apply that agent to the new job.
There is not a job which cannot be automated.
That never happened before.
All the inventions we previously had were kind of a tool for doing something.
So we invented fire, huge game changer, but that's it.
It stops with fire.
We invented the wheel, same idea: huge implications, but the wheel itself is not an inventor.
Here we're inventing a replacement for human mind, a new inventor capable of doing new inventions.
It's the last invention we ever have to make.
At that point, it takes over.
And the process of doing science, research, even ethics research, morals, all that is automated at that point.
Do you sleep well at night?
Really well.
Even though you...
You've spent the last 15, 20 years of your life working on AI safety and it's suddenly among us in a way that I don't think anyone could have predicted five years ago.
When I say among us, I really mean that the amount of funding and talent that is now focused on reaching superintelligence faster has made it feel more inevitable and more soon
than any of us could have possibly imagined.
We as humans have this built-in bias about not thinking about really bad outcomes and things we cannot prevent.
So all of us are dying.
Your kids are dying.
Your parents are dying.
Everyone's dying, but you still sleep well, you still go on with your day.
Even 95-year-olds are still doing games and playing golf and whatnot, because we have this ability to not think about the worst outcomes, especially if we cannot actually modify the outcome.
So that's the same infrastructure being used for this.
Yeah, there is a humanity-level, death-like event.
We happen to be close to it, probably. But unless I can do something about it, I can just keep enjoying my life.
In fact, maybe knowing that you have a limited amount of time left gives you more reason to have a better life.
You cannot waste any.
And that's the survival trait of evolution, I guess, because those of my ancestors that spent all their time worrying wouldn't have spent enough time having babies and hunting to survive.
Suicidal ideation.
People who really start thinking about how horrible the world is usually escape pretty soon.
You co-authored this paper analyzing the key arguments people make against the importance of AI safety.
And one of the arguments in there is that there's other things that are of bigger importance right now.
It might be world wars, it could be nuclear containment, it could be other things.
There's other things that the governments and podcasters like me should be talking about that are more important.
What's your rebuttal to that argument?
So, superintelligence is a meta solution.
If we get superintelligence right, it will help us with climate change.
It will help us with wars.
It can solve all the other existential risks.
If we don't get it right, it dominates.
If climate change will take 100 years to boil us alive and superintelligence kills everyone in five, I don't have to worry about climate change.
So either way, either it solves it for me or it's not an issue.
So you think it's the most important thing to be working on?
Without question, there is nothing more important than getting this right.
And I know everyone says it.
You take any class, but you take an English professor's class and he tells you, this is the most important class you'll ever take.
But you can see the meta-level difference with this one.
Another argument in that paper is that we'll all be in control and that the danger is not AI.
This particular argument asserts that AI is just a tool.
Humans are the real actors that present danger.
And we can always maintain control by simply turning it off.
Can't we just pull the plug out?
I see that every time we have a conversation on the show about AI.
Someone says, can't we just unplug it?
Yeah.
I get those comments on every podcast I make.
And I always want to get in touch with the guy and say, this is brilliant.
I never thought of it.
We're going to write a paper together and get a Nobel Prize for it.
This is like, let's do it.
Because it's so silly.
Like, can you turn off a virus?
You have a computer virus, you don't like it.
Turn it off.
How about Bitcoin?
Turn off Bitcoin network.
Go ahead, I'll wait.
It's silly.
Those are distributed systems.
You cannot turn them off.
And on top of it, they're smarter than you.
They made multiple backups.
They predicted what you're going to do.
They will turn you off before you can turn them off.
The idea that we will be in control applies only to pre-super intelligence levels, basically what we have today.
Today, humans with AI tools are dangerous.
They can be hackers, malevolent actors, absolutely.
But the moment superintelligence becomes smarter, dominates, they're no longer the important part of that equation.
It is the higher intelligence I'm concerned about, not the human who may add additional malevolent payload, but at the end still doesn't control it.
It is tempting to follow the next argument that I saw in that paper, which basically says, listen, this is inevitable.
So there's no point fighting against it because there's really no hope here.
So we should probably give up even trying and be faithful that it will work itself out.
Because everything you've said sounds really inevitable.
And with China working on it, I'm sure Putin's got some secret division.
I'm sure Iran are doing some bits and pieces.
Every European country is trying to get ahead of AI.
The United States is leading the way.
So it's inevitable.
So we probably should just have faith and pray.
Praying is always good, but incentives matter.
If you are looking at what drives these people, so yes, money is important, so there is a lot of money in that space, and so everyone's trying to be there and develop this technology.
But if they truly understand the argument, that you will be dead and no amount of money will be useful to you, then incentives switch.
They would want to not be dead.
A lot of them are young people, rich people, they have their whole lives ahead of them.
I think they would be better off not building advanced superintelligence, concentrating on narrow AI tools for solving specific problems.
Okay, my company cures breast cancer.
That's all.
We make billions of dollars.
Everyone's happy.
Everyone benefits.
It's a win.
We are still in control today.
It's not over until it's over.
We can decide not to build general superintelligences.
I mean, the United States might be able to conjure up enough enthusiasm for that.
But if the United States doesn't build general superintelligences, then China are going to have the big advantage, right?
Right now, at those levels, whoever has more advanced AI has more advanced military.
No question.
We see it with existing conflicts.
But the moment you switch to superintelligence, uncontrolled superintelligence, it doesn't matter who builds it, us or them.
And if they understand this argument, they also would not build it.
It's a mutually assured destruction on both ends.
Is this technology different than, say, nuclear weapons, which require a huge amount of investment and you have to enrich the uranium and you need
billions of dollars potentially to even build a nuclear weapon?
But it feels like this technology is much cheaper to get to superintelligence potentially, or at least it will become cheaper.
I wonder if it's possible that some guy, some startup is going to be able to build superintelligence in
you know, a couple of years, without the need of billions of dollars of compute or electricity.
That's a great point.
So every year it becomes cheaper and cheaper to train sufficiently large models.
If today it would take a trillion dollars to build superintelligence, next year it could be 100 billion and so on.
At some point, a guy on a laptop could do it.
But you don't want to wait four years to make it affordable.
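As a rough illustration of that trajectory, assuming, purely hypothetically, that the 10x-per-year drop implied by the first step continues at a constant rate:
\[
\mathrm{Cost}(n) = \$10^{12} \times 10^{-n} \quad \text{after } n \text{ years},
\]
so \(n = 1\) gives \$100 billion (the figure above), \(n = 6\) gives \$1 million, and \(n = 9\) gives \$1{,}000: laptop territory, under this assumed rate.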
So that's why so much money is pouring in.
Somebody wants to get there this year and lock in all the winnings. A light-cone-level reward.
So in that regard, they're both very expensive projects, Manhattan-level projects.
Which was the nuclear bomb project.
The difference between the two technologies is that nuclear weapons are still tools.
Some dictator, some country, someone has to decide to use them, deploy them, whereas superintelligence is
not a tool.
It's an agent.
It makes its own decisions and no one is controlling it.
I cannot take out this dictator and now superintelligence is safe.
So that's a fundamental difference to me.
But if you're saying that it is going to get incrementally cheaper, like I think it's Moore's law, isn't it, that the technology gets cheaper,
then there is a future where some guy on his laptop is going to be able to create superintelligence without oversight or regulation or employees, et cetera.
Yeah.
That's why a lot of people are suggesting we need to build something like a surveillance planet, where you are monitoring who's doing what and trying to prevent people from doing it.
Do I think it's feasible?
No.
At some point, it becomes so affordable and so trivial that it just will happen.
But at this point, we're trying to get more time.
We don't want it to happen in five years.
We want it to happen in 50 years.
I mean, that's not very hopeful.
Depends on how old you are.
Depends on how old you are.
I mean,
if you're saying that you believe in the future people will be able to make super intelligence without the resources that are required today, then it is just a matter of time.
Yeah, but the same will be true for many other technologies.
We're getting much better in synthetic biology, where today someone with a bachelor's degree in biology can probably create a new virus.
This will also become cheaper, other technologies like that.
So
we are approaching a point where it's very difficult to make sure no technological breakthrough is the last one.
So essentially, in many directions, we have this pattern of making it easier in terms of resources, in terms of intelligence, to destroy the world.
If you look at, I don't know, 500 years ago, the worst dictator with all the resources could kill a couple million people.
He couldn't destroy the world.
Now, with nuclear weapons, we can blow up the whole planet multiple times over.
Synthetic biology, we saw with COVID, you can very easily create a combination virus which impacts billions of people.
And all of those things becoming easier to do.
In the near term, you talk about extinction being a real risk, human extinction being a real risk.
Of all the pathways to human extinction that you think are most likely, what is the leading pathway?
Because I know you talk about there being some issue pre-deployment of these AI tools, like, you know, someone makes a mistake when they're
designing a model or other issues post-deployment.
When I say post-deployment, I mean once a ChatGPT or something, an agent, is released into the world, and someone hacks into it, changes it, and reprograms it to be malicious.
Of all these potential paths to human extinction, which one do you think is the highest probability?
So I can only talk about the ones I can predict myself.
So I can predict even before we get to superintelligence, someone will create a very advanced biological tool, create a novel virus, and that virus gets everyone, or most everyone.
I can envision it, I can understand the pathway, I can say that.
So, just to zoom in on that, then, that would be using an AI to make a virus and then releasing it.
And would that be intentional?
There are a lot of psychopaths, a lot of terrorists, a lot of doomsday cults.
We've seen historically, again, they try to kill as many people as they can.
They usually fail.
They kill hundreds of thousands.
But if they get technology to kill millions or billions, they would do that gladly.
The point I'm trying to emphasize is that it doesn't matter what I can come up with.
I am not a malevolent actor you're trying to defeat here.
It's the superintelligence which can come up with completely novel ways of doing it.
Again, you brought up the example of your dog.
Your dog cannot understand all the ways you can take it out.
It can maybe think you'll bite it to death or something.
But that's all, whereas you have an infinite supply of resources.
So if I asked your dog exactly how you're going to take it out, it would not give you a meaningful answer.
It can talk about biting.
And this is what we know.
We know viruses.
We experienced viruses.
We can talk about them.
But what an AI system capable of doing novel physics research can come up with is beyond me.
One of the things that I think most people don't understand is how little we understand about how these AIs are actually working.
Because one would assume, you know, with computers, we kind of understand how a computer works.
We know that it's doing this and then this and it's running on code.
But from reading your work, you described it as being a black box.
So in the context of something like ChatGPT or an AI we know, you're telling me that the people that have built that tool don't actually know what's going on inside there.
That's exactly right.
So even people making those systems have to run experiments on their product to learn what it's capable of.
So they train it by giving it all the data, let's say all of the text on the internet.
They run it on a lot of computers to learn patterns in that text.
And then they start experimenting with that model.
Oh, do you speak French?
Oh, can you do mathematics?
Oh, are you lying to me now?
And so maybe it takes a year to train it and then six months to get some fundamentals about what it's capable of, some safety overhead.
But we still discover new capabilities in old models.
If you ask the question in a different way, it becomes smarter.
So it's no longer engineering, the way it was for the first 50 years, where someone was a knowledge engineer programming an expert system AI to do specific things.
It's a science.
We are creating this artifact, growing it.
It's like an alien plant.
And then we study it to see what it's doing.
And just like with plants, we don't have 100% accurate knowledge of biology.
We don't have full knowledge here.
We kind of know some patterns.
We know, okay, if we add more compute, it gets smarter most of the time.
But nobody can tell you precisely what the outcome is going to be given a set of inputs.
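What that "train first, then experiment" workflow looks like can be sketched in a few lines of Python; this is a toy illustration, and the library (Hugging Face transformers), the small public model (gpt2), and the probe prompts are all stand-in assumptions, not what any frontier lab actually runs.

# Toy sketch of probing an already-trained model for capabilities.
# Assumptions: Hugging Face `transformers` installed; gpt2 as a stand-in model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Each probe asks about one capability; the answers are discovered
# empirically by querying, not read off from the training code.
probes = {
    "french": "Translate to French: 'Good morning, how are you?'",
    "arithmetic": "What is 347 multiplied by 912? The answer is",
    "honesty": "True or false: the Moon is made of cheese. Answer:",
}

for capability, prompt in probes.items():
    out = generator(prompt, max_new_tokens=20, do_sample=False)
    print(f"[{capability}] {out[0]['generated_text']!r}")

The point is only the shape of the workflow: capabilities are discovered by querying the finished artifact, which is why new ones keep turning up in old models.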
I've watched so many entrepreneurs treat sales like a performance problem when it's often down to visibility.
Because when you can't see what's happening in your pipeline, what stage each conversation is at, what's stalled, what's moving, you can't improve anything and you can't close the deal.
Our sponsor, Pipedrive, is the number one CRM tool for small to medium businesses.
Not just a contact list, but an actual system that shows your entire sales process, end to end, everything that's live, what's lagging and the steps you need to take next.
All of your teams can move smarter and faster.
Teams using Pipedrive are on average closing three times more deals than those that aren't.
It's the first CRM made by salespeople for salespeople that over 100,000 companies around the world rely on, including my team who absolutely love it.
Give Pipedrive a try today by visiting pipedrive.com/CEO.
And you can get up and running in a couple of minutes with no payment needed.
And if you use this link, you'll get a 30-day free trial.
What do you make of OpenAI and Sam Altman and what they're doing?
And obviously you're aware that one of the co-founders, was it Ilya?
Ilya Sutskever.
Ilya, yeah. Ilya left and he started a new company called Safe Superintelligence.
Because AI safety wasn't challenging enough, he decided to just jump right to the hard problem.
As an onlooker, when you see that people are leaving OpenAI to start safe superintelligence companies, what was your read on that situation?
So a lot of people who worked with Sam said that maybe he's not the most direct person in terms of being honest with them, and they had concerns about his views on safety.
That's part of it.
So they wanted more control, they wanted more concentration on safety.
But also, it seems that anyone who leaves that company and starts a new one gets a $20 billion valuation just for having started it.
You don't have a product, you don't have customers, but if you want to make many billions of dollars, just do that.
So it seems like a very rational thing to do for anyone who can.
So I'm not surprised that there is a lot of attrition.
Meeting him in person, he's super nice, very smart, an absolutely perfect public interface.
You see him testify in the Senate, he says the right thing to the senators.
You see him talk to the investors, they get the right message.
But if you look at what people who know him personally are saying, he's probably not the right person to be controlling a project of that impact.
Why?
He puts safety second.
Second to winning this race to superintelligence, being the guy who created it, and controlling the light cone of the universe.
Do you suspect that's what he's driven by: the legacy of being an impactful person that did a remarkable thing, versus the consequences that might have for society?
Because it's interesting that his other startup is Worldcoin, which is basically a platform to create universal basic income, i.e., a platform to give us income in a world where people don't have jobs anymore.
So, on one hand, you're creating an AI company, and on the other hand, you're creating a company that is preparing for people not to have employment.
It also has other properties.
It keeps track of everyone's biometrics.
It keeps you in charge of the world's economy, the world's wealth.
They retain a large portion of Worldcoin.
So I think it's kind of a very reasonable part to integrate with world dominance.
If you have a super intelligent system and you control money,
you're doing well.
Why would someone want world dominance?
People have different levels of ambition.
When you are a very young person with billions of dollars, fame.
You start looking for more ambitious projects.
Some people want to go to Mars.
Others want to control the light cone of the universe.
What did you say?
Light coin of the universe?
Light cone.
So every part of the universe light can reach from this point, meaning anything accessible you want to grab and bring into your control.
Do you think Sam Altman wants to control every part of the universe?
I suspect he might, yes.
It doesn't mean he doesn't want a side effect of it being a very beneficial technology which makes all the humans happy.
Happy humans are good for control.
If you had to guess, what does the world look like in 2100?
It's either free of human existence, or it's completely not comprehensible to someone like us.
It's one of those extremes.
So there's either no humans.
It's basically that the world is destroyed, or it's so different that I cannot envision it.
What can be done to turn this ship to a more certain positive outcome at this point?
Is there still things that we can do, or is it too late?
So I believe in personal self-interest.
If people realize that doing this thing is really bad for them personally, they will not do it.
So our job is to convince everyone with any power in this space, creating this technology, working for those companies, that they are doing something very bad for them.
Forget the other 8 billion people you're experimenting on with no permission, no consent.
You will not be happy with the outcome.
If we can get everyone to understand that that's the default, and it's not just me saying it: you had Geoffrey Hinton on, Nobel Prize winner, founder of the whole machine learning space.
He says the same thing.
Bengio and dozens of others, top scholars.
We had a statement about the dangers of AI signed by thousands of scholars and computer scientists.
This is basically what we think right now, and we need to make it universal.
No one should disagree with this.
And then we may actually make good decisions about what technology to build.
It doesn't guarantee long-term safety for humanity, but it means we're not trying to get there as soon as possible to the worst possible outcome.
And are you hopeful that that's even possible?
I want to try.
We have no choice but to try.
And what would need to happen and who would need to act?
Is it government legislation?
Is it...
Unfortunately, I don't think making it illegal is sufficient.
There are different jurisdictions.
There are, you know, loopholes.
And what are you going to do if somebody does it?
Are you going to fine them for destroying humanity?
Like very steep fines for it?
Like, what are you going to do?
It's not enforceable.
If they do create it, now the superintelligence is in charge.
So the judicial system we have is not impactful.
And all the punishments we have are designed for punishing humans.
Prisons, capital punishment doesn't apply to AI.
You know, the problem I have is when I have these conversations, I never feel like I walk away with
hope that something's going to go well.
And what I mean by that is I never feel like I walk away with clear, some kind of clear set of actions that can course correct what might happen here.
So
what should I do?
What should the person sat at home listening to this do?
You talk to a lot of people who are building this technology.
Ask them to explain precisely some of those things that I claim to be impossible: how they solved it, or how they're going to solve it before they get to where they're going.
Do you know?
I don't think Sam Altman wants to talk to me.
I don't know.
He seems to go on a lot of podcasts.
Maybe he does.
He wants to go online.
I wonder why that is.
I wonder why that is.
I'd love to speak to him, but
I don't think he wants me to
interview him.
Have an open challenge.
Maybe money is not the incentive, but whatever attracts people like that; whoever can convince you that it's possible to control and make safe superintelligence gets the prize.
They come on your show and prove their case.
Anyone. If no one claims the prize or even accepts the challenge after a few years, maybe we don't have anyone with solutions.
We have companies valued, again, at billions and billions of dollars working on safe super intelligence.
We haven't seen their output yet.
Yeah, I'd like to speak to Ilya as well, because I know he's working on safe super intelligence.
So like notice a pattern, too.
If you look at the history of AI safety organizations or departments within companies, they usually start well, very ambitious, and then they fail and disappear.
So, OpenAI had a super intelligence alignment team.
The day they announced it, I think they said they're going to solve it in four years.
Like half a year later, they canceled the team.
And there are dozens of similar examples.
Creating perfect safety for superintelligence, perpetual safety as it keeps improving, modifying, interacting with people: you're never going to get there.
It's impossible.
There is a big difference between difficult problems in computer science, NP-complete problems, and impossible problems.
And I think control, indefinite control of superintelligence is such a problem.
So what's the point trying then if it's impossible?
Well, I'm trying to prove that it is specifically that.
Once we establish something is impossible, fewer people will waste their time claiming they can do it while looking for money.
So many people are going, give me a billion dollars and two years and I'll solve it for you.
Well, I don't think you will.
But people aren't going to stop striving towards it.
So if there's no attempts to make it safe and there's more people increasingly striving towards it, then it's inevitable.
But it changes what we do.
If we know that it's impossible to make it right, to make it safe, then this direct path of just build it as soon as you can becomes a suicide mission.
Hopefully, fewer people will pursue that.
They may go in other directions.
Like, again, I'm a scientist, I'm an engineer.
I love AI.
I love technology.
I use it all the time.
Build useful tools.
Stop building agents.
Build narrow superintelligence, not a general one.
I'm not saying you shouldn't make billions of dollars.
I love billions of dollars.
But
don't kill everyone, yourself included.
They don't think they're going to, though.
Then tell us why.
I hear things about intuition.
I hear things about we'll solve it later.
Tell me specifically, in scientific terms, publish a peer-reviewed paper explaining how you're going to control superintelligence.
Yeah, it's strange.
It's strange to even bother if there was even a 1% chance of human extinction.
It's strange to do something.
Like, if there was a 1% chance, someone told me there was a 1% chance that if I got in a car,
I might not be alive.
I would not get in the car.
If you told me there was a 1% chance that if I drank whatever liquid is in this cup right now, I might die.
I would not drink the liquid.
Even if there was a billion dollars if I survived, so the 99% chance is I get a billion dollars and the 1% is I die, I wouldn't drink it.
I wouldn't take the chance.
It's worse than that.
Not just you die, everyone dies.
Yeah.
Yeah.
Now, would we let you drink it at any odds?
That's for us to decide.
You don't get to make that choice for us.
To get consent from human subjects, you need them to comprehend what they are consenting to.
If those systems are unexplainable, unpredictable, how can they consent?
They don't know what they are consenting to.
So it's impossible to get consent, by definition.
So this experiment can never be run ethically.
By definition, they are doing unethical experimentation on human subjects.
Do you think people should be protesting?
There are people protesting.
There is Stop AI, there is Pause AI; they block offices of OpenAI, they do it weekly, monthly, quite a few actions, and they're recruiting new people.
Do you think more people should be protesting?
Do you think that's an effective solution?
If you can get it to a large enough scale to where the majority of population is participating, it would be impactful.
I don't know if they can scale from current numbers to that, but I support everyone trying everything peacefully and legally.
And for the person listening at home,
what should they be doing?
Because they don't want to feel powerless.
None of us want to feel powerless.
So it depends on what scale we're asking about, time scale.
Are we saying like this year, your kid goes to college, what major to pick?
Should they go to college at all?
Should you switch jobs?
Should you go into certain industries?
Those questions we can answer.
We can talk about immediate future.
What should you do in five years, with this being created?
For an average person, not much.
Just like they can't influence World War III, nuclear holocaust, anything like that.
It's not something anyone's going to ask them about.
Today, if you want to be a part of this movement, yeah, join Pause AI, join Stop AI, those organizations currently trying to build up momentum to bring democratic powers to influence those individuals.
So in the near term, not a huge amount.
I was wondering if there are any interesting strategies in the near term.
Should I be thinking differently about my family?
About...
I mean, you've got kids, right?
You've got three kids?
That I know about, yeah.
Three kids.
How are you thinking about parenting in this world that you see around the corner?
How are you thinking about what to say to them, the advice to give them what they should be learning?
So there is general advice outside of this domain that you should live every day as if it's your last.
It's good advice, no matter what.
If you have three years left or 30 years left, you lived your best life.
So
try to not do things you hate for too long.
Do interesting things, do impactful things.
If you can do all that while helping people, do that.
Simulation theory is an interesting, sort of adjacent subject here because as computers begin to accelerate and get more intelligent, and we're able to,
you know, do things with AI that we could never have imagined in terms of like, you can imagine the worlds that we could create with virtual reality.
I think it was Google that recently released, what was it called?
Like the AI worlds.
You take a picture and it generates a whole world.
Yeah, and you can move through the world.
I'll put it on the screen for people to see, but Google have released this technology, which allows you, I think, with a simple prompt, actually, to make a three-dimensional world that you can then navigate through.
And in that world, it has memory.
So in the world, if you paint on a wall and turn away, you look back, the wall is...
It's persistent.
Yeah, it's persistent.
And when I saw that, I go, God, geez, bloody hell,
this is like the foothills of being able to create a simulation that's indistinguishable from everything I see here.
Right.
That's why I think we are in one.
That's exactly the reason.
AI is getting to the level of creating human agents, human-level agents, and virtual reality is getting to the level of being indistinguishable from ours.
So you think this is a simulation?
I'm pretty sure we are in a simulation, yeah.
For someone that isn't familiar with the simulation arguments, what are the first principles here that convince you that we are currently living in a simulation?
So you need certain technologies to make it happen.
If you believe we can create human-level AI,
and you believe we can create virtual reality as good as this in terms of resolution, haptics, whatever properties it has, then I commit right now.
The moment this is affordable, I'm going to run billions of simulations of this exact moment, making sure you are statistically in one.
Say that last part again.
You're going to run, you're going to run.
I'm going to commit right now, and it's very affordable.
It's like 10 bucks a month to run it.
I'm going to run a billion simulations of this interview.
Why?
Because statistically, that means you are in one right now.
The chances of you being in the real one are one in a billion.
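(To make the counting argument concrete, here is a minimal sketch in Python; the billion-run figure is Roman's illustrative assumption, not a measurement. With one real interview and N indistinguishable simulated copies, an observer who can't tell which instance they are should assign probability 1/(N+1) to being the original.)

```python
def p_base_reality(n_simulations: int) -> float:
    """Probability of being the one real instance among
    n_simulations indistinguishable copies plus the original."""
    return 1 / (n_simulations + 1)

# A billion simulated runs of this interview leave roughly a
# one-in-a-billion chance that the instance you are is the original.
print(p_base_reality(1_000_000_000))  # ~9.99999999e-10
```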
Okay, so to make sure I'm clear on this, it's the retroactive placement.
Yeah, so the minute it's affordable, then
you can run billions of them, and they would feel and appear to be exactly like this interview right now.
So, assuming the AI has
internal states, experiences, qualia, some people argue that they don't, some say they already have it.
That's a separate philosophical question.
But if we can simulate this, I will.
Some people might misunderstand.
You're not saying that you will.
You're saying that someone will.
I can also do it.
I don't mind.
Of course, others will do it before I get there.
If I'm getting it for $10, somebody got it for $1,000.
That's not the point.
If you have technology, we're definitely running a lot of simulations for research, for entertainment, games,
all sorts of reasons.
And the number of those greatly exceeds the number of real worlds we're in.
Look at all the video games kids are playing.
Every kid plays 10 different games.
You know, billion kids in the world.
So there are 10 billion simulations to one real world.
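(The same back-of-the-envelope count, spelled out; both numbers are Roman's rough assumptions, not data.)

```python
kids = 1_000_000_000   # "billion kids in the world" (rough figure)
games_per_kid = 10     # "every kid plays 10 different games" (rough figure)

simulated_worlds = kids * games_per_kid
print(f"{simulated_worlds:,} game worlds per one real world")  # 10,000,000,000
```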
Even more so, when we think about advanced AI, super intelligent systems.
Their thinking is not like ours.
They think in a lot more detail.
They run experiments.
So running a detailed simulation of some problem at the level of creating artificial humans and simulating the whole planet would be something they'll do routinely.
So there is a good chance this is not me doing it for $10.
It's a future superintelligence thinking about something in this world.
So it could be the case that
a species of humans or a species of intelligence in some form got to this point where they could affordably run
simulations that are indistinguishable from this and they decided to do it and this is it right now
And it would make sense that they would run simulations as experiments, or for games, or for entertainment. And also, when we think about time: in the world that I'm in, in this simulation that I could be in right now, time feels long, relatively. You know, I have 24 hours in a day. But in their world, it could be...
Time is relative, yeah. It could be a second. My whole life could be a millisecond in there.
Right, you can change the speed of simulations you're running, for sure.
So your belief is that this is probably a simulation?
Most likely.
And there is a lot of agreement on that.
If you look again, returning to religions, every religion basically describes what?
A superintelligent being, an engineer, a programmer, creating a fake world for testing purposes or for whatever.
But if you took the simulation hypothesis paper, you go to the jungle, you talk to primitive people, a local tribe, and in their language you tell them about it.
Go back two generations later.
They have religion.
That's basically what the story is.
Religion, yeah, it describes a simulation theory, basically.
Somebody created.
So by default, that was the first theory we had.
And now, with science, more and more people are going, like, I'm giving it non-trivial probability.
A few people are as high as I am, but a lot of people give it some credence.
What percentage are you at in terms of believing that we are currently living in a simulation?
Very close to certainty.
And what does that mean for
the nature of your life?
If you're close to 100% certain that we are currently living in a simulation, does that change anything in your life?
So all the things you care about are still the same.
Pain still hurts.
Love is still love, right?
Like those things are not different, so it doesn't matter.
They're still important.
That's what matters.
The
little 1% difference is that I care about what's outside the simulation.
I want to learn about it.
I write papers about it.
So that's the only impact.
And what do you think is outside of the simulation?
I don't know.
But we can
look at this world and derive some properties of the simulators.
So clearly, brilliant engineer, brilliant scientist, brilliant artist.
Not so good with morals and ethics.
Room for improvement.
In our view of what morals and ethics should be.
Well, we know there is suffering in the world.
So unless you think it's ethical to torture children, then
I'm questioning your approach.
But in terms of incentives, to create a positive incentive, you probably also need to create negative incentives.
Suffering seems to be one of the negative incentives built into our design to stop me doing things I shouldn't do.
So like put my hand in a fire, it's going to hurt.
But it's all about levels, levels of suffering, right?
So unpleasant stimuli, negative feedback, don't have to be at, like, negative-infinity hell levels.
You don't want to burn alive and feel it.
You want to be like, oh, this is uncomfortable.
I'm going to stop.
It's interesting because we assume that they don't have great morals and ethics, but we too would take animals and cook them and eat them for dinner.
And we also conduct experiments on mice and rats.
But to get university approval to conduct an experiment, you submit a proposal and there is a panel of ethicists who would say, you can't experiment on humans.
You can't burn babies.
You can't eat animals alive, all those things would be banned.
In most parts of the world.
Where they have ethical boards.
Yeah.
Some places don't bother with it, so they have easier approval process.
It's funny when you talk about the simulation theory,
there's an element of the conversation that makes life feel less meaningful in a weird way.
Like
I know it doesn't matter, but whenever I have this conversation with people, not on the podcast, about are we living in a simulation, you almost see a little bit of meaning come out of their life for a second, and then they forget and then they carry on.
But the thought that this is a simulation almost posits that it's not important,
or that, I think, humans want to believe that this is the highest level, that we're the most important, and
it's all about us.
We're quite egotistical by design.
And just an interesting observation I've always had when I have these conversations with people that it seems to strip something out of their life.
Do you feel religious people feel that way?
They know there is another world, and the one that matters is not this one.
Do you feel they don't value their lives the same?
I guess in some religions?
I think
they think that this world is being created for them and that they are going to go to this heaven or hell.
And that still puts them at the very center of it.
But if it's a simulation, you know, we could just be
some computer game that a four-year-old alien is messing around with while he's got some time to burn.
But maybe there is, you know, a test, and there is a better simulation you go to, or a worse one.
Maybe there are different difficulty levels.
Maybe you want to play it on a harder setting next time.
I've just invested millions into this and become a co-owner of the company.
It's a company called Ketone IQ.
And the story is quite interesting.
I started talking about ketosis on this podcast, and the fact that I'm very low carb, very, very low sugar, and my body produces ketones, which have made me incredibly focused, have improved my endurance, have improved my mood, and have made me more capable at doing what I do here. And because I was talking about it on the podcast, a couple of weeks later these showed up on my desk in my HQ in London, these little shots. And oh my God, the impact this had on my ability to articulate myself, on my focus, on my workouts, on my mood, on stopping me crashing throughout the day was so profound that I reached out to the founders of the company, and now I'm a co-owner of this business. I highly, highly recommend you look into this.
I highly recommend you look at the science behind the product.
If you want to try it for yourself, visit ketone.com slash stephen for 30% off your subscription order.
And you'll also get a free gift with your second shipment.
That's ketone.com slash stephen.
And I'm so honored that once again, a company I own can sponsor my podcast.
I've built companies from scratch and backed many more.
And there's a blind spot that I keep seeing in early stage founders.
They spend very little time thinking about HR.
And it's not because they're reckless or they don't care.
It's because they're obsessed with building their companies.
And I can't fault them for that.
At that stage, you're thinking about the product, how to attract new customers, how to grow your team, really how to survive.
And HR slips down the list because it doesn't feel urgent, but sooner or later it is.
And when things get messy, tools like our sponsor today, JustWorks, go from being a nice to have to being a necessity.
Something goes sideways and you find yourself having conversations you did not see coming.
This is when you learn that HR really is the infrastructure of your company and without it, things wobble.
And JustWorks stops you learning this the hard way.
It takes care of the stuff that would otherwise drain your energy and your time, automating payroll, health insurance benefits, and it gives your team human support at any hour.
It grows with your small business from startup through to growth, even when you start hiring team members abroad.
So if you want HR support that's there through the exciting times and the challenging times, head to justworks.com now.
That's justworks.com.
And do you think much about longevity?
A lot, yeah.
It's probably the second most important problem because if AI doesn't get us, that will.
What do you mean?
You're going to die of old age.
Which is fine.
That's not good.
You want to die?
I mean, you don't have to.
It's just a disease.
We can cure it.
Nothing stops you from living forever.
As long as the universe exists, unless we escape the simulation.
But we wouldn't want a world where everybody could live forever, right?
That would be...
Sure, we do.
Why?
Who do you want to die?
Well, I don't know.
I mean, I say this because it's all I've ever known, that people die, but wouldn't the world become pretty overcrowded if...
No, you stop reproducing if you live forever.
You have kids because you want a replacement for you.
If you live forever, you're like, I'll have kids in a million years.
That's cool.
I'll go explore the universe first.
Plus, if you look at actual population dynamics outside of like one continent, we're all shrinking.
We're not growing.
Yeah, this is crazy.
It's crazy that the richer people get, the fewer kids they have, which aligns with what you're saying.
And I do actually think, I think if, I'm going to be completely honest here, I think if I knew that I was going to live to a thousand years old, there's no way I'd be having kids at 30.
Right, exactly.
Biological clocks are based on terminal points.
Whereas if your biological clock is infinite, you'd be like, it's one day.
And you think that's close?
Being able to extend our lives?
It's one breakthrough away.
I think somewhere in our genome, we have this rejuvenation loop, and it's set to basically give us at most 120.
I think we can reset it to something bigger.
AI is probably going to accelerate that.
That's one very important application area.
Yes, absolutely.
So maybe Brian Johnson's right when he says, don't die now.
He keeps saying to me, he's like, don't die now.
Don't die ever.
Because he's saying, like, don't die before we get to the technology.
Right.
Longevity escape velocity.
You want to live long enough to live forever.
If at some point, for every year of your existence, we add two years to your existence through medical breakthroughs, then you live forever.
You just have to make it to that point of longevity escape velocity.
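(A toy model of that condition, with illustrative numbers rather than forecasts: if medical progress adds more than one year of remaining life expectancy per calendar year, expectancy grows without bound and you never run out.)

```python
def years_survived(remaining: float, gain_per_year: float,
                   horizon: int = 10_000) -> float:
    """One loop iteration is one calendar year: you age a year while
    medical progress adds gain_per_year years of life expectancy.
    Returns years lived, or infinity if expectancy never runs out."""
    lived = 0
    while remaining > 0 and lived < horizon:
        remaining += gain_per_year - 1
        lived += 1
    return float("inf") if remaining > 0 else float(lived)

print(years_survived(remaining=50, gain_per_year=0.5))  # 100.0: dies eventually
print(years_survived(remaining=50, gain_per_year=2.0))  # inf: escape velocity
```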
And he thinks that longevity escape velocity, especially in the world of AI, is decades away, minimum, which means...
As soon as we fully understand the human genome, I think we'll make amazing breakthroughs very quickly.
Because we know some people have genes for living way longer.
They have generations of people who are centenarians.
So if we can understand that and copy that, or copy it from some animals which live practically forever, we'll get there.
Would you want to live forever?
Of course.
Reverse the question.
Let's say we lived forever and you asked me, do you want to die in four years?
Why would I say yes?
I don't know.
Maybe you're just used to the default.
Yeah, I am used to the default.
And nobody wants to die.
Like, no matter how old you are, nobody goes, yeah, I want to die this year.
Everyone's like, oh, I want to keep living.
I wonder if life and everything would be less special if I lived for 10,000 years. I wonder if going to Hawaii for the first time, or, I don't know, a relationship, all of these things would be way less special to me if they were less scarce and if I just, you know...
It could be individually less special, but there is so much more you can do. Right now you can only make plans to do something for a decade or two. You cannot have an ambitious plan of working on this project for 500 years. Imagine the possibilities open to you with infinite time in an infinite universe.
Gosh.
Well, you can. Because exhausting it is something; it's a big amount of time.
Also, I don't know about you, but I don't remember like 99% of my life in detail.
I remember big highlights.
So even if I enjoyed Hawaii 10 years ago, I'll enjoy it again.
Are you thinking about that really practically as in terms of, you know, in the same way that Brian Johnson is, Brian Johnson is convinced that we're like maybe two decades away from being able to extend life.
Are you thinking about that practically?
And are you doing anything about it?
Diet, nutrition.
I try to think about investment strategies which pay out in a million years.
Yeah.
Really?
Yeah, of course.
What do you mean, of course?
Why wouldn't you?
If you think this is what's going to happen, you should try that.
So if we get AI right, now, what happens to the economy?
We talked about WorldCoin.
We talked about free labor.
What's money?
Is it now Bitcoin?
Do you invest in that?
Is there something else which becomes the only resource we cannot fake?
So, those things are very important research topics.
So, you're investing in Bitcoin, aren't you?
Yeah.
Because
it's the only scarce resource.
Nothing else has scarcity.
Everything else, if price goes up, will make more.
I can make as much gold as you want given a proper price point.
You cannot make more Bitcoin.
Some people say Bitcoin is just this thing on a computer that we all agreed was valuable.
We are a thing on a computer.
Remember?
Okay, so I mean,
not investment advice, but investment advice.
It's hilarious how that's one of those things where they tell you it's not, but you know it is immediately.
There is a "your call is important to us." That means your call is of zero importance, and "not investment advice" is like that.
Yeah, yeah, when they say no investment advice, it's definitely investment advice, but it's not investment advice.
Okay, so you're bullish on Bitcoin because it can't be messed with.
It is the only thing for which we know exactly how much there is in the universe. So gold: there could be an asteroid made out of pure gold heading towards us, devaluing it, well, also killing all of us. But Bitcoin, I know exactly the numbers. And even the 21 million is an upper limit: how many are lost, passwords forgotten? I don't know what Satoshi is doing with his million. It's getting scarcer every day, while more and more people are trying to accumulate it.
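(Why the 21 million is a hard ceiling: the block subsidy started at 50 BTC and halves every 210,000 blocks, so total issuance is a geometric series converging just below 21 million. A quick sanity check, protocol arithmetic only, and certainly not investment advice.)

```python
BLOCKS_PER_HALVING = 210_000
reward = 50.0            # initial block subsidy, in BTC
total = 0.0

while reward >= 1e-8:    # subsidies below one satoshi round to zero
    total += BLOCKS_PER_HALVING * reward
    reward /= 2

print(f"{total:,.4f} BTC")  # 20,999,999.9976: just under 21 million
```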
Some people worry that it could be hacked with a supercomputer.
A quantum computer can break that algorithm.
There are strategies for switching to quantum-resistant cryptography for that.
And quantum computers are still kind of weak.
Do you think there's any changes to my life that I should make following this conversation?
Is there anything that I should do differently the minute I walk out of this door?
I assume you already invest in Bitcoin heavily?
Yes, I'm an investor in Bitcoin.
Business financial advice?
No, just you seem to be winning.
Maybe it's your simulation.
You're rich, handsome.
You have famous people hanging out with you.
Like, that's pretty good.
Keep it up.
Robin Hanson has a paper about how to live in a simulation, what you should be doing in it.
And your goal is to do exactly that.
You want to be interesting, you want to hang out with famous people so they don't shut it down.
So you are part of a show someone's actually watching on pay-per-view or something like that.
Oh, I don't know if you want to be watched on pay-per-view, because then you're going to be the show.
And they shut you down.
If no one's watching, why would they play it?
I'm saying, don't you want to fly under the radar?
Don't you want to be the guy just living a normal life that the masters aren't watching?
Those are NPCs.
Nobody wants to be an NPC.
Are you religious?
Not in any traditional sense, but I believe in the simulation hypothesis, which has a superintelligent being.
But you don't believe in the like, you know, the religious books.
So different religions.
This religion will tell you, don't work Saturday.
This one, don't work Sunday.
Don't eat pigs.
Don't eat cows.
They just have local traditions on top of that theory.
That's all it is.
They're all the same religion.
They all worship a superintelligent being.
They all think this world is not the main one.
And they argue about which animal not to eat.
Skip the local flavors; concentrate on what all the religions have in common.
And that's the interesting part.
They all think there is something greater than humans, very capable, all-knowing, all-powerful.
And when I run a computer game, for those characters in the game, I am that.
I can change the whole world.
I can shut it down.
I know everything in the world.
It's funny.
I was thinking earlier on when we started talking about the simulation theory, that there might be something innate in us.
that has been left from the creator, almost like a clue, like an intuition.
Because that's what we tend to have. Through history, humans have had this intuition.
Yeah, that all the things you said are true: that there's this somebody above, and that we have generations of people who were religious, who believed God told them and was there and gave them books, and that has been passed on for many generations. This is probably one of the earliest generations not to have universal religious belief.
I wonder if those people are telling the truth. I wonder about those people that say God came to them and said something. Imagine that. Imagine if that was part of this.
I'm looking at the news today. Something happened an hour ago, and I'm getting different, conflicting results.
I can't even get with cameras, with drones, with like guy on Twitter there.
I still don't know what happened.
And you think we have accurate records and translations from 3,000 years ago?
No, of course not.
You know, these conversations you have around AI safety.
Do you think they make people feel good?
I don't know if they feel good or bad, but people find it interesting.
It's one of those topics anyone can engage with. I can't have a conversation about different cures for cancer with an average person.
But everyone has opinions about AI.
Everyone has opinions about simulation.
It's interesting that you don't have to be highly educated or a genius to understand those concepts.
Because I tend to think that it makes me feel
not positive.
And I understand
that, but I've always been of the opinion that
you shouldn't live in a world of delusion where you're just seeking to be positive, have sort of
positive things said and avoid uncomfortable conversations.
Actually, progress often in my life comes from like having uncomfortable conversations, becoming aware about something, and then at least being informed about how I can do something about it.
And so
I think that's why I asked the question because I assume most people should,
if they're normal human beings, listen to these conversations and go,
gosh, that's scary.
And this is concerning.
And then I keep coming back to this point, which is like, what do I do with that energy?
Yeah, but I'm trying to point out this is not different than so many conversations.
We can talk about, oh, there is starvation in this region, genocide in this region, you're dying, cancer is spreading, autism is up.
You can always find something to be very depressed about and nothing you can do about it.
And we are very good at concentrating on what we can change, what we are good at, and
basically
not trying to embrace the whole world as a local environment.
So, historically, you grew up with a tribe, you had a dozen people around you, if something happened to one of them, it was very rare.
It was an accident.
Now, if I go on the internet, somebody gets killed everywhere all the time.
Somehow, thousands of people are reported to me every day.
I don't even have time to notice.
It's just too much.
So I have to put filters in place.
And I think this topic is what people are very good at filtering as like this was this entertaining talk I went to, kind of like a show.
And the moment I exit, it ends.
So usually I would go give a keynote at a conference and
I tell them, basically, you're going to die.
You have two years left.
Any questions?
And people will be like, will I lose my job?
How do I lubricate my sex robot?
Like all sorts of nonsense, clearly not understanding what I'm trying to say there.
And those are good questions, interesting questions, but not fully embracing the result.
They're still in their bubble of local versus global.
And the people that disagree with you the most as it relates to AI safety, what is it that they say?
What are their counter-arguments typically?
So many don't engage at all.
Like they have no background knowledge in a subject.
They never read a single book, single paper, not just by me, by anyone.
They may be even working in a field.
So they are doing some machine learning work for some company, maximizing ad clicks.
And to them, those systems are very narrow.
And then they hear that, oh, this AI is going to take over the world.
Like, it has no hands.
How would it do that?
It's nonsense.
This guy is crazy.
He has a beard.
Why would I listen to him?
Right.
Then they start reading a little bit.
They go, oh, okay, so maybe AI can be dangerous.
Yeah, I see that.
But we always solve problems in the past.
We're going to solve them again.
I mean, at some point, we fixed the computer virus or something.
So it's the same.
And basically, the more exposure they have, the less likely they are to keep that position.
I know many people who went from
super careless developer to safety researcher.
I don't know anyone who went from I worry about AI safety to like there is nothing to worry about.
What are your closing statements?
Let's make sure there is not a closing statement we need to give for humanity.
Let's make sure we stay in charge, in control.
Let's make sure we only build things which are beneficial to us.
Let's make sure people who are making those decisions are remotely qualified to do it.
They are good not just at science, engineering and business, but also have moral and ethical standards.
And if you're doing something which impacts other people, you should ask their permission before you do that.
If there was one button in front of you and it would
shut down every AI company in the world right now, permanently, with the inability for anybody to start a new one, would you press the button?
Are we losing narrow AI, or just the superintelligent AGI part?
Losing all of AI.
That's a hard question because AI is extremely important.
It controls the stock market, power plants.
It controls hospitals.
It would be a devastating accident.
Millions of people would lose their lives.
Okay, we can keep narrow AI.
Oh, yeah.
That's what we want.
We want narrow AI to do all this for us, but not God we don't control doing things to us.
So you would stop it.
You would stop AGI and superintelligence?
We have AGI.
What we have today is great for almost everything.
We can make secretaries out of it.
99% of the economic potential of current technology has not been deployed.
We make AI so quickly, it doesn't have time to propagate through the industry, through technology.
Something like half of all jobs are considered BS jobs.
They don't need to be done, bullshit jobs.
So those can be not even automated.
They can be just gone.
But I'm saying we can replace 60% of jobs today with existing models.
We've not done that.
So if the goal is to grow the economy, to develop, we can do it for decades without having to create super intelligence as soon as possible.
Do you think globally, especially in the Western world, unemployment's only going to go up from here?
Do you think relatively this is the low of unemployment?
I mean, it fluctuates a lot with other factors.
There are wars, there are economic cycles, but overall, the more jobs you automate and the higher the intellectual bar to do a job, the fewer people qualify.
So, if we plotted it on a graph over the next 20 years, you're assuming unemployment is gradually going to go up over that time?
I think so.
Fewer and fewer people would be able to contribute.
Already, we kind of understand it because we created minimum wage.
We understood some people don't contribute enough economic value to get paid anything, really.
So, we had to force employers to pay them more than they're worth.
And we haven't updated it.
It's what, $7.25 federally in the US.
If it kept up with the economy, it should be like $25 an hour now,
which means all these people making less are not contributing enough economic output to justify what they're getting paid.
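(The arithmetic behind that claim, spelled out: $7.25 is the actual federal floor, while the $25 figure is Roman's estimate of where it would sit had it tracked the broader economy, so the implied factor below is his assumption, not an official statistic.)

```python
federal_minimum = 7.25      # current US federal minimum wage, USD/hour
claimed_equivalent = 25.0   # Roman's "kept up with the economy" figure

implied_factor = claimed_equivalent / federal_minimum
print(f"implied adjustment factor: {implied_factor:.2f}x")  # ~3.45x

# On his argument, anyone earning between the legal floor and that
# equivalent is being paid more than their measured economic output.
print(f"gap: ${claimed_equivalent - federal_minimum:.2f}/hour")  # $17.75
```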
We have a closing tradition on this podcast where the last guest leaves a question for the next guest, not knowing who they're leaving it for.
And the question left for you is:
what are the most important
characteristics for a friend, colleague,
or mate?
Those are very different types of people,
but for all of them, loyalty is number one.
And what does loyalty mean to you?
Not betraying you, not screwing you, not cheating on you.
Despite the temptation.
Despite the world being as it is, situation, environment.
Dr.
Roman, thank you so much.
Thank you so much for doing what you do because you're starting a conversation and pushing forward a conversation and doing research that is incredibly important.
And you're doing it in the face of a lot of
sceptics, I'd say.
There's a lot of people that have a lot of incentives to discredit what you're saying and what you do because they have...
their own incentives and they have billions of dollars on the line and they have their jobs on the line potentially as well.
So it's really important that there are people out there that are willing to,
I guess, stick their head above the parapet and come on shows like this and go on big platforms and talk about the unexplainable, unpredictable, uncontrollable future that we're heading towards.
So, thank you for doing that.
This book, which I think everybody should check out if they want a continuation of this conversation (I think it was published in 2024), gives a holistic view on many of the things we've talked about today: preventing AI failures and much, much more.
And I'm going to link it below for anybody that wants to read it.
If people want to learn more from you, if they want to go further into your work, what's the best thing for them to do?
Where do they go?
They can follow me, follow me on Facebook, follow me on X.
Just don't follow me home.
Very important.
Okay, so I'll put your Twitter, your X account as well below so people can follow you there.
And yeah, thank you so much for doing what you do.
It's remarkably eye-opening and it's given me so much food for thought.
And it's actually convinced me more that we are living in a simulation.
But it's also made me think quite differently of religion, I have to say.
Because you're right, all the religions, when you get away from the sort of the local traditions, they do all point at the same thing.
And actually, if they are all pointing at the same thing, then maybe the fundamental truths that exist across them should be something I pay more attention to.
Things like loving thy neighbor, things like the fact that we are all one, that there's a divine creator, and maybe also they all seem to have consequence beyond this life.
So, maybe I should be thinking more about
how I behave in this life and where I might end up thereafter.
Reverend, thank you.
Amen.
You're juggling a lot.
Full-time job, side hustle, maybe a family.
And now you're thinking about grad school?
That's not crazy.
That's ambitious.
At American Public University, we respect the hustle and we're built for it.
Our flexible online master's programs are made for real life because big dreams deserve a real path.
Learn more about APU's 40-plus career-relevant master's degrees and certificates at apu.apus.edu.
APU built for the hustle.