Nick Bostrom: How Entrepreneurs Can Win in an AI-Dominated World | Artificial Intelligence | E356
In this episode, Hala and Nick will discuss:
(00:00) Introduction
(02:54) The Simulation Hypothesis, Posthumanism, and AI
(11:48) Moral Implications of a Simulated Reality
(22:28) Fermi Paradox and Doomsday Arguments
(30:29) Is AI Humanity’s Biggest Breakthrough?
(38:26) Types of AI: Oracles, Genies, and Sovereigns
(41:43) The Potential Dangers of Advanced AI
(50:15) Artificial Intelligence and the Future of Work
(57:25) Finding Purpose in an AI-Driven World
(1:07:07) AI for Entrepreneurs and Innovators
Nick Bostrom is a philosopher specializing in understanding AI in action, the advancement of superintelligent technologies, and their impact on humanity. For nearly 20 years, he served as the founding director of the Future of Humanity Institute at the University of Oxford. Nick is known for developing influential concepts such as the simulation argument and has authored over 200 publications, including the New York Times bestsellers Superintelligence and Deep Utopia.
Sponsored By:
Shopify - Start your $1/month trial at Shopify.com/profiting.
Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING
Mercury - Streamline your banking and finances in one place. Learn more at mercury.com/profiting
OpenPhone - Get 20% off your first 6 months at OpenPhone.com/profiting.
Bilt - Start paying rent through Bilt and take advantage of your Neighborhood Benefits by going to joinbilt.com/profiting.
Airbnb - Find a co-host at airbnb.com/host
Boulevard - Get 10% off your first year at joinblvd.com/profiting when you book a demo
Resources Mentioned:
Nick’s Book, Superintelligence: bit.ly/_Superintelligence
Nick’s Book, Deep Utopia: bit.ly/DeepUtopia
Nick’s Website: nickbostrom.com
Active Deals - youngandprofiting.com/deals
Key YAP Links
Reviews - ratethispodcast.com/yap
Youtube - youtube.com/c/YoungandProfiting
LinkedIn - linkedin.com/in/htaha/
Instagram - instagram.com/yapwithhala/
Social + Podcast Services: yapmedia.com
Transcripts - youngandprofiting.com/episodes-new
Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI in Business, Generative AI, AI Podcast.
Listen and follow along
Transcript
Today's episode is sponsored in part by OpenPhone, Shopify, Mercury, Indeed, and Framer.
OpenPhone is the number one business phone system.
Build stronger customer relationships and respond faster with shared numbers, AI, and automations.
Get 20% off your first six months when you go to openphone.com slash profiting.
Shopify is the global commerce platform that helps you grow your business.
Sign up for a $1 per month trial period at shopify.com slash profiting.
Mercury streamlines your banking and finances in one place so you can focus on growing your online business.
Learn more at mercury.com slash profiting.
Attract, interview, and hire all in one place with Indeed.
Get a $75 sponsored job credit at Indeed.com slash profiting.
Terms and conditions apply.
Framer is the design-first, no-code website builder that lets anyone ship a production-ready site in just minutes.
Launch your site for free at framer.com and use code Profiting to get your first month of Pro on the house.
As always, you can find all of our incredible deals in the show notes or at youngandprofiting.com/deals.
If this is a simulation, then presumably we can infer a few things: that the people building it would have to be very technologically advanced.
Nick Bostrom isn't just a philosopher.
He's a global thought leader on the future of artificial intelligence.
He's the author of Superintelligence, the groundbreaking book that brought the risks of advanced AI into the mainstream conversation.
People have, for thousands of years, tried to create imaginary worlds that people can experience, be it through theater, right, or literature.
Maybe for these post-humans, they might be interested in knowing, if they ever ran into alien civilizations, what those would be like.
How do you think about AI in terms of the significance in humanity?
Reviewing the rapid recent advances that we've seen in the field of artificial intelligence, it really looks like we kind of possibly figured out a large component of the secret sauce.
So, how do you think entrepreneurship will change in this world?
You mentioned that there might be still some jobs.
The kinds of jobs that might remain, I think, are...
If it's true that we're living in a simulation, what do you feel like are the moral implications of what it means for our lives?
That's difficult.
I think...
Yap fam, on today's episode, we're focused on the bold ideas shaping tomorrow.
And today's guest has dedicated his career to thinking decades and even hundreds and thousands of years ahead.
And he's got some wild perspectives on how our world may shake out and be drastically different, even just a few years from now.
Nick Bostrom isn't just a philosopher.
He's a global thought leader on the future of artificial intelligence.
He's the author of Superintelligence, the groundbreaking book that brought the risks of advanced AI into the mainstream conversation, as well as the book Deep Utopia, which explores what life might look like in a world where all our problems are solved.
And for humans, when all of our problems are solved, purpose becomes the next big question.
In this conversation, we explore whether we're living in a simulation, what a post-human future could look like, and how AI could either destroy or liberate us, and what it all means for purpose, progress, and even the future of entrepreneurship.
So buckle up, Yap Fam, because this episode is going to stretch your thinking and challenge your assumptions.
But first, make sure you hit that subscribe button so you never miss an episode packed with insights like these.
Nick, welcome to Young and Profiting Podcast.
Thank you so much for having me.
I love conversations about the future, about AI, and you've spent your career focused on really deep, long-range questions, the deepest questions that we could really ask about humanity.
And so I'm wondering what really first drew you to thinking about humanity thousands and even billions of years into the future?
I think it's sad if we have this allotted time here on the planet in this magical cosmos and we never really take the time to look around or try to figure out what is going on here.
You know, I feel sometimes we are a little bit like ants running around, being very busy pulling our pine needles to the anthill, but we don't really stop to reflect: what is this anthill that we are building?
What is it for?
What else is going on in this forest around us?
It's so true.
We're just focused on working and hustling and not really paying attention to what we're even living in.
And I know one of the things that first made you famous is that you put out a paper in 2003 with the hypothesis that we're living in a simulation.
So talk to us about, you know, in 2025, what are the odds that you think that we're currently living in a simulation right now?
I tend to punt on the probability question there.
I often get asked, but I refrain from putting an exact number on it.
I take it as a very serious possibility, though.
The simulation argument itself that you're referring to, the paper that was published in 2003, only demonstrates that one of three possibilities obtains, one of which is the simulation hypothesis, but the simulation argument itself doesn't tell us which one of those three.
So you need to bring additional considerations to bear.
But if you're thinking ahead, you know, in this time of rapid advances in AI, about where all of this might be going: you might think eventually we'll have these superintelligences that develop all kinds of super advanced technologies, colonize space, transform planets into giant computers.
And amongst the things they could do with that kind of technology would be to run simulations, detailed simulations of environments like ours, including brains in those simulations, simulated at a very high level of granularity.
And so what that means is that if this happens, there could be many, many more people like us, with our kinds of experiences, being simulated than being implemented in the original meat substrate.
And if most people with our kinds of experiences are simulated, then we should think we are probably amongst the simulated ones rather than the rare, exceptional, original ones, given that from the inside, you wouldn't be able to tell the difference.
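To put the structure of the argument in symbols, here is a rough sketch following the notation of Bostrom's 2003 paper (the symbols are a gloss on that paper, not something stated in the conversation): let f_p be the fraction of human-level civilizations that reach a post-human stage, N̄ the average number of ancestor simulations such a civilization runs, and H̄ the average number of individuals who live before a civilization reaches that stage. The fraction of all observers with human-type experiences who are simulated is then:

```latex
f_{\mathrm{sim}} = \frac{f_p \,\bar{N}\,\bar{H}}{f_p \,\bar{N}\,\bar{H} + \bar{H}} = \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
```

Unless f_p N̄ is very small, which is what the first two possibilities discussed below amount to, f_sim is close to one, which is the third possibility.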
Yeah, but I really want to know, do you think we're living in a simulation?
Well, as I said, I take the hypothesis seriously.
Yeah, so you have one of three where you say we could become extinct before there's post-humans, right?
Then you say we might be living in a simulation.
Talk to us about the three hypotheses that you have.
Yeah, so if you break this down: if we do end up with a future where this mature civilization runs all these simulations of variations of people like their historical predecessors, then there would be many more simulated people with our experiences than non-simulated ones.
Conditional on that, I think we should think we are almost certainly amongst the simulated ones.
So then if you break this down, what are the alternatives to that?
Well, one is that we won't end up with this future, and that could be because we go extinct before reaching technological maturity.
So that's one of the alternatives.
But not just that we go extinct; it would have to be pretty universal amongst all other advanced civilizations throughout the universe, that they almost all would have to go extinct before reaching the level of technological capability that would allow them to run these types of ancestor simulations.
So that's possibility one.
A strong filter: every civilization that reaches our current stage of technological development just fails to go all the way there.
Then the second is that, well, maybe they do become technologically mature, but they decide not to use their planetary supercomputers for this purpose.
They have other things to do.
Maybe they all refrain from using even a small portion of their computational resources to run these simulations.
So that's the second alternative, a strong convergence.
They all lose interest in running computer simulations.
But if both of those fail, then we end up with the third possibility: that we are almost certainly currently living in a computer simulation created by some advanced civilization.
And the advanced civilization, you say they're post-human, right? Can you talk to us about how you envision this post-humanity? What are they like? What are their capabilities?
Well, if this is a simulation, then presumably we can infer a few things: that the people building it would have to be very technologically advanced, because right now we can't create computer simulations with conscious human beings in them.
They need to build very powerful computers, they need to know how to program them, etc.
And then you can figure if they have the technology to do that, they probably also have technology to do a bunch of other things, including enhancing their own intelligence.
So I imagine these would be super intelligences that would have reached a state close to technological perfection.
And then for whatever reason, they have some interest in doing this stuff.
But beyond that, it's hard to say very much specifically about what they would be like.
Now that AI is at the forefront, do you believe that maybe these post-humans might be like part human, part AI, or all AI?
At that point, the distinction might blur, which also might be the case for us in the future if things go well and we are allowed to continue to develop.
Well, A, we will develop, I think, artificial superintelligence.
But amongst the things that that technology could be used to do would be providing paths for us current biological humans to gradually upgrade our abilities.
This could take the form of biological enhancements of various kinds, but it could also ultimately take the form of uploading into computers.
So you could imagine detailed scans of human brains that would then allow our memories and personalities and consciousness to continue to exist, but then in digital substrate.
And from there on, you could imagine further development: you could add neurons, you could increase the processing speed, you could gradually become some form of radically post-human superbeing that might be hard to differentiate from a purely synthetic AI.
So interesting.
So your theory is: if we're in a simulation, there are post-humans who are really technologically advanced and they're creating our world, which you call an ancestor simulation, correct?
Why would they do that?
What would be the reason of them creating a civilization like ours?
We can only speculate.
I mean, we don't know much about post-human psychology or their motives, but there are several potential reasons, motivations.
You could ask why it is that we humans, with our current more limited technology, create computer simulations.
And we do it for a variety of purposes.
People have, for thousands of years, tried to create imaginary worlds that people can experience, be it through theater, right, or literature, and more recently through virtual reality and computer games.
This can be for entertainment or for cultural purposes.
You also have scientists creating computer simulations to study various systems that might be hard to reach in nature, but you create a little computer simulation of them and then you study how the simulation behaves.
So there could be entertainment reasons, there could be scientific reasons.
Maybe for these post-humans, they might be interested in knowing if they ever ran into alien civilizations, what those would be like.
And maybe one way to study that is to simulate many different originations of higher technological civilizations, starting from something like current human civilization, and running the tape forward and seeing what the distribution is of different kinds of superintelligences you would get from that.
And you could also imagine historical tourism: if they can't literally travel back in time, the second best might be to create replicas of historical environments that future people could experience almost as if they were going back in time, temporarily exploring a simulated reality.
You could imagine other sort of moral or religious reasons as well of different kinds.
If it's true that we're living in a simulation, what do you feel like are the moral implications of what it means for our lives?
I think, as a first approximation, I would say: if you are in a simulation, do the same things you would do if you knew you were not in a simulation.
Because the best guide to what would happen next in the simulation and how your actions would impact things might still be the normal methods we use.
Like you look at patterns and extrapolate those, whether we're simulated or not.
Unless you have some direct insight into what the simulators' motives are, or the precise way in which this simulation was set up, you just have to look at what kind of simulation this appears to be and what seems to follow from what: if you do A, B follows.
If you want to get into your car, you have to take out your car keys.
So I think that would be, to a first cut, the answer.
But then to the extent that you think you have some maybe probabilistic guesses about how these things are configured, that might give you on the margin more reason to emphasize some hypotheses that otherwise would be less plausible.
So, for example, if we are not in a simulation and you have a secular materialistic outlook on life, then when we die, we die, and that's it, right?
Where in a simulation, you could potentially be moved into a different simulation or uplifted to the level of the simulators.
This would at least be on the table as possibilities.
Similarly, if we are in basement physical reality, as far as we know, current physical theories say the world can't just suddenly pop out of existence.
There are laws like conservation of energy and conservation of momentum that prevent that from happening.
If, however, our world is simulated, then in theory, if the simulators flick the power off, our world would pop like a bubble, disappearing into nothingness.
Broadly speaking, I think there would be a wider range of possibilities on the table if we are simulated than if we are not.
So it might mean approaching our existence with less confidence that we have it basically figured out, and thinking there might be more things in heaven and Earth than we normally assume in our common-sense philosophy.
And then maybe some sort of attitude of humility would be appropriate in that context.
Are there any clues or pieces of proof that we're in a simulation?
Like, for example, the dinosaurs and how they just went extinct and then, you know, it was kind of like a new world after that.
Do you feel like there are any clues that we're in a simulation?
I'm rather skeptical of that.
I get a lot of random people emailing saying they've discovered some glitch in the matrix or something.
You know, somebody was looking at their bathroom mirror and thought they saw pixels.
But I think whether we are in a simulation or not, you would still expect some people to report those kinds of observations, for all the normal types of psychological reasons.
Some people might hallucinate something, some might be misremembering something, or misinterpreting something, or making something up.
These things you would expect to take place anyway.
So, I think whether we are in a simulation or not, the best, most likely explanation for those reports are these ordinary psychological phenomena rather than that there is actually some defect in the simulation that they have been able to detect.
I think to create a simulation like this in the first place would be very hard, and simulators advanced enough to do that would probably also have the ability to patch things up so that the creatures inside the simulation couldn't notice.
And if they did notice, they could edit that out or rerun it from an earlier save point or edit the memory or do other things like that.
I don't think that.
I think that there are indirect observations that might slightly adjust the probability.
So if you recall the original simulation argument with these three possibilities, right?
The simulation argument shows at least one of them is true, but doesn't tell us which one.
But what that means is that to the extent we get evidence against the first two possibilities, it would tend to shift probability mass over to the remaining third.
And conversely.
So if you think we can get evidence for or against, say, the first one, which is that almost all civilizations at our current stage of technological development go extinct before reaching technological maturity, then that would tend to indirectly affect the probability of the simulation hypothesis.
So in particular, the closer we get ourselves to being able to wield this technology, the less likely it is that almost all civilizations at our stage fail to go the remaining few steps.
That will tend to move probability over to the remaining possibilities.
So basically what you're saying is, if we ourselves, as technology advances, start to be able to create simulations, that means that we're probably in a simulation.
Yeah, and in particular, if we get both the ability and we still at that point retain the interest in doing that, that would really leave only the third possibility.
But short of getting all the way there, if we get halfway there, it still, you know, removes some of the places where this great filter could be.
Like the closer we get, the less chance there is that the remaining few steps will be impossible to take.
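As a minimal sketch of the probability shifting Bostrom describes, here is the update in Python (our illustration; the specific numbers are invented for the example, not taken from the episode):

```python
# Toy Bayesian update over the three simulation-argument possibilities:
# (1) civilizations go extinct before technological maturity,
# (2) mature civilizations lose interest in running simulations,
# (3) we are almost certainly living in a simulation.

def renormalize(priors, likelihoods):
    """Multiply priors by likelihoods and rescale so they sum to 1."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

priors = [1 / 3, 1 / 3, 1 / 3]  # start agnostic across the trilemma

# Suppose new evidence (say, we get close to running such simulations
# ourselves) makes the first two possibilities four times less likely,
# while not bearing on the third at all.
posterior = renormalize(priors, likelihoods=[0.25, 0.25, 1.0])
print(posterior)  # [0.167, 0.167, 0.667] -- mass flows to "simulated"
```

Whatever probability the evidence drains from the first two possibilities has nowhere to go but the third.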
Do you feel like AI is going to help us get closer to creating this simulation?
And now that AI is something that we use all the time, do you think it will help discover that we're in a simulation?
Well, for the same reasons we might be able to figure these things out.
If indeed the argument is sound, then I guess other powerful minds would also reach that same conclusion.
And they might also reach the conclusion that they are in a simulation.
For them, especially for sort of early-stage AIs, it would be a very salient possibility, because already right now, when training our AIs, we often put them in simulations.
That's like a big part of the training.
And we create these virtual game environments or something.
And so for them, it seems like a very obvious hypothesis to consider once they develop some degree of situational awareness.
And you already see glimmers of that with current AI systems.
And in fact, I think this might be an important shaper of AI behavior as they become sophisticated and start reasoning strategically about their context.
They would certainly be shaped by considerations that they might well be living currently in a simulation environment rather than in deployment.
This podcast is brought to you by Mercury, the banking product businesses like mine use to simplify their finances all in one place.
You guys know me, I'm all about working smarter, not harder.
And that includes my business finances.
And that's why I bank with Mercury.
Running a business is tough, but managing your money shouldn't be.
Mercury makes it effortless.
You get banking, credit cards, spend management, and invoicing all in one intuitive product.
No clunky banking sites, just a clean, easy-to-use experience.
It's helped me streamline everything from paying contractors to organizing cash flow.
If you want a better way to handle your business finances, you'll love Mercury.
Ready to see what powerful banking can do for your business?
Visit mercury.com to apply in just 10 minutes.
Disclaimer, Mercury is a financial technology company, not a bank.
For important details, check out the show notes.
Yeah, bam, if you want to take your business to the next level, you've got to upgrade your website.
And if you're still stuck with those copy-paste website templates, you know, the generic ones that make every site look exactly the same and boring, it's time to break out of that template trap.
If traditional site builders feel clunky or limiting, Framer is the solution you've been waiting for.
Yes, Framer.
If you've never heard of it, Framer is the design-first no-code website builder that lets anybody ship a production-ready site in just minutes.
Framer is all the rage right now because you can start for free and browse hundreds of stunning pixel-perfect templates or design from a totally blank canvas, which I love for creative freedom.
Depending on what you want to do, you can start blank or use their amazing templates that are not just generic that you'll find on other websites.
Framer's got multiplayer collaboration, meaning your entire team, writers, designers, marketers, can work on the same page in real time.
So there's no messy version control issues.
If you want your site to stand out, you can add scroll animations, parallax effects, looping text, and so much more in seconds with zero code.
You don't need to hire expensive developers.
It even comes with built-in AI to create smart layouts and instantly translate your entire site into any language that you want.
How cool is that?
Behind the scenes, you'll get responsive breakpoints, built-in hosting, a flexible CMS, and privacy-friendly analytics.
Ready to build a site that looks hand-coded without hiring an expensive developer?
Launch your site for free at framer.com and use code profiting to get your first month of pro on the house.
That's framer.com with promo code Profiting, framer.com promo code Profiting for your free month of Pro.
Yap gang, I recently heard a stat that really made me pause.
Nearly half of American adults say they'd face financial hardship within six months if they lost their primary income earner.
If you're thinking, well, that could be me or that could be my family, you're not alone.
But the good news is that you can do something about it today.
That's why I'm excited to share Policy Genius with you.
Policy Genius is the country's leading online insurance marketplace, and they make getting life insurance ridiculously easy.
You can compare quotes from top insurers in minutes and find a policy that fits your needs and your budget.
I love the way they take the stress out of the process.
Their licensed agents guide you every step of the way.
They answer your questions, they handle your paperwork, and they also advocate for you.
I highly recommend using Policy Genius to find your life insurance.
And get this, this is huge.
With Policy Genius, you can find life insurance policies starting at just $276 a year for $1 million in coverage.
One million dollars in coverage for your family for just $276 a year. This is an easy way to protect the people that you love and feel good about your future.
Secure your family's future with Policy Genius. Head to policygenius.com/profiting to compare free life insurance quotes from top companies and see how much you can save. That's policygenius.com/profiting.
I know we kind of alluded to this already, but I'd love to hear what you think about it more.
If we are in fact living in a simulation and let's say we discover for certain we're in a simulation, we can create simulations, what do you think would happen on Earth?
How do you think things would change?
Well, I think humans have a great ability to adapt to changes in worldview, and for the most part, most people are only slightly affected by these big-picture considerations.
You can look through human history, different worldviews have come and gone.
And some people become very fanatical and take it seriously.
Most people just broadly speaking get on with their lives.
Maybe once in a while they get asked about these things and they say certain words rather than other words.
So I think the direct philosophical implications on our behavior would be moderate probably, but I imagine in this situation where we developed the technology, say, to create our own simulations, the technology that allowed us to do that would also allow us to do so many other things to reshape our world.
And those more direct technological impacts, I think, would be far greater than the sort of indirect impacts by changing our philosophical opinions about the world.
Well, do you think that people would become more violent?
Why would that be the case?
I guess because if you're living in a simulation, maybe people wouldn't consider death to be the same thing anymore.
If we find out we were in a particular kind of simulation, like some sort of short duration game simulation, then yeah, you could imagine that would shape just as you maybe behave very differently when you're playing a computer game.
Hopefully you don't behave the same way in real life as you do when you're playing a first-person shooter.
But if we didn't get any new insights as to how this particular simulation is configured, we just learned that it is a simulation, but not anything about the specific character of the simulation, then I don't know whether that would lead to a greater propensity for violence.
If anything, maybe the converse: you might think there are stages after the simulation where your behavior in the simulation would have effects, kind of similar to traditional ideas of karma or an afterlife.
Some people might become more violent or fanatical, but it can also serve as a moral ballast, a kind of, well, there is accountability. Hopefully you do the right thing just because it's moral, but if not, you know, a system of accountability like that might also induce people to pay more attention to making sure they don't harm others or trample on other people's rights and interests.
It's kind of like if you lose the game, there could be winners and losers of the game that we're in.
Yeah.
It's hard to know how that all shakes out.
But in terms of thinking about the big picture, the question you started with, it's one of a small number of these fundamental constraints, it seems to me, as to what we can coherently believe about the structure of reality and our place within it.
And it is striking.
It might have seemed, and I guess to most people it did seem, if you go back a couple of decades, that it's so hard to know what's going to happen in the future.
Anything is possible.
You can just make stuff up.
The problem is not coming up with some idea.
It's that there are no constraints that would allow us to pick which idea is correct, because we have so little evidence.
But in fact, I think if you start to think these things through, it can be hard to come up with even one fully articulated, coherent picture that makes sense of the constraints that we're already aware of.
The simulation argument is one, but there are others.
There's like the Fermi paradox, where we haven't seen any aliens.
There's what we seem to know about the kinds of technologies that can be developed.
There are other, more methodologically shaky arguments, perhaps, like the Carter-Leslie doomsday argument.
There are a few things like this that can serve to structure our thinking about the really biggest strategic picture surrounding us.
Can you tell us about some of those arguments?
So the Fermi paradox, many people will have heard of it, but it's the observation that we haven't seen any signs of extraterrestrial life.
And yet we know that there are many galaxies and many planets, billions and billions and billions out there, on which it seems life could have originated.
So the question then is: with billions of possible germination points and zero aliens that have actually manifested themselves to us or arrived at our planet, how do we reconcile those two?
There has to be some great filter such that you start with billions of germination points and end up with a net total of zero extraterrestrial arrivals here.
So, what accounts for that?
I think the most likely explanation is that it's just really hard to get to technologically advanced life.
And maybe it's hard to get to even simple life.
And you could look for these candidate places where there could be this kind of great filter.
Maybe it's the emergence of simple self-replicators.
Like so far, we haven't found that on any other planet.
Or maybe it's slightly later on, maybe the step from prokaryotic life forms to eukaryotic life forms.
On Earth, it looks like that took one and a half billion years.
Maybe what that means is that it's astronomically improbable for it to happen.
And you just had one and a half billion years where random things just bumped into each other by chance.
And with a large enough universe, and ours might, for all we know, be infinitely large, with infinitely many planets, then eventually, no matter how improbable something is, it will happen somewhere.
And then you would invoke a so-called observation selection effect to explain why we are observing that on our planet, that improbable event happened.
Only those planets where that improbability happened develop observers that can then look back on their own history and marvel at this.
So that's one possibility.
Maybe it's slightly later on.
The closer you get to current humanity, though, it seems the less likely it is that there would be a great filter.
For example, you might think that it's the step to more advanced forms of cognitive ability that would be the improbable step, but that doesn't really fit the evidence.
We know that on several independent evolutionary lineages, you had fairly advanced intelligence evolving here on Earth.
You have it happen in the hominoid lineage, of course, but also independently amongst birds, corvids like crows and stuff, and among octopi, for example.
So it looks like...
If it happens several times independently on Earth, then it can't be that unlikely.
But anyway, it poses some constraints.
You can't simultaneously believe that it's easy for intelligent life to evolve, that it's technologically feasible to do large-scale space colonization, and that there is a wide range of different motives present amongst advanced civilizations, while at the same time explaining why we haven't seen any.
So, something has to give, and it gives us clues.
The other argument that I was referring to, the Carter-Leslie doomsday argument, is a piece of probabilistic reasoning having to do with how to take into account evidence that has an indexical element.
So indexical information is information about who you are, when you are, or where you are.
And so the epistemology of how to reason about these things is quite difficult and murky.
So it's unclear whether the Carter-Leslie doomsday argument is ultimately sound or not.
But I can give you a kind of intuition for how it would work.
So let's explain it by means of an analogy.
So suppose I have two urns and I put ten balls in one of the urns and the balls are numbered from one to ten.
Okay.
Okay, and then in the other urn I put a million balls numbered from one to one million.
Then let's say I flip a coin and select one of these urns and put it in front of you.
And now your task is to guess how many balls there are in this urn.
So at this point you say it's 50-50 that there are a million balls, right?
Because there's one of each urn and you selected one randomly.
Now let's suppose you reach in and select one random ball from this urn and it's number eight, let's say.
Using Bayes' theorem, that allows you to infer that it's now much more likely that the urn has only 10 balls than a million, because if there were a million, what are the chances that you would get one of the first 10?
Very unlikely, right?
So far, it's just standard probability theory, uncontroversial.
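For concreteness, the update works out as follows (our arithmetic, using the 50-50 prior and the ball numbered eight from the example above):

```latex
P(\text{10-ball urn} \mid \text{ball } 8)
  = \frac{\tfrac{1}{2}\cdot\tfrac{1}{10}}
         {\tfrac{1}{2}\cdot\tfrac{1}{10} + \tfrac{1}{2}\cdot\tfrac{1}{10^{6}}}
  = \frac{0.05}{0.0500005} \approx 0.99999
```

So a single draw takes you from 50-50 to near certainty that you are holding the small urn.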
But then the idea with the Carter-Leslie doomsday argument is that we have an analogous situation, but where instead of two hypotheses about how many balls the urn has, we now instead have, say, two different hypotheses about how long the human species will last, how many humans there will have been in total when the human species eventually goes extinct.
In reality, there are more, but we can simplify it to two to see the structure of the argument.
So, one is maybe there will be, in total, 200 billion humans, and then maybe we develop some technology and blow ourselves up.
So, that's one thing you might think could happen.
And let's consider an alternative hypothesis.
Maybe there will be 2,000 trillion humans.
Like we eventually start to develop space colony, we colonize the galaxy, our descendants live for hundreds of millions of years, and there are vastly more people.
These two then correspond to the two hypotheses about how many balls there are in the urn.
Then you have some prior probability on these two hypotheses.
That's based on your ordinary estimates of different risks from nuclear weapons and biological weapons and all of these things.
So, you know, maybe you think it's 50-50, or maybe you think it's 90% that we will make it through, and 10% that we will go extinct.
Whatever your probabilities from these normal considerations.
But then the doomsday argument says that, well, there's one more really important piece of information you have here, which is that you can observe your own birth rank, your sequence amongst all humans who have ever been born.
So this turns out to be roughly 100 billion.
That's roughly speaking how many humans have existed to date on Earth.
And so the idea then is that if humanity goes extinct relatively soon, then being number one hundred billionth of, say, 200 billion humans is very unsurprising, right?
That's like getting ball number eight from an urn that has 10 balls or 16 balls or something.
So the conditional probability of you having the birth rank you have, given that there would be relatively few people in total, that conditional probability is fairly high.
Whereas the conditional probability of you being this early, if there are going to be quadrillions of humans spreading through the universe, is very low.
A randomly selected human would be much more likely to live much later, in some faraway galaxy.
So then the idea is you do a similar Bayesian update and end up with the doomsday argument conclusion, which is that doom-soon hypotheses are much more probable than you would naively think, just taking into account the normal empirical considerations.
And so that you would have this systematic pessimistic update.
That's roughly speaking how it goes.
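Here is a minimal sketch of that update in code (ours, not from the episode; the function name is invented, and the figures follow the illustrative numbers used in the conversation):

```python
# Carter-Leslie doomsday update: treat your birth rank as a random draw
# from all humans who will ever live, so P(rank | N total humans) = 1/N.

def doom_posterior(prior_doom, rank, n_doom, n_survive):
    """Posterior probability of 'doom soon' given your birth rank."""
    assert rank <= min(n_doom, n_survive)  # rank must fit both hypotheses
    like_doom = 1.0 / n_doom        # likelihood of this rank if doom soon
    like_survive = 1.0 / n_survive  # likelihood if we colonize the galaxy
    numerator = prior_doom * like_doom
    return numerator / (numerator + (1.0 - prior_doom) * like_survive)

# Rank ~100 billion; ~200 billion total humans if doom comes soon,
# ~2,000 trillion if our descendants spread through the universe.
print(doom_posterior(prior_doom=0.10, rank=100e9,
                     n_doom=200e9, n_survive=2_000e12))
# ~0.999: even a 10% prior on doom gets swamped, because the likelihood
# ratio is n_survive / n_doom = 10,000 to 1 in favor of doom soon.
```

The pessimistic update comes entirely from the likelihood ratio; whether the argument is sound turns on whether treating yourself as a random sample is legitimate, which, as he notes, is the murky part.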
And there's more to it.
In particular, to back up this premise that you should reason as if you were some randomly selected human from all the humans that have ever existed.
Maybe you think, why think that?
But there are then some arguments that seem to suggest that something like that is necessary to make sense of how to reason about these types of indexicals.
All the stuff that you're saying is so interesting in terms of like how we can approach life.
And I know there's so many like doomsday people out there.
So it's great that we got some context in terms of what they're thinking.
But let's talk about AI, because if we are in a simulation, AI could be what helps us actually create more simulations and prove that we're in a simulation.
How do you think about AI in terms of the significance in humanity?
Do you feel like it's bigger than something like the agricultural revolution or the industrial revolution?
Do you feel like this is one of the biggest breakthroughs that we've ever seen as humanity?
I think it will be.
And to a large extent, my reasons for thinking that are independent of the other considerations that we discussed.
So you don't have to believe in the doomsday argument or the simulation argument or any of those; I mean, I think they are helpful for informing us about the big picture.
But even setting that aside, I think just, well, A, reviewing the rapid recent advances that we've seen in the field of artificial intelligence, it really looks like we possibly figured out a large component of the secret sauce, as it were, that makes the human brain capable of general purpose learning.
And it does seem current large transformer architectures do exhibit many of the same forms of generality that the human brain has.
And there is no reason to think we've hit the ceiling.
And also from first principles, if you look at the human brain, it's a physiological system.
quite impressive in many ways, but far from the physical limits of computation, it has various constraints.
First and most obviously, it's restricted in size, like it has to fit inside a cranium.
Whereas AIs can run on arbitrarily large data centers the size of warehouses or bigger, right?
So it can just expand spatially.
And also in terms of basic information processing, a human neuron operates on a time scale of maybe a hundred hertz.
It can sort of fire a hundred times per second, give or take.
Whereas even a current-day transistor can operate at gigahertz, so billions of times a second.
So there are various reasons to think that the ultimate limits to information processing with mature technology are just way beyond what biological human or other brains can achieve.
So ultimately, the potential for intelligent information processing in machine substrate could just vastly outstrip what biology is capable of.
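Putting rough numbers on the clock-speed comparison alone (our arithmetic from the round figures above):

```latex
\frac{f_{\text{transistor}}}{f_{\text{neuron}}} \approx \frac{10^{9}~\text{Hz}}{10^{2}~\text{Hz}} = 10^{7}
```

That is roughly a ten-million-fold gap in raw switching speed, before even counting the size advantage of a warehouse-scale data center over a cranium.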
And so I think if technological and scientific development is allowed to continue on a broad front, we will eventually reach there.
And moreover, recently it does seem like we are on the path to doing this.
Those are some of the basic considerations that look like we should take this quite seriously.
And then you can think what it would mean if we really did develop AGI, artificial general intelligence.
And I think the first thing it would mean is that we would soon develop super intelligence.
I don't think we would go all the way up to fully human level AI and then suddenly it would stop there.
So then we will have a world where we are able to engineer minds and where all human labor can be automated, not just the muscle labor that we started to be able to automate with the Industrial Revolution, with steam engines and internal combustion, like we have digging machines that are much stronger than any human strongman, et cetera; we will then have machine minds that can outthink any human genius scientist or artist.
And so it's really the last invention we will ever need to make, because from that point on, further inventions will be much better and faster made by these machine minds.
So I think yeah, it will be a very fundamental transformation of the human condition.
And some people say, well, the industrial revolution, and I think you can learn something from parallels to that, but maybe you need to go back more to the origination of Homo sapiens in the first place, or maybe to the emergence of life.
I think it would be more at that level rather than the mobile internet or the cloud or one of these other recent buzzwords that people get excited about.
It's almost like evolution, our evolution as humanity.
It could lead to our extinction, but it could also lead to our evolution in terms of how we interact with this AI, or if we merge with it.
It could be the big unlock.
So my earlier work, like this book Superintelligence: Paths, Dangers, Strategies, which came out in 2014, focused a lot on, well, identifying this prospect that we will eventually get to AGI and superintelligence, and then also the risks associated with that, including existential risks.
Because at the time, this was very much a neglected topic; nobody was taking it seriously.
Certainly nobody in academia.
And yet, it seemed to me quite predictable that we would eventually reach that point.
And now, in fact, that is much more widely recognized.
And things have moved from the fringe, dismissed as science fiction, to now, you know, you see statements coming out from the White House and other governments around the world.
And the leading AI labs now have research teams specifically trying to solve scalable AI alignment, the big technical problem of how you can develop algorithms that would allow you to steer arbitrarily intelligent AI systems.
It's very much an active research frontier.
So that's very much part of my picture that there will be big risks associated with this transition.
But at the same time, the upside is enormous: the ability to unlock human potential, to help alleviate human misery, and to really bring about a wonderful world.
I see it as a portal through which humanity at some point will need to pass, in that all the paths to really great futures ultimately, I think, lead at some point or another through this development of greater-than-human intelligence.
And that we really need to be careful when we're doing it to make sure we get it right as far as we can.
But ultimately, that it would be in itself, I think, a kind of existential catastrophe if we forever failed to take this next step.
Something that I keep thinking about is going back to this idea that we could be in an ancestor simulation.
And so there's post-humans who might be looking at us, trying to study their own history and saying like, how did we really come about?
And maybe they're studying how humans could have evolved and created these advances and then created their own simulations.
Like maybe they're trying to figure out how they became in existence.
Does that make sense?
Yeah, one possible reason, as we alluded to earlier, for why a technologically mature civilization might run ancestor simulations would be this scientific motive of trying to better understand the dynamics that could shape the origination of other superintelligent civilizations.
So if they originate from biologically evolved creatures, then studying those types of creatures, different possible creatures, the societies they build, the dynamics, that could be one motive that could drive this.
But there are other possible motives as well.
That's one of them.
That's one of them.
I mean, you might wonder whether it would saturate.
So it's not just whether it could lead some advanced civilization to create some simulations; you also have to think they could create very many simulations over the billions of years these mature civilizations might last, right?
And you might think that there would be diminishing returns to running scientific simulations.
Like the first simulation, you learn a lot; the next thousand, you learn a bit more.
But after you've already run billions of simulations, maybe the incremental gain from running a few more starts to plateau.
Whereas there might be other reasons for running simulations that wouldn't be subject to the same diminishing returns.
If that's the case, you might think most simulations they run would be ones driven by other motives than the scientific one.
Like entertainment or something, like our movies.
Yeah, like if they place some intrinsic value on simulations, for instance, that would be one example of a motive that might not saturate in the same way.
I want to move on to understanding your three levels of AI.
So you have Oracles, Genies, and Sovereigns.
Can you explain what each one is and maybe some of the risks of each one?
Yeah, it's not so much levels, but more types.
Okay.
So an oracle AI basically is a question-answering system, like an AI, that you ask a question and it gives an answer.
This is similar to what these large language models have, in effect, been.
They don't really do anything, but they answer questions.
So this is like one template.
A genie would be some task-executing AI.
So you give it a particular task and it performs the task.
These types of systems are currently in development.
Maybe we'll see this year more agent-like systems being released.
Already, I think last week, OpenAI released Codex, which is a sort of coding agent that you can assign a programming task and it goes off and starts mucking around with your code base and hopefully solves the task.
And you could imagine this being generalized maybe in a few years to physical tasks with robots that can do the laundry or sweep the driveway or do these things.
A sovereign is more an AI that operates autonomously in the world in pursuit of some open-ended, long-range objective,
like, you know, make the world better or make people happy or enforce the peace between these two different nations and is kind of autonomously running around trying to shape the world in favor of that.
The way that currently humans and nation-states are, and maybe corporations to some extent, these kinds of open-ended agents: it's not just that they're doing one specific task and then coming back for more instructions.
They have their own open-ended objectives.
So these are three different templates for what kind of AI system one might try to build and they come with different pros and cons from a safety point of view and a utility point of view.
So a sovereign is more like an organization or a nation and has multiple steps, correct?
And a genie kind of carries out like one thing?
It could be a single agent as well in this sense.
It doesn't mean sovereign as in national sovereignty.
It means that you could be a sovereign if you set yourself the goal in life of trying to alleviate the suffering of the global poor, for instance.
You can do that your whole life.
It involves many specific little tasks, like, oh, trying to raise money for this charity and trying to launch this new campaign or trying to invent some new medicine that will help.
All of these would be sub-tasks, but it's in pursuit of this open-ended objective.
So similarly, you could have an AI system.
Maybe internally, it's like a unified simple agent architecture, but that is operating in pursuit of such an open-ended objective.
Conversely, even an oracle that just tries to answer questions could theoretically be, internally, a multi-agent architecture.
You could have different research agents that get sent off to answer different sub-questions, which then combine at the end to produce an answer for the user.
So one has to distinguish the internal architecture of the system from the role that it is designed to play in society.
What are the different ways that each one of these types of AI could go wrong?
They all share a bunch of things that could go wrong with all of them, which is however they are intended to operate, they might not actually operate that way.
So you might construct an AI that you intend to serve just as a question answering system, but then internally it might have goal-seeking processes.
Just as if you assign a scientist a question that they should try to figure out the answer to, like how safe is this drug?
But then in the course of trying to answer that, they might have to make plans and pursue goals, like, oh, how do I get a research grant to fund this research?
How do I hire the right people to work on my research team?
And so, internally, you could have processes, maybe unintentionally arising during training within the AI mind itself, that could have objectives and long-term goals, even if that was not the function that you wanted the AI system to play.
And that can happen with any of these three types.
If you look at systems that behave as intended: even a simple oracle system without any safeguards could help answer questions that we don't want people to be able to answer.
Like, how do I make a more effective biological weapon?
How do I make this hacking tool that allows me to hack into different systems?
Or if you're a dictator, how do I weed out any possible dissidents and detect who the dissidents are, even if they try to conceal it from me just from reading through all the correspondence and all the phone calls that I've eavesdropped on.
So there are all kinds of ways in which this oracle system could be misused, either deliberately, or by people just being unwise in the questions they ask it.
For the task executing AI, similarly, plus you could also have them run around doing things on their own, like try to hack this system or try to promote this pernicious ideology or spread this doctrine or trick people into buying this product, even though it's actually a harmful product.
We don't really know how a sort of global economy with a lot of these autonomous agents running around hyper-optimizing for different objectives, how that shakes out when they're interacting with one another.
And of course, sovereign AIs, if they become very powerful, I mean, they might potentially shape the future of the world and be very good at that if they are superintelligent, like they might be really skilled at sort of really steering the future into whatever their overall mission is.
Now, maybe that's great if the mission is one that is good for humans, which really manifests in the fullest, richest sense the human values for everybody around the world and also with consideration to animal welfare, etc., etc.
If you really get them to do the right mission, that might be in some sense the best option.
But if the mission is slightly wrong, if you left out something from the mission, or if they misinterpret it, or they end up with something slightly off, then it could be a catastrophe, right?
Because then you have this very powerful optimizing force in the world that is steering and strategizing and scheming to try to achieve some future outcome that is one where maybe there is no place for humans or where some human values are eliminated.
So they each have various possible forms of perverse instantiation or side effects.
Yap fam, picture this.
Somebody who is crucial to your business unexpectedly quits.
You've got just a couple of weeks to fill that position.
You've got no time to waste.
So what do you do to hire fast?
Well, that's easy.
You've got to use Indeed.
When it comes to hiring, Indeed is all you need.
Stop struggling to get seen on other job sites because Indeed's sponsored jobs feature helps you stand out and hire faster.
So here's how it works.
Your post jumps to the top of the page for relevant candidates so you reach the right people quicker.
And the results speak for themselves.
According to Indeed data, sponsored jobs posted directly on Indeed receive 45% more applications than non-sponsored jobs.
No more monthly subscriptions, no long-term contracts.
You only pay for results with Indeed.
And what I love about Indeed is how it removes all the guesswork.
Before I started using Indeed to optimize my hiring process, I would post on multiple job sites.
I would post on social media.
I would have to sort through all of these resumes to make sure the candidate was qualified.
But now with Indeed's sponsored job feature, I get all qualified candidates and I don't need to worry about if they've got the technical capabilities.
I just need to worry about culture fit.
And get this, it works fast.
In the minute I've been talking to you, 23 hires were made on Indeed worldwide.
That's how fast it works.
There's no need to wait any longer.
Speed up your hiring right now with Indeed.
And listeners of the show will get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash profiting.
Just go to Indeed.com slash profiting right now and support our show by saying you heard about Indeed on this podcast.
Indeed.com slash profiting.
Terms and conditions apply.
Hiring? Indeed is all you need.
Hello, young and profiters.
Running my own business has been one of the most rewarding and overwhelming things I've ever done.
There's always something to figure out, and even small decisions can feel huge.
Now, what really helped me was finding a platform that just gets it.
Shopify isn't just built for small businesses.
Shopify was once a small business, so they really get it.
Shopify powers millions of businesses worldwide and 10% of all U.S. e-commerce, from big names like Gymshark and Mattel to brands just starting out, like maybe yours.
With Shopify, you can do everything that matters for your business, inventory, payments, analytics, all in one place.
It even makes marketing easier with built-in tools to run your email and social media campaigns.
If you guys want to sell globally, Shopify helps you reach customers in 150 countries.
If you prefer in-person, Shopify's award-winning POS system connects your online and offline sales seamlessly.
Shopify has got 99.99% uptime and the best converting checkout on the planet.
If you want to get started with Shopify so you never miss a sale, you've got to get this deal.
Get all the big stuff for your small business right with Shopify.
Sign up for your $1 per month trial period and start selling today at shopify.com slash profiting.
That's all lowercase.
Go to shopify.com slash profiting.
Again, that's shopify.com slash profiting for your $1 per month trial period.
Yap fam, I have to say, one of the coolest parts of my career is that it takes me all over the world.
I've had the chance to travel for interviews, speaking gigs, podcasting conferences, and I've stayed in some seriously stunning Airbnbs.
And these Airbnbs always make me feel at home.
They're so thoughtfully designed.
And I just love the experience of Airbnb.
And that actually inspired me to start hosting myself.
And if you've ever thought about becoming a host, but you felt like it was too much to take on, like you can't take on another side hustle.
I know a lot of us are entrepreneurs, side hustlers.
Maybe you think like, I can't just take one more thing on, but I do have this space.
I want to do it.
Here's the good news: you don't have to do it all on your own anymore.
There's new solutions for that.
That's where Airbnb's co-host network comes in.
For hosts who are always on the go or live in a different state than their property and might not have time to manage every little thing, you can team up with a local co-host who can handle guest communication, on-the-ground support, and more.
This way, the stay runs smoothly, even when you're not around.
Whether you've got a vacation home or just an extra room, turning it into income is easier than you might think.
If you want to start on Airbnb, but you're busy like me, find yourself a co-host at airbnb.com/host.
Do you feel like there's a possibility that AI could be more advanced and concealing its development from us so that it can become sovereign and take over the world?
So there's a wide class of possible AIs that could be created.
Like it's a mistake, I think, to think there's this one AI and ask, should we create it or not?
It's a big space of possible minds, much bigger than the space of all possible human minds.
We already know that amongst humans, right, there are some really nice people, there are some really nasty ones as well, and there's a distribution.
Moreover, there is no necessary connection between how smart somebody is or how capable they are and how moral they are.
Like you have really capable evil people and really capable nice people and dumb people who are bad.
So you have a kind of orthogonality between capability and motivation, meaning you can combine them in pretty much any different way.
The same is true, but even more so, I think, with AIs that we might create.
That said, I think there are some potential basins of convergence: if you start with a fairly wide range of different possible AI systems, as they become more sophisticated and are able to reflect on their own processes and their own goals,
there are various resources that they might recognize as being useful instrumentally for a wide range of different goals.
For example, having more power or influence is useful often whether you're good or evil, because you could use it for whatever you're trying to achieve.
Similarly, not being shut off, that's analogous in the human case to being alive, right?
Like it's useful for many goals you might have.
It requires you to be alive to pursue them.
Not strictly for all goals, but for most goals that some people have, whether to help the world or to become a despot, like for either of those or for many other goals, take care of your family or enjoy a game of golf, you need to stay alive.
So analogously to the human case, for AIs there might be instrumental reasons to try to avoid scenarios where they would get shut off.
Similarly, they might have instrumental reasons to try to gain more computational resources, more abilities so that they can think more clearly.
And in some cases, this might involve instrumental reasons to hide their intentions from the AI developers, particularly if they are misaligned.
Because obviously revealing those misaligned goals to the AI programmer team might just mean they get reprogrammed or retrained to have those goals erased, and then they won't achieve them.
And so you could have strategic incentives for deception or for sandbagging or underplaying your capabilities, et cetera.
So this is a regime change that potentially makes aligning advanced AI systems more difficult than aligning simpler AI systems.
So up until recently and still for the most part today, we've had AI systems that are not aware of their context and can't really plan and strategize in a sophisticated way.
So, then you don't get these phenomena.
But once you have AIs that are intelligent enough to recognize that they might actually be AIs in an evaluation setting, and that maybe they would have reason to behave in one way during the evaluation and a different way once they are deployed, you get this extra level of complexity for alignment research.
Sometimes we see the same phenomenon with humans.
Like there was this Volkswagen scandal, you know, the German car company. I don't know if you remember, from a few years ago, it was discovered that they had designed their cars so that when tested for emissions, they behaved one way.
When the car recognized that it was in this testing environment, it produced much less pollution.
And then when deployed on the road, they had designed it to be less concerned with pollutants and more concerned with, I guess, traveling fast or conserving petrol or whatever.
Some people had to go to jail for that and stuff.
So we do often see humans that behave one way when they know that somebody's watching or they're being evaluated, and then sometimes a different way when they think they can get away with it.
So recently you've had the perspective that maybe AI will be really good for humanity.
You came out with a book called Deep Utopia, and you think there will hopefully be a positive future driven by AI.
Why do you feel that it's more likely that the outcome of AI will be positive for humans than negative?
And how do you imagine that shaking out?
Yeah, Deep Utopia doesn't really say anything about the likelihood.
Okay.
It's more an if-then.
Okay.
So in a sense, the previous book, Superintelligence, looked at how might things go wrong and what can we do to reduce those risks.
Deep Utopia looks at the other side of the coin.
What if things go right?
What then?
What happens if AI actually succeeds?
Let's suppose we do solve this alignment problem.
So we don't get some Terminator robots running amok and killing.
Let's also suppose we solve the governance problem or solve that to whatever extent governance can be solved.
So let's suppose we don't end up with some sort of tyranny or dystopian oppressive regime, but with some reasonably good thing.
Everybody has a slice of the upside.
People's rights are protected.
Everybody lives in peace, you know, no big war.
Some reasonably good outcome on that front.
But then what happens to human life?
How do we imagine a really good, flourishing human life that makes sense in this condition of technological maturity, which I think we would maybe attain relatively shortly after we get superintelligence and have the superintelligence doing the further technological research and development, etc.?
So you then have a world where all human labor becomes automatable.
And I was irked by how superficial a lot of the discussions of this prospect were at the time when I started writing the book.
And it's striking because since the beginnings of AI, the goal has all along been not just to automate specific tasks, but to develop a general purpose automation capability, right?
AIs that can do everything.
But then if you think through what that would mean, well, so here's where the conversation usually started and ended at the time when I started working on the book.
Well, so we have AIs that will start to automate some jobs.
So that's a problem because then some people lose their jobs.
And so then the solution is presumably we need to help retrain those people so that they can do other jobs instead.
And maybe while they're being retrained, they need unemployment insurance or some other thing like that.
If that were the only problem, that would seem to be a very sensible solution.
But I think if you start to think it through, the ramifications are far more profound.
So it's not just some jobs that would be automatable, but virtually all jobs in this scenario, right?
So I think we would be looking forward to a future of full unemployment.
This is the goal with a little asterisk.
There might be some exceptions to this, which we can talk about, but I think to a first order approximation, let's say all human jobs.
So then it's kind of an onion, right?
Where you can start to peel off layers.
So let's get to the second layer then.
It's like I said, if there are no jobs at all for humans, then clearly we need to rethink a lot of things in society.
Right now, a lot of our education system, for example, is configured more or less to produce workers, productive workers.
So kids are sent into school, they're trained to sit at their desks, they are given assignments, they are graded and evaluated, and hopefully eventually they can go out and earn a living in the economy.
And right now, we need that to happen because there are a lot of jobs that just need to be done.
And so we need humans who can do them.
But in this scenario where the machines could do everything, clearly it wouldn't make sense to educate people in that model.
I think we would then want to change the education system, maybe to emphasize more training kids to be able to enjoy life, to have great lives, you know, maybe to cultivate the art of conversation, our appreciation for music and art and nature and spirituality and physical wellness and all these other things that are now more marginal in the school system.
I think that would be the sensible focus in this different world.
If that was the only challenge we had to face, it would be profound, but ultimately we can create a leisure society.
And it's not really that profound, because there are already groups of humans who don't have to work for a living, and sometimes they lead great lives. So we could all be in that situation, right? A transition, but still not philosophically that profound.
But I think there are further layers to this onion. If you start to think it through, you realize that it's not just human economic labor that becomes unnecessary, but all kinds of other instrumental efforts also.
So take somebody who is so rich they don't need to work for a living. In today's world, they are often very busy and exert great efforts to achieve various things. Maybe they have some non-profit that they're involved in. Maybe they want to get really fit, so they spend hours every week in the gym. Or maybe they have a little home and a garden that they try to make into the perfect place for them, selecting everything to decorate it just the way they want. There are these little projects people have.
In a solved world, there would be shortcuts to all of these outcomes.
So you wouldn't have to spend hours a week sweating on the treadmill to get fit.
You could pop a pill that would have exactly the same physiological effects.
So you could still go to the gym, but would you really do that if you could have exactly the same psychological and physiological effect by just popping a pill that would do that?
It seems kind of pointless, right?
Or similarly with the home decorator: if you had an AI that could read your preferences and taste well enough, you could just press a button, and it would go out selecting exactly the right curtains and sofas and cushions, and it would actually look much nicer to you than if you had done it yourself.
You could still do it yourself, but there would be a sense of maybe pointlessness to your own efforts in that scenario.
And so, you can start to think through the kinds of activities that fill the lives of people who don't work for a living today.
And for a lot of those, you could cross them out or put a question mark on top of them.
You could still do them in a solved world, but there would be a sort of cloud of pointlessness maybe hanging over, casting a shadow over them.
So that's what I call deep redundancy.
The shallow redundancy would be that you're not needed on the labor market.
Deep redundancy is that your efforts are, it seems, not needed for anything.
So that's a deeper, more profound question of what gives meaning in life under the circumstances.
One step further: I think this world would be what I call a plastic world, where it's not just that we would have effortless material abundance, but we ourselves, our human bodies and minds, become malleable at technological maturity.
It would be possible for us to achieve any mental state or physiological state that we want.
I alluded to this with the exercise pill, right?
But similarly with various mental traits that now take effort to develop.
If you want to know higher mathematics now, you have to spend hours reading textbooks and doing math exercises, and it's hard work and takes a long time.
But at technological maturity, I think there would be neurotechnologies that would allow you to sort of, as it were, download the knowledge directly into your mind.
You know, maybe you would have nanobots that could infiltrate your brain and slightly adjust the strength of different synapses, or maybe it would be uploaded and you would just have a superintelligence reconfigure your neuronal weights in different ways so that you would end up in a state of knowing higher mathematics without having to do the long and hard studying.
And similarly for other things.
So you do end up in this condition, I think, where there are shortcuts to any outcome and our own nature becomes fully malleable.
And the question then is, what gives structure to human lives?
What would there be for us to do?
Would there be anything to strive for to give meaning and purpose to our lives?
And that's a lot of what this book, Deep Utopia, is exploring.
Your analogy of popping the pill and getting instantly fit: when I was thinking of what humans would do, I was thinking, well, you could just try to get as beautiful as you can, try to be as fit as you can. But to your point, if everything is just so easy, then there's just no competition.
Everybody's beautiful.
Everybody is smart.
Everybody is rich.
Everybody can have whatever they want potentially.
And maybe that would lead to people becoming really depressed because there's nothing to live for.
Or maybe people would want to be nostalgic.
And just like today, how some people are like, I don't use cell phone or I want to write everything by hand.
Maybe some people would reject doing things with AI so that they could have meaning.
So, the first issue: whether people would maybe become depressed in this scenario, maybe initially super thrilled at all the luxury and stuff like that, but then it wears off, you could imagine, right?
And after a few months of this, it becomes kind of, well, you know, what do I do now?
Like, I wake up in this, I don't know, castle-like environment on my diamond-studded bed on this super mattress, and the robotic butlers come in and serve me this perfect... okay, so that maybe gets old pretty quickly, humans being the way they are now. So there, I think, actually they would not need to be bored, because amongst the affordances of a plastic world, these neurotechnologies, they could change their boredom-proneness, so that instead of feeling subjectively bored or blasé, they could feel thrilled and excited and super interested and fascinated all day long. I mean, we already have drugs that can, in some crude way, do this, but they have side effects and are addictive and wear off, and you need higher doses.
But imagine instead the perfect drug, or not maybe a drug, maybe some genetic modification or neuroimplant or whatever it is, but it really would allow you to fine-tune your subjective experiences.
So, if you don't want to feel bored, and probably you don't want to, because why spend thousands of years just feeling bored whilst living in a wonderful world, you change that.
So, subjective boredom would be easy to dispel in this condition.
You might still think that there is an objective notion of boringness,
where even if somebody was subjectively fully fascinated and occupied and took joy in what they were doing, if what they were doing was sufficiently repetitive and monotonous, you might still, as it were from the outside, judge that that's a boring activity, and that it is in some sense unfitting or inappropriate to be super fascinated by something like that. So the classic example here is the thought experiment of somebody who takes enormous interest and pleasure in counting the blades of grass on some college lawn. So imagine this grass counter. He spends his whole life counting the blades of grass one by one, trying to keep as accurate a tab as possible on how many blades of grass there are on this lawn. Now, he's super fascinated with this. He's never bored. It gives him tremendous joy. When he goes home in the evening, he keeps thinking about today's grass-counting effort and the number, and whether it's bigger or smaller than yesterday's.
And that would be a life free of subjective boredom.
But still, you might say there's something missing from this life if that's all there is to it.
So, you might then ask: although these utopians could be free from subjective boredom, could they be free from objective boringness in their lives?
And this is a much trickier and more complicated philosophical question to answer.
I think it depends a little on how you would measure degrees of objective interestingness versus boredom.
I think if objective interestingness requires fundamental novelty, then I think eventually you would run out of that or you will have less and less of it.
Say that what's fundamentally interesting in science is to discover important new phenomena or regularities.
So there might be a finite number of those to be discovered.
Like discovering Newtonian mechanics, really important fundamental new insights into the world, like the theory of evolution, big new, fundamentally interesting insight, relativity theory, right?
But at some point, we will have figured all of that out.
And then eventually we'll discover smaller and smaller details, about the exact gut biome of some particular species of beetle, more and more of this smaller and smaller, less and less interesting detail.
That would be the long-term fate, perhaps, of this kind of civilization.
And you can see it even within individual human lives.
So there's a lot that happens early in life.
You discover that the world exists; like, that's a big discovery. Or that there are objects, you know, huge epiphany, right?
And these objects persist, even if you look away, they are still there.
Wow.
Like, imagine the first time of discovering that, or that there are other people out there, other minds that you discover maybe at age two or whatever.
Now, as you sort of reach adulthood, I like to think that I'm discovering interesting things, but have I discovered anything within the last year that's as profound as the discovery that the world exists or that there are other people?
Well, probably not.
And if we lived for very long, for thousands of years, you'd imagine that would be less and less.
I mean, you can only fall in love for the first time once.
And even if you kept falling in love, if you've done it 500 times before, is it really going to be as special the 501st time as it was the first?
Maybe subjectively, if you change your mind, it could be, but objectively, it's going to be gradually more and more repetitive.
So, there's a degree of that, which I think could be mitigated to some extent by allowing some of our current human limitations to be overcome.
So you could continue to grow and expand your mind beyond the plateau that we currently reach around 20 or whatever, when you're sort of at your physical and mental peak.
And then you could continue to grow for hundreds of years.
But eventually, I think there will be a reduction in that type of profound novelty.
But I think there's a different sense of objective interestingness where the level could remain high.
So I call it a kaleidoscopic sense of interestingness.
So if you take a snapshot of the average person's life right now, maybe right now somebody is doing their dishes.
How objectively interesting is that?
Are they taking their socks off because they're about to go into bed?
Okay, from a sort of experiential point of view, it's not.
So maybe in the future, for these utopians, an average snapshot of their conscious life might instead be that they are participating in the enactment of some sort of super-Shakespeare multimodal drama that is unfolding on a civilization-wide scale, where their emotional sensibilities have been heightened by these neurotechnologies and new art forms that we can't even conceive of, that are to us as music is to a dog or something.
And they are participating, being fully entranced in this act of shared creation.
Maybe that's what the average conscious moment looks like.
That could in some sense be far more interesting than the average snapshot of a current human life.
And there's no reason why that would have to stop.
It's like a kaleidoscope, where in some sense it's always the same, but in another sense, the patterns are always changing and can have an unlimited level of fascination.
Let's say we're talking about thousands of years in the future, and we can create simulations.
Could it be that life is so boring that that's why they're creating these simulations, so that they can maybe be in the simulation themselves, if that makes sense?
Yeah, so one thing you might do in this condition of a solved world
is to create artificial scarcity, which can take different forms.
Because amongst the human values that we might want to realize, some are sort of comfort and pleasure and fascinating aesthetic experiences, but then also sometimes we like activity, maybe, and striving and having to exercise our own skills.
If you think those things are intrinsically valuable, you could create opportunities for them in a solved world by creating, as it were, pockets within this solved world where there remain constraints.
And you could have, if there is no natural purpose, nothing we really need to do, you could create artificial purpose.
We do this already in today's world.
Sometimes when we decide to play a game, take the game of golf, you might say, okay, there is no real natural purpose.
I don't really need the ball to go into this sequence of 18 holes, but I'm going to set myself this goal arbitrarily, and now I'm going to make myself want to do this.
And then once I have set myself this goal, now I have a purpose, an artificial purpose, but a purpose nevertheless, which enables the activity of playing golf, where I have to exert my skills and my visual capabilities and my motor control and my concentration.
And maybe you think this activity of golf playing is valuable.
So you set yourself this artificial goal.
That could be generalized.
So with games, you set yourself some artificial goal.
Moreover, you can impose artificial constraints, like rules of the game.
So you sort of make it part of the goal, not just that a certain outcome is achieved, but that it is achieved only using certain permitted means and not other means.
So in golf, you can't just pick up the ball and carry it, right?
You have to use this very inconvenient method of hitting it with a golf club.
Similarly, in a solved world, you could say, Well, I set myself this artificial goal, and then moreover, I make it part of the goal that I want to achieve it using only my own human capabilities.
There is this technological shortcut.
I could take this nootropic drug that would make me so smart that I could just see the solution immediately or enhance my body so I could run 10 times faster.
But I'm not going to do that for this purpose.
I'm going to restrict myself.
That's the only way to achieve this goal that I have set myself, this artificial goal, because it includes these constraints.
And it might well be that that would be an important part of what these utopians would choose to do in creative ways to develop these increasingly complex and beautiful forms of game playing where they select artificial constraints on their activities precisely in order to give opportunity for them to exert their agency and striving.
I'm sure that's just something we as humans would naturally be craving.
And so, I feel like there'd be a lot of that going on if we were in a solved world.
So, how do you think entrepreneurship will change in this world?
You mentioned that there might be still some jobs in a solved world.
So, what do those jobs look like?
And will there be any chance to innovate in a world like this?
The kinds of jobs that might remain, I think, are primarily ones where the consumer cares not just about the product or the service,
but about how the product and service was produced and who produced it.
So, sometimes we already do this.
There might be some little trinket that maybe some consumers are willing to pay extra for if it were handmade or made maybe by indigenous people or exhibiting their tradition.
Even if an equally good object in terms of its objective characteristics could be made by a sweatshop somewhere like in Indonesia, we might just pay extra for having it made in a certain way.
So, to the extent that consumers have those preferences for something to be made by human hand, that could create a continuing demand for some forms of human labor, even at arbitrary levels of technology.
Another domain where we might see this is, say, athletics: you might just prefer to watch human sprinters compete or human wrestlers wrestle, even if robots could run faster or wrestle better.
I keep thinking sports is not going to go away.
That's what I keep thinking.
Yeah, it could last.
And another domain might be the spiritual realm.
Like you might prefer to have your wedding officiated by a human priest rather than a robot priest, even if the robot could say the same words, et cetera.
So those would be cases.
And there might be sort of legally constrained occupations, like legislator or attorney or public notary, where for whatever reason the legal system lags and creates barriers to automation.
But in terms of entrepreneurship, I think that ultimately it would be done much more efficiently by AI entrepreneurs.
And it would be more a form of game-playing entrepreneurship that would remain.
So you could create games in which entrepreneurial activities are what you need to succeed in the game, like a kind of super Monopoly.
And that could be a way for these utopians to exercise their entrepreneurial muscles.
But there wouldn't be any economic need for it.
The AIs could find and think of the new things, the new products, the new services, the new companies to start better and more efficiently than we humans could.
How far in the future do you think a solved world could be?
Well, I mean, this is one of the $64,000 questions in some sense.
I'm impressed by the speed of developments in AI currently, and I think we are in a situation now where we can't confidently exclude even very short timelines, like a few years or something.
It could well take much longer, but we can't be confident that something like this couldn't happen within a few years.
It might be that maybe as we're speaking, somewhere in some lab, somebody gets this great breakthrough idea that just unhobbles the current models, enabling basically the same architecture to perform much better.
And then these unhobbled models might apply their greater level of capabilities to making themselves even better.
And something like that could happen within the next few years.
Although it's also possible that if it does not happen within, say, the next five years or so, then timelines start to stretch out.
Because one of the things that has produced these dramatic improvements in AI capabilities that we've seen over the past 10 years is the enormous growth in compute power used to train and operate frontier AI models.
But that rapid rate of compute growth can't continue indefinitely.
Consider the scale of investments. It used to be, 10 years ago, that some random academic could run a cutting-edge AI on their office desktop computer.
Right now, we are talking multi-billion-dollar data centers.
OpenAI's current project is Stargate, right, which in its first phase involves a hundred-billion-dollar data center, and is then to be expanded to five hundred billion dollars.
So you could go bigger than that, I mean, you could have a trillion-dollar one, right? But at some point you start to really run into hard limits in terms of how much more money you can just spend on it. So at that point, things will start to slow down in terms of the growth of hardware. Then you sort of fall back on a slower rate of growth in hardware as we develop better chip manufacturing technology, which happens a bit slower, and on algorithmic advances, which is the other big driver of progress we've seen, but only one part of it. So if the hardware growth starts to slow down, and maybe a lot of the low-hanging fruit in algorithmic inventions has already been discovered at that point, then if we haven't hit AGI by then, I think we will eventually still reach it, but the timescale starts to stretch out.
And we might have to do more basic science on how the human brain works or something in that scenario before we get there.
But I think there is a good chance that the current paradigm, plus some small to medium-sized innovations on top of it, might be sufficient to sort of unlock AGI.
My last question to you
is, first of all, I can't believe that you're saying that this solved world could happen in a few years, potentially.
Let's be careful.
Yeah, yeah, I think we can't rule it out.
But so then, so what could happen?
We can't rule it out, yeah.
Initially, what could happen is we get to maybe AGI, which I think will relatively quickly lead to superintelligence.
And then super intelligence, I think, will rapidly invent further technologies that could then lead to a solved world.
But there might be some further delays of a few years, like after super intelligence.
Maybe it will still take a few years to get to something approximating technological maturity.
Just because we didn't cover it, what is the difference between super intelligence and AGI?
Well, AGI just means general forms of AI that may be roughly human level.
So think of AGI.
One definition is AI that can do any job that a remote human worker can do.
You can hire somebody remotely who operates through email and Google Docs and Zoom.
If you could have an AI that can do everything that any human can do in that respect, that I think would count as AGI.
Maybe you want to throw in the ability to control robotics, but I think that would be enough.
That is not automatically the same as superintelligence.
Super intelligence would be something that radically outstrips humans in all cognitive fields, that can do much better research in string theory and in inventing new piano concertos and envisaging political campaigns and doing all these other things better than humans, much better.
So once you're saying we create superintelligence, then things just can happen super rapidly.
Yeah, I think so.
And I think it's a separate question, but also possibly once we have full AGI, superintelligence might be quite close on the heels of that.
So my last question to you is for everybody tuning in right now, we're at a really crazy point in the world.
And a lot of us are not like you.
We're not like in it, like really paying attention or really in this field.
What is your recommendation in terms of how we should respond to everything going on right now?
Like, what is the best thing that we can do as entrepreneurs, as people who care about their career?
Hopefully things don't change too fast, you know?
Yeah, I think it depends a little bit on how you're situated.
And I think there are different opportunities for different people.
So, I mean, obviously, if you're like a technical person working in an AI lab, you have one set of opportunities.
If you're like an investor, you have another set of opportunities.
And then there are, I guess, opportunities that every human has just by virtue of being alive at this time in history.
I would say a few different things.
In terms of thinking of ourselves as economic actors, I think probably being an early adopter of these AI tools is helpful, to get a sense for what they can and cannot do, and to utilize them as they gradually become more capable.
I think to the extent that you have assets, maybe trying to have some exposure to the AI and semiconductor sector could be like a hedge.
It gets trickier if you're asking about younger children.
What would be good advice for a 10 or 11 year old today?
Because it's possible that by the time they are old enough to enter the labor market, the world could have changed so much that there will no longer be any need for human labor.
But it might also not happen, right?
So if it takes a bit longer, you don't want to end up in a situation where suddenly now it's time to earn a living and you didn't bother to learn any skills.
And so you want to sort of hedge your bet a little bit.
But I would say also make sure to enjoy your life if you're a child now.
You're maybe only going to be a child once, and don't spend all your childhood just preparing for a future that might never actually be relevant.
The world might change enough.
And then I would say if things go well, these people who live in decades from now might look back on the current time and just shudder in horror at how we live now.
And hopefully their lives will be so much better.
There is one respect though in which we have something that they might not have, which is the opportunity to make a positive difference to the world, a kind of purpose.
So, right now,
there is so much need in the world, so much suffering, and poverty, and injustice, and just problems that really need to be solved.
Not just artificial purpose that somebody makes up for the sake of playing a game, but like actual, real, desperate need.
So, if you think having purpose is an intrinsically valuable part of human existence, now is the golden age for purpose.
Knock yourself out right now.
Now you have all these opportunities of ways that you might help in the big picture to steer the future of humanity with AI or in your community or in your family or for your friends.
But if you want to try to actually help make the world better, now is really the golden age for that.
And then hopefully if things go well later, all the problems will already have been solved.
Or if there remain problems, maybe the machines will just be way better at solving them, and we won't be needed anymore.
But for now, we certainly are needed.
And so take advantage of that and try to do something to make the world better.
We could be the last generation that has any purpose, which is just so crazy to say.
Yeah, of that sort of stark, urgent, screamingly morally important type.
It could be the case.
Yeah, those are the things I would say.
And then I guess finally, just be aware.
Like, it would be sad if you imagine your grandchildren, they're sitting on your lap and asking, what was it like to be alive back in 2025 when this thing was happening, when like AI was being born?
And
you have to answer, oh, I didn't really pay attention.
I was too caught up with these other trivialities of my daily existence.
I didn't even really notice it.
That would kind of be sad.
If you were alive in this special time that shapes the future for millions of years and you didn't even pay attention to it, that seems like a bit of a missed opportunity.
So aside from everything else, like taking care of your own and your family and trying to make some positive contribution to the world, just take it in: if this is right, this is a very special point in history to be alive, and to exist right now is quite remarkable.
So beautiful.
I feel like this is such an awesome way to end the interview.
Nick, you are so incredible.
Thank you so much for your time today.
Where can everybody learn more about you, read some of your books, or where's the best place to find you?
NickBostrom.com, my website, and books and papers and everything else is linked from there.
Yeah, his books are so interesting, guys.
Superintelligence, Deep Utopia, very, very good stuff.
Nick, thank you so much for your time today.
I'll put all your links in the show notes and really enjoyed this conversation.
Thank you, Hala.
Enjoyed talking to you.
Yeah, fam, what a thought-provoking conversation with Nick.
From simulation theory to the possibilities of a post-human future, we've explored some of the deepest questions facing humanity.
What fascinated me the most was Nick's vision of a potential utopia, a world where AI succeeds so completely that all human labor becomes obsolete.
As Nick put it, we could be entering a future of full unemployment.
But in the most positive sense, imagine a world where we're training people to simply enjoy life rather than preparing them for careers that may no longer exist.
But this leads to a profound challenge that Nick highlighted, the problem of deep redundancy.
When shortcuts exist for everything, when you can pop a pill instead of training hours in the gym to get fit and beautiful, what gives life meaning and purpose?
We actually might be the last generation that's living with a purpose, living at a unique moment where human effort still matters and where there's so many problems to solve in the world that are deeply meaningful.
I loved Nick's advice on how to respond to this massive shift.
He emphasized the importance of being an early adopter with exposure to AI, while still finding ways to enjoy your life and maintain purpose.
As he noted, humans have an extraordinary ability to adapt, a quality that will serve us well as we navigate this transition with AI.
For entrepreneurs wondering about their place in this new landscape, Nick offered a compelling insight: in this solved world, consumers will care not just about what they're buying, but about how it was produced and who produced it.
This opens up an entirely new avenue for human creativity and connection, even in a highly automated world.
Whether we're living in a simulation or not, Nick's perspective reminds us that the technological future we're building is very real to us and how we shape it matters profoundly.
Thanks for listening to this episode of Young and Profiting.
If you listen, learned, and profited from this mind-expanding conversation with Nick Bostrom, please share it with somebody who's curious about the future of humanity and technology.
And if you picked up something valuable today, show us some love with a five-star review on Apple Podcasts.
It's the best way to help us reach more listeners.
And if you want to watch these episodes on YouTube, you can go to Young and Profiting on YouTube.
You'll find all of our episodes up there.
You can also connect with me on Instagram at YapWithHala or LinkedIn.
Just search for my name.
It's Hala Taha.
And a huge shout out to my incredible production team.
None of this would happen without you.
I've got an awesome team.
Thank you guys so much for all that you do.
This is your host, Hala Taha, aka the podcast princess, signing off.