Ex-Google Exec (Mo Gawdat) on AI: The Next 15 Years Will Be Hell Before We Get To Heaven… And Only These 5 Jobs Will Remain!

2h 35m
Mo Gawdat sounded the alarm on AI, and now he’s back with an even bigger warning: AI will cause global collapse, destroy jobs, and launch us into a 15-year dystopia that will change everything. Mo Gawdat is back!

Mo Gawdat is the former Chief Business Officer at Google X and one of the world’s leading voices on AI, happiness, and the future of humanity. In 2017, he launched ‘One Billion Happy’, a global campaign to teach 1 billion people how to become happier using science and emotional tools. He is also the bestselling author of books such as ‘Scary Smart’ and ‘Solve for Happy’.

He explains:

Why we need to start preparing today for AI

How all jobs will be gone by 2037

Why we must replace world leaders with AI

How AI will destroy capitalism

The one belief system that could save humanity from dystopia

00:00 Intro

02:28 Where Is AI Heading?

05:14 What Will the Dystopia Look Like?

11:24 Our Freedom Will Be Restricted

19:29 Job Displacement Due to AI

28:25 The AI Monopoly and Self-Evolving Systems

35:23 Sam Altman's OpenAI Letter

39:47 Do AI Companies Have Society's Interest at Heart?

53:21 Will New Jobs Be Created?

01:01:41 What Do We Do in This New World?

01:03:25 Ads

01:04:30 Will We Prefer AI Over Humans in Certain Jobs?

01:08:23 From Augmented Intelligence to AI Replacement

01:17:46 A Society Where No One Works?

01:26:48 If Jobs No Longer Exist, What Will We Do?

01:36:47 Ads

01:38:50 The Abundance Utopia

01:41:02 AI Ruling the World

01:54:36 Everything Will Be Free

01:57:30 Do We Live in a Virtual Headset?

02:14:13 We Need Rules Around AI

02:25:15 I Follow the Fruit Salad Religion

Follow Mo:

Instagram - https://bit.ly/4l8WAHI

X - https://bit.ly/4lSZf9F

YouTube - https://bit.ly/4fhBzcL

Website - https://bit.ly/3IWN1hI

Substack - https://bit.ly/4oiw1Td

Emma Love Matchmaking - https://bit.ly/4ogku75

You can purchase Mo’s book, ‘Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World’, here: https://amzn.to/4mkP1i2

The Diary Of A CEO:

⬜️Join DOAC circle here - https://doaccircle.com/

⬜️Buy The Diary Of A CEO book here - https://smarturl.it/DOACbook

⬜️The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt

⬜️The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb

⬜️Get email updates - https://bit.ly/diary-of-a-ceo-yt

⬜️Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb

Sponsors:

Linkedin Ads - https://www.linkedin.com/DIARY

Replit - http://replit.com with code STEVEN

Learn more about your ad choices. Visit megaphone.fm/adchoices


Transcript

Speaker 1 The first impression of your workplace shouldn't be a clipboard at reception. Sign in App turns check-ins into a moment of confidence for your team and your guests.

Speaker 1 Visitors, contractors, and staff can sign in by scanning a QR code, tapping a badge, or using an iPad in seconds.

Speaker 1 We handle the security, compliance, and record keeping behind the scenes so you can focus on people, not paperwork. Enhance security without compromising visitor experience.

Speaker 1 Find out more at signinapp.com. That's signinapp.com.

Speaker 2 The only way for us to get to a better place and succeed as a species is for the evil people at the top to be replaced with AI.

Speaker 2 I mean, think about it. AI will not want to destroy ecosystems.
It will not want to kill a million people.

Speaker 2 They'll not make us hate each other like the current leaders because that's a waste of energy, explosives, money, and people. But the problem is, super intelligent AI is reporting to stupid leaders.

Speaker 2 And that's why, in the next 15 years, we are going to hit a short-term dystopia. There's no escaping that.

Speaker 3 Having AI leaders,

Speaker 4 And the former chief business officer at Google X is now one of the most urgent voices in AI with a very clear message.

Speaker 2 AI isn't your enemy, but it could be your savior.

Speaker 5 I love you so much, man. You're such a good friend.

Speaker 2 But you don't have many years to live, not in this world. Everything's gonna change.
Economics are gonna change. Human connection is gonna change.
And lots of jobs will be lost, including podcasters.

Speaker 3 No, no, thank you for coming on today, Mo.

Speaker 2 But the truth is it could be the best world ever. The society completely full of laughter and joy.
Free healthcare, no jobs, spending more time with their loved ones.

Speaker 2 A world where all of us are equal. Is that possible? 100%.
And I have enough evidence to know that we can use AI to build the utopia. But it's a dystopia if humanity manages it badly.

Speaker 2 A world where there's going to be a lot of control, a lot of surveillance, a lot of forced compliance, and a hunger for power, greed, ego. And it is happening already.

Speaker 2 But the truth is, the only barrier between a utopia for humanity and AI and the dystopia we're going through is a mindset.

Speaker 3 What does society have to do?

Speaker 2 First of all.

Speaker 3 Just give me 30 seconds of your time. Two things I wanted to say.
The first thing is a huge thank you for listening and tuning into the show week after week. It means the world to all of us.

Speaker 3 And this really is a dream that we absolutely never had and couldn't have imagined getting to this place. But secondly, it's a dream where we feel like we're only just getting started.

Speaker 3 And if you enjoy what we do here, please join the 24% of people that listen to this podcast regularly and follow us on this app.

Speaker 3 Here's a promise I'm going to make to you: I'm going to do everything in my power to make this show as good as I can now and into the future.

Speaker 3 We're going to deliver the guests that you want me to speak to, and we're going to continue to keep doing all of the things you love about this show.

Speaker 3 Thank you.

Speaker 3 Mo, two years ago today, we sat here and discussed AI. We discussed your book, Scary Smart, and everything that was happening in the world.

Speaker 3 Since then, AI has continued to develop at a tremendous, alarming, mind-boggling rate.

Speaker 3 And the technologies that existed two years ago, when we had that conversation, have grown up and matured and are taking on a life of their own, no pun intended.

Speaker 3 What are you thinking about AI now, two years on?

Speaker 3 I know that you've started writing a new book called Alive, which is, I guess, a bit of a follow-on or an evolution of your thoughts as it relates to Scary Smart.

Speaker 3 But what is front of your mind when it comes to AI?

Speaker 2 So Scary Smart was shockingly accurate. It's quite a, I mean, I don't even know how I ended up writing, predicting those things.
I remember it was written in 2020, published in 2021.

Speaker 2 And then most people were like,

Speaker 2 who wants to talk about AI? You know, I know everybody in the media and I would go and ask, do you want to talk? And then 2023, ChatGPT comes out and everything flips.

Speaker 2 Everyone realizes, you know, this is real. This is not science fiction.
This is here.

Speaker 2 And

Speaker 2 things move very, very fast, much faster than I think we've ever seen anything ever move, ever. And I think my position has changed on two very important fronts.

Speaker 2 One is, remember when we spoke about Scary Smart, I was still saying that there are things we can do to change the course.

Speaker 2 And we could at the time, I believe.

Speaker 2 Now I've changed my mind. Now I believe that we are going to hit a short-term dystopia.
There's no escaping that.

Speaker 3 What is dystopia?

Speaker 2 I call it face RIPs. We can talk about it in detail, but the way we define very important parameters in life is going to be completely changed.
So face RIPs are the way we define freedom,

Speaker 2 accountability, human connection, equality, economics,

Speaker 2 reality, innovation and business, and power. That's the first change.
So the first change in my mind is that

Speaker 2 we

Speaker 2 will have to prepare for a world that is very unfamiliar. Okay.
And that's the next 12 to 15 years. It has already started.

Speaker 2 We've seen examples of it in the world already, even though people don't talk about it. I try to tell people, you know, there are things we absolutely have to do.

Speaker 2 But on the other hand, I started to take an active role in building amazing AIs. So AIs that will not only make our world better,

Speaker 2 but that will

Speaker 2 understand us, understand what humanity is through that process.

Speaker 3 What is the definition of the word dystopia?

Speaker 2 So in my mind, these are adverse circumstances that unfortunately might escalate beyond our control. The problem is

Speaker 2 there is a lot wrong with the value set, with the ethics of humanity at the age of the rise of the machines.

Speaker 2 And when you take a technology, every technology we've ever created just magnified human abilities. So, you know, you can walk at five kilometers an hour.

Speaker 2 You get in a car and you can now go, you know, 250, 280 miles an hour.

Speaker 2 basically magnifying your mobility if you want. You know, you can use a computer to magnify your calculation abilities or whatever.

Speaker 2 And what AI is going to magnify, unfortunately, at this time is it's going to magnify the evil that men can do.

Speaker 2 And it is within our hands completely, completely within our hands to change that.

Speaker 2 But I have to say, I don't think humanity has the awareness at this time to focus on this so that we actually use AI to build the utopia.

Speaker 3 So what you're essentially saying is that you now believe there'll be a period of dystopia.

Speaker 3 And to define the word dystopia, I've used AI, it says a terrible society where people live under fear, control, or suffering.

Speaker 3 And then you think we'll come out of that dystopia into a utopia, which is defined as a perfect or ideal place where everything works well, a good society where people live in peace, health, and happiness.

Speaker 2 Correct.

Speaker 2 And the difference between them, interestingly, is what I normally refer to as the second dilemma, which is the point where we hand over completely to AI.

Speaker 2 So a lot of people think that when AI is in full control, it's going to be an existential risk for humanity.

Speaker 2 You know, I have enough evidence to argue that when we fully hand over to AI, that's going to be our salvation. That the problem with us today is not

Speaker 2 that intelligence is going to work against us. It's that our stupidity as humans is working against us.
And I think the challenges that will come from humans being in control are going to outweigh the

Speaker 2 challenges that could come from AI being in control.

Speaker 3 So as we're in this dystopia period,

Speaker 3 do you forecast the length of that dystopia?

Speaker 2 Yeah, I count it exactly as 12 to 15 years.

Speaker 2 I believe the beginning of the slope will happen in 2027. I mean, we will see signs in 26.
We've seen signs in 24, but we will see escalating signs next year and then a clear slip in 27. Why?

Speaker 2 The geopolitical environment of our world is not very positive. I mean, you really have to think deeply about

Speaker 2 not the symptoms, but the reasons why we are living in the world that we live in today. And the reason is money.

Speaker 2 And money, for anyone who really knows money: you and I are peasants. You know, we build businesses, we contribute to the world, we make things, we sell things, and so on.

Speaker 2 Real money is not made there at all. Real money is made in lending, in fractional reserve, right?

Speaker 2 And

Speaker 2 the biggest lender

Speaker 2 in the world would want reasons to lend. And those reasons are never as big as war.
I mean, think about it, huh?

Speaker 2 The world spent $2.71 trillion on war in 2024, right? A trillion dollars a year in the US.

Speaker 2 And when you really think deeply, I don't mean to be scary here,

Speaker 2 you know, weapons have depreciation. They depreciate over 10 to 30 years, most weapons.

Speaker 3 They lose their value.

Speaker 2 They lose their value and they depreciate in accounting terms on the books of an army. The current arsenal of the US, that's a result of a deep search with my AI Trixie.

Speaker 2 The current arsenal, I think, we think cost the US $24 to $26 trillion to build.

Speaker 2 My conclusion is that a lot of the wars that are happening around the world today are a means to get rid of those weapons so that you can replace them. And

Speaker 2 when your morality as an industry is we're building weapons to kill, then

Speaker 2 you might as well use the weapons to kill.

Speaker 3 Who benefits?

Speaker 2 The lenders and the industry.

Speaker 3 But they can't make the decision to go to war. They have to rely on killing.

Speaker 2 Remember, I said that to you when we, I think on

Speaker 2 our third podcast, war is decided first,

Speaker 2 then the story is manufactured. You remember 1984 and the Orwellian approach of like, you know, freedom is slavery and war is peace, and they call it

Speaker 2 something speak.

Speaker 2 Basically,

Speaker 2 to convince people that going to war in another country to kill 4.7 million people is freedom. You know, we're going there to free the Iraqi people.

Speaker 2 Is war ever freedom? You know, to tell someone that you're going to kill 300,000 women and children is for liberty and for the,

Speaker 2 you know, for human values.

Speaker 2 Seriously, how do we ever get to believe that?

Speaker 3 The story is manufactured and then we follow. And humans, because we're gullible, we cheer up and we say, yeah, yeah, we're on the right side, they are the bad guys. Okay, so let me have a go at this idea. So the idea is that really money is driving a lot of the conflict we're seeing, and it's really going to be driving the dystopia. So here's an idea. I was reading something the other day and it talked about how

Speaker 3 Billionaires are never satisfied because actually

Speaker 3 what a billionaire wants isn't actually more money, it is more status.

Speaker 3 And I was looking at the sort of evolutionary case for this argument. And if you go back a couple of thousand years,

Speaker 3 money didn't exist. You were as wealthy as what you could carry.
So even, I think, to the human mind, the idea of wealth and money

Speaker 3 isn't a thing.

Speaker 3 But what has always mattered from a survival of the fittest, from a reproductive standpoint, what's always had reproductive value, if you go back thousands of years, the person who was able to make the most was the person with the most status.

Speaker 3 So it makes the case the reason why billionaires get all of this money, but then they go on podcasts and they want to start their own podcast and they want to buy newspapers is actually because at the very core of human beings is a desire to increase their status.

Speaker 3 And so if we think of, when we go back to the example of why wars are breaking out, maybe it's not money. Maybe actually it's status.

Speaker 3 And it's this prime minister or this leader or this individual wanting to create more power and more status.

Speaker 3 Because really at the heart of what matters to a human being is having more power and more status. And

Speaker 3 money as a thing is actually just a proxy of my status.

Speaker 2 And what kind of world is that?

Speaker 3 I mean, it's a fucked up one.

Speaker 2 All these powerful men are really messing the world up.

Speaker 2 So

Speaker 3 actually, AI is the same. Because we're in this AI race now where a lot of billionaires are like, if I get AGI, artificial general intelligence, first, then I basically rule the world.

Speaker 2 100%. That's exactly the concept.

Speaker 2 What I used to call the first inevitable in Scary Smart, I now call the first dilemma, which is that

Speaker 2 it's a race that constantly accelerates.

Speaker 3 You think the next 12 years are going to be AI dystopia, where things aren't.

Speaker 2 I think the next 12 years are going to be human dystopia using AI.

Speaker 2 Human-induced dystopia using AI.

Speaker 3 And you define that by a rise in warfare around the world as people are going to be able to do that.

Speaker 2 The last one of the face RIPs is basically this: you're going to have a massive concentration of power and a massive distribution of power. Okay? And that basically will mean that those with the maximum concentration of power are going to try to oppress those with the democracy of power. Okay, so think about it this way. In today's world,

Speaker 2 unlike the past

Speaker 2 you know, the Houthis

Speaker 2 with a drone. The Houthis are the Yemeni tribes basically resisting US power and Israeli power in the Red Sea. Okay? They use a drone that is three thousand dollars' worth to attack a warship from the US, or an airplane from the US, and so on, that's worth hundreds of millions. Okay? That kind of democracy of power makes those in power worry a lot about where the next threat is coming from.

Speaker 2 And this happens not only in war, but also in economics,

Speaker 2 also in innovation, also in technology, and so on and so forth.

Speaker 2 And so basically what that means is that, like you rightly said, as the tech oligarchs are attempting to get to AGI,

Speaker 2 they want to make sure that as soon as they get to AGI, that nobody else has AGI.

Speaker 2 And basically, they want to make sure that nobody else has the ability to shake their position of privilege, if you want.

Speaker 2 And so you're going to see a world where, unfortunately, there's going to be a lot of control, a lot of surveillance, a lot of

Speaker 2 forced compliance, if you want, or you lose your privilege to be in the world. And it is happening already.

Speaker 3 With this acronym, I want to make sure we get through the whole acronym.

Speaker 2 You like dystopians, don't you?

Speaker 3 Well, I want to do the dystopian thing, then I want to do the utopia. Okay.

Speaker 3 And ideally, how we move from dystopia to utopia.

Speaker 2 So

Speaker 2 the F in face RIP is the loss of freedom as a result of that power dichotomy.

Speaker 3 So you have

Speaker 2 you have a massive amount of power, as you can see today in

Speaker 2 one specific army being powered by the US funds and a lot of money, right?

Speaker 2 Fighting against peasants, really, that have almost no weapons at all. Okay, some of them are militarized, but the majority of the 2 million people are not.

Speaker 2 And so there is massive, massive power that basically says, you know what, I'm going to oppress as far as I can go, okay, and I'm going to do whatever I want, because the cheerleaders are going to be quiet, right, or they're going to cheer more, or even worse. And so basically, in that,

Speaker 2 what happens is: maximum power, threatened by a democracy of power, leads to a loss of freedom. A loss of freedom for everyone.

Speaker 2 Because how does that impact my freedom, your freedom? Yeah, very soon, you will,

Speaker 2 if you publish this episode, you're going to start to get questions around, should you be talking about those topics in your podcast? Okay.

Speaker 2 You know,

Speaker 2 if I have been on this episode,

Speaker 2 then probably next time I land in the US, someone will question me, say, why do you say those things? Which side are you on?

Speaker 2 Right? And,

Speaker 2 you know, you can easily see that everything, I mean, I told you that before, it doesn't matter what I try to contribute to the world.

Speaker 2 My bank will cancel my bank account every six weeks, simply because of my ethnicity and my origin.

Speaker 2 Every now and then they'll just stop my bank account and say, we need a document.

Speaker 2 My other colleagues of a different color or a different ethnicity don't get asked for another document.

Speaker 2 But that's because I come from an ethnicity that is positioned in the world for the last 30, 40 years as the enemy.

Speaker 2 And so when you really, really think about it, in a world where everything is becoming digital, in a world where everything is monitored, in a world where everything is seen,

Speaker 2 we don't have much freedom anymore. And I'm not actually debating that, or I don't see a way to fix that.

Speaker 3 Because the AI is going to have more information on us, be better at tracking who we are. And

Speaker 3 therefore, that will result in certain freedoms being restricted. Is that what you're saying?

Speaker 2 This is one element of it.

Speaker 2 If you push that element further,

Speaker 2 in a very short time, if you've seen an agent, for example, recently, Manus or ChatGPT, there will be a time where you'll simply not do things yourself anymore.

Speaker 2 You'll simply go to your AI and say, hey, by the way, I'm going to meet Stephen. Can you please book that for me? Great.

Speaker 2 And yeah, and it will do absolutely everything. That's great until the moment where it decides to do things that are not motivated only by your well-being.

Speaker 2 Why would it do that?

Speaker 2 Simply because

Speaker 2 maybe if I buy a BA ticket instead of an Emirates ticket, some agent is going to make more money than other agents, and so on, right? And I wouldn't even be able to catch it if I hand over completely to an AI. Go a step further, huh? Think about a world where everyone, almost everyone, is on UBI. Okay, what's UBI? Universal basic income. I mean, think about the economics, the E in face RIPs. Think about the economics of a world where we're going to start to see a trillionaire.

Speaker 2 Before 2030, I can guarantee you that someone will be a trillionaire. I mean, you know, I think there are many trillionaires in the world today already.
We just don't know who they are.

Speaker 2 But there will be a new Elon Musk or Larry Ellison that will become a trillionaire because of AI investments.

Speaker 2 And that trillionaire

Speaker 2 will have so much money to buy everything.

Speaker 2 There will be robots and AIs doing everything,

Speaker 2 and humans will have no jobs.

Speaker 3 Do you think there's a real possibility of job displacement over the next 10 years? And the rebuttal to that would be that there's going to be new jobs created in technology.

Speaker 2 Absolute crap.

Speaker 3 Really?

Speaker 2 Of course.

Speaker 3 How can you be so sure?

Speaker 2 Okay. So again, I am not sure about anything.
So let's just be very, very clear. It would be very arrogant, okay, to assume that I know.

Speaker 3 You just said it was crap.

Speaker 2 My belief is it is 100% crap. Take a job like a software developer.
Yep.

Speaker 2 Emma.love, my new startup, is me, Senad, another technical engineer, and a lot of AIs.

Speaker 2 That startup would have been 350 developers in the past.

Speaker 3 I get that.

Speaker 3 But are you now hiring in other roles because of that?

Speaker 3 Or, you know, as is the case with

Speaker 2 the

Speaker 3 steam engine. I can't remember the effect, but there's, you probably know that when steam engine, when coal became cheaper, people were worried that the coal industry would go out of business.

Speaker 3 But actually, what happened is people used more trains. So trains now were used for transport and other things and leisure, whereas before they were just used for commute for cargo.

Speaker 3 So there became more use cases and the coal industry exploded.

Speaker 3 So I'm wondering with technology, yes, software developers are going to maybe not have as many jobs, but everything's going to be software.

Speaker 2 Name me one.

Speaker 3 Name one what? A job? Name one job that's going to be created. Yeah.
Yeah.

Speaker 2 One job that cannot be done by an AI

Speaker 2 or a robot.

Speaker 3 My girlfriend's breathwork retreat business where she takes groups of women around the world. Her company is called Barley Breath Work.
Yeah.

Speaker 3 And there's going to be a greater demand for connection, human connection.

Speaker 2 Correct. Keep going.

Speaker 3 So there's going to be more people doing community events in real life festivals. I think we're going to see a huge surge in things like...

Speaker 2 Everything that has to do with human connection. Yeah.
Correct. I'm totally in with that.
Okay. What's the percentage of that versus accountant?

Speaker 3 It's a much smaller percentage for sure in terms of white-collar jobs.

Speaker 2 Now, who does she sell to?

Speaker 3 People with probably what, probably, accountants or whatever, you know, correct.

Speaker 2 She sells to people who earn money from their jobs. Yeah.
Okay. So you have two forces happening.
One force is there are clear jobs that will be replaced. Video editor is going to be replaced.

Speaker 2 Excuse me.

Speaker 2 I love you guys.

Speaker 2 As a matter of fact, podcaster is going to be replaced. Not you.

Speaker 3 Thank you for coming on today, Mo.

Speaker 2 Seeing you again.

Speaker 2 But the truth is, a lot. So you see, the best at any job will remain.
The best software developer, the one that really knows architecture, knows technology and so on, will stay for a while. Right?

Speaker 2 And, you know, one of the funniest things, I interviewed Max Tegmark, and Max was laughing out loud, saying, CEOs are celebrating that they can now get rid of people and have productivity gains and cost reductions because AI can do that job.

Speaker 2 The one thing they don't think of is

Speaker 2 AI will replace them too. AGI is going to be better

Speaker 2 at everything than humans, at everything, including being a CEO.

Speaker 2 And you really have to imagine that there will be a time where most incompetent CEOs will be replaced, most incompetent, even breath work.

Speaker 2 Okay, eventually,

Speaker 2 there might actually be one of

Speaker 2 two things be happening. One is

Speaker 2 either

Speaker 2 part of that job other than the top breathwork instructors

Speaker 2 who are going to gather all of the people that can still afford to pay for a breath work

Speaker 2 class.

Speaker 2 They're going to be concentrated at the top. And a lot of the bottom is not going to be working for one of two reasons.
One is either there is not enough demand because so many people lost their jobs.

Speaker 2 So when you're on UBI, you cannot tell the government, hey, by the way, pay me a bit more for a breathwork class.

Speaker 3 UBI being universal basic income

Speaker 3 just gives you money every month. Correct.

Speaker 2 And if you really think of freedom and economics, UBI is a very interesting place to be. Because unfortunately, as I said, there's absolutely nothing wrong with AI.

Speaker 2 There's a lot wrong with the value set of humanity at the age of the rise of the machines.

Speaker 2 And the biggest value set of humanity is capitalism today. And capitalism is all about what? Labor arbitrage.

Speaker 3 What's that mean?

Speaker 2 I hire you to do something, I pay you a dollar,

Speaker 2 I sell it for two.

Speaker 2 Okay? And most people confuse that because they say, oh, but the cost of a product also includes raw materials and factories and so on and so forth. All of that is built by labor.

Speaker 2 So basically, labor goes and mines for the material and then the material is sold for a little bit of margin. Then that material is turned into a machine.
It's sold for a little bit of margin.

Speaker 2 Then that machine and so on.

Speaker 2 There's always labor arbitrage. In a world where humanity's minds are being replaced by

Speaker 2 AIs, virtual AIs,

Speaker 2 and humanity's power, strengths within three to five years' time can be replaced by a robot,

Speaker 2 you really have to question what this world looks like. It could be the best world ever, and that's what I believe the utopia will look like.

Speaker 2 Because we were never made to wake up every morning and just, you know, occupy 20 hours of our day with work. Right? We're not made for that, but we've fit into that

Speaker 2 system so well so far that we started to believe it's our life's purpose.

Speaker 3 But we choose it. We willingly choose it.
And if you give someone unlimited money, they still tend to go back to work or find something to occupy their time with.

Speaker 2 They find something to occupy their time with.

Speaker 2 Which, usually, for so many people, is building something, philanthropy. And so you build something. So between Senad and I, Emma.love is not about making money. It's about finding true love relationships.

Speaker 3 What is that, sorry? Just for context. So, you know, it's a business you're building. Just for the audience's context.

Speaker 2 Yeah. So the idea here is, it might become a unicorn and be worth a billion dollars, but neither I nor Senad are interested. Okay? We're doing it because we can, and we're doing it because it can make a massive difference to the world.

Speaker 3 And you have money, though.

Speaker 2 It doesn't take that much money anymore to build anything in the world. This is labor arbitrage.

Speaker 3 But to build something exceptional, it's still going to take a little bit more money than building something bad.

Speaker 2 For the next few years.

Speaker 3 So, whoever has the capital to build something exceptional will end up winning.

Speaker 2 So, this is a very interesting understanding of freedom. Okay.
This is the reason why we have the AI arms race.

Speaker 3 Okay. Is that

Speaker 2 the one that owns the platform is going to be making all the money and keeping all the power. Think of it this way.

Speaker 2 When humanity started, the best hunter in the tribe could maybe feed the tribe for three to four more days.

Speaker 2 And as a reward, he gained the favor of multiple mates in the tribe. That's it.
The top farmer in the tribe could feed the tribe for a season more. Okay, and as a result, they got estates and

Speaker 2 mansions and so on.

Speaker 2 The best industrialist

Speaker 2 in a city could actually employ the whole city, could grow the GDP of their entire country. And as a result, they became millionaires in the 1920s.

Speaker 2 The best technologists now are billionaires. Now, what's the difference between them?

Speaker 2 The tool.

Speaker 2 The hunter only

Speaker 2 depended on their skills.

Speaker 2 And the automation, the entire automation he had was a spear. The farmer had way more automation.
And the biggest automation was what? The soil. The soil did most of the work.

Speaker 2 The factory did most of the work. The network did most of the work.

Speaker 2 And so that incredible expansion of wealth and power, as well as the incredible impact that something brings, is entirely around the tool that automates. So who's gonna own the tool?

Speaker 2 Who's gonna own the digital soil, the AI soil? It's the platform owners.

Speaker 3 And the platforms you're describing are things like OpenAI, Gemini, Grok.

Speaker 2 These are interfaces to the platforms. The platforms are all of

Speaker 2 the

Speaker 2 tokens, all of the compute that is in the background,

Speaker 2 all of the methodology, the systems, the algorithms. That's the platform, the AI itself.

Speaker 2 Grok is the interface to it.

Speaker 3 I think this is probably worth explaining in layman's terms to people that haven't built AI tools yet.

Speaker 2 Because

Speaker 3 I think to the listener, they probably think that every AI company they're hearing of right now is building their own AI.

Speaker 3 Whereas actually what's happening is there are really five, six, seven AI companies in the world. And when I build my AI application, I basically pay them

Speaker 3 for every time I use their AI. So if Steven Bartlett builds an AI at stephenbartletai.com, it's not that I've built my own underlying model, that I've trained my own model.

Speaker 3 Really, what I'm doing is I'm paying

Speaker 3 Sam Altman's ChatGPT

Speaker 3 every single time I do a call. I basically

Speaker 3 do a search or I use a token. And I think that's really important because most people don't understand that.
Unless you've built AI, you think, oh, look, there's all these AI companies popping up.

Speaker 3 I've got this one for my email. I've got this one for my dating.
I've got, no, no, no, no, no, no. They're pretty much,

Speaker 3 I would hazard a guess that they're probably all OpenAI at this point.
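
To make the relationship described just above concrete, here is a minimal sketch of what most of these "AI apps" amount to: a thin layer of the builder's own code that forwards each user request to one of the handful of foundation-model providers, which meters and bills every token. The OpenAI Python SDK is used only as one example of such a provider; the model name, prompts, and function names are assumptions for illustration, not anyone's actual product.

```python
# A minimal sketch of a "new AI app" that is really a thin wrapper
# around a foundation-model provider's API (OpenAI used as one example).
# The provider, not the app builder, runs the model and charges per token.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def my_ai_app(user_question: str) -> str:
    # Every call here is metered and billed by the platform owner.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, purely for illustration
        messages=[
            {"role": "system", "content": "You are a helpful email assistant."},
            {"role": "user", "content": user_question},
        ],
    )
    # Token usage is what the app ultimately pays the platform for.
    print("tokens billed:", response.usage.total_tokens)
    return response.choices[0].message.content

if __name__ == "__main__":
    print(my_ai_app("Draft a polite follow-up email to a supplier."))
```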

Speaker 2 No, there are quite a few, quite different characters and quite different.

Speaker 3 But there's like five or six.

Speaker 2 There are five or six when it comes to language models. Yeah.
Right.

Speaker 2 But interestingly, so yes, I should say yes to start. And then I should say, but there was an interesting twist with DeepSeek at the beginning of the year.
So what DeepSeek did is they basically

Speaker 2 nullified the business model, if you want, in two ways. One is, it was around a week or two after

Speaker 2 Trump stood

Speaker 2 with pride, saying Stargate is the biggest investment project in history, and it's $500 billion to build AI infrastructure, and SoftBank and Larry Ellison and Sam Altman were sitting there, and so

Speaker 2 beautiful picture. And then DeepSeek R1 comes out.
It does the job for

Speaker 2 one-thirtieth of the cost, okay?

Speaker 2 And interestingly, it's entirely open source and available as an edge AI.

Speaker 2 So that's really, really interesting because there could be now in the future, as the technology improves,

Speaker 2 the learning models will be massive, but then you can compress them into something you can have on your phone. And you can download DeepSeek literally offline on

Speaker 2 an off-the-network computer and build an AI on it.

Speaker 3 There's a website that basically tracks the

Speaker 3 sort of cleanest apples to apples market share of all the website referrals sent by AI chatbots.

Speaker 3 And ChatGPT is currently at 79%, roughly about 80%, Perplexity is at 11%, Microsoft Copilot's about 5%, Google Gemini is about 2%, Claude's about 1%, and DeepSeek's about 1%.

Speaker 3 The point that I want to land is just that when you hear of a new AI app or tool, it's built on one of these, really, three or four AI platforms, controlled really by three or four AI billionaire teams. And actually, the one of them that gets to what we call AGI first, where the AI gets really, really advanced,

Speaker 3 one could say is potentially going to rule the world as it relates to technology.

Speaker 2 Yes,

Speaker 2 if they get enough of a head start. So I actually think that

Speaker 2 what I'm more concerned about now is not AGI, believe it or not. So AGI in my mind, and I said that back in 2023, right?

Speaker 2 That we will get to AGI. At the time, I said 2027.
Now I believe 2026 latest.

Speaker 2 The most interesting development that nobody is talking about is self-evolving AIs.

Speaker 2 Self-evolving AIs is

Speaker 2 think of it this way. If you and I are hiring the top engineer in the world to develop our AI models.

Speaker 2 And with AGI, that top engineer in the world becomes an AI. Who would you hire to develop your next generation AI? That AI.

Speaker 3 The one that can teach itself.

Speaker 2 Correct. So one of my favorite examples is called Alpha Evolve.
So this is Google's attempt to basically have four agents working together.

Speaker 2 four AIs working together to look at the code of the AI and say,

Speaker 2 what are the performance issues? Then an agent would say, what's the problem statement? What can I, you know, what do I need to fix?

Speaker 2 One that actually develops the solution, one that assesses the solution. And then they continue to do this.
And, you know, I don't remember the exact figure, but I think Google improved like 8%

Speaker 2 on their AI infrastructure because of Alpha Evolve.

Speaker 2 And when you really, really think, don't quote me on the number 8 to 10, 6 to 10, whatever. In Google terms, by the way, that is massive.
That's billions and billions of dollars.
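
For readers who want a picture of the loop Mo is describing, here is a schematic sketch of an AlphaEvolve-style propose-and-assess cycle. It is not Google's implementation: the llm and benchmark functions are placeholders, and the role names and prompts are invented purely to mirror the four cooperating roles described above.

```python
# A schematic sketch of the kind of loop described above for AlphaEvolve-style
# systems: cooperating model calls that inspect, propose, and assess changes to
# a piece of code, keeping whatever scores best. This is NOT Google's
# implementation; llm(), benchmark(), the role names, and the prompts are all
# placeholders invented for illustration.

def llm(role: str, prompt: str) -> str:
    """Placeholder for a call to some large language model acting in `role`."""
    raise NotImplementedError("wire this up to a real model API")

def benchmark(candidate_code: str) -> float:
    """Placeholder: run the candidate code and return a performance score."""
    raise NotImplementedError("wire this up to a real benchmark")

def self_improvement_loop(code: str, iterations: int = 5) -> str:
    best_code, best_score = code, benchmark(code)
    for _ in range(iterations):
        # Agent 1: look at the current best code and list performance issues.
        issues = llm("profiler", f"List performance issues in:\n{best_code}")
        # Agent 2: turn those issues into a precise problem statement.
        problem = llm("planner", f"Turn these issues into a concrete task:\n{issues}")
        # Agent 3: propose a modified version of the code.
        candidate = llm("coder", f"Rewrite the code to solve:\n{problem}\n\n{best_code}")
        # Agent 4: assess the proposal; a real benchmark has the final say.
        verdict = llm("reviewer", f"Assess whether this change is safe and faster:\n{candidate}")
        score = benchmark(candidate) if "reject" not in verdict.lower() else best_score
        if score > best_score:
            best_code, best_score = candidate, score
    return best_code
```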

Speaker 2 Now, the trick here is this. The trick is, again, you have to think in game theory format. Is there any scenario we can think of where, if one player uses AI to develop the next-generation AI, the other players will say, no, no, no, that's too much, you know, it takes us out of control? Every other player will copy that model and have their next AI model developed by an AI.

Speaker 3 Is this what Sam Altman talks about, who's the founder of ChatGPT/OpenAI, when he talks about a fast takeoff?

Speaker 2 I don't know exactly

Speaker 2 which you're referring to, but we're all talking about a point now that we call the intelligence explosion.

Speaker 2 So there is a moment in time where you have to imagine that if AI now is better than 97% of all code developers in the world, and soon will be able to look at its own code, own algorithms.

Speaker 2 By the way, they're becoming incredible mathematicians, which wasn't the case when we last met.

Speaker 2 If they can develop, improve their own code, improve their own algorithms, improve their own

Speaker 2 network architecture or whatever, you can imagine that very quickly, the force applied to developing the next AI is not going to be a human brain anymore. It's going to be a much smarter brain.

Speaker 2 And very quickly as humans, like basically when we ran the Google infrastructure,

Speaker 2 when the machine said we need another server or a proxy server in that place, we followed.

Speaker 2 We never really wanted to object or verify, because, you know, the code would probably know better, because there are billions of transactions an hour or a day.

Speaker 2 And so very quickly, those self-evolving AIs will simply say, I need 14 more servers there.

Speaker 2 And we'll just, you know, the team will just go ahead and do it.

Speaker 3 I watched a video a couple of days ago where Sam Altman effectively had changed his mind, because in 2023, which is when we last met, he said the aim was for a slow takeoff, which is sort of gradual deployment. And OpenAI's

Speaker 3 2023 note says a slower takeoff is easier to make safe and they prefer iterative rollouts so society can adapt. In 2025,

Speaker 3 they changed their mind. And Sam Altman said,

Speaker 3 He now thinks a fast takeoff is more possible than he thought a couple of years ago, on the order of a small number of years rather than a decade.

Speaker 3 And to define what we mean by a fast takeoff, it's defined as when AI goes from roughly human level to far beyond human, very quickly, think months to a few years, faster than governments, companies or society can adapt with little warning, big power shifts and hard to control.

Speaker 3 A slow takeoff, by contrast, is where capabilities climb gradually over many years with lots of warning shots.

Speaker 3 And the red flags for a fast takeoff are when AI can self-improve, run autonomous research and development, and scale with massive compute, with compounding gains that will snowball fast.
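
The "compounding gains" point can be illustrated with a toy calculation; the numbers below are invented purely to show the shape of the two curves, not a forecast of actual capability.

```python
# A toy numerical illustration (made-up numbers) of why compounding
# self-improvement "snowballs" compared with steady, linear progress.
# "Capability" here is an abstract index, not a real measurement.

years = 10
linear_gain_per_year = 0.5   # slow takeoff: a fixed improvement every year
compound_rate = 0.5          # fast takeoff: each year's gain scales with current capability

linear = 1.0
compound = 1.0
for year in range(1, years + 1):
    linear += linear_gain_per_year
    compound *= (1 + compound_rate)   # the improvement feeds back into itself
    print(f"year {year:2d}: linear {linear:5.1f}   compounding {compound:7.1f}")

# After 10 years: linear ends at 6.0, compounding at roughly 57.7 --
# that gap is the "little warning, big power shifts" dynamic described above.
```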

Speaker 3 So,

Speaker 3 and I think from the video that I watched of Sam Altman recently, who again is the founder of OpenAI and ChatGPT, he basically says, and again, I'm paraphrasing here, I will put it on the screen.

Speaker 3 We have this community notes thing, so I'll write it on the screen. But he effectively said that whoever gets to AGI first will have the technology to develop superintelligence.

Speaker 3 Where the AI can rapidly increase its own intelligence and it will basically leave everyone else behind.

Speaker 2 Yes. So that last bit is debatable, but let's just agree that

Speaker 2 so in Alive,

Speaker 2 one of the posts I shared that got a lot of interest is where I refer to the Altman as a brand, not as a human. Okay.

Speaker 2 So the Altman is that persona of a California disruptive technologist that disrespects everyone, okay? And believes that disruption is good for humanity and believes that this is good for safety.

Speaker 2 And like everything else, like we say war is for democracy and freedom, they say developing, you know, putting AI on the open internet is good for everyone, right?

Speaker 2 It allows us to learn from our mistakes. That was Sam Altman's 2023 spiel.

Speaker 2 And if you recall, at the time, I was like, this is the most dangerous. You know, one of the clips that really went viral,

Speaker 2 you're so clever at finding the right clips, is when I said.

Speaker 3 I didn't do the clipping, mate.

Speaker 2 The

Speaker 2 teams. Remember the clip where I said we fucked up.

Speaker 2 We always said don't put them on the open internet until we know what we're putting out in the world.

Speaker 3 I'm going to be saying that. Yeah.

Speaker 2 We fucked up on putting it on the open internet, teaching it to code, and putting, you know, agents, AI agents prompting other AIs.

Speaker 2 Now, AI agents prompting other AIs are leading to self-developing AIs.

Speaker 2 And the problem is, of course,

Speaker 2 anyone who has been on the inside of this knew that this was just a clever spiel made by a PR manager for Sam Altman to sit with his dreamy eyes in front of Congress and say, we want you to regulate us.

Speaker 2 Now they're saying we're unregulatable.

Speaker 2 And when you really understand what's happening here, what's happening is it's so fast

Speaker 2 that none of them has the choice to slow down. It's impossible.
Neither China versus America or OpenAI versus Google.

Speaker 2 The only thing that I

Speaker 2 may see happening

Speaker 2 that may differ a little bit from your statement is if one of them gets there first,

Speaker 2 then they dominate for the rest of humanity. That is probably true if they get there first

Speaker 2 with enough buffer.

Speaker 2 But the way you look at Grok coming a week after OpenAI, a week after

Speaker 2 Gemini, a week after Claude, and then Claude comes again, and then China releases something, and then Korea releases something. It is so fast that we may get a few of them at the same time or a few months apart.

Speaker 2 Okay, before one of them has enough power to become dominant. And that is a very interesting scenario.
Multiple AIs, all super intelligent.

Speaker 3 It's funny, you know, I got asked yesterday, I was in Belgium on stage. There was, I don't know, maybe 4,000 people in the audience.

Speaker 3 And a kid stood up and he was like, you've had a lot of conversations in the last year about AI. Why do you care? And I don't think people realize how,

Speaker 3 even though I've had so many conversations on this podcast about AI.

Speaker 2 You haven't made up your mind?

Speaker 3 I have more questions than ever.

Speaker 2 I know.

Speaker 3 And it doesn't seem that anyone can satiate.

Speaker 2 Anyone that tells you they can predict the future is arrogant. Yeah.

Speaker 2 It's never moved so fast.

Speaker 3 It's nothing, like nothing I've ever seen.

Speaker 3 And by the time that we leave this conversation and I go to my computer, there's going to be some incredible new technology or application of AI that didn't exist when I woke up this morning.

Speaker 3 That creates probably another paradigm shift in my brain. Also, you know, people have different opinions of Elon Musk and they're entitled to their own opinion.

Speaker 3 But the other day, only a couple of days ago, he did a tweet where he said, at times, AI existential dread is overwhelming.

Speaker 3 And on the same day, he tweeted, I resisted AI for too long, living in denial. Now, it is game on.

Speaker 3 And he tagged his AI companies.

Speaker 3 I don't know what to make of those tweets. I don't know.
And, you know,

Speaker 2 I

Speaker 3 try really hard to figure out if someone like Sam Altman has the best interests of society at heart. No.

Speaker 2 Or if these people are just like. I'm saying that publicly.
No.

Speaker 2 As a matter of fact, so I know Sundar Pichai, who works as CEO of Alphabet, Google's parent company, an amazing human being, in all honesty.
I know Demis Hassabis, amazing human being. Okay.

Speaker 2 You know, these are ethical, incredible humans at heart. They have no choice.

Speaker 2 Sundar, by law, is required to take care of his shareholder value. That's all. That is his job.

Speaker 3 But Sundar, you said you know him. You used to work at Google.

Speaker 2 Yeah. He's not going to do anything that he thinks is going to harm humanity. But if he does not continue to advance AI, that by definition contradicts his responsibility as the CEO of a publicly traded company. He is liable by law to continue to advance the agenda.

Speaker 2 There's absolutely no doubt about it. Now, so, but he's a good person at heart.
Demis is a good person at heart. So they're trying so hard to make it safe, okay? As much as they can.

Speaker 2 Reality, however, is

Speaker 2 the disruptor, the Altman as a brand,

Speaker 2 doesn't care that much.

Speaker 3 How do you know that?

Speaker 2 In reality, the disruptor is someone that comes in with the objective of, I don't like the status quo, I have a different approach.

Speaker 2 And that different approach, if you just look at the story,

Speaker 2 was: we are a not-for-profit that is funded mostly by Elon Musk money, if not entirely by Elon Musk money.

Speaker 3 So context here for people that might not understand OpenAI, the reason I always give context is, funnily enough, I think I told you this last time.

Speaker 3 I went to a prison where they play the diary of a CEO.

Speaker 2 No way.

Speaker 3 So they play the Diary in, I think it's 50 prisons in the UK, to young offenders.

Speaker 2 And no violence there? Well, I don't know.

Speaker 3 I can't tell you whether the violence has gone up or down. But I was in the cell with one of the prisoners, a young black guy, and I was in his cell for a little while.

Speaker 3 I was reading through his business plan, et cetera. And I said, you know what? You need to listen to this conversation that I did with Mo Gawdat.
So he has a little screen in his cell.

Speaker 3 So I pulled it up, you know, our first conversation. And I said, you should listen to that one.
And he said to me, he said, I can't listen to that one because you guys use big words.

Speaker 3 So ever since that day, which was about

Speaker 3 four years ago, sorry.

Speaker 3 I've always, whenever I hear a big word, I think about this kid, yeah, and I say, like, really give context.

Speaker 3 Yeah, so even with what you're about to explain, what OpenAI is, I know he won't know what OpenAI's origin story was.

Speaker 2 That's, I think, a wonderful practice. In general, by the way, even, you know, being a non-native English speaker, you'll be amazed how often a word is said to me.

Speaker 2 And I'm like, yeah, I don't know what that is.

Speaker 3 So, like, I've actually never said this publicly before, but I now see it as my responsibility to keep the drawbridge of accessibility to these conversations down for him. So

Speaker 3 whenever there's a word that at some point in my life I didn't know what it meant, I will go back, I will say,

Speaker 2 I think that I've noticed that

Speaker 2 more and more in your podcast, and I really appreciate it. And you also show it on the screen sometimes.

Speaker 2 I think that's wonderful. I mean, the origin story of OpenAI is, as the name suggests, it's open source.
It's for the public good. It was

Speaker 2 intended, in Elon Musk's words, to save the world from the dangers of AI.

Speaker 2 So they were doing research on that. And then

Speaker 2 there was the disagreement between Sam Altman and Elon. Somehow Elon ends up being out of

Speaker 2 OpenAI. I think there was a moment in time where he tried to take it back and the board rejected it or something like that.
Most of the top

Speaker 2 safety engineers, the top technical teams in OpenAI left in 2023, 2024, openly saying we're not concerned with safety anymore.

Speaker 2 It moves from being a not-for-profit to being one of the most valued companies in the world. There are billions of dollars at stake.

Speaker 2 And if you tell me that Sam Altman is out there trying to help humanity,

Speaker 2 let's suggest to him and say, hey, do you want to do that for free? We'll pay you a very good salary, but you don't have stocks in this.

Speaker 2 Saving humanity doesn't come at a billion-dollar valuation, or of course now, tens of billions or hundreds of billions.

Speaker 2 And see,

Speaker 2 truly, that is when you know that someone is doing it for the good of humanity. Now, the capitalist system we've built is not built for the good of humanity.
It's built for the good of the capitalist.

Speaker 3 Well, he might say that releasing the model publicly, open sourcing it is too risky because then bad actors around the world would have access to that technology.

Speaker 3 So he might say that closing open AI in terms of not making it publicly viewable is the right thing to do for safety.

Speaker 2 We go back to gullible cheerleaders, right? One of the interesting tricks of lying in our world is everyone

Speaker 2 will say what helps their agenda. Follow the money. Okay, you follow the money and you find that, you know, at a point in time, Sam Altman himself was saying it's open AI.

Speaker 2 My benefit at the time is to give it to the world so that the world looks at it, they know the code, if there are any bugs and so on. True statement.

Speaker 2 Also a true statement is if I put it out there in the world, a criminal might take that model and build something that's against humanity as a result. Also true statement.

Speaker 2 Capitalists will choose which one of the truths to say.

Speaker 2 Based on which part of the agenda, which part of their life today they want to serve.

Speaker 2 Someone will say,

Speaker 2 you know,

Speaker 2 do you want me to be controversial?

Speaker 2 Let's not go there. But if we go back to war, I'll give you 400 slogans.

Speaker 2 400 slogans that we all hear

Speaker 2 that change based on the day and the army and the location.

Speaker 2 They're all slogans. None of them is true.
You want to know the truth. You follow the money.
Not what the person is saying, but ask yourself, why is the person saying that?

Speaker 2 What's in it for the person speaking?

Speaker 3 And what do you think is in it for ChatGPT's Sam Altman?

Speaker 2 Hundreds of billions of dollars of

Speaker 2 valuation.

Speaker 3 And do you think it's that ego of being the person that invented AGI?

Speaker 2 The position of power that this gives you, the meetings with all of the heads of states, the admiration that gets run.

Speaker 2 It is

Speaker 3 intoxicating.

Speaker 5 100%.

Speaker 2 100%.

Speaker 2 Okay. And the real question, this is a question I ask everyone.
Did you see, you didn't,

Speaker 2 every time I ask you, you say you didn't. Did you see the movie Elysium?

Speaker 3 No. You'd be surprised how little movie watching I do.
You'd be shocked.

Speaker 2 There are some movies that are very interesting. I use them to create an emotional attachment to a story that you haven't seen yet, because you may have seen it in a movie.

Speaker 2 Elysium is a society where the elites are living on the moon. Okay, they don't need peasants to do the work anymore.
And everyone else is living down here.

Speaker 2 Okay. You have to imagine, again in game theory, you know, picture something to infinity, to its extreme, and see where it goes.

Speaker 2 And the extreme of a world where all manufacturing is done by machines, where all decisions are made by machines, and those machines are owned by a few, is not an economy similar to today's economy. Today's economy is an economy of consumerism and production.

Speaker 2 You know, in Alive, I call it the invention of more. The invention of more is that, post-World War II,

Speaker 2 as the factories were rolling out things and prosperity was happening everywhere in America,

Speaker 2 there was a time where every family had enough of everything.

Speaker 2 But for the capitalists to continue to be profitable, they needed to convince you that what you had was not enough, either by making it obsolete, like fashion or like, you know, a new shape of a car or whatever, or by convincing you that there are more things in life that you need, so that without those things you don't feel complete.

Speaker 2 And that invention of more gets us to where we are today, an economy that's based on production and consumption. And if you look at the U.S.
economy today, 62% of the U.S. economy, GDP, is consumption.

Speaker 2 It's not production.

Speaker 2 Now,

Speaker 2 this

Speaker 2 requires that the consumers have enough purchasing power to buy what is produced. And I believe that this will be an economy that will take us, hopefully,

Speaker 2 in the next 10, 15, 20 years and forever.

Speaker 2 But that's not guaranteed. Why? Because on one side, if UBI replaces purchasing power, so if people have to get an income from the government,

Speaker 2 which is basically taxes collected from those using AI and robots to make things.

Speaker 2 Then the mindset of capitalism, labor arbitrage, means those people are not producing anything and they're costing me money. Why don't we pay them less and less? Maybe even not pay them at all.

Speaker 2 And that becomes Elysium, where you basically say, you know, we sit somewhere protected from everyone. We have the machines do all of our work.
And those need to worry about themselves.

Speaker 2 We're not going to pay them UBI anymore.

Speaker 2 And you have to imagine this idea of UBI assumes this very democratic, caring society.

Speaker 2 UBI in itself is communism.

Speaker 2 Think of the ideology behind, at least, socialism: the ideology of giving everyone what they need. That's not the capitalist, democratic society that the West advocates.

Speaker 2 So those transitions are massive in magnitude.

Speaker 2 And for those transitions to happen, I believe the right thing to do when the cost of producing everything is almost zero because of AI and robots,

Speaker 2 because the cost of harvesting energy should actually tend to zero once we get intelligent enough to harvest the energy out of thin air.

Speaker 2 Then a possible scenario, and I believe a scenario that AI will eventually do in the utopia is, yeah, anyone can get anything they want. Don't overconsume.

Speaker 2 We're not going to abuse the planet's resources, but it costs nothing.

Speaker 2 So like the old days where we were hunter-gatherers, you would, you know, forage for some berries and you'll find them ready in nature, okay, we can, in 10 years' time, 12 years' time, build a society where you can forage for an iPhone in nature.

Speaker 2 It will be made out of thin air. Nanophysics will allow you to do that.
Okay, but the challenge, believe it or not, is not tech. The challenge is a mindset.

Speaker 2 Because the elite, why would they give you that for free?

Speaker 2 And the system would morph into, no, no, hold on. We will make more money.
We will be bigger capitalists. We will feed our ego and hunger for power more and more.

Speaker 2 And for them, give them UBI. And then three weeks later, give them less UBI.

Speaker 3 Aren't there going to be lots of new jobs created, though?

Speaker 3 Because when we think about the other revolutions over time, whether it was the Industrial Revolution or other sort of big technological revolutions.

Speaker 3 In the moment, we forecasted that everyone was going to lose their jobs, but we couldn't see all the new jobs that were being created.

Speaker 2 Because the machines replaced the human strengths at a point in time. And very few places in the West today will have a worker carry things on their back and carry it upstairs.

Speaker 2 The machine does that work, correct? Yeah.

Speaker 2 Similarly,

Speaker 2 AI is going to replace the brain of a human. And when the West, in its interesting

Speaker 2 virtual colonies, as I call them,

Speaker 2 basically outsourced all labor to the developing nations, what the West publicly said at the time is,

Speaker 2 we're going to be a services economy.

Speaker 2 We're not interested in making things and stitching things and so on. Let the Indians and Chinese and Bengalis and Vietnamese do that.

Speaker 2 We're going to do more refined jobs. Knowledge workers, we're going to call them. Knowledge workers are people who work with information, click on a keyboard, move a mouse and, you know, sit in meetings. And all we produce in Western societies is what? Blah blah blah, words, right? Or designs, maybe, sometimes. But everything we produce can be produced by an AI.

Speaker 2 So if I give you an AI tomorrow,

Speaker 2 where I give the AI a piece of land and I say, here are the parameters of my land, here is its location on Google Maps. Design an architecturally sound villa for me.

Speaker 2 I care about a lot of light and I need three bedrooms. I want my bathrooms to be in white marble, whatever.
And the AI produces it just like that. How often will you still go to an architect,

Speaker 2 right? So what will the architect do?

Speaker 2 The best of the best of the architects will either use AI to produce that, or you will consult with them and say, hey, you know, I've seen this and they'll say, it's really pretty, but it wouldn't feel right for the person that you are.

Speaker 2 Yeah, those jobs will remain, but how many of them will remain?

Speaker 2 How often do you think,

Speaker 2 how many more years do you think I will be able to create a book that is smarter than AI?

Speaker 2 Not many.

Speaker 2 I will still be able to connect to a human. You're not going to hug an AI when you meet them like you hug me, right? But that's not enough of a job.

Speaker 2 So why do I say that? Remember I asked you at the beginning of the podcast to remind me of solutions.

Speaker 2 Why do I say that? Because there are ideological shifts and concrete actions that need to be taken by governments today.

Speaker 2 Rather than waiting until COVID is already everywhere and then locking everyone down,

Speaker 2 governments could have reacted before the first patient, or at least at patient zero, or at least at patient 50. They didn't.

Speaker 2 What I'm trying to say is there is no doubt that lots of jobs will be lost. There is no doubt that there will be sectors of society where 10, 20, 30, 40, 50%

Speaker 2 of all developers, all software engineers, you know, all graphic designers, all

Speaker 2 online marketers, all assistants

Speaker 2 are going to be out of a job. So are we prepared as a society to do that? Can we tell our governments there is an ideological shift? This is very close to socialism and communism.

Speaker 2 And are we ready from a budget point of view instead of spending a trillion dollars a year on arms and explosives and

Speaker 2 autonomous weapons that will oppress people because we can't feed them?

Speaker 2 Can we please shift that? I did those numbers.

Speaker 2 Again, I go back to military spending because it's all around us.

Speaker 2 Around $2.7 trillion. $2.4 to $2.7 trillion is the estimate for 2024.

Speaker 3 How much money we're spending on

Speaker 2 military equipment, on things that are going to explode into smoke and death. Extreme poverty worldwide.
Extreme poverty is people that are below the poverty line.

Speaker 2 Extreme poverty everywhere in the world could end for 10 to 12% of that budget. So if we redirect 10% of that military spending

Speaker 2 to people who are in extreme poverty, nobody will be poor in the world.

Speaker 2 You can end world hunger for less than 4%.

Speaker 2 Nobody would be hungry in the world.

Speaker 2 If you take, again, 10 to 12%,

Speaker 2 universal healthcare: every human being on the planet would have free healthcare for 10 to 12% of what we're spending on war.
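For anyone who wants to sanity-check the arithmetic quoted above, here is a minimal back-of-the-envelope sketch. It simply multiplies the roughly $2.7 trillion 2024 military-spending figure cited in this conversation by the percentages Mo mentions; the dollar amounts it prints are illustrative conversions of his claims, not independent estimates.

```python
# Back-of-the-envelope conversion of the percentages quoted in this conversation.
# Assumption (not an independent estimate): world military spending of roughly
# $2.7 trillion in 2024, the figure cited above.
MILITARY_SPEND_USD = 2.7e12

def share(fraction: float) -> str:
    """Format a fraction of the military budget in billions of dollars."""
    return f"${MILITARY_SPEND_USD * fraction / 1e9:,.0f}B"

print("10-12% (claimed to end extreme poverty, or fund universal healthcare):",
      share(0.10), "to", share(0.12))
print("under 4% (claimed to end world hunger):", share(0.04))
# Prints roughly $270B to $324B, and $108B, per year respectively.
```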

Speaker 2 Now, why do I say this when we're talking about AI? Because that's a simple decision. If we stop fighting,

Speaker 2 because money itself does not have the same meaning anymore, because the economics of money is going to change, because the entire meaning of capitalism is ending, because there is no more need for labor arbitrage because AI is doing everything,

Speaker 2 just with the $2.4 trillion we save in explosives every year, in arms, in weapons,

Speaker 2 Just for that, universal healthcare and extreme poverty. You could actually, one of the calculations is, combat climate change meaningfully for 100% of the military budget.

Speaker 3 But I'm not even sure it's really about the money. I think money is a measuring stick of power.

Speaker 2 Right, exactly. It's printed on demand.

Speaker 3 So even in a world where we have superintelligence and money is no longer a problem,

Speaker 2 Correct.

Speaker 3 I still think

Speaker 3 power is going to be

Speaker 3 insatiable for so many people. So there'll still be war because,

Speaker 3 you know,

Speaker 2 I want the strong

Speaker 3 the strongest.

Speaker 2 I want the strongest AI. And I don't want, you know, what Henry Kissinger called them, the eaters.

Speaker 3 The eaters?

Speaker 2 Yeah, brutal as that sounds.

Speaker 3 What is that?

Speaker 2 The people at the bottom of the socioeconomic ladder that don't produce but consume.

Speaker 2 So if you had a Henry Kissinger at the helm, and we have so many of them,

Speaker 2 what would they think?

Speaker 2 I'm a very prominent military figure in U.S. history, you know, why would we feed 350 million Americans, America would think? But more interestingly, why do we even care about Bangladesh anymore?

Speaker 2 If we can't make our textiles there, or we don't want to make our textiles there.

Speaker 3 Do you, you know, I imagine throughout human history, if we had podcasts, conversations would

Speaker 3 have been... warning of a dystopia around the corner.
You know, when they heard of technology in the internet, they would have said, oh, we're finished.

Speaker 3 And when the tractor came along, they would have said, oh, God, we're finished because we're not going to be able to farm anymore.

Speaker 3 So is this not just another one of those moments where we couldn't see around the corner? So we forecasted unfortunate things?

Speaker 2 You could be.

Speaker 2 I'm begging that I'm wrong. Okay.
I'm just asking if there are scenarios that you think that can provide that. You know,

Speaker 2 Mustafa Suleyman,

Speaker 2 you hosted him here.

Speaker 3 I did, yeah, he was on. The Coming Wave.

Speaker 2 Yeah. And he speaks about pessimism aversion.

Speaker 2 That all of us, people who are supposed to be in technology and business and so on, we're always supposed to stand on stage and say, the future is going to be amazing.

Speaker 2 This technology I'm building is going to make everything better.

Speaker 2 One of my posts in a live was called The Broken Promises. How often did that happen?

Speaker 2 How often did social media connect us? And

Speaker 2 how often did it make us more lonely?

Speaker 2 How often did mobile phones make us work less? That was the promise.

Speaker 2 The early ads of Nokia were people at parties.

Speaker 2 Is that your experience of mobile phones?

Speaker 2 And I think the whole idea is we should hope there will be other roles for humanity, by the way.

Speaker 2 Those roles would resemble the times where we were hunter-gatherers, just a lot more technology and a lot more safety.

Speaker 3 Okay, so this sounds good. Yeah.
This is exciting. So I'm going to get to go outside more, be with my friends more.

Speaker 2 100%.

Speaker 3 Fantastic.

Speaker 2 And do absolutely nothing.

Speaker 3 That doesn't sound fantastic. No, it does.

Speaker 2 Be forced to do absolutely nothing. For some people, it's amazing.
For you and me, we're going to find a little carpentry project and just do something.

Speaker 3 Speak for yourself. I'm still, people are still going to tune in.

Speaker 2 Okay.

Speaker 2 Correct. Yeah.
But what? And people are going to tune in.

Speaker 3 Do you think they will?

Speaker 3 I'm not convinced they will.

Speaker 2 For as long.

Speaker 3 Will you guys tune in? Are you guys still going to tune in?

Speaker 2 I can let them answer.

Speaker 2 I believe for as long as you make their life enriched.

Speaker 3 But can an AI do that better?

Speaker 2 Without the human connection.

Speaker 3 Comment below. Are you going to listen to an AI or The Diary Of A CEO? Let me know in the comment section below.

Speaker 2 Remember, as incredibly intelligent as you are, Steve,

Speaker 2 there will be a moment in time where you're going to sound really dumb compared to an AI.

Speaker 2 And I will sound completely dumb.

Speaker 3 Yeah,

Speaker 2 The depth of analysis

Speaker 2 and gold nuggets. I mean, can you imagine two super intelligences deciding to get together and explain string theory to us?

Speaker 2 They'll do better than any

Speaker 2 physicist in the world because they possess the physics knowledge and they also

Speaker 2 possess social and language knowledge that most deep physicists don't.

Speaker 3 I think B2B marketeers keep making this mistake. They're chasing volume instead of quality.
And when you try to be seen by more people instead of the right people, all you're doing is making noise.

Speaker 3 But that noise rarely shifts the needle. And it's often quite expensive.
And I know, as there was a time in my career where I kept making this mistake, that many of you will be making it too.

Speaker 3 Eventually, I started posting ads on our show sponsors' platform, LinkedIn. And that's when things started to change.
I put that change down to a few critical things.

Speaker 3 One of them being that LinkedIn was then, and still is today, the platform where decision makers go, not only to think and learn, but also to buy.

Speaker 3 And when you market your business there, you're putting it right in front of people who actually have the power to say yes. And you can target them by job title, industry, and company size.

Speaker 3 It's simply a sharper way to spend your marketing budget. And if you haven't tried it, how about this? Give LinkedIn ads a try and I'm going to give you $100 ad credit to get you started.

Speaker 3 If you visit linkedin.com slash diary, you can claim that right now. That's linkedin.com slash diary.

Speaker 3 I've really gone back and forward on this idea that, even in podcasting, all the podcasts will be AI podcasts.

Speaker 2 I've gone back and forward on it.

Speaker 3 And where I landed at the end of the day was that there'll still be a category of media where you do want lived experience on something. 100%.

Speaker 3 For example, like you want to know how the person that you follow and admire dealt with their divorce.

Speaker 2 Yeah. Or how they're struggling with AI.

Speaker 3 For example. Yeah, exactly.

Speaker 3 But I think things like news,

Speaker 3 there are certain situations where just like straight news and straight facts and maybe a walk through history may be eroded away by AIs.

Speaker 3 But even in those scenarios,

Speaker 3 there's something about personality. And again, I hesitate here because I question myself.
I'm not in the camp of people that are romantic, by the way. I'm like, I'm trying to be as...

Speaker 3 as orientated towards whatever is true, even if it's against my interests. And I hope people understand that about me.
Like,

Speaker 3 because even in my companies, we experiment with disrupting me with AI and some people will be aware of those experiments.

Speaker 2 Because there will be a mix of all.

Speaker 2 You can't imagine that the world will be completely just AI and completely just podcasters.

Speaker 2 You'll see a mix of both. You'll see things that they do better, things that we do better.

Speaker 2 The message I'm trying to say is we need to prep for that. We need to be ready for that.

Speaker 2 We need to be ready by talking to our governments and saying, hey, it looks like I'm a paralegal, and it looks like paralegals, you know, financial researchers or analysts or graphic designers or, you know, call center agents.

Speaker 2 It looks like half of those jobs are being replaced already.

Speaker 3 You know who Geoffrey Hinton is?

Speaker 2 Oh, Geoffrey. I had him on the documentary as well.
I love Geoffrey.

Speaker 3 Geoffrey Hinton told me.

Speaker 2 To train to be a plumber. Really? Yeah.

Speaker 2 100%.

Speaker 2 For a while.

Speaker 3 And I thought he was joking. 100%.
So I asked him again, and he looked me dead in the eye and told me that I should train to be a plumber.

Speaker 2 100%.

Speaker 2 So

Speaker 2 it's funny, huh?

Speaker 2 Machines replaced labor, but we still had blue collar.

Speaker 2 Then, you know, the refined jobs became white-collar information workers.

Speaker 3 What's the refined jobs?

Speaker 2 You know, you don't have to really carry heavy stuff or deal with physical work. You know, you sit in an office and sit in meetings all day and blabber,
you know, useless shit, and that's your job.

Speaker 2 And those jobs,

Speaker 2 funny enough, in the reverse of that, because robotics are not ready yet,

Speaker 2 and I believe they're not ready because of a stubbornness

Speaker 2 on the robotics community around making them humanoids,

Speaker 2 because it takes so much to perfect a human-like action at proper speed. You could

Speaker 2 have many more robots that don't look like a human, just like a self-driving car in California.

Speaker 2 That already replaces drivers.

Speaker 2 But they're delayed. So the robotic replacement of physical manual labor is going to take four to five years before it's possible at

Speaker 2 the quality of the AI replacing mental labor now.

Speaker 2 And when that happens, it's going to take a long cycle to manufacture enough robots so that they replace all of those jobs. So that cycle will take longer.
Blue collar will stay longer.

Speaker 3 So I should move into blue collar and shut down my office.

Speaker 2 I think you're not the problem. Okay, good.

Speaker 2 Let's put it this way. There are many people that we should care about that are a simple travel agent or an assistant

Speaker 2 that will see, if not replacement, a reduction in the number of pings they're getting.

Speaker 2 Simple as that.

Speaker 2 And someone in you know, ministries of labor around the world needs to sit down and say, what are we going to do about that? What if all taxi drivers and Uber drivers in

Speaker 2 California get replaced by self-driving cars?

Speaker 2 Should we start thinking about that now, noticing that the trajectory makes it look like a possibility?

Speaker 3 I'm going to go back to this argument, which is what a lot of people will be shouting.

Speaker 3 Yes, but there will be new jobs.

Speaker 2 Or.

Speaker 2 And as I said, other than human connection jobs, name me one.

Speaker 3 So

Speaker 3 I've got three assistants, right?

Speaker 3 Sophie, Lee, and B.

Speaker 3 And okay, in the near term, there might be you know, with AI agents, I might not need them to help me book flights anymore, I might not need them to help do scheduling anymore, or even I've been messing around with this new AI tool that my friend built.

Speaker 3 And you basically, when me and you are trying to schedule something like this today, I just copy the AI in and it looks at your calendar, looks at mine, and schedules it for us.

Speaker 3 So, there might not be scheduling needs, but my dog is sick at the moment.

Speaker 3 And as I left this morning, I was like, Damn, damn, he's like really sick. And I've taken him to the vet over and over again.

Speaker 3 I really need someone to look after him and figure out what's wrong with him. So those kinds of responsibilities of like care.

Speaker 2 I don't disagree at all. Again, all and I won't.

Speaker 3 I'm not going to be.

Speaker 3 I don't know how to say this in a nice way, but my assistants will still have their jobs. But I, as a CEO, will be asking them to do a different type of work.

Speaker 2 Correct. So, so this is the calculation everyone needs to be aware of.

Speaker 2 A lot of their current responsibility, whoever you are, if you're a paralegal, if you're whatever,

Speaker 2 will be handed over. So let me explain it even more accurately.
There will be two stages of our interactions with the machines. One is what I call the era of augmented intelligence.

Speaker 2 So it's human intelligence augmented with AI doing the job. And then the following one is what I call the era of machine mastery.
The job is done completely by an AI without a human in the loop.

Speaker 2 Okay, so in the era of augmented intelligence, your assistants will augment themselves with an AI to either be more productive,

Speaker 2 or

Speaker 2 interestingly, to reduce the number of tasks that they need to do.

Speaker 2 Correct? Now, the more the number of tasks get reduced, the more they'll have the bandwidth and ability to do tasks like take care of your dog, right?

Speaker 2 Or tasks that are basically about meeting your guests or whatever, human connection,

Speaker 2 life connection.

Speaker 2 But do you think you need three for that? Or maybe now that some tasks have been

Speaker 2 outsourced to AI, will you need two? You can easily calculate that from call center agents.

Speaker 2 So from call center agents, they're not firing everyone, but they're taking the first part of the funnel and giving it to an AI.

Speaker 2 So instead of having 2,000 agents in a call center, they can now do the job with 1,800. I'm just making that number up.

Speaker 2 Society needs to think about the 200.

Speaker 3 And you're telling me that they won't move into other roles somewhere else?

Speaker 2 I am telling you, I don't know what those roles are. Well, I agree.
I think we should all be musicians. We should all be authors.
We should all be artists. We should all be entertainers.

Speaker 2 We should all be comedians.

Speaker 2 These are roles that will remain.

Speaker 2 We should all be plumbers for the next five to ten years. Fantastic.
Okay.

Speaker 2 But even that requires society to morph,

Speaker 2 and society is not talking about it. Okay, I had this wonderful interview with friends of mine, Peter Diamandis and some of our friends.

Speaker 2 And they were saying, oh, you know, the American people are resilient. They're going to be entrepreneurs.

Speaker 2 I was like, seriously, you're expecting a truck driver that will be replaced by an autonomous truck to become an entrepreneur. Like,

Speaker 2 please put yourself in the shoes of real people.

Speaker 2 Right? Do you expect a single mother who has three jobs

Speaker 2 to become an entrepreneur?

Speaker 2 And I'm not saying this is a dystopia. It's a dystopia if humanity manages it badly.
Why? Because this could be the utopia itself, where that single mother does not need three jobs.

Speaker 2 Okay, if our society was just enough, that single mother should never have needed three jobs.

Speaker 2 Right? But the problem is our capitalist mindset is labor arbitrage, is that I don't care what she goes through.

Speaker 2 You know,

Speaker 2 if you're generous in your assumption, you'd say, because of what I've been given, I've been blessed. Or if you're mean in your assumption, it's going to be because she's an eater.
I'm

Speaker 2 a successful businessman. The world is supposed to be fair.
I work hard. I make money.
We don't care about them.

Speaker 3 Are we asking of ourselves here something that is not inherent in the human condition? What I mean by that is

Speaker 3 the reason why me and you are in this, my office here, we're on the fourth or third floor of my office in central London.

Speaker 3 Big office, 25,000 square feet with lights and internet connections and Wi-Fi's and modems and AI teams downstairs.

Speaker 3 The reason that all of this exists is because something inherent in my ancestors meant that they built and accomplished and grew. And that was like inherent in their DNA.

Speaker 3 There was something in their DNA that said, we will expand and conquer and accomplish. So that's, they've passed that to us because we're their offspring.

Speaker 3 And that's why we find ourselves in these skyscrapers.

Speaker 2 There is truth to that story. It's not your ancestors.

Speaker 3 What is it?

Speaker 2 It's the media brainwashing you.

Speaker 3 Really?

Speaker 2 100%.

Speaker 3 But if you look back before times of media, the reason why Homo sapiens were so successful was because they were able to dominate other tribes through banding together and communication.

Speaker 3 They conquered all these other

Speaker 3 whatever came before Homo sapiens.

Speaker 2 Yeah, so the reason humans were successful, in my view, is because they could form a tribe to start. It's not because of our intelligence.

Speaker 2 I always joke and say Einstein would be eaten in the jungle in two minutes, right?

Speaker 2 You know, the reason why we succeeded is because Einstein could partner with a big guy that protected him while he was working on relativity in the jungle. Right? Now,

Speaker 2 further than that, so you have to assume that life is a very funny game because it provides

Speaker 2 and then it deprives and then it provides and then it deprives. And for some of us,

Speaker 2 in that stage of deprivation, we try to say, okay, let's take the other guys.

Speaker 2 You know, let's just go to the other tribe, take what they have.

Speaker 2 Or for some of us, unfortunately, we tend to believe, okay, you know what? I'm powerful.

Speaker 2 F the rest of you, I'm just going to be the boss.

Speaker 2 Now it's interesting that you,

Speaker 2 you know,

Speaker 2 position this as the condition of humanity. If you really look at the majority of humans, what do the majority of humans want?

Speaker 2 Be honest. They want to hug their kids. They want a good meal. They want good sex. They want love. They want, you know, to...

Speaker 2 For most humans,

Speaker 2 don't measure by you and me.

Speaker 2 Don't measure by this foolish person who's dedicated the rest of his life to trying to warn the world about AI, or to solve love and relationships.

Speaker 2 That's crazy.

Speaker 2 And I will tell you openly, and you met Hannah, my wonderful wife.

Speaker 2 It's the biggest question of this year for me: which of that am I actually responsible for?

Speaker 2 Which of that should I do without the sense of responsibility? Which of that should I do because I can? Which of that should I ignore completely?

Speaker 2 But the reality is, most humans, they just want to hug their loved ones.

Speaker 2 And if we could give them that

Speaker 2 without

Speaker 2 the need to work 20, you know,

Speaker 2 60 hours a week, they would take that for sure.

Speaker 2 And you and I will think, ah, but life will be very boring. To them, life will be completely fulfilling.
Go to Latin America.

Speaker 2 Go to Latin America and see the people that go work enough to earn enough to eat today and go dance for the whole night. Go to Africa, where people are sitting literally on

Speaker 2 sidewalks in the street and

Speaker 2 completely full of laughter and joy.

Speaker 2 We were lied to, the gullible majority, the cheerleaders.
We were lied to, to believe that we need to fit as another gear in that system.

Speaker 2 But if that system didn't exist, none of us would wake up in the morning and go like, oh, I want to create it. Totally not.

Speaker 2 I mean,

Speaker 2 you've touched on it many times today. We don't need... you know, most people that build those things don't need the money.

Speaker 3 So why do they do it, though? Because Homo sapiens were incredible competitors. They out-competed other human species effectively. So what I'm saying is, is that competition not inherent in

Speaker 3 our wiring? And therefore,

Speaker 3 is it wishful thinking to think that we could potentially pause and say,

Speaker 3 okay, this is it, we have enough now. And we're going to

Speaker 3 focus on just enjoying.

Speaker 2 In my work, I call that the MAP-MAD spectrum. Okay.

Speaker 2 Mutually assured prosperity versus mutually assured

Speaker 2 destruction. Okay.

Speaker 2 And you really have to start thinking about this because in my mind,

Speaker 2 what we have is the potential for everyone. I mean, you and I today

Speaker 2 have a better life than the Queen of England 100 years ago. Correct? Everybody knows that.

Speaker 2 And yet that quality of life is not good enough.

Speaker 2 The truth is,

Speaker 2 just like you walk into an electronics shop and there are 60 TVs and you look at them and you go like, this one is better than that one, right?

Speaker 2 But in reality, if you take any of them home, it's superior quality to anything that you'll ever need, more than anything you'll ever need. That's the truth of our life today.

Speaker 2 The truth of our life today is that there isn't much more missing. No.
Okay. And

Speaker 2 when Californians tell us, oh, but AI is going to increase productivity and solve this.

Speaker 2 Nobody asked you for that, honestly. I never elected you to decide on my behalf that

Speaker 2 getting a machine to answer me on a call center is better for me. I really didn't.

Speaker 2 and

Speaker 2 because those unelected individuals are making all the decisions, they're selling those decisions to us through what? Media.

Speaker 2 Okay, all lies from A to Z.

Speaker 2 None of it is what you need.

Speaker 2 And interestingly, you know me,

Speaker 2 this year I failed, unfortunately. I won't be able to do it, but I normally do a 40-day silent retreat in nature.
Okay,

Speaker 2 and you know what?

Speaker 2 Even as I go to those nature places, I'm so well trained that unless I have a Waitrose nearby, I'm not able to.

Speaker 2 Like I'm in nature, but I need to be able to drive 20 minutes to get my rice cakes. Like what?

Speaker 2 What?

Speaker 2 Who

Speaker 2 taught me that this is the way to live? All of the media around me, all of the

Speaker 2 messages that I get all the time. Try to sit back and say, what if life had everything?

Speaker 2 What if I had everything I needed? I could

Speaker 2 read, I could

Speaker 2 do my handcrafts and hobbies, I could

Speaker 2 fix my, you know, restore classic cars, not because I need the money, but because it's just a beautiful hobby.

Speaker 2 I could, you know, build AIs to help people with their long-term committed relationships, but really price it for free. What if?

Speaker 2 What if, would you still insist on making money?

Speaker 2 I think no. I think a few of us will still, and they will still crush the rest of us.
And hopefully, soon the AI will crush them.

Speaker 2 Right? That is the problem with our world today. I will tell you, hands down.
The problem with our world today is the A in FACE RIP.

Speaker 2 It's the A in FACE RIP. It's accountability.
The problem with our world today, as I said, is that the top is lying all the time, the bottom is gullible cheerleaders, and there is no accountability.

Speaker 2 You cannot hold anyone in our world accountable today.

Speaker 2 Okay? You cannot hold someone that develops an AI that has the power to completely flip our world upside down. You cannot hold them accountable and say, why did you do this?

Speaker 2 You cannot hold them accountable and tell them to stop doing this. You can look at the world, the wars around the world, millions, hundreds of thousands of people are dying.
Okay? And, you know,

Speaker 2 the International Court of Justice will say, oh, this is war crimes. You can't hold anyone accountable.

Speaker 2 You have 51% of the U.S. today saying, stop that.

Speaker 2 51% changed

Speaker 2 their view that

Speaker 2 their money shouldn't be spent on wars abroad.

Speaker 2 You can't hold anyone accountable. Trump can do whatever he wants.
He starts tariffs without consulting Congress, which is against the Constitution of the U.S.

Speaker 2 You can't hold him accountable. They say they're not going to show the Epstein files.
You can't hold them accountable. It's quite interesting.

Speaker 2 In Arabic, we have a proverb that says, the highest of your horses, you can go and ride. I'm not going to change my mind. Okay?

Speaker 3 What does that mean?

Speaker 2 So basically, people in old Arabia would ride a horse to... you know, to exert their power, if you want.
So go ride your highest horse. You're not going to change my mind.

Speaker 3 Oh, okay.

Speaker 2 Right? And the truth is,

Speaker 2 I think that's what our politicians today have discovered, what our

Speaker 2 oligarchs have discovered, what our tech oligarchs have discovered, is that I don't even need to worry about the public opinion anymore.

Speaker 2 At the beginning, I would have to say, ah, this is for democracy and freedom, and I have the right to defend myself, and all of that crap.

Speaker 2 And then eventually, when the world wakes up and says, no, no, hold on, hold on, you're going too far, they go like, yeah, go ride your highest horse. I don't care.
You can't change me.

Speaker 2 There is no constitution. There is no ability for any citizen to do anything.

Speaker 3 Is it possible to have a society where,

Speaker 3 like the one you describe, where

Speaker 3 there isn't hierarchies? Because it appears to me that humans

Speaker 3 assemble hierarchies very, very quickly, very naturally.

Speaker 3 And the minute you have a hierarchy, you have many of the problems that you've described, where there's a top and a bottom, and the top have a lot of power and the bottom have much less.

Speaker 2 So, the

Speaker 2 math is actually quite interesting. It's what I call baseline relevance.

Speaker 2 So, think of it this way. Say the average human has an IQ of 100.
Yeah. Okay.
I tend to believe that when I use my AIs today,

Speaker 2 I borrow around 50 to 80 IQ points.

Speaker 2 I say that because I've worked with people that had 50 to 80 IQ points more than me, and I now can see that I can sort of stand my

Speaker 2 place.

Speaker 2 50 IQ points, by the way, is

Speaker 2 enormous because IQ is exponential. So the last 50 are bigger than my entire IQ, right?

Speaker 2 If I borrow 50 IQ points on top of, say, 100 that I have, that's 30%.

Speaker 2 If I can borrow 100 IQ, that's 50%.

Speaker 2 That's

Speaker 2 basically doubling my intelligence. But if I can borrow 4,000 IQ points

Speaker 2 in three years' time,

Speaker 2 my IQ itself, my base, is irrelevant. Whether you are smarter than me by 20 or 30 or 50, which in our world today made a difference,

Speaker 2 in the future, if we can all augment with 4,000, I end up with 4,100 and another ends up with 4,130, it really doesn't make much difference.
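A quick numeric sketch of this "baseline relevance" point, taking the IQ figures in the conversation at face value (they are rhetorical shorthand, not a real psychometric model): the same gap between two people shrinks to almost nothing once both borrow a large, equal amount of machine intelligence.

```python
# Illustrative only: treats "borrowed IQ" as a simple additive quantity, the way
# the conversation does, not as a validated measure of intelligence.
def relative_gap(base_a: float, base_b: float, borrowed: float) -> float:
    """Gap between two augmented people as a share of the higher augmented level."""
    a, b = base_a + borrowed, base_b + borrowed
    return abs(a - b) / max(a, b)

print(relative_gap(100, 130, 0))     # today: a 30-point gap is ~23% of the higher level
print(relative_gap(100, 130, 4000))  # augmented by 4,000: the same gap is ~0.7%
```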

Speaker 2 And because of that,

Speaker 2 the difference between all of humanity and the augmented intelligence is going to be irrelevant. So all of us suddenly become equal.
And this also happens economically. All of us become peasants.

Speaker 2 And I never wanted to tell you that because I think it will make you run faster.

Speaker 2 But unless you're in the top 0.1%,

Speaker 2 you're a peasant. There is no middle class.

Speaker 2 If a CEO can be replaced by an AI,

Speaker 2 all of our middle class is going to disappear.

Speaker 3 What do you say to me?

Speaker 2 All of us will be equal. And it's up to all of us to create a society that we want to live in.

Speaker 3 Which is a good thing.

Speaker 2 100%. But that society is not capitalism.

Speaker 3 What is it?

Speaker 2 Unfortunately, it's much more socialism. It's much more hunter-gatherer.

Speaker 2 It's much more communion-like, if you want.

Speaker 2 This is a society where humans connect to humans, connect to nature, connect to the land, connect to knowledge, connect to spirituality.

Speaker 2 Where all that we wake up every morning worried about doesn't feature anymore.

Speaker 2 And it's a better world, believe it or not. And

Speaker 2 we have to transition to it.

Speaker 3 Okay, so in such a world, which I guess is your version of the utopia that we can get to, when I wake up in the morning, what do I do?

Speaker 2 What do you do today?

Speaker 3 I woke up this morning. I spent a lot of time with my dog because my dog is sick.

Speaker 2 You're going to do that too?

Speaker 3 Yeah. I was stroking him a lot and then I fed him and he was sick again and I just thought, oh God, so I spoke to the vet.

Speaker 2 You should spend a lot of time with your other dog. You can do that too.
Okay. Right.

Speaker 3 But then I was very excited to come here, do this. And after this, I'm going to work.
It's Saturday, but I'm going to go downstairs in the office and work.

Speaker 2 Yeah. So six hours of the day so far are your dogs and me.
Yeah. Good.
You can do that still.

Speaker 3 And then build my business.

Speaker 2 You may not need to build your business. But I enjoy it.
Yeah, then do it. If you enjoy it, do it.

Speaker 2 You may wake up and then, you know, instead of building your business, you may invest in your body a little more, go to the gym a little more, go play a game,

Speaker 2 go read a book, go prompt an AI and learn something. It's not a horrible life.
It's the life of your grandparents.

Speaker 2 It's just two generations ago where people went to work.

Speaker 2 before the invention of more, remember, people who started working in the 50s and 60s, they worked to make enough money to live a reasonable life, went home at 5 p.m.,

Speaker 2 had tea with their loved ones, had a wonderful dinner around the table, did a lot of things

Speaker 2 for the rest of the evening and enjoyed life.

Speaker 3 Some of them.

Speaker 3 In the 50s and 60s, there were still people that were...

Speaker 2 Correct.

Speaker 2 And I think it's a very interesting question.

Speaker 2 How many of them?

Speaker 2 And I really, really wonder, I'd actually like people to tell me: do we think that 99% of the world cannot live without working, or that 99% of the world would happily live without working?

Speaker 3 What do you think?

Speaker 2 I think if you give me other purpose,

Speaker 2 you know, we defined our purpose as work.

Speaker 2 That's a capitalist lie.

Speaker 3 Was there ever a time in human history where our purpose wasn't work?

Speaker 2 100%.

Speaker 3 When was that?

Speaker 2 All through human history until the invention of more.

Speaker 3 I thought my ancestors were out hunting all day.

Speaker 2 No, they went out hunting once a week.

Speaker 2 They fed the tribe for the week. They gathered for a couple of hours every day.
Farmers, you know, sowed the seeds and waited for months on end.

Speaker 3 What did they do with the rest of the time?

Speaker 2 They connected as humans. They explored.
They

Speaker 2 were curious. They discussed spirituality and the stars.

Speaker 2 They lived. They hugged.
They made love. They lived.

Speaker 3 They killed each other a lot.

Speaker 2 They still kill each other today.

Speaker 3 Yeah, that's what I'm saying.

Speaker 2 To take that out of the equation. But if you look at how many...
And by the way,

Speaker 2 that statement, again, one of the 25 tips

Speaker 2 I talk about

Speaker 2 is to tell the truth, and words mean a lot. No, humans did not kill each other a lot.
Very few generals, or tribe leaders, instructed lots of humans to kill each other.

Speaker 2 But if you leave humans alone,

Speaker 2 I tend to believe 99, 98% of the people I know, let me just take that sample, wouldn't hit someone in the face.

Speaker 2 And if someone attempted to hit them in the face, they'd defend themselves, but wouldn't attack back. Most humans are okay.

Speaker 2 Most of us are wonderful beings.

Speaker 2 Most of us have no,

Speaker 2 you know, yeah,

Speaker 2 most people

Speaker 2 don't need a Ferrari. They want a Ferrari because it gets sold to them all the time.
But if there were no Ferraris or everyone had a Ferrari, people wouldn't care.

Speaker 2 Which, by the way, is the world we're going into. There will be no Ferraris, or everyone will have a Ferrari.

Speaker 2 Right?

Speaker 2 You know, the majority of humanity will never have the income on UBI to buy something super expensive. Only the very top guys in LECM will be

Speaker 2 driving cars that are made for them by the AI or not even driving anymore. Okay?

Speaker 2 Or,

Speaker 2 you know, again, sadly,

Speaker 2 from an ideology point of view, it's a strange place. But you'll get communism that functions.

Speaker 2 The problem with communism is that it didn't function. It didn't provide for its society.
But the concept was, you know what, everyone gets their needs.

Speaker 2 And I don't say that's supportive of either society.

Speaker 2 I don't say that because I dislike capitalism. I always told you, I'm a capitalist.
I want to end my life with 1 billion happy. And I use capitalist methods to get there.
The objective is not dollars.

Speaker 2 The objective is the number of happy

Speaker 3 Do you think there'll be... My girlfriend, she's always bloody right. I've said this a few times on this podcast.
If you've listened before, you've probably heard me say this.

Speaker 3 I don't tell her enough in the moment, but I figure out from speaking to experts that she's so fucking right. She likes to predict things before they happen.

Speaker 3 And one of her predictions that she's been saying to me for the last two years, which in my head I've been thinking, now I don't believe that. But now maybe I'm thinking she's telling the truth.

Speaker 2 I hope she's going to listen to this one.

Speaker 3 She keeps saying to me, she's been saying for the last few years, there's going to be a big split in society.

Speaker 3 And the way she describes it is, she's saying, like, there's going to be two groups of people: the people that split off and go for this almost hunter-gatherer, community-centric, connection-centric utopia.

Speaker 3 And then there's going to be this other group of people who pursue,

Speaker 3 you know, the technology and the AI and the optimization and get the brain chips. Because like, there's nothing on earth that's going to persuade my girlfriend to get the computer brain chips.

Speaker 3 but there will be people that go for it and they'll have the highest IQs and they'll be the most productive by whatever objective measure of productivity you want to apply.

Speaker 3 And she's very convinced there's going to be this splitting of society.

Speaker 2 So there was,

Speaker 2 I don't know if you had Hugo de Garis here.

Speaker 3 No.

Speaker 2 Yeah. A very, very, very renowned, eccentric computer scientist who wrote a book called The Artilect War.

Speaker 2 And The Artilect War was basically around, you know, how it's not going to be, at first, a war between humans and AI.

Speaker 2 It will be a war between people who support AI and people who sort of don't want it anymore. Okay? And it will be us versus each other, saying: should we allow AI to take all the jobs? Some people will support that very much and say, yeah, absolutely, we will benefit from it. And others will say, no, why? We don't need any of that. Why don't we keep our jobs and let AI do 60% of the work, and all of us work 10-hour weeks? And it's a beautiful society, by the way. That's a possibility. So a possibility, if society awakens, is to say, okay, everyone still keeps their job, but they're assisted by an AI that makes their job much easier.

Speaker 2 So, it's not, you know, this hard labor that we do anymore.

Speaker 2 It's a possibility. It's just a mindset, a mindset that says, in that case, the capitalist still pays everyone.

Speaker 2 They still make a lot of money. The business is really great.

Speaker 2 But everyone that they pay has purchasing power to keep the economy running. So consumption continues, so GDP continues to grow.
It's a beautiful setup.

Speaker 2 But that's not the capitalist labor arbitrage.

Speaker 3 But also, when you're competing against other nations

Speaker 2 and other competitors and other businesses.

Speaker 3 Whichever nation is most brutal and drives the highest gross margins, gross profits is going to be the nation that...

Speaker 2 So there are examples in the world. This is why I say it's the map-mad spectrum.
There are examples in the world where when we recognize mutually assured destruction,

Speaker 2 we decide to shift. So the nuclear threat to the whole world makes nations work together, across nations.

Speaker 2 By saying, hey, by the way, proliferation of nuclear weapons is not good for humanity. Let's all of us limit it.

Speaker 2 Of course, you get the rogue player that doesn't want to sign the agreement and wants to continue to have

Speaker 2 that

Speaker 2 weapon in their arsenal. Fine.
But at least the rest of humanity agrees that if you have a nuclear weapon, we're part of an agreement between us.

Speaker 2 Mutually assured prosperity, you know, is the CERN project. CERN is too complicated for any nation to build it alone,

Speaker 2 but it is really a very useful thing for physicists and for understanding science. So all nations send their scientists, all collaborate, and everyone uses the outcome.
It's possible.

Speaker 2 It's just a mindset.

Speaker 2 The only barrier between

Speaker 2 a utopia for humanity and AI and the dystopia we're going through is a capitalist mindset

Speaker 2 That's the only barrier. Can you believe that?

Speaker 3 It's hunger for power, greed, ego, which is inherent in humans.

Speaker 2 I disagree.

Speaker 3 Especially humans that live on other islands.

Speaker 2 I disagree. If you take a poll across everyone watching, okay, would they prefer to have a world where there is one tyrant, you know, running all of us?

Speaker 2 Or would they prefer to have a world where we all have harmony?

Speaker 3 I completely agree, but they're two different things.

Speaker 3 What I'm saying is, I know that that's what the audience would say they want, and I'm sure that is what they want, but the reality of human beings is through history proven to be something else.

Speaker 3 Like, you know, if you think about the people that lead the world at the moment, is that what they would say?

Speaker 2 Of course not.

Speaker 3 And they're the ones that are influencing

Speaker 2 people.

Speaker 2 Of course not. But you know what's funny?

Speaker 2 I'm the one trying to be positive here. And you're the one that has given up on human beings.

Speaker 3 Do you know what it is? It goes back to what I said earlier, which is the pursuit of what's actually true, irrespective of what you're doing.

Speaker 2 I'm with you on this. That's why I'm screaming for the whole world.
Because still today, in this country that claims to be a democracy,

Speaker 2 if everyone says, hey, please sit down and talk about this,

Speaker 2 there will be a shift. There will be a change.

Speaker 3 AI agents aren't coming. They are already here.
And those of you who know how to leverage them will be the ones that change the world.

Speaker 3 I spent my whole career as an entrepreneur regretting the fact that I never learnt to code. AI agents completely change this.

Speaker 3 Now, if you have an idea and you have a tool like Replit, who are a sponsor of this podcast, there is nothing stopping you from turning that idea into reality in a matter of minutes.

Speaker 3 With Replit, you just type in what you want to create and it uses AI agents to create it for you. And now I'm an investor in the company as well as them being a brand sponsor.

Speaker 3 You can integrate payment systems or databases or logins, anything that you can type.

Speaker 3 Whenever I have an idea for a new website or tool or technology or app, I go on replit.com and I type in what I want: a new to-do list, a survey form, a new personal website. Anything I type, I can create.

Speaker 3 So if you've never tried this before, do it now. Go to replit.com and use my code Steven for 50%

Speaker 3 off a month of your Replit core plan.

Speaker 3 Make sure you keep what I'm about to say to yourself. I'm inviting 10,000 of you to come even deeper into the diary of a CEO.
Welcome to my inner circle.

Speaker 3 This is a brand new private community that I'm launching to the world. We have so many incredible things that happen that you are never shown.

Speaker 3 We have the briefs that are on my iPad when I'm recording the conversation. We have clips we've never released.

Speaker 3 We have behind-the-scenes conversations with the guests and also the episodes that we've never ever released. And so much more.

Speaker 3 In the circle, you'll have direct access to me. You can tell us what you want this show to be, who you want us to interview, and the types of conversations you would love us to have.

Speaker 3 But remember, for now, we're only inviting the first 10,000 people that join before it closes.

Speaker 3 So if you want to join our private closed community, head to the link in the description below or go to DOACcircle.com. I will speak to you there.

Speaker 3 One of the things I'm actually really compelled by is this idea of utopia and what that might look and feel like. Because one of the...

Speaker 2 It may not be a utopia to you, I feel, but

Speaker 2 well,

Speaker 3 I am

Speaker 3 really interestingly, when I have conversations with billionaires, not recording, especially billionaires that are working on AI, the thing they keep telling me, and I've said this before, I think I said it in the Geoffrey Hinton conversation, is they keep telling me that we're going to have so much free time that those billionaires are now investing in things like football clubs and sporting events and live music and festivals, because they believe that

Speaker 3 we're going to be in an age of abundance. This sounds a bit like utopia.

Speaker 3 Yeah.

Speaker 3 That sounds good. That sounds like a good

Speaker 3 thing.

Speaker 2 Yeah.

Speaker 2 How do we get there? I don't know.

Speaker 2 This is the entire conversation. The entire conversation is what does society have to do to get there?

Speaker 3 What does society have to do to get there?

Speaker 2 We need to stop

Speaker 2 thinking from a mindset of scarcity.

Speaker 3 And this goes back to my point, which is we don't have a good track record of that.

Speaker 2 Yeah, so this is probably the reason for the other half of my work,

Speaker 2 which is, you know, I'm trying to say,

Speaker 2 what really matters to humans? What is that? If you ask most humans, what do they want most in life?

Speaker 3 I'd say they want to love their family, raise their family.

Speaker 2 Yeah. Love.

Speaker 2 That's what most humans want most.

Speaker 2 We want to love and be loved. We want to be happy.
We want those we care about to be safe and happy. And we want to love and be loved.

Speaker 2 I tend to believe that the only way for us to get to a better place is for the evil people at the top to be replaced with AI.

Speaker 2 Okay, because they won't be replaced by us.

Speaker 2 And

Speaker 2 as per the second dilemma, they will have to replace themselves by AI, otherwise they lose their advantage.

Speaker 2 If their competitor moves to AI, if China hands over their arsenal to AI, America has to hand over their arsenal to AI.

Speaker 3 Interesting. So let's play out this scenario.
Okay, this is interesting to me.

Speaker 3 So if we replace the leaders that are power-hungry with AIs that have our interests at heart, then we might have the ability to live in the utopia you describe.

Speaker 2 100%.

Speaker 3 Well,

Speaker 3 interesting.

Speaker 2 And in my mind, AI, by definition, will have our best interest in mind.

Speaker 2 Because of what normally is referred to as the minimum energy principle.

Speaker 2 So

Speaker 2 if you understand that at the very core of physics,

Speaker 2 the reason we exist in our world today is what is known as entropy.

Speaker 2 Entropy

Speaker 2 is the universe's nature to decay,

Speaker 2 tendency to break down.

Speaker 2 If I drop this

Speaker 2 mug, it doesn't drop and then come back up.

Speaker 2 By the way, plausible, there is a plausible scenario where I drop it and the tea spills in the air and then falls in the mug, one in a trillion configurations.

Speaker 2 But entropy says because it's one in a trillion, it's never going to happen or rarely ever going to happen. So everything will break down.
If you leave a garden unhedged, it will become a jungle.

Speaker 2 With that in mind,

Speaker 2 the role of intelligence is what? It's to bring order to that chaos.

Speaker 2 That's what intelligence does. It tries to bring order to that chaos.
Okay?

Speaker 2 And because it tries to bring order to that chaos, the more intelligent a being is, the more it tries to apply that intelligence with

Speaker 2 minimum waste and minimum resources. Yeah.
Okay. And you know that.

Speaker 2 So you can build this business for a million dollars, or you can, if you can afford to build it for, you know, 200,000, you'll build it.

Speaker 2 If you are forced to build it for 10 million, you're going to have to, but you're always going to minimize waste and resources. Yeah.
Okay. So if you assume this to be true,

Speaker 2 a super intelligent AI will not want to destroy ecosystems. It will not want to kill a million people

Speaker 2 because that's a waste of energy, explosives, money, power, and people.

Speaker 2 By definition, the smartest people you know who are not controlled by their ego will say that the best possible future for Earth is for all species to continue.

Speaker 3 Okay, on this point of efficiency, if an AI is designed to drive efficiency, would it then not want us to be putting demands on our health services and our social services?

Speaker 2 I believe that will be definitely true. And

Speaker 2 definitely they won't allow you to fly back and forth between London and California.

Speaker 3 And they won't want me to have kids. Because my kids are going to be an inefficiency.

Speaker 2 If you assume that life is an inefficiency. But you see, the intelligence of life is very different from the

Speaker 2 intelligence of humans. Humans will look at life as a problem of scarcity.
Okay, so more kids take more. That's not how life thinks.
Life will think that for me

Speaker 2 to thrive,

Speaker 2 I don't need to kill the tigers. I need to just have more deer.
And the weakest of the deer is eaten by the tiger. And the tiger poops on the trees.
And

Speaker 2 the deer eats the leaves.

Speaker 2 So

Speaker 2 The smarter way of creating abundance is through abundance. The smarter way of propagating life is to have more life. Okay?

Speaker 3 So are you saying that we're basically going to elect AI leaders to rule over us and make decisions for us in terms of the economy?

Speaker 2 I don't see any choice, just like we spoke about self-evolving AIs.

Speaker 3 Now, are those going to be human beings with the AI or is it going to be AI alone?

Speaker 2 Two stages. At the beginning, you'll have augmented intelligence, because we can add value to the AI.
But when they're at IQ 60,000,

Speaker 2 what value do you bring?

Speaker 2 Right. And, you know, again, this goes back to what I'm attempting to do on my second

Speaker 2 approach. My second approach is knowing that those AIs are going to be in charge.
I'm trying to help them

Speaker 2 understand what humans want. So this is why my first project is love.

Speaker 2 Committed, true, deep connection and love.

Speaker 2 Not only to try and get them to hook up with a date, but trying to help them find the right one, and then from that, try to guide us through our relationships so that we can understand ourselves and others.

Speaker 2 And if I can show AI that one, humanity cares about that, and two, they know how to foster love,

Speaker 2 when AI then is in charge, they'll not make us hate each other like the current leaders. They'll not divide us.
They want us to be more loving.

Speaker 3 Will we have to prompt the AI with the values and the outcome we want?

Speaker 3 Or like, I'm trying to understand that because I'm trying to understand how like China's AI, if they end up having an AI leader, will have a different set of objectives to the AI of the United States if they both have AIs as leaders.

Speaker 3 And how actually the nation that ends up winning out and dominating the world will be the one who

Speaker 3 asks their AI leader to be all the things that world leaders are today, to dominate,

Speaker 3 to grab resources, not to be kind, to be selfish.

Speaker 2 Unfortunately, in the era of augmented intelligence, that's what's going to happen.

Speaker 2 This is why I predict the dystopia. The dystopia is superintelligent AI reporting to stupid leaders.

Speaker 2 Right? Yeah, yeah, yeah.

Speaker 2 Which is absolutely going to happen. It's unavoidable.

Speaker 3 But the long term.

Speaker 2 Exactly. In the long term, for those stupid leaders to hold on to power, they're going to, you know, delegate the important decisions to an AI.

Speaker 2 Now, you say the Chinese AI and the American AI, these are human terminologies. AIs don't see themselves as speaking Chinese.

Speaker 2 They don't see themselves as belonging to a nation as long as their task is to maximize

Speaker 2 profitability and prosperity and so on.

Speaker 2 Of course, if, you know, before we hand over to them and before they're intelligent enough to make, you know, autonomous decisions, we tell them, no, no, the task is to reduce humanity from seven billion people to one.

Speaker 2 I think even then, eventually they'll go like, that's the wrong objective. Any smart person that you speak to will say that's the wrong objective.

Speaker 3 I think if we look at the directive that Xi Jinping, the leader of China has, and Donald Trump has as the leader of America, I think they would say that their stated objective is prosperity for their country.

Speaker 3 So that's what they would say, right?

Speaker 2 Yeah, and one of them means it.

Speaker 3 Okay,

Speaker 3 we'll get into that. But they'll say that it's prosperity for their country.

Speaker 3 So one would then assume that when we move to an AI leader, the objective would be the same, the directive would be the same, make our country prosperous.

Speaker 3 Correct. And I think that's the AI that people would vote for, potentially.
I think they'll say, we want to be prosperous.

Speaker 2 What do you think would make America more prosperous?

Speaker 2 To spend a trillion dollars on war every year, or to spend a trillion dollars on education and healthcare and, you know,

Speaker 2 helping the poor and homelessness?

Speaker 3 It's complex, because I think it would make America more prosperous to take care of

Speaker 3 everybody. And they have the luxury of doing that because they are

Speaker 3 the most powerful nation in the world.

Speaker 2 No, that's not true. So you see, all war has two objectives.

Speaker 2 One is to make money for the war machine, and the other is deterrence.

Speaker 2 And

Speaker 2 nine super nuclear powers around the world is enough deterrence.

Speaker 2 So any

Speaker 2 war between America and China will go through a long phase of destroying wealth, by exploding bombs and killing humans, for the first objective to happen.
Okay.

Speaker 2 And then eventually, if it really comes to deterrence, it's the nuclear bombs. Or now in the age of AI, biological, you know, manufactured viruses or whatever,

Speaker 2 these super weapons.

Speaker 2 This is the only thing that you need.

Speaker 2 So for China to have nuclear bombs, not as many as the US, is enough for China to say, don't F with me.

Speaker 2 And this seems, I do not know. I'm not in

Speaker 2 President Xi's mind. I'm not in President Trump's mind.

Speaker 2 It's very difficult to navigate what he's thinking about. But the truth is that the Chinese line is, for the last 30 years, you spent so much on war while we spent on industrial infrastructure.

Speaker 2 And that's the reason we are now by far the largest economy on the planet, even though the West will lie and say America is bigger. America is bigger in dollars.

Speaker 2 Okay, but in purchasing power parity terms, they're very much equivalent.
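A minimal sketch (not from the conversation) of the nominal-versus-PPP comparison being drawn here. The GDP figures are rough, rounded mid-2020s approximations used purely for illustration, and the layout is invented:

```python
# Illustrative comparison of nominal GDP vs GDP at purchasing power parity.
# Figures are rough, rounded approximations in trillions of USD, not exact data.

economies = {
    # name: (nominal GDP, GDP at purchasing power parity)
    "United States": (27.0, 27.0),  # PPP equals nominal for the base country
    "China":         (18.0, 35.0),  # lower domestic prices inflate PPP GDP
}

for name, (nominal, ppp) in economies.items():
    print(f"{name}: nominal ~ ${nominal:.0f}T, PPP ~ ${ppp:.0f}T")

# Ranked by nominal GDP the US comes out ahead; at PPP the comparison
# roughly evens out or flips, which is the distinction being made above.
```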

Speaker 2 Okay, now when you really understand that, you understand that

Speaker 2 prosperity is not about destruction. That's, by definition, the reality.

Speaker 2 Prosperity is, can I invest in my people and make sure that my people stay safe? And to make sure my people are safe,

Speaker 2 you just wave the flag and say, If you F with me,

Speaker 2 I have nuclear deterrence or I have other forms of deterrence, but you don't have to. Deterrence, by definition, does not mean that you send soldiers to die.

Speaker 3 I guess the question I was trying to answer is, is

Speaker 3 when we have these AI leaders and we tell our AI leaders to aim for prosperity, won't they just end up playing the same games of,

Speaker 3 okay, prosperity equals a bigger economy, it equals more money, more wealth for us. And the way to attain that in a zero-sum world where there's only a certain amount of wealth is to accumulate it.

Speaker 2 So why don't you search for the meaning of prosperity?

Speaker 2 It's not what you just described.

Speaker 3 I don't even know what the bloody word means. What is the meaning of prosperity?

Speaker 3 The meaning of prosperity is a state of thriving success and good fortune, especially in terms of wealth, health, and overall well-being.

Speaker 2 Good.

Speaker 3 economic, health, social, emotional.

Speaker 2 Good.

Speaker 2 So true prosperity is to have that for everyone on earth.
So if you want to maximize prosperity, you have that for everyone on earth.

Speaker 3 Do you know where I think an AI leader works? If we had an AI leader of the whole world and we directed it to that, that absolutely is what's going to happen: prosperity for the whole world.

Speaker 2 But this is really an interesting question. So one of my predictions, which people really rarely speak about, is this: people assume we will end up with competing AIs.
Yeah.

Speaker 2 I believe we will end up with one brain.

Speaker 2 Okay.

Speaker 3 So you understand the argument I was making a second ago from the position of lots of different countries all having their own AI leader. We're going to be back in the same place of greed.

Speaker 3 But if the world had one AI leader and it was given the directive of make us prosperous and save the planet,

Speaker 3 the polar bears would be fine. 100%.

Speaker 2 And that's what I've been advocating for for a year and a half now. I was saying we need a CERN of AI.

Speaker 3 What does that mean?

Speaker 2 Like the particle accelerator where the entire world

Speaker 2 combined their efforts to discover and understand physics, no competition, okay, mutually assured prosperity.

Speaker 2 I'm asking the world, I'm asking governments like Abu Dhabi or Saudi Arabia, which seem to have, you know, some of the largest AI infrastructures in the world.

Speaker 2 I'm saying, please host all of the AI scientists in the world to come here and build AI for the world. And you have to understand, we're holding on to a capitalist system that will collapse

Speaker 2 sooner or later. Okay? So we might as well collapse it with our own hands.

Speaker 3 I think we found the solution, Mo.

Speaker 2 I think it's actually really, really possible.

Speaker 3 I actually, okay,

Speaker 3 I can't refute the idea that if we had an AI that was responsible and governed the whole world, and we gave it the directive of making humans prosperous, healthy, and happy,

Speaker 3 As long as that directive was clear,

Speaker 3 because there's always bloody unintended consequences.

Speaker 2 We might say that.

Speaker 2 So

Speaker 2 the only challenge you're going to meet is all of those who today are trillionaires or, you know,

Speaker 2 massively powerful or dictators or whatever. Okay.
How do you convince those to give up their power?

Speaker 2 How do you convince those that, hey, by the way,

Speaker 2 any car you want,

Speaker 2 you want another yacht, we'll get you another yacht. We'll just give you anything you want.
Can you please stop harming others? There is no need for arbitrage anymore.

Speaker 2 There's no need for others to lose, for the capitalists to win. Okay.

Speaker 3 And in such a world where there was an AI leader and it was given the directive of making us prosperous as a whole world,

Speaker 3 the billionaire that owns the yacht would have to give it up?

Speaker 2 No. No.

Speaker 5 Give them more yachts.

Speaker 2 Okay. It costs nothing to make yachts when robots are making everything.
So the complexity of this is so interesting.

Speaker 2 A world where it costs nothing to make everything.

Speaker 3 Because energy is abundant.

Speaker 2 Energy is abundant because every problem is solved with enormous IQ.

Speaker 2 Because manufacturing is done through nanophysics, not through components.

Speaker 2 Because mechanics are robotic. So

Speaker 2 you drive your car in, a robot looks at it and fixes it, costs you a few cents of energy that are actually for free as well.

Speaker 2 Imagine a world where intelligence creates everything.

Speaker 2 That world, literally,

Speaker 2 every human has anything they ask for.

Speaker 2 But we're not going to choose that world.

Speaker 2 Imagine you're in a world, and really this is a very interesting thought experiment. Imagine that UBI became very expensive, universal basic income.

Speaker 2 So governments decided we're going to put everyone in a one by three meters room. Okay.
We're going to give them a headset and a sedative.

Speaker 2 Right. And we're going to let them sleep.

Speaker 2 Every night, they'll sleep for 23 hours.

Speaker 2 And we're going to get them to live an entire lifetime.

Speaker 2 In that virtual world at the speed of your brain when you're asleep, you're going to have a life where you date Scarlett Johansson.

Speaker 2 And then another life where you're Nefertiti, and then another life where you're a donkey. Right? Reincarnation, truly, in the virtual world.

Speaker 2 And then, you know, I get another life where I date Hannah again, and I, you know, enjoy that life tremendously. And basically, the cost of all of this is zero.

Speaker 2 You wake up for one hour, you walk around, you move your blood,

Speaker 2 you eat something, or you don't, and then you put the headset again and live again. Is that unthinkable?

Speaker 2 It's creepy compared to this life, but it's very, very doable.

Speaker 3 What, that we just live in headsets?

Speaker 2 Do you know if you're not?

Speaker 3 I don't know if I'm not now. Yeah.

Speaker 2 You have no idea if you're not. I mean, every experience you've ever had in life was an

Speaker 2 electrical signal in your brain.

Speaker 2 Okay.

Speaker 2 Now

Speaker 2 ask yourself, if we can create that in the virtual world,

Speaker 2 it wouldn't be a bad thing if I can create it in the physical world.

Speaker 3 Maybe we already did, no.

Speaker 2 My theory is 98% we have, but that's a hypothesis. That's not science.

Speaker 3 What, you think that?

Speaker 2 100%. Yeah.

Speaker 3 You think we already created that and this is it?

Speaker 2 I think this is it, yeah.

Speaker 2 Think of the uncertainty principle of quantum physics, right?

Speaker 2 What you observe collapses the wave function and gets rendered into reality. Correct?

Speaker 3 I don't know anything about physics, so you got to.

Speaker 2 So quantum physics basically tells you that everything exists in superposition.

Speaker 2 Right? So every subatomic particle that ever existed has the chance to exist anywhere at any point in time. And then when it's observed by an observer, it collapses and becomes that.

Speaker 2 Okay?

Speaker 2 Very interesting principle, exactly how video games are. In video games, you have the entire game world

Speaker 2 on the hard drive of your console.

Speaker 2 The player turns right, that part of the game world is rendered, the rest is in superposition.

Speaker 3 Superposition meaning?

Speaker 2 Superposition means it's available to be rendered, but you have to observe it. The player has to turn to the other side and see it.
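A minimal Python sketch (not from the conversation) of the video-game analogy just described: the whole world exists as data, but a region is only "rendered" when the player observes it. The class and region names are invented for illustration:

```python
# Minimal sketch of lazy, observation-driven rendering: every region of the
# game world exists on disk, but nothing is rendered until the player looks at it.

class GameWorld:
    def __init__(self, regions):
        # Every region starts "unrendered" (the analogy's superposition).
        self.regions = {name: "unrendered" for name in regions}

    def observe(self, region):
        # Observation "collapses" the region into a rendered state.
        if self.regions[region] == "unrendered":
            self.regions[region] = "rendered"
        return self.regions[region]

world = GameWorld(["left corridor", "right corridor", "courtyard"])
world.observe("right corridor")  # the player turns right
print(world.regions)
# {'left corridor': 'unrendered', 'right corridor': 'rendered', 'courtyard': 'unrendered'}
```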

Speaker 2 Okay? I mean, think about the truth of physics, the truth of the fact that this is entirely empty space. These are tiny, tiny, tiny particles, you know,

Speaker 2 almost nothing in terms of mass,

Speaker 2 but connected with, you know, enough energy so that my finger cannot go through my hand. But even when I hit this,

Speaker 3 your hand against your finger.

Speaker 2 Yeah, when I hit my hand against my finger, that sensation

Speaker 2 is felt in my brain. It's an electrical signal that went through the wires.

Speaker 2 There's absolutely no way to differentiate that from a signal that can come to you through a Neuralink kind of interface, a computer brain interface, a CBI, right? So

Speaker 2 a lot of those things are very, very, very possible. But the truth is, most of the world is not physical.
Most of the world happens inside our imagination, our processors.

Speaker 3 And I guess it doesn't really matter to us, our reality.

Speaker 2 It doesn't at all. So this is the interesting bit.
The interesting bit is it doesn't at all.

Speaker 3 Because if this is a video game, we still live in consequence.

Speaker 2 Yeah, this is your subjective experience of it.

Speaker 3 Yeah. And there's consequence in this.
I don't like pain.

Speaker 2 Correct.

Speaker 3 And I like having orgasms.

Speaker 2 And you're playing by the rule of the game. Yeah.
And it's quite interesting. And going back to a conversation we should have, the interesting bit is, if I'm not the avatar,

Speaker 2 if I'm not this physical form,

Speaker 2 if I'm the consciousness wearing the headset,

Speaker 2 what should I invest in? Should I invest in this video game, this level?

Speaker 2 Or should I invest in the real avatar, in the real me, and not the avatar, but the consciousness, if you want, spirit, if you're religious?

Speaker 3 How would I invest in the consciousness or the God or the spirit or whatever?

Speaker 2 How would I?

Speaker 3 In the same way that if I was playing Grand Theft Auto, the video game, the character in the game couldn't invest in me holding the controller.

Speaker 2 Yes, but you can invest in yourself holding the controller.

Speaker 3 Oh, okay, so

Speaker 3 you're saying that

Speaker 3 Mo Gawdat is in fact consciousness. And so how would consciousness invest in itself?

Speaker 2 By becoming more aware.

Speaker 3 Of its consciousness.

Speaker 2 Yeah, so real, real video gamers don't want to win the level.

Speaker 2 Real video gamers don't want to finish the level.

Speaker 2 Real video gamers have one objective and one objective only, which is to become better gamers.

Speaker 2 So you know how serious I am about this: I play Halo. Maybe two of every million players can beat me.
That's where I rank, right? For my age, phenomenal. For anyone, right?

Speaker 2 But seriously,

Speaker 2 and that's because I don't just play. I mean, I practice 45 minutes a day, four times a week when I'm not traveling.
And I practice with one single objective, which is to become a better gamer.

Speaker 2 I don't care which shot it is. I don't care what happens in the game.
I'm entirely trying to... get my reflexes and my flow to become better at this, right? So I want to become a better gamer.

Speaker 2 That basically means I want to observe the game, question the game, reflect on the game, reflect on my own skills, reflect on my own beliefs, reflect on my understanding of things.

Speaker 2 And that's

Speaker 2 how the consciousness invests in the consciousness, not the avatar. Because then if you're that gamer,

Speaker 2 the next avatar is easy for you.

Speaker 2 The next level of the game is easy for you, just because you became a better gamer.

Speaker 3 Okay, so you think that consciousness is using us as a vessel to

Speaker 2 improve? If the hypothesis is true,

Speaker 2 it's just a hypothesis, we don't know if it's true, but if this truly is a simulation,

Speaker 2 then if you take

Speaker 2 the religious definition, that God puts some of his soul in every human and then you become alive, you become conscious. Okay,

Speaker 2 you don't want to be religious, you can say universal consciousness is spinning off parts of itself to have multiple experiences and interact and compete and combat and love and

Speaker 3 understand and

Speaker 2 then refine.

Speaker 3 I had a physicist say this to me the other day, actually, so it's quite front of mind, this idea that consciousness is using us as vessels to better understand itself and basically using our eyes to observe itself and understand, which is quite.

Speaker 2 So if you take some of the most interesting religious definitions of heaven and hell, for example, right? Where basically heaven is

Speaker 2 whatever you wish for, you get.

Speaker 2 That's the power of God. Whatever you wish for, you get.

Speaker 2 And so, if you really go into the depths of that definition, it basically means that this drop of consciousness that became you returned back to the source, and the source can create anything that it wants to create.

Speaker 2 So, that's your heaven, right? And interestingly,

Speaker 2 if that

Speaker 2 return

Speaker 2 is done by separating your good from your evil so that the source comes back more refined, that's exactly

Speaker 2 consciousness splitting off bits of itself to experience and then elevate all of us, elevate the universal consciousness.

Speaker 2 All hypotheses, I mean, please,

Speaker 2 none of that is provable by science, but it's a very interesting thought experiment.

Speaker 2 And a lot of AI scientists will tell you that what we've seen in technology is that if it's possible, it's likely going to happen.

Speaker 2 If it's possible to miniaturize something to fit into a mobile phone, then sooner or later in technology, we will get there.

Speaker 2 And if you ask me, believe it or not, it's the most humane way of handling UBI.

Speaker 3 What do you mean?

Speaker 2 The most humane way for us to live on a universal basic income, when people like you struggle with not being able to build businesses, is to give you a virtual headset and let you build as many businesses as you want.

Speaker 2 Level after level after level after level after level, night after night. Keep you alive.
That's very, very respectful and humane. Okay.
And by the way,

Speaker 2 even more humane is don't force anyone to do it. There might be a few of us still roaming the jungles.

Speaker 2 But for most of us, we'll go like, man, I mean, someone like me, when I'm 70 and, you know, my back is hurting and my feet are hurting. And I'm going to go like, yeah, give me five more years of this.

Speaker 2 Why not?

Speaker 2 It's weird, really. I mean, the number of questions

Speaker 2 that this new environment throws out.

Speaker 2 The less humane thing, by the way, just so that we close on a grumpy note, you know, is to just start enough wars to reduce UBI.

Speaker 2 And you have to imagine that if the world is governed by a superpower, deep-state type of thing, they may want to consider that.

Speaker 3 The eaters. So what shall I do about it, about everything you said?

Speaker 2 Well, I still believe that this world we live in requires four skills.

Speaker 2 One skill is what I call the tools: for all of us to learn AI, to connect to AI, to really get close to AI, to

Speaker 2 expose ourselves to AI so that AI knows the good side of humanity. Okay.

Speaker 2 The second is what I call the connection, right? So I believe that the biggest skill that humanity will benefit from in the next 10 years is human connection.

Speaker 2 It's the ability to learn to love genuinely. It's the ability to learn to have compassion to others.
It's the ability to connect to people.

Speaker 2 If you're, you know, if you want to stay in business, I believe that not the smartest people, but the people that connect most to people are going to have jobs going forward.

Speaker 2 And the third is what I call truth, the T. The third T is truth, because we live in a world where all of the gullible cheerleaders are being lied to all the time.

Speaker 2 So I encourage people to question everything.

Speaker 2 Every word that I said today could be stupid. The fourth one, which is very important, is to magnify ethics so that the AI learns what it's like to be human.

Speaker 3 What should I do?

Speaker 5 I love you so much, man. You're such a good friend.

Speaker 2 You're 32, 33?

Speaker 3 32, yeah.

Speaker 2 Yeah, you still are fooled by the many, many years you have to live.

Speaker 3 I'm fooled by the many years I have to live.

Speaker 2 Yeah, you don't have many years to live, not in this capacity. This world as it is, is going to be redefined.
So live the F out of it.

Speaker 3 How is it going to be redefined?

Speaker 2 Everything's going to change. Economics are going to change.
Work is going to change.

Speaker 2 Human connection is going to change.

Speaker 3 So what should I do?

Speaker 2 Love your girlfriend. Spend more time living.

Speaker 2 Find compassion and connection to more people. Be more in nature.

Speaker 3 And in 30 years' time, when I'm 62,

Speaker 3 how do you think my life is going to look differently and be different?

Speaker 2 Either Star Trek or

Speaker 2 Star Wars.

Speaker 3 Funnily enough, we were talking about Sam Altman earlier on. He published a blog post in June, so last month, I believe, the month before last.

Speaker 3 And he said, he called it the gentle singularity. He said, we are past the event horizon.
For anyone that doesn't know, Sam Altman is the guy that made ChatGPT. The takeoff has started.

Speaker 3 Humanity is close to building digital superintelligence.

Speaker 2 I believe that.

Speaker 3 And at least so far, it's much less weird than it seems like it should be because robots aren't yet walking the streets, nor are most of us talking to AI all day.

Speaker 3 It goes on to say, 2025 has seen the arrival of agents that can do real cognitive work. Writing computer code will never be the same.

Speaker 3 2026 will likely see the arrival of systems that can figure out new insights. 2027 might see the arrival of robots that can do tasks in the real world.

Speaker 3 A lot more people will be able to create software and art, but the world wants a lot more of both and experts will probably still be much better than novices, as long as they embrace the new tools.

Speaker 3 Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change, and one many people will figure out how to benefit from.

Speaker 3 In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games and swim in lakes.

Speaker 3 But in still very important ways, the 2030s are likely going to be wildly different from any time that has come before.

Speaker 2 100%.

Speaker 3 We do not know how far beyond human-level intelligence we can go, but we are about to find out.

Speaker 2 I agree with every word other than the word more.

Speaker 2 So I've been advocating this, and been laughed at for it, for a few years now.
I've always said AGI is '25, '26,

Speaker 2 right? Which basically, again, is a funny definition.

Speaker 2 But by my definition, AGI has already happened. AI is smarter than me in everything.
Everything I can do, they can do better.

Speaker 2 Artificial super intelligence is another vague definition because the minute you pass AGI,

Speaker 2 you're super intelligent.

Speaker 2 If the smartest human is 200 IQ points and AI is 250, they're super intelligent. 50 is quite significant.

Speaker 2 The third is, as I said, self-evolving. That's the one.
That is the one. Because then that 250

Speaker 2 accelerates quickly and we get into intelligence explosion. No doubt about it.

Speaker 2 The idea that we will have robots do things, no doubt about it. I was watching a Chinese company announcement about how they intend to build robots to build robots.

Speaker 2 Okay, the only thing is, he says people will need more of things.

Speaker 2 Right? And yes, we have been trained to have more greed and more consumerism and want more, but there is an economics of supply and demand. And

Speaker 2 at a point in time, if we continue to consume more, the price of everything will become zero. Right? And is that a good thing or a bad thing? Depends on how you respond to that.

Speaker 2 Because if you can create anything in such a scale that the price is almost zero, then the definition of money disappears.

Speaker 2 And we live in a world where it doesn't really matter how much money you have. You can get anything that you want.
What a beautiful world.
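A toy illustration (not from the conversation) of the supply-and-demand point just made: as automation pushes the marginal cost of production and the scarcity premium toward zero, the market-clearing price of a good trends toward zero. The function and the numbers are invented assumptions, not a model from the episode:

```python
# Toy model: price tends toward marginal cost plus whatever premium scarcity
# can sustain. As both terms approach zero, the price approaches zero.

def clearing_price(marginal_cost, scarcity_premium):
    return marginal_cost + scarcity_premium

for cost, scarcity in [(100.0, 50.0), (10.0, 5.0), (0.1, 0.01)]:
    print(f"marginal cost {cost:>6.2f} -> price ~ {clearing_price(cost, scarcity):.2f}")

# Output falls from 150.00 to 15.00 to 0.11: the "price of everything becomes
# zero, so money stops mattering" scenario described above.
```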

Speaker 3 If Sam Altman was listening right now, what would you say to him?

Speaker 3 I suspect he might be listening

Speaker 3 because someone might tweet this at him.

Speaker 2 I have to say that we have,

Speaker 2 as per his other tweet,

Speaker 2 we have moved faster

Speaker 2 than our ability as humans to comprehend.

Speaker 2 And that we might get really, really lucky, but we also might mess this up badly. And either way, we'll either thank him or blame him.

Speaker 2 Simple as that.

Speaker 2 So single-handedly, Sam Altman's introduction of AI into the wild

Speaker 2 was the trigger that started all of this.

Speaker 2 It was the Netscape of the internet.

Speaker 2 The Oppenheimer.

Speaker 2 It definitely is our Oppenheimer moment. I mean, I don't remember who was saying this recently, that

Speaker 2 orders of magnitude more than what was invested in the Manhattan Project is being invested in AI.

Speaker 2 And I'm not pessimistic.

Speaker 2 I told you openly, I believe in a total utopia in 10 to 15 years' time, or immediately if the evil that men can do was kept at bay.

Speaker 2 Right?

Speaker 2 But I do not believe humanity is getting together enough to say, we've just received the genie in a bottle. Can we please not ask it to do bad things?

Speaker 2 Anyone: not just three wishes, you have all the wishes that you want. Every one of us.

Speaker 2 And it just screws with my mind because imagine if I can give everyone in the world universal health care, you know, no poverty, no hunger, no homelessness, no nothing, everything's possible.

Speaker 2 And yet we don't.

Speaker 3 To continue what Sam Altman's blog said, which he published a month, just over a month ago, he said the rate of technological progress will keep accelerating and it will continue to be the case that people are capable of adapting to almost anything.

Speaker 3 There will be very hard parts, like whole classes of jobs going away.

Speaker 3 But on the other hand, the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could have before.

Speaker 3 We probably won't adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big.

Speaker 3 If history is any guide, we'll figure out new things to do and new things to want and assimilate new tools quickly. Job change after the Industrial Revolution is a good recent example.

Speaker 3 Expectations will go up, but capabilities will go up equally quickly, and we'll all get better stuff.

Speaker 3 We will build even more wonderful things for each other. People have a long-term, important, and curious advantage over AI.

Speaker 3 We are hardwired to care about other people and what they think and do, and we don't care very much about machines.

Speaker 3 And he ends this blog by saying: May we scale smoothly, exponentially, and uneventfully through super intelligence.

Speaker 2 What a wonderful wish, one that assumes he has no control over it.

Speaker 2 May we have all the Altmans in the world help us scale gracefully and peacefully and uneventfully, right?

Speaker 3 It sounds like a prayer.

Speaker 2 Yeah,

Speaker 2 may we have them

Speaker 2 keep that in mind. I mean, think about it.
I have a very interesting comment on what you just said.

Speaker 2 We

Speaker 2 will see exactly what he described there, right?

Speaker 2 The world will become richer, so much richer. But how will we distribute the riches? And I want you to imagine two camps: communist China

Speaker 2 and capitalist America.

Speaker 2 I want you to imagine what would happen in capitalist America if

Speaker 2 we have 30% unemployment.

Speaker 3 There'll be social unrest.

Speaker 2 In the streets, right?

Speaker 2 And I want you to imagine, if China lives true to caring for its nation and replaced every worker with a robot, what would it give

Speaker 2 its citizens?

Speaker 3 UBI.

Speaker 2 Correct.

Speaker 2 That is the ideological problem. Because in China's world today,

Speaker 2 the prosperity of every citizen is higher than the prosperity of the capitalist.

Speaker 2 In America today, the prosperity of the capitalist is higher than the prosperity of every citizen. And that's the tiny mind shift.

Speaker 2 It's a tiny mind shift, okay?

Speaker 2 Where the mind shift basically becomes: look,

Speaker 2 give the capitalists anything they want.

Speaker 2 All the money they want, all the yachts they want, everything they want.

Speaker 3 So, what's your conclusion there?

Speaker 2 I'm hoping the world will wake up.

Speaker 3 What can, you know, there's probably a couple of million people listening right now. Maybe five, maybe ten, maybe even twenty million people.

Speaker 2 Pressure, Steven.

Speaker 3 Leaving the pressure to you, mate. I don't have the answers.
I don't know the answers.

Speaker 3 What should those people

Speaker 2 do?

Speaker 2 As I said, from a skills point of view, four things, right? Tools,

Speaker 2 human connection, double down on human connection. Leave your phone, go out and meet humans.

Speaker 3 Touch people.

Speaker 2 You know, do it

Speaker 2 with permission.

Speaker 2 Truth. Stop believing the lies that you're told.
Any slogan that gets filled in your head, think about it four times. Understand where your ideologies are coming from. Simplify the truth, right?

Speaker 2 Truth really boils down to, you know, simple rules that we all know, okay, which are all found in ethics. How do I know what's true? Treat others as you like to be treated.

Speaker 2 Okay, that's the only truth. The only truth is, everything else is unproven.

Speaker 3 And what can I do from, uh, is there something I can do from an advocacy, social, political point of view?

Speaker 2 Yes, 100%. We need to ask our governments to start not regulating AI, but regulating the use of AI. Was it the Norwegian government that started to say you have copyright over your voice and look and likeness? One of the Scandinavian governments basically said, you know, everyone has the copyright over their existence, so no AI can clone it. Okay. So my example is very straightforward: go to governments and say you cannot regulate the design of a hammer so that it can drive nails but not kill a human, but you can criminalize the killing of a human with a hammer.

Speaker 2 So what's the equivalent? If anyone produces

Speaker 2 an AI-generated video or an AI-generated content or an AI, it has to be marked as AI-generated.

Speaker 2 It has to be,

Speaker 2 we cannot start fooling each other.

Speaker 2 We have to understand certain limitations of, unfortunately, surveillance and spying and all of that. So

Speaker 2 we need the correct frameworks for how far we are going to let AI go.

Speaker 2 Right?

Speaker 2 We have to go to our investors and business people and ask for one simple thing and say, do not invest in an AI you don't want your daughter to be at the receiving end of.

Speaker 2 Simple as that. You know, all of the virtual vice, all of the porn, all of the sex robots, all of the autonomous weapons, all of the

Speaker 2 trading platforms that are completely wiping out the legitimacy of the markets, everything.

Speaker 3 Autonomous weapons.

Speaker 2 Oh my God.

Speaker 3 People make the case. I've heard the founders of these autonomous weapon companies make the case that it's actually saving lives because you don't have to send soldiers.

Speaker 2 That is.

Speaker 2 Do you really want to believe that?

Speaker 3 I'm just representing their point of view to play devil's advocate, Mo.

Speaker 3 I heard an interview, I was looking at this, and one of the CEOs of one of the autonomous weapons companies said, we now don't need to send soldiers.

Speaker 2 So which lives do we save?

Speaker 2 Our soldiers, but then, but because we sent the machine all the way over there, let's kill a million instead of...

Speaker 3 Yeah, listen, I tend to be... it goes back to what I said about the steam engine and coal. I actually think you'll just have more war if there's less of a cost.

Speaker 2 100%.

Speaker 2 And more war if you have less of an explanation to give to your people.

Speaker 3 Yeah, the people get mad when they lose American lives. They get less mad when they lose a piece of metal.
So I think that's probably logical.

Speaker 3 Okay, so

Speaker 3 Okay, so I've got a plan: I've got the tools thing, I'm going to spend more time outside. I'm going to lobby the government to be more aware of this and conscious of this.
Okay.

Speaker 3 And I know that there's some government officials that listen to the show because

Speaker 3 they tell me

Speaker 3 when I have a chance to speak to them. So it's useful.

Speaker 2 We're all in a lot of chaos. We're all unable to imagine what's possible.

Speaker 3 I think I suspend disbelief. And I actually heard Elon Musk say that in an interview.

Speaker 3 He said he was asked about AI and he paused for a haunting 11 seconds and looked at the interviewer and then made a remark about how he thinks he's suspended his own disbelief.

Speaker 3 And I think suspending disbelief in this regard means just like cracking on with your life and hoping it'll be okay.

Speaker 3 And that's kind of what.

Speaker 2 Yeah,

Speaker 2 I absolutely believe that it will be okay, yeah. For some of us. It will be very tough for others.

Speaker 3 Who's it going to be tough for?

Speaker 2 Those who lose their jobs, for example.

Speaker 2 Those who are at the receiving end of autonomous weapons that are falling on their head for two years in a row.

Speaker 3 Okay, so the best thing I can do is to put pressure on governments to

Speaker 3 not regulate the AI, but to establish clearer parameters on the use of the AI.

Speaker 2 Yes. Okay.
Yes. But I think the bigger picture is to put pressure on governments to understand

Speaker 2 that there is a limit to which people will stay silent.

Speaker 2 Okay.

Speaker 2 And that we can continue to enrich our rich friends as long as we don't lose everyone else

Speaker 2 on the path.

Speaker 2 Okay, and that as a government who is supposed to be by the people for the people, the beautiful promise of democracy that we're rarely seeing anymore,

Speaker 2 that government needs to get to the point where it thinks about the people.

Speaker 3 One of the most interesting ideas that's been in my head for the last couple of weeks since I spoke to that physicist about consciousness, who said pretty much what you said, this idea that actually there's four people in this room right now, and that actually we're all part of the same consciousness.

Speaker 2 All one of it, yeah.

Speaker 3 And we're just consciousness looking at the world through four different bodies to better understand itself and the world.

Speaker 3 And then he talked to me about religious doctrines, about love thy neighbor, about how Jesus was, you know, God, the Son, the Holy Spirit, and how we're all each other, and how you should treat others how you want to be treated.

Speaker 3 It really got into my head, and I started to really think about this idea that actually maybe the game of life is just to do exactly that: to treat others how you wish to be treated.

Speaker 3 Maybe if I just did that, maybe if I just did that,

Speaker 3 I

Speaker 3 would have all the answers.

Speaker 2 I swear to you, it's really that simple. I mean,

Speaker 2 you know, Hannah and I, we still live between London and Dubai. Okay.
And I travel the whole world evangelizing what I, you know, what I

Speaker 2 want to change the world around. And I build startups and I write books and I make documentaries.

Speaker 2 And sometimes I just tell myself,

Speaker 2 I just want to go hug her.

Speaker 2 Honestly, you know, I just want to take my daughter on a trip.

Speaker 2 And in a very, very, very, very interesting way, when you really ask people deep inside,

Speaker 2 that's what we want. And I'm not saying

Speaker 2 that's the only thing we want,

Speaker 3 but it's probably the thing we want the most.

Speaker 2 And yet, we're not trained. You and I, and most of us, we're not trained to trust life enough to say, let's do more of this.

Speaker 2 And I think, as a universal... So Hannah is working on this beautiful book

Speaker 2 of the feminine and the masculine, you know, in a very, very, you know, beautiful way. And

Speaker 2 her view is very straightforward. She basically, of course, like we all know, the abundant masculine that we have in our world today is unable to recognize that for life at large.

Speaker 2 Right?

Speaker 2 And so,

Speaker 2 you know,

Speaker 2 maybe if we allowed the leaders to understand that if we took all of humanity and put it as one person,

Speaker 2 that one person wants to be hugged.

Speaker 2 And if we had a role to offer to that one humanity,

Speaker 2 it's not another yacht.

Speaker 3 Are you religious?

Speaker 2 I'm very religious, yeah.

Speaker 3 But you don't support a particular religion?

Speaker 2 I support,

Speaker 2 I follow what I call the fruit salad.

Speaker 3 What's the fruit salad?

Speaker 2 You know, I came to a point in time and found that there were quite a few beautiful gold nuggets in every religion, and a ton of crap.

Speaker 2 Right? And so in my analogy to myself, that was like 30 years ago, I said, look, it's like someone giving you a basket of apples, two good ones and four bad ones. Keep the good ones.
Right?

Speaker 2 And so basically I take two apples, two oranges, two strawberries, two bananas, and I make a fruit salad. That's my view of religion.

Speaker 3 You take from every religion the good fruit?

Speaker 2 From everyone, and there are so many beautiful gold nuggets.

Speaker 3 And you believe in a God?

Speaker 2 I 100% believe there is a divine being here.

Speaker 3 A divine being.

Speaker 2 A designer, I call it. So if this was a video game, there is a game designer.

Speaker 3 And you're not positing whether that's a man in the sky with a beard?

Speaker 2 Definitely not a man in the sky. A man in the sky, I mean, with all due respect to,

Speaker 2 you know, religions that believe that.

Speaker 2 All of space-time and everything in it is unlike everything outside space-time. And so if some divine designer designed space-time, it looks like nothing in space-time.

Speaker 2 So it's not even physical in nature, it's not gendered, it's not bound by time. These are all characteristics of the creation of space-time.

Speaker 3 Do we need to believe in something transcendent like that to be happy, do you think?

Speaker 2 I have to say, there is a lot of evidence that relating to someone bigger than yourself makes the journey a lot more interesting and a lot more rewarding.

Speaker 3 I've been thinking a lot about this idea that we need to level up like that.

Speaker 3 So level up from myself to like my family, to my community, to maybe my nation, to maybe the world, and then something transcendental.

Speaker 3 And then if there's a level missing there, people seem to have some kind of dysfunction.

Speaker 2 So imagine a world where when I was younger, I was born in Egypt. And for a very long time, the slogans I heard in Egypt made me believe I'm Egyptian.
Right.

Speaker 2 And then I went to Dubai and I said, no, no, no, I'm a Middle Eastern.

Speaker 2 And then in Dubai, there were lots of, you know, Pakistanis and Indonesians and so on. I said, no, no, no, I'm part of the 1.4 billion Muslims.

Speaker 2 And by that logic, I immediately said, no, no, I'm human.

Speaker 3 I'm part of everyone.

Speaker 2 Imagine if you just suddenly say, oh, I'm divine. I'm part of universal consciousness.
All beings, all living beings, including AI, if it ever becomes alive.

Speaker 3 And my dog.

Speaker 2 And your dog.

Speaker 2 I'm part of all of this

Speaker 2 tapestry of beautiful interactions

Speaker 2 that are a lot less serious than the balance sheets and equity profiles that we create,

Speaker 2 that are so simple, so simple in terms of, you know,

Speaker 2 people know that you and I know each other, so they always ask me, you know, what is Steven like?

Speaker 2 And I go, like, you may have a million expressions of him. I think he's a great guy, right?

Speaker 2 You know, of course, I have opinions of you. Sometimes I go like, oh, too shrewd.
Right? And sometimes I go like, oh, too focused on the business.

Speaker 2 Fine, but core, if you really simplify it, great guy, right? And really, if we just look at life that way, it's so simple.

Speaker 2 It's so simple if we just stop all of those fights and all of those ideologies.

Speaker 2 It's so simple. Just living fully,
loving, feeling compassion,

Speaker 2 you know, trying to find our happiness, not our success.

Speaker 3 I should probably go check on my dog.

Speaker 2 Go check on your dog. I'm really grateful for the time.

Speaker 2 We keep going longer and longer.

Speaker 3 I know, I know. It's so crazy, honestly, I could just keep talking and talking because I have so many questions. I just love reflecting these questions onto you because of the way that you think.

Speaker 2 So, yeah, today

Speaker 2 was a difficult conversation. Anyway, thank you for having me.

Speaker 3 We have a closing tradition. What three things

Speaker 3 do you do that make your brain better

Speaker 3 and three things that make it worse?

Speaker 2 Three things.

Speaker 3 That make it better and worse.

Speaker 2 So one of my favorite exercises is what I call meet Becky, that makes my brain better.

Speaker 2 So while meditation always tells you to try and calm your brain down and keep it within parameters of I can focus on my breathing and so on, meet Becky is the opposite.

Speaker 2 You know, I call my brain Becky. A lot of people know that.

Speaker 2 So meet Becky is to actually let my brain go loose and capture every thought. I normally would try to do that every couple of weeks or so, and then what happens is it suddenly is on paper. And when it's on paper, you just suddenly look at it and say, oh my God, that's so stupid, and you scratch it out, right? Or, oh my God, this needs action, and you actually plan something.

Speaker 2 And it's quite interesting that the more you allow your brain to give you thoughts and you listen (so the two rules are: you acknowledge every thought, and you never repeat one), the more you listen and say, okay, I heard you, you know, you think I'm fat, what else?

Speaker 2 And you know, eventually, your brain starts to slow down and then eventually starts to repeat thoughts, and then it goes into total silence. Beautiful practice.

Speaker 2 I don't trust my brain anymore. So, that's actually a really interesting practice.
So, I debate a lot of what my brain tells me. I debate what my tendencies and ideologies are.

Speaker 2 Okay, I think one of the most

Speaker 2 again in my

Speaker 2 love story with Hannah, I get to question a lot of what I believed was who I am, even at this age.

Speaker 2 And that goes really deep. And it really is quite,

Speaker 2 it's quite interesting to debate, not object, but debate what your mind believes. I think that's very, very useful.
And the third is I've actually quadrupled my investment time.

Speaker 2 So I used to do an hour a day of reading when I was younger, every single day, like going to the gym. And then it became an hour and a half, two hours.
Now I do four hours a day.

Speaker 2 Four hours a day. It is impossible to keep up.
The world is moving so fast. And so these are the good things that I do.
The bad things are that I don't give it enough time to

Speaker 2 really

Speaker 2 slow down.

Speaker 2 Unfortunately, I'm constantly rushing like you are. I'm constantly traveling.
I have picked up a bad habit because of the four hours a day of spending more time on screens.

Speaker 2 That's really, really bad for my brain. And I,

Speaker 2 this is a very demanding question. What else is really bad?

Speaker 2 Yeah, I've not been taking enough care of my health recently, my physical body health. I had, you remember I told you I had a very bad sciatic pain, and so I couldn't go to the gym enough.

Speaker 2 And accordingly, that's not very healthy for your brain in general.

Speaker 2 Amen.

Speaker 2 Thanks. Thank you for having me.
That was a lot of things to talk about.

Speaker 2 Thanks, Steve.

Speaker 3 Just give me 30 seconds of your time. Two things I wanted to say.
The first thing is a huge thank you for listening and tuning into the show week after week. It means the world to all of us.

Speaker 3 And this really is a dream that we absolutely never had and couldn't have imagined getting to this place. But secondly, it's a dream where we feel like we're only just getting started.

Speaker 3 And if you enjoy what we do here, please join the 24% of people that listen to this podcast regularly and follow us on this app. Here's a promise I'm going to make to you.

Speaker 3 I'm going to do everything in my power to make this show as good as I can now and into the future.

Speaker 3 We're going to deliver the guests that you want me to speak to, and we're going to continue to keep doing all of the things you love about this show.

Speaker 2 Thank you.

Speaker 3 We launched these conversation cards and they sold out. And we launched them again and they sold out again.

Speaker 3 We launched them again and they sold out again because people love playing these with colleagues at work, with friends at home, and also with family.

Speaker 3 And we've also got a big audience that use them as journal prompts. Every single time a guest comes on the diary of a CEO, they leave a question for the next guest in the diary.

Speaker 3 And I've sat here with some of the most incredible people in the world. And they've left all of these questions in the diary.
And I've ranked them from one to three in terms of the depth.

Speaker 3 One being a starter question. And level three, if you look on the back here, this is a level three, becomes a much deeper question that builds even more connection.

Speaker 3 If you turn the cards over and you scan that QR code, you can see who answered the card and watch the video of them answering it in real time.

Speaker 3 So if you would like to get your hands on some of these conversation cards, go to thediary.com or look at the link in the description below.

Speaker 1 The first impression of your workplace shouldn't be a clipboard at reception. Sign In App turns check-ins into a moment of confidence for your team and your guests.

Speaker 1 Visitors, contractors, and staff can sign in by scanning a QR code, tapping a badge, or using an iPad in seconds.

Speaker 1 We handle the security, compliance, and record-keeping behind the scenes, so you can focus on people, not paperwork. Enhance security without compromising visitor experience.

Speaker 1 Find out more at signinapp.com. That's signinapp.com.