Will AI Save Humanity or End It? with Mustafa Suleyman

1h 45m
Trevor (who is also Microsoft's "Chief Questions Officer") and Mustafa Suleyman, CEO of Microsoft AI and co-founder of Google's DeepMind, do a deep dive into whether the benefits of AI to the human race outweigh its unprecedented risks.



Transcript

Speaker 1 So I feel like people fall into one of two camps on AI. They either think it's going to destroy all of humanity.

Speaker 2 42% of CEOs surveyed fear artificial intelligence could destroy humanity. This is something that put in the wrong hands could destroy humanity.

Speaker 1 Or they think it's going to solve every single problem. Mustafa Suleyman.

Speaker 4 Mustafa Suleyman.

Speaker 1 Mustafa Suleyman is an artificial intelligence pioneer.

Speaker 5 He is the AI CEO at Microsoft. He is very big in the artificial intelligence world.

Speaker 3 How do we manage these technologies so that we can coexist with them safely?

Speaker 9 Can humans and AI coexist with each other peacefully without one taking over the other?

Speaker 3 This is What Now with Trevor Noah.



Speaker 3 Mustafa, how are you, man?

Speaker 7 I'm very good, man. This is great.

Speaker 8 This is good.

Speaker 21 It's funny that there's almost two or three different types of conversations I have with people. There's ones where I'm hanging out, it's my friends, we're discussing just whatever, you know, shooting the shit.

Speaker 3 Then there's some where I bring a person in and I'm trying to like get something from them or learn about their world.

Speaker 3 And then there's the third type of interview that often stresses me the most because I feel like I'm speaking to a person who is who has like an outsized influence on our world.

Speaker 3 And if I mess it up, I don't ask the questions that the world needs. And I feel like you're one of those people because

Speaker 3 even before your current job, you were considered one of, like, if there was a Mount Rushmore

Speaker 24 of

Speaker 8 the founders of AI,

Speaker 3 with none of the baggage, with none of the baggage of Mount Rushmore.

Speaker 8 No colonial history. No colonial history.

Speaker 3 Yeah, no colonial history. But if there was like a large Mount Rushmore, your face would be up there,

Speaker 3 you know, as being part of DeepMind and the founders of DeepMind. And then now

Speaker 3 you are helping Microsoft, like one of the biggest, you know, tech companies in the world by, you know, market cap and just by influence shape its view on AI. And so

Speaker 3 maybe that's where I wanted to start.

Speaker 3 this conversation because it's it almost feels like where we meet you in the journey now you know what would you say has been the biggest shift in in your life and what you've been doing doing in AI?

Speaker 3 Going from a startup that was on the cusp and this fledgling world of AI to now being at the head of what's going to shape all of our lives in AI?

Speaker 28 Wow, what an opener.

Speaker 29 I mean, seriously, no pressure.

Speaker 32 You know, the crazy thing is that I've just got incredibly lucky.

Speaker 35 I mean, I was strange enough to start working on something that everybody thought was impossible, that was totally sci-fi, that was, you know, just dismissed by even the best academics, let alone the big tech companies, who didn't take it seriously.

Speaker 51 I mean, 2010, you know, just to really ground that: we had just got mobile phones three years earlier.

Speaker 39 The App Store was just coming alive.

Speaker 57 You couldn't even easily upload a photo from your phone to, you know, an app or the cloud.

Speaker 60 Right.

Speaker 62 And somehow, somehow, you know, my courageous, visionary co-founders, Demis Hassabis and Shane Legg, had the foresight to know that digital technologies ultimately become these learning algorithms, which, when fed more data and given more compute, have a very good chance of learning the structure and nature of the universe.

Speaker 73 And so I was just very privileged to be friends with them, be part of that mission.

Speaker 43 You know, I was only 25 years old and kind of had the fearlessness to believe that if we could create something that truly understood us as humans, then that actually represents one of the best chances we have of improving the human condition.

Speaker 43 And that's always been my motivation to create technologies that actually serve us and make the world a better place.

Speaker 83 Like I was cheesy before it was cheesy.

Speaker 8 Free cheese.

Speaker 3 You said something that, like, that sparked a thought in my mind. And I think a lot of people would love to better understand this.

Speaker 3 We see headlines all the time saying, AI, this, AI, and your job. AI does the thinking, the not thinking, though.
And you said something that engineers will gloss over quite quickly.

Speaker 3 You'll go, data and compute, and then a model.

Speaker 84 And then

Speaker 3 help me break that down. Help me, like, just explain that to me in the simplest terms possible.

Speaker 3 What changed and what are you actually doing? Because it's like we always had data, right?

Speaker 3 We've had documents, we've had files, we've had information. We always had computers.
Well, not always, but we had computers for decades.

Speaker 3 What changed, and where is AI actually coming from?

Speaker 32 So the best intuition that I have for it is that our physical world can be converted into an information world.

Speaker 75 And information is basically this abstract idea.

Speaker 28 Like it's mathematics.

Speaker 49 It doesn't exist in the physical world.

Speaker 73 But it's a representation of the physical objects.

Speaker 10 And the algorithm sounds complicated, but really it's just a mechanism for learning the structure of information and the relationship of one pixel to another pixel, or one word to another word, or one bit in the audio stream to the next bit in the audio stream.

Speaker 98 So I know that sounds very like abstract, but the structure of reality or the structure of information is actually a highly learnable sort of function, right?

Speaker 33 And that's what we saw in the very, very early part of AI between 2010 and sort of 2016.

Speaker 55 These models could learn to understand, or at least not understand, but maybe they could learn to generate a new image just by reading a million images.

Speaker 96 And that meant that it was learning that, you know, if an eye was here and another eye was there, then most likely there would be some form of a nose here.

Speaker 81 And although it didn't apply the word nose and eye, it just had a statistical correlation between those three objects, such that if you then wanted to imagine, well, where would the mouth be, it wouldn't put the mouth in the forehead.

Speaker 28 It would put the mouth just below the nose.

Speaker 92 And that's what I mean about the structure of information.

Speaker 81 The algorithm learns the relationship between the common objects in the training data.

Speaker 100 And it did that well enough that it could generate new examples of the training data.

Speaker 81 First, in 2011, it was handwritten black and white digits.

Speaker 55 Then by 2013, it was like cats in YouTube images, which was what Google did back in 2013.

Speaker 57 Then as it got better, it could do it with audio.

Speaker 81 And then over time, you know, roll forward another 10 years, it did it with text.

Speaker 76 And so it's just the same core mechanism for learning the structure of information that has scaled all the way through.

Speaker 3 So it's interesting.

Speaker 3 I heard you say what it does. And I also noticed at a moment you said it understands.
And then you said, well, no, wait.

Speaker 3 And I've actually noticed quite a few engineers and people who work in AI and tech struggle with explaining it to laypeople using, you know, human language, but then very quickly going, like, no, no, no, it's not human language.

Speaker 3 It's like, does it think or does it do what we think thinking is?

Speaker 110 Yeah, I mean, this is a profound question.

Speaker 81 And basically, it shows us the limitations of our own vocabulary.

Speaker 87 Because

Speaker 24 what is thinking?

Speaker 50 You know,

Speaker 29 it sounds like a silly question, but it's actually a very profound question.

Speaker 100 What is understanding? If I can simulate understanding so perfectly that I can't distinguish between what was generated by the simulation and what was generated by the thinking or understanding human being, then, if those two outputs are equivalently impressive, does it matter what's actually happening under the hood, whether it's thinking or understanding or whether it's conscious?

Speaker 69 That's a very, very difficult thing to ask because we're kind of behaviorists in the sense that as humans, we trust each other, we learn from each other, and we connect to each other socially by observing our actions.

Speaker 118 You know, I don't know what's happening inside your brain, behind your eyes, inside your heart, in your soul, but I certainly hear the voice that you give me, the words that you say, I watch the actions that you do, and I observe those behaviors.

Speaker 98 And the really difficult thing about the moment that we're entering with this new agentic AI era, as they become not just pattern-recognition systems but whole agents, is that we have to engage with their behaviors increasingly as though they're, like, sort of digital people.

Speaker 81 And this is a threshold transformation in the history of our species because they're not tools.

Speaker 71 They're clearly not humans.

Speaker 98 They're not part of nature.

Speaker 40 They're kind of a fourth relation, a fourth emergent kind of, I don't know how to describe it, other than a fourth relation.

Speaker 3 Yeah, I mean, you've called AI the most powerful general purpose technology that we've ever invented.

Speaker 3 And when I read that line in your book, I was thinking to myself, I was like, man, you are now at the epicenter of helping Microsoft shape this at scale. And then it made me wonder, what are you actually then trying to build?

Speaker 123 Because everyone has a different answer to this question, I've realized.

Speaker 3 If you ask Sam Altman of ChatGPT, Sam Altman says, I'm trying to build artificial general intelligence. And I go, like, oh, I like the app.
He's like, I don't care about the app, actually.

Speaker 3 I want to make the God computer. And then you speak to somebody else and they say, oh, I'm trying to make AI that can help companies.
I'm trying to make AI that helps.

Speaker 3 So what are you actually trying to build?

Speaker 104 I care about creating technologies that reduce human suffering.

Speaker 36 I want to create things that are truly aligned to human interests.

Speaker 50 I care about humanist superintelligence.

Speaker 47 And that means that at every single step, new inventions have to pass the following test.

Speaker 35 In aggregate, net-net, does it actually improve human well-being, reduce human suffering, and overall make the world a better place?

Speaker 43 And it seems ridiculous that we would even need to apply that test.

Speaker 71 Surely we would all take it for granted that no one would want to invent something that causes net harm.

Speaker 109 Right.

Speaker 120 But, you know, there's certainly been other inventions in the past that we could think of that, you know, arguably have delivered net harm.

Speaker 74 Right.

Speaker 59 And we have a choice about what we bring into the world.

Speaker 99 And so even though it's in the context of Microsoft, the most valuable company in the world today, we have to start with values and what we care about.

Speaker 130 And to me, a humanist superintelligence is one that always puts the human first and works in service of the human.

Speaker 126 And obviously, there'll be a lot of debate and interpretation over the next few decades about what that means in practice, but I think it's the correct starting point.

Speaker 3 I've always wondered how the two sides of your brain

Speaker 3 sort of wrestle with each other around these topics. Because,

Speaker 3 you know, someone asked me, they were like, oh, who are you having on? I go, Mustafa's coming on, Mustafa Salaiman. And they're like, oh, what is he doing? I explained a little bit.

Speaker 3 And I was like, oh, so like an AI guy.

Speaker 13 Then I was like, yeah, but he's also a philosopher.

Speaker 3 And they're like, what do you mean he's a philosopher? Then I was like, no, no, no. Like, actually, this is somebody who has studied philosophy, engaged.

Speaker 3 Like, you think about the human ramifications of the non-human technologies that are being built by and for humans

Speaker 3 And what it is for me is, I always judge people by what they choose to yada yada, if that makes sense, you know. So I've talked to some people in tech, and I say, what about the dangers? And they go, oh, look, I mean, of course we've got to be aware of the dangers, but the future, it's so big. And then I remember once, the first time I actually met you, I said,

Speaker 3 the technology is amazing. And then you went, the dangers.
Let me tell you about the dangers. Let me tell you about the things we need to consider.
And I was like, what just happened here?

Speaker 8 Do you know what I mean?

Speaker 3 I was just like, is this guy working against himself? And so I wonder now, like, when you're in that space, when you're working on something that is that big,

Speaker 3 how do you find the balance? Because we would be lying if we said humans could live in a world where we could ignore technology.

Speaker 3 As I've seen people say that, my opinion is that you can't ignore a technology, right? You can't just be like, no, we'll act like it doesn't exist.

Speaker 3 But on the other hand, we also can't act like the technology is inevitable because then we've given ourselves up.

Speaker 3 So when you're the person who's actually at the epicenter of trying to build our future, and I know it's not you alone, please don't get me wrong.

Speaker 109 But how do you...

Speaker 3 How do you think about that? How do you grapple with philosophy versus business, philosophy versus technology, human versus like an outcome? What are you thinking of?

Speaker 8 You've called out my split-brain personality, and now I'm, like, pinging which side of me should answer. I can answer twice.

Speaker 3 Yes, you can pick your answer.

Speaker 8 I'll give both.

Speaker 88 I think part of it is just being English, you know. I'm kind of, like, more comfortable than the average American thinking about the kind of cynical, dark side of things.

Speaker 3 It's those rainy days.

Speaker 31 It's rainy days, man.

Speaker 3 It's those rainy days.

Speaker 87 And I just think, I don't know, like truth exists in the honesty of looking at all sides.

Speaker 99 And I think if you have a kind of bias one way or another, it just doesn't feel real to me.

Speaker 34 And I guess that's kind of my philosopher or kind of academic side that is a core part of who I am.

Speaker 81 Like I'm comfortable, you know, living in, you know, what to some people might seem like a set of contradictions because to me, they're not contradictions.

Speaker 116 They're truth manifested in a single individual.

Speaker 135 But if you are honest about it, it's also manifested in every single one of us, too.

Speaker 94 You know, I happen to be in a position, but like the company has to wrestle with these things.

Speaker 80 Our governments have to wrestle with these things.

Speaker 135 Every single one of us as citizens has to confront this reality because...

Speaker 90 you know, every single technology just accelerates massive transformation, which can deliver unbelievable benefits and also create side effects.

Speaker 128 And it's like that idea has been repeated so many times, it now kind of sounds trite.

Speaker 104 But once you get over the trite part, you still have to engage with the fact that the very same thing that is going to reduce the cost of production of energy over the next two decades by 100x,

Speaker 10 reduce the cost of energy by 100x.

Speaker 7 You think that's what AI can do?

Speaker 115 100%.

Speaker 28 Like I feel very optimistic about that.

Speaker 3 Wait, wait, so now say that again.

Speaker 3 Reduce the cost of energy.

Speaker 93 I think energy is going to become pretty much a cheap and abundant resource.

Speaker 99 I mean, even solar panels alone are probably going to come down by another 5x in the next 10 years.

Speaker 87 Like just that breakthrough alone is going to reduce the price of most things.

Speaker 3 And what is that through? Is that, like, the AI being more efficient, teaching us how to create different energy grids, teaching us how to create energy differently? Like, what would you predict it coming from?

Speaker 115 Well, I mean, so at the most abstract level, these are pattern matching systems that find more efficient ways than we are able to invent ourselves as humans for combining new materials.

Speaker 26 Now, that might be in grid management and distribution.

Speaker 47 It might be inventing new synthetic materials for batteries and storing renewable energy.

Speaker 96 It might be in more efficient solar photovoltaic cells that can actually capture more per square inch, for example.

Speaker 74 I mean, there are so many breakthroughs that, you know, we are kind of on the cusp of, that require just one or two more pushes to get them over the line. Even the superconductors from last year, those things, any one of those could come in, right? And if they do, we see massive transformation in the economy. I mean, imagine if by 2045, you know, energy is, let's say, 10 to 100x cheaper. We will be able to desalinate water from the ocean

Speaker 81 anywhere, which means that we would have clean water in places that might be 50 degrees or whatever, you know, 120 degrees hot, right?

Speaker 48 Which means that we can grow crops in arid environments, which will mitigate the flow of migration because of climate change, which means that we could run AC units in places that we never could before.

Speaker 73 You know, there are so many knock-on effects of fundamental technologies, general purpose technologies like energy coming down by 10 to 100x.

Speaker 81 So there are huge reasons to be optimistic that everybody is going to get access to these technologies and the benefits of these technologies over the next couple of decades.

Speaker 108 And that will make life much easier and much cheaper for everybody on the planet.

Speaker 3 So let's jump into that a little bit. Like, it could make energy how many times cheaper?

Speaker 138 Well, I was saying 100x cheaper over 20 years.

Speaker 3 100x cheaper over 20 years. So this is one of those instances that I've struggled with, because, you know, like, depending on where you get information and how you get information, it changes how you perceive the issue, right?

Speaker 3 So I remember being really angry when I saw how much water is consumed by like typing one query into Copilot, ChatGPT, any AI model.

Speaker 3 Then I was even more angry when I saw how much water is consumed by like getting a picture, you know, made.

Speaker 3 And then I saw something else that was like, oh, this is nothing compared to cars and, you know,

Speaker 3 produce and like making hamburgers and that. And then I was like, okay, like, where's the information coming from? Where's it not coming from?

Speaker 3 The response to the price of AI, like, is it driven by the AI industry saying, no, this is actually not that bad? Or, like, how do you think we should look at it, or how do you look at it?

Speaker 86 I mean, look, it consumes vast amounts of resources, precious metals, electricity, water, no question about that, right?

Speaker 62 On the energy side of things, all of the big companies now are almost entirely 100% renewable.

Speaker 3 Certainly Microsoft, 100% renewable.

Speaker 63 I think we have 33 gigawatts of renewable energy in our entire fleet of computation.

Speaker 47 For comparison, I think Seattle consumes about 2 gigawatts of power.

Speaker 26 So just to put that into perspective.

Speaker 3 The whole of Seattle. The whole of Seattle consumes 2 gigawatts.
And Microsoft is creating how much?

Speaker 109 33 overall in the fleet. This is worldwide.

Speaker 3 Yeah, no, no, no, but still.

Speaker 43 And the vast majority of it is 100% renewable.

Speaker 60 So coming from solar or wind or water.

Speaker 57 But it also consumes a lot of water in the process.

Speaker 81 Like we have to cool these systems down.

Speaker 72 And, you know, for sure that consumes a lot.

Speaker 26 Now,

Speaker 43 I don't know that there is an easy way of, you know, there's no shortcut there.

Speaker 78 It's expensive.

Speaker 94 It consumes, you know, resources, it consumes a lot of resources from the environment.

Speaker 84 But I think net net, when you look at the beneficial impact, to me, it's justified.

Speaker 52 Like, you know, you wouldn't give up your car or tell people to give up their car anytime soon because it uses aluminium and rubber.

Speaker 78 And this is an essential part of your existence.

Speaker 99 And I think AI is going to become an essential part of everybody's existence and justify the environmental costs, even though that doesn't mean that we have to go and consume diesel generators and carbon-emitting sources.

Speaker 34 We get to start again from scratch, which is to say a new technology arrives, and a new standard has to be applied to it, which means that our water has to also be cleaned and recycled, and many of the data centers now do take full life-cycle responsibility for cleaning the water.

Speaker 93 And the same with the energy, it has to be renewable.

Speaker 90 So there's no easy way out.

Speaker 49 It's just a rough reality that producing things at this scale is definitely going to consume more resources from the environment.

Speaker 3 It's funny, every time I try and think of it, I think of the gift and curse that comes with anything that scales.

Speaker 3 You know, the analogy I'll always use for myself as I'll go, I think of like an aeroplane.

Speaker 3 Before an aeroplane is invented, especially like a large jumbo jet, the number of people who can die while being transported somewhere is much lower, really, if we're honest.

Speaker 3 You know, a car, four people, six people, whatever it might be, still tragic, but a smaller number.

Speaker 3 The plane comes along, you can go further, you can go faster, but it also means there can be something more devastating on the other side of that plane crashing or something going wrong.

Speaker 3 And it feels like that scales with AI as well.

Speaker 3 It sounds like you're saying to me, on the one side of it, this technology could completely change our relationship with economies and finance and society.

Speaker 3 But then there's the looming other side of it that could crash. And so maybe that's a good place for us to start diving into this: what's noise and what's very real for you as somebody who sees it?

Speaker 3 Because everyone gets a different headline about AI. It doesn't matter where you are in the world.

Speaker 3 It doesn't matter your religion, your race, whatever it is, everyone gets a different headline about AI.

Speaker 3 But when you're looking at it as somebody who is working on creating it every single day,

Speaker 3 what is real and what is noise in and around AI?

Speaker 100 So I think it's pretty clear to me that we're going to face mass job displacement sometime in the next 20 years.

Speaker 87 Because, whilst these technologies are, for the first part of their introduction, augmenting, like, they add to you as a human, they save you time.

Speaker 8 Yeah, it's like a bionic leg, but for like cognitive laborers, you know,

Speaker 87 I think like you could, you know, who was it?

Speaker 38 I think it was Steve Jobs that called it like the bicycle for the mind.

Speaker 50 You know, it's just sort of exercising, you know, digital technologies allow you to exercise new parts of your mind that you didn't know you had access to.

Speaker 49 And I think that's definitely true.

Speaker 59 But much of the work that people do today is quite routine and quite predictable.

Speaker 35 It's kind of mechanized,

Speaker 20 yeah, like cognitive manual labor.

Speaker 87 And so that stuff, the machines are going to get very, very good at those things.

Speaker 142 And the benefits to introducing those technologies are going to be very clear for the company, for the shareholder, for the government, for the, you know.

Speaker 25 And so we'll see like rapid displacement and people have to figure out, okay, what, what is my contribution to the labor market?

Speaker 8 I think those fears are very real.

Speaker 50 And that's where governments have to take a strong hand because there needs to be a mechanism for taxation redistribution.

Speaker 50 Taxation is a tool for incentivizing certain types of technologies to be introduced in certain industries.

Speaker 59 And so it's not just about generating revenue, it's about limiting, adding friction to the introduction of certain technologies so that we can figure out how to create other opportunities for people as this transition takes place.

Speaker 3 Yeah, it's funny. One of my favorite quotes I ever heard was:

Speaker 3 I think it was Sweden's head of infectious diseases. I think that's what his job was.
I spoke to him during the pandemic and we were just talking about life in Sweden and what they do.

Speaker 3 And I asked him a question about labor and Sweden and how everything works out there. And he said something fascinating.
It was,

Speaker 3 no, I think he actually was in the labor department on that side. He said, in Sweden, unlike in America, he said, in Sweden, we don't care about jobs.
We care about the workers.

Speaker 3 And I remember that breaking my mind because I went, oh yeah, everyone always talks about like the job as if the job is something that is affixed to a human being.

Speaker 3 But really, the human is the important part of the equation. The job is just what the human does.
And so our focus has to be on making sure that the human always has a job.

Speaker 3 But from what you're saying, we don't know what the job will be, because the jobs that we know now are sort of easy to replace.

Speaker 3 It's data entry, data capturing, sending an email, doing an Excel spreadsheet. That stuff is easy actually when it comes to AI.
And then now we don't know what the next part of it is.

Speaker 3 And so maybe my next question to you then is: when you're in that world, the philosopher's side of your brain, like, what do you think the onus is on us and the tech companies and all that to work on discovering what the new job is? Or do we not know what it will be?

Speaker 143 Well, but also, I would tweak what you said, that the job of society, or the function of society, is to create jobs that are meaningful for people.

Speaker 22 I'm not sure I buy that.

Speaker 42 I think many people do jobs which are super unfulfilling and that they would be quite happy to give up if they had an income.

Speaker 3 This is true.

Speaker 72 And so like we're probably very lucky that we get paid for the thing that we would be doing if we didn't get paid.

Speaker 10 I would certainly be doing that.

Speaker 29 And so I think the function of society is to create a peaceful, supportive environment for people to find their passion and live a life that is fulfilling.

Speaker 39 That doesn't necessarily have to overlap with job or work.

Speaker 126 I would, I, I mean, maybe I'm too much of a utopian, but I dream of a world where people get to choose what work they do and have true freedom.

Speaker 43 And people get tense about that idea because they're like, you know, work is about identity.

Speaker 145 And this is my role in society.

Speaker 90 And this is what is meaningful to me.

Speaker 125 And if I didn't have my job, I would be.

Speaker 8 It's like, nah, come on, man.

Speaker 55 Take a minute to think seriously.

Speaker 122 If you didn't have to work today, what would you do with your life?

Speaker 127 This is one of my favorite questions that I always ask people.

Speaker 133 If you didn't have to worry about your income, what would you do?

Speaker 29 And, you know, if you get into the habit of asking that question, people say some crazy things.

Speaker 3 It's so inspiring.

Speaker 8 Yeah.

Speaker 37 And so, yeah, maybe I'm a utopian dreamer, but I do think that is a relevant question for us to think about by 2045.

Speaker 102 I think it's a real chance that if we get this technology right, it will produce enough value, you know, aggregate value to the world,

Speaker 78 both in terms of the reduction of the cost of stuff, because of energy, because of healthcare, because of food systems, and basically because we won't have a lot of these, like, middle-tier jobs, we'll have to figure out a way to fund people through their lives.

Speaker 70 And I think that just unleashes immense creativity and it will create other problems, right?

Speaker 29 It will create quite a profound existential problem.

Speaker 35 I'm sure you have friends who don't work anymore and are kind of, you know, it's not as though they're retired.

Speaker 81 They're like maybe middle-aged or even younger, maybe they grew up rich.

Speaker 19 It's a hard thing to figure out.

Speaker 7 Like, who am I?

Speaker 101 Why am I here?

Speaker 102 What do I want to do?

Speaker 55 Those are like profound human questions that I think we can only answer in community with real connection to other people, spending time in the physical world, having real experiences.

Speaker 66 And like it or not, that I think is what's coming.

Speaker 94 And I think it's going to be pretty beautiful.

Speaker 3 It's funny you say that because I found when I think of my friends, the

Speaker 3 grappling that they have to do in and around their identity and work, I find is directly related to the world or the market that they live in.

Speaker 3 So my American friends have the greatest connection and binding to their jobs. And as I've gotten to know them, I've understood why.
In America, your job is your healthcare.

Speaker 3 So if you don't have a job, you don't have healthcare. And if you don't have healthcare, you're worried about your survivability.

Speaker 3 And if you don't have survivability, then what do you, you know what I mean? And then do you have housing? And if you don't have housing, then who are you as a person? You look at all of these things.

Speaker 3 It's very hard in America to separate job from life. It's almost impossible.

Speaker 3 And then when you start traveling around the world, you go to, you know, countries where they have a, like a really strong safety net.

Speaker 3 And you find that people don't really associate themselves with their jobs in the same way because now their life isn't determined by their job.

Speaker 3 Their job affects their life, but it doesn't make their life.

Speaker 3 And then I remember back to times when I'd be in a township in South Africa or even in what we call the homelands where our grandmothers would live.

Speaker 3 And, you know, that was like the extended family, people literally living in huts and it's dirt roads. And everyone would go, oh, what a terrible weight.

Speaker 3 But I'll tell you now, there was no homeless person there. There was no one like stressing about a job in the same way.

Speaker 3 I'm not saying nobody wanted a job, but the gap between them thinking they didn't exist because there was no job was a lot greater than the people who were living in a world where, you know, your job was you.

Speaker 3 And so it's interesting that you say that because I

Speaker 3 do wonder how easy it'll be for us to grapple with it, like, like what that time will be.

Speaker 38 But it also just shows how much variation there is.

Speaker 110 You know, we come from, you know, in terms of how humans live their lives.

Speaker 24 Yeah.

Speaker 41 I feel like we come from, you know, whatever our different backgrounds, we're still quite Western-centric, and we're just sort of quite homogeneous that, you know, we've like sort of had 300 years of specialization, education, Protestant work ethic, atomization of families, smaller and smaller units, spread, you know, leave your home, you know, sort of physical locale where your community is.

Speaker 50 And I think there's a kind of loneliness epidemic as a result.

Speaker 110 Like, I feel, you know, you probably, like me, you know, pour your life and soul into your work.

Speaker 121 And then what was, I guess, what was it like for you when you switched your job, right?

Speaker 50 Like, because that was obviously a massive part of your identity is what you did every day 24-7.

Speaker 3 But you see, to that point, point,

Speaker 3 I

Speaker 3 left the daily show to go spend more time in South Africa at home.

Speaker 3 And,

Speaker 3 you know, one of my best friends had a beautiful phrase that he said to me. He said,

Speaker 3 in life, sometimes you have to let go of something old to hold on to something new.

Speaker 3 It's not always apparent what the value is of

Speaker 3 something that we're sacrificing.

Speaker 22 It's not always apparent.

Speaker 3 But if we are unable to assign that value ourselves, we'll get stuck. So leaving the daily show, I leave a ton of money behind.
I leave, you know, the status, everything.

Speaker 3 But no one has assigned a value to my friends. No one's assigned a value to my family.
No one's assigned a value to the languages that people speak to me in my country.

Speaker 3 There's no economist article on that. So I don't know what the value of that is.
Someone can look at my bank account and go like, that's value. But they don't tell me.

Speaker 3 what my friend's actual value is. And so I think that's where, you know, it's hard.
I just had to, like, decide it for myself. And I think we all have to.

Speaker 3 But I think some people won't have the luxury because of how, you know,

Speaker 3 how close they are to the line. You know, when you talk about those jobs that are going to disappear, there's somebody who's going, I don't have the luxury of pontificating.
Exactly right.

Speaker 3 Because tomorrow is what's coming. I can't think about like, ah, what will be.
And that's like a real luxury, I think.

Speaker 80 And I think that's why talking about the dangers and the fears now is so important because this is happening super fast.

Speaker 93 I mean, the transition has surprised me and all my other peers in the industry in terms of how quickly it's working.

Speaker 90 And at the same time, you know,

Speaker 62 we're also kind of like unsure about whether the nation state is going to be able to sort of respond to the transition too, because, you know, you're maybe lucky because you already had enough income that you didn't have to worry about it.

Speaker 52 And you could, it was really just like connecting to your heart.

Speaker 44 But many people are going to be like, well, I'm going going to have to still be able to provide food for my family and carry on doing my work throughout this crazy transition.

Speaker 3 So then let me ask you this.

Speaker 3 You see,

Speaker 3 that is such an interesting thought. You and your peers were shocked and are shocked at the rate of how AI is going and growing.
To me, that blows my mind because I go like, of course I'm shocked.

Speaker 3 I don't know how to code. You get what I'm saying? Of course I'm going to be shocked.

Speaker 3 But now when you say that, I then wonder as somebody who's been, you are truly an OG in the game of AI, like really, really.

Speaker 3 You're not like one of the people who's jumped on now because it's blowing up. You were in it before there was money and now you're in it in the thick of things.

Speaker 3 Where do you think we are in AI's development?

Speaker 3 Are we looking at a baby? Are we looking at a teenager? Are we looking at like a 20-something-year-old? Like,

Speaker 3 where do you think we are when we look at AI's development?

Speaker 88 I think part of the challenge with the human condition in this mad, globalized, digitized world is that we're so overwhelmed with information and we're so ill-equipped biologically to deal with scale and exponentials.

Speaker 127 Like it's just very few, like when I say 2045, like I'm just used to living in 2045.

Speaker 8 Like it's just my weird, like I've always been like that.

Speaker 110 And it's kind of become second nature in me to casually drop that.

Speaker 66 But you know, if I do that with some random people I meet at the bar, I'm obviously just a freak.

Speaker 67 There's just no one thinks about that.

Speaker 7 People barely think about what they're going to do in two weeks, let alone 20 years.

Speaker 125 So, and likewise, you know, people are sort of not equipped to think of what does an exponential actually mean.

Speaker 70 Now, I'm lucky enough that I got a practical intuition for the definition of an exponential because between 2010 and 2020, for 10 years, me and a bunch of other random people worked on AI.

Speaker 131 And it was sort of working, but basically didn't work.

Speaker 82 The flat part of the exponential, even though we could see little doublings happening every 18 months, it started from a base of like

Speaker 41 effectively zero, think of it.

Speaker 32 And so it isn't until the last few doublings that you see this massive shift in capability.

Speaker 89 I mean, for example, like four years ago,

Speaker 72 before GPT-3,

Speaker 57 a language model could barely predict the next word in one sentence.

Speaker 63 Like it's just kind of random.

Speaker 73 It was a little bit off, often didn't make sense.

Speaker 3 This is 2013.

Speaker 70 No, this is 2020, three or four years ago.

Speaker 3 2023.

Speaker 92 So, no, no, not 2023, three or four years ago.

Speaker 8 So it's like 2020 or 2021.

Speaker 99 Let's say 2021, something like that.

Speaker 55 I mean, literally,

Speaker 120 you look at the output of the language model.

Speaker 74 Because I worked on Lambda at Google in 2021, and it was super cool, but the models just before that were just like terrible.

Speaker 47 And I think many people play with GBT3.

Speaker 102 And a lot of people were like, oh, meh, this is like, what do I do with this thing?

Speaker 64 But for those of us that were lucky enough to see the flat part of the exponential, we could get a better intuition that the doublings were actually happening and that the next set of doublings, you know, to double from this base of, oh, it's kind of okay, but it's not that accurate.

Speaker 81 We knew that it was going to be perfect with stylistic control, with no bias, with minimized hallucinations, with perfect retrieval from real-time data.

Speaker 123 And so then it's actually quite predictable what capabilities are going to come next.

Speaker 67 So, for example, the last couple of years, the models have been

Speaker 122 gone from generating perfect text or really good text, let's say, to then just learning the language of code.

Speaker 45 How did we know that it was going to become a human-level performance programmer?

Speaker 60 Because there's no difference in the structure of the input information between code and text and images and videos and audio.

Speaker 137 It's the same mechanism.

Speaker 88 data, compute, and the algorithm that learns the pattern in that data.

Speaker 28 So then you can say, okay, well, what are the next modalities that it's it's going to learn?

Speaker 81 That's kind of why I make the prediction about material science, right?

Speaker 89 Or other aspects of biology or physics.

Speaker 63 If the structure of the data has been recorded and is clean and is high quality, then the patterns in it can be learned well enough to make predictions about how they're going to unfold next.

Speaker 146 And that's why it's called a general purpose technology, because it's fundamental.

Speaker 99 There's no specialized hand-coded programming for each data domain.

Speaker 8 Damn.

Speaker 21 I mean, the.

Speaker 3 You know what it reminds me of?

Speaker 3 have you ever seen that thing where they talk about when they, when they're trying to explain exponential to you, there's one example they give,

Speaker 3 which is folding paper. Yeah.

Speaker 3 You know, so they go, if you fold a piece of paper in half, and then you fold it in half, and then you fold it in half, and you fold it in half, and I think you can't do it more than seven times or something.

Speaker 3 But then they go, if you could keep folding it in half, the number to get to like space is really small.

Speaker 75 I think it's like the 64th gets to the moon or something.

Speaker 47 Yeah, but crazy.

Speaker 8 But I remember seeing that, and I was like, wait, wait, wait, wait, wait, what?

Speaker 3 They're like, yeah, if you, if you do it it one, and then you do one way. And I was like, wait, 64? I was like, no, what do you mean? Like, 64,000? They're like, no, no, no.
64.
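The folding arithmetic checks out, though the commonly cited figure for reaching the Moon is closer to 42 folds than 64. Assuming a sheet about 0.1 mm thick:

```python
# Thickness of a ~0.1 mm sheet of paper after n folds: it doubles each fold.
SHEET_MM = 0.1
MOON_KM = 384_400  # average Earth-Moon distance

def thickness_km(folds):
    return SHEET_MM * 2 ** folds / 1_000_000  # mm -> km

print(thickness_km(7))   # after 7 folds: still about 1.3 cm
print(thickness_km(42))  # roughly 440,000 km, past the Moon
```

Seven folds, the practical limit for a real sheet, gives about a centimeter; thirty-five more doublings of that same rule pass the Moon, which is exactly the intuition gap being described.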

Speaker 3 And that's when you start to understand how we just generally don't understand exponential and

Speaker 3 these like compound gains. And so now that's where I wanted to ask you about the idea of containment.

Speaker 3 Your book,

Speaker 3 of everyone I've read, I mean, everyone who's written about AI, who's like in it, in it, in it, was the only book that I would say spent the majority of its time talking about the difficulties of grappling with AI.

Speaker 3 Yeah, you talked about the beauty of what we could do with medicine and technology. And we should get into that to talk about some of the breakthroughs that you made at DeepMind.

Speaker 3 But like containment seems like the greatest challenge facing us. And we don't even realize and we don't really talk about it.

Speaker 3 Talk me through what containment means to you and why you think we should all be focusing on it.

Speaker 144 So the trend that's happening is that power is being miniaturized and concentrated and it's being made cheap and widely available to everybody.

Speaker 97 Why do I say power?

Speaker 72 Because making an accurate prediction about how text is going to unfold or what code to produce or what you know frame to extend given a video

Speaker 99 that is power like predictions are power.

Speaker 86 Intelligence is an accurate prediction given some new environment.

Speaker 47 That's really fundamentally what we do as humans.

Speaker 114 We're prediction engines. And these things are prediction engines.

Speaker 72 So they're going to be able to make phone calls, write emails, use APIs, write arbitrary code, make PDFs, use Excel, act like a project manager that can do stuff for you on your behalf.

Speaker 8 So you, as a creator or as a business owner, you're going to be able to hire a team of AIs specialized in marketing or HR or strategy or whatever it is of coding.

Speaker 90 And that's going to give you leverage in the world.

Speaker 26 I mean, you said about the kind of, you know, the strange like function of scale.

Speaker 72 What this is going to do is scale up every single individual and every single business to be able to deliver way, way more because the cost of production is going to be, you know, basically zero marginal cost.

Speaker 127 Now, on the one hand, that's amazing because it means the time between you having a creative idea and being able to prototype it or experiment with it in some way or even build it up to the max sale is going to go to shrink to basically, you know, nothing.

Speaker 103 You just think something, vibe code it up in natural language, produce the app, build the, you know, website, try out the idea that you have.

Speaker 30 But the flip side of that is that anybody can now not just broadcast their ideas like we had with the arrival of podcasts or the arrival of blogs on the web before that.

Speaker 81 It meant that anyone could talk to everyone. Yeah.

Speaker 120 Which was amazing.

Speaker 3 No one controlled the infrastructure in a way. Exactly.

Speaker 126 And it's super cheap for anybody to go publish a website or do a blog or do a podcast.

Speaker 137 So the same

Speaker 72 trend is going to happen for the ability for people to produce stuff, do actions.

Speaker 94 So in social media, it was like anyone can now broadcast.

Speaker 51 Now with AI, anyone can now take action.

Speaker 117 You can like build a business, you know, start a channel, create content, you know, whatever it is that you, you believe in.

Speaker 43 I mean, you might be a religious person, you're trying to evangelize for your, you know, or you're trying to persuade somebody of your political ideas.

Speaker 93 Everyone is going to have a much easier time of executing on their vision.

Speaker 43 And obviously the benefits of that are pretty clear, but the downside of that is that That inevitably causes conflict because we just disagree with each other.

Speaker 7 Yeah.

Speaker 55 You know, we don't hate each other.

Speaker 90 You're not evil.

Speaker 66 I'm not evil.

Speaker 3 We've got different views.

Speaker 118 And if I can just kind of at the click of a button execute my crazy ideas and you can execute your crazy ideas that are like practical actions affecting the real world and everyone's doing the same thing, then

Speaker 56 inevitably that is going to cause an immense amount of conflict.

Speaker 67 At the same time, the nation state, which is supposed to have a monopoly over power in order to create peace, that's the contract that we make with the nation state.

Speaker 8 Oh, yeah.

Speaker 116 Nation state is getting weaker and kind of struggling, right?

Speaker 33 And so containment is a belief that completely unregulated power that proliferates at zero marginal cost is a fundamental risk to peace and stability.

Speaker 81 And it's an assumption that you have to gently restrict in the right way

Speaker 81 mass proliferation of super powerful systems because of the one-to-many impact that they're going to have.

Speaker 3 If I hear what you're saying correctly,

Speaker 3 It's almost like you're saying if something is hard to do, only a few can do it.

Speaker 3 And if only a few can do it, it's easy to regulate how it's done because you only have to regulate a few. But if something is easy to do, everyone can do it.

Speaker 3 And now it becomes infinitely harder to regulate because everyone can do it.

Speaker 28 Yeah, that's absolutely spot on. That's a much better way of putting it than I put it.
Exactly.

Speaker 24 Friction

Speaker 87 is

Speaker 101 important

Speaker 90 for maintaining peace and stability. If you have no friction and the cost of execution is zero and scale scale can be near instant,

Speaker 131 that's where you could like, yeah, I just, maybe I spend too much time in 2045, but I can see a world where that kind of environment really just creates a lot of chaos.

Speaker 3 Well, no, I agree with you. Here's what I think of it.
I think of it,

Speaker 3 let's use a real, you know, current day example, news. I lived in news for a long time and I saw it firsthand.

Speaker 3 When there were three news networks in America,

Speaker 3 if something was like off with the news, people knew where to go immediately. You knew who to hold accountable.
You knew who, you know, got into trouble or didn't.

Speaker 3 But there was like a, it's like, we know where to go.

Speaker 72 Then you get cable news.

Speaker 3 It expands. It becomes a lot harder now.
Wait, who's saying the news? Who's saying the truth? Who's not saying that? Do you punish them? But still, you could go to them.

Speaker 3 You know, so somebody like Fox News can get sued for saying something about a Dominion voting system. But Dominion knew where to go.
They went, we're going after Fox News.

Speaker 3 So in a strange way, even in that world, the system is still sort of working because there's friction, right?

Speaker 3 It is where it is and it has to be broadcast. Then the internet, streaming, YouTube, et cetera, you don't even know who the person is, where the person is, if it's a person.

Speaker 3 And then if they say something that's not true and it enrages the masses, where do we go?

Speaker 57 And it's not just that it's going to say something.

Speaker 78 It's going to do something.

Speaker 3 Ah, damn, Mustafa.

Speaker 8 It's going to build the app.

Speaker 3 It's going to build the website.

Speaker 43 It's going to do the thing.

Speaker 112 And so, look, I think this is the point about confronting the reality of what's coming.

Speaker 7 Yeah, but wait, wait, go back on that.

Speaker 3 You see, that's something I always forget. Oh, man.
See, we always think of AI as just like saying.

Speaker 3 Let's talk a little bit more about the doing, because that is what makes it unique. You know, on one of the episodes, we had

Speaker 3 Yuval Noah Harari, the book. Yeah.

Speaker 107 Of course.

Speaker 60 I'm good friends with Yuval. He's awesome.

Speaker 3 And Yuval, you know, was on for his book, Nexus, and we're talking about information and systems.

Speaker 41 And stories. And stories.

Speaker 3 And one of the things he kept going on about was he said, I know AI is a tool, but we've never had a tool that makes itself. And you talk about that as well.

Speaker 3 We've never had a hammer that makes the hammer without you getting involved. It's just make the thing, make the thing, make the thing.
Atom bomb is one thing, but no atom bomb makes an atom bomb.

Speaker 3 And so that was.

Speaker 66 Well, there's a lot of ideas there.

Speaker 65 So firstly,

Speaker 44 actions are stories.

Speaker 41 And Yuval's point was that the history of civilization has been about creating stories.

Speaker 41 Religious stories, historical stories, ideological stories, like stories of oppression, of domination, of persuasion.

Speaker 125 And it was really humans that had the,

Speaker 76 it was the friction of being able to pass on that story through spoken word and then through digitization, which slowed down the spread of change.

Speaker 81 And that was an important regulator and filter.

Speaker 47 So as we've talked about, the digitization speeds up the distribution of those stories, which allows that information to spread.

Speaker 39 But it's not just that.

Speaker 34 It also is an actual agent that is going to operate like a project manager in your little mini persuasion army.

Speaker 39 And people are going to use those things, not just for phishing attacks or for, you know, sort of selling stories, but for actually making the PowerPoint presentation, of building the app, of planning, you know, making the project plan.

Speaker 144 And so it's kind of operating just as a, member of your team would.

Speaker 116 And I think that's where all the benefit comes from, but it's also where there's like massive risks at the same time.

Speaker 81 And then the other point that you made about like, it can edit itself.

Speaker 66 This is a new threshold.

Speaker 41 You know, a technology that is able to observe what it produced, modify that observation.

Speaker 114 Like it can critique its own image that it produced and say, well, it looks like this part of the hand was kind of weak.

Speaker 93 Okay, so we'll generate another one.

Speaker 90 Or it'll produce a poem or a strategy and then edit that and update it.

Speaker 76 And that's just editing its output, but it can also edit its kind of input, its own code, its own system processing, in order to improve with respect to its objective.

Speaker 127 And that's called recursive self-improvement.

Speaker 45 You know, where it can just

Speaker 81 iteratively improve its own code with respect to some objective.

Speaker 97 And I've long said that that is a threshold which

Speaker 108 presents significantly more risk than any other aspect of AI development that we've seen so far.

Speaker 59 I mean, that really is a kind of subset of technologies that if we're really going to focus on humanist superintelligence, being skeptical and critical and auditing the use and development of recursive self-improving methods, that's where I think there's genuine risk.

Speaker 3 We're going to continue this conversation right after this short break.

Speaker 9 With Plan B emergency contraception, we're in control of our future. It's backup birth control you take after unprotected sex that helps prevent pregnancy before it starts.

Speaker 9 It works by temporarily delaying ovulation, and it won't impact your future fertility. Plan B is available in all 50 U.S.

Speaker 9 states at all major retailers near you, with no ID, prescription, or age requirement needed. Together, we got this.
Follow Plan B on Insta at Plan B OneStep to learn more. Use as directed.

Speaker 14 Mint is still $15 a month for premium wireless.

Speaker 15 And if you haven't made the switch yet, here are 15 reasons why you should.

Speaker 14 One, it's $15 a month.

Speaker 17 Two, seriously, it's $15 a month.

Speaker 19 Three, no big contracts.

Speaker 8 Four, I use it.

Speaker 18 Five, my mom uses it.

Speaker 16 Are you playing me off?

Speaker 18 That's what's happening, right? Okay.

Speaker 6 Give it a try at mintmobile.com/slash switch.

Speaker 4 Upfront payment of $45 for three-month plan, $15 per month equivalent required. New customer offer first three months only, then full price plan options available.
Taxes and fees extra.

Speaker 4 See Mintmobile.com.

Speaker 3 So do you ever feel like you could be sitting in a position where you're sort of like the Oppenheimer of today?

Speaker 3 Do you ever feel like you're sitting in a position where you're both grappling with the need for the technology, but then also the not zero percent chance that the thing could burn the atmosphere?

Speaker 3 Like, how do you, how do you grapple with that?

Speaker 3 I often wonder this, even when I think of like engineers and people who are writing the code, I'm always fascinated by people who write the code for the thing that's going to write the code like other people have jobs but like if you told me as trevor hey trevor can you help this ai learn to do comedy i'd be like no

Speaker 3 do you know what i mean so i'm always intrigued by the coders who are making the thing that's now coding i like i just want to know like how you how you like wrestle with this entire thing do you do you think it's larger than us and we have to wrestle with it or like what is what is that what is that battle like for you You know, in COVID, when I started writing The Coming Wave, my book, I was really motivated by that question.

Speaker 62 That was actually one of the core questions is how does technology arrive in our world and why?

Speaker 119 What are the incentives, the system-level incentives that default to proliferation, that just produce more and more?

Speaker 100 And it's very clear that there's demand to live better.

Speaker 72 You know, people want to have the cheaper t-shirt from, you know, the more affordable thing.

Speaker 118 you want to have the cheaper car you want to be able to go on vacation to all the places around the world and so planes get cheaper and more efficient because there's loads of demand for them and so it's really demand that improves the efficiency and quality of technologies to reduce their price so that they can be sold more and that that is why everything ultimately proliferates and why it inevitably happens because we haven't really had much of a track record of saying no to a technology in the past.

Speaker 86 There's regulations around technology and restrictions, which I think have been incredibly effective if you think about it.

Speaker 78 You know, every technology that we have is regulated immensely.

Speaker 30 Flight or cars or emissions.

Speaker 50 I mean, you know, so people, I think particularly in the US, like have this sort of allergic reaction to the word regulation.

Speaker 78 But actually, regulation is just...

Speaker 65 the sculpture of technologies, right?

Speaker 37 Chipping away at the edges and

Speaker 37 the pain points of technology in the collective interest.

Speaker 112 And that's what we need the state for.

Speaker 29 The state has the responsibility for the common good.

Speaker 90 And that's why we have to invest in the state and build the state, because it isn't in the interest of any one of the individual corporate actors or academic

Speaker 104 researchers, AI researchers, or anyone individually to really take responsibility for the whole.

Speaker 47 And that's why we need governments more than ever, right?

Speaker 43 Not to hold us back or to slow us down, but to sculpt technology so that it delivers all of the benefits that we hope it will whilst limiting the potential harms that it can produce.

Speaker 3 I want to take a step back

Speaker 3 and talk about your journey with AI.

Speaker 3 Today, in 2025, it seems obvious.

Speaker 3 Now, when I speak to people, everyone's like, oh yeah, AI, AI, AI.

Speaker 3 I wasn't even in the game. when you, I mean, I, and I was like an early just, you know, layman.
I remember showing people at my office the first iterations of GPT-3 and DALI.

Speaker 3 And I remember when Dali was still like, I mean, like, basically tell it about an image and then just go away. Go, go, book a vacation for a week, come back for your image, and it would make.

Speaker 3 But even then, I was like, this is going to change everything. People were like, oh, no, I don't know.

Speaker 3 And I remember struggling to convince people that this thing was going to be as big as it was going to be. I was doing this in like this time scale.
When I look at your history,

Speaker 3 you literally have

Speaker 3 years of your life where you were in boardrooms telling world leaders, telling investors, telling tech people in Silicon Valley, hey, this is what the future of AI is going to be.

Speaker 3 And no one listened to you. Not no one zero, but I mean like no one listened to you, right?

Speaker 3 Now, that made me wonder two things. One, should we be worried about our world and our future being built by people who are unable to see the future? And two,

Speaker 3 what did you see then

Speaker 3 that we might not be seeing now?

Speaker 140 Shit, that's a hard question, man.

Speaker 104 I think

Speaker 91 as you were talking, one of the

Speaker 91 memories that came to mind was...

Speaker 95 Remember in 2011, we had an office in Russell Square, center of London, near University College London, UCL.

Speaker 74 And

Speaker 19 someone in the office showed me this handwriting recognition algorithm you just you know pass some text and that's actually been available at the post office for many many years it would read the zip code and read the address and stuff like that

Speaker 8 and it was really just doing recognition so these letters represent these letters you know we we sort of can transcribe yeah that text this funny looking apple is actually an a and this funny little loop thing is actually an l and yeah so i was like that's kind of incredible that a machine has eyes and can understand text and can transcribe that.

Speaker 123 This is pretty cool.

Speaker 147 So, but then what we were really interested in is if it recognizes it accurately enough, then surely it should be able to generate a new handwritten digit or a number that it's never seen in the training set before.

Speaker 103 This was 2011.

Speaker 123 256 pixels by 256 pixels with a handwritten seven or a zero in kind of gray, right?

Speaker 65 This is sort of like five colors. Yeah.

Speaker 50 And I remember standing over the shoulder of this guy, Dan Vistra, is one of our, like,

Speaker 81 I think it was employee number four at DeepMind.

Speaker 81 And he was just enamored by this system that he had on his machine because it could produce a new number four that wasn't in the training set.

Speaker 88 It was gener, it learned something about the conceptual structure of zero to nine in order to be able to produce a new version of that.

Speaker 3 So it could write a number that it hadn't learned how to write. Like it hadn't seen it before.
It had never seen that number written that way, and it wrote it itself.

Speaker 65 Exactly.

Speaker 80 And so coming back to what we were saying at the beginning about understanding,

Speaker 81 if it's able to produce a new version of a number seven that it's never seen before, then does it understand

Speaker 81 something about the nature of handwritten digit number seven in abstract, the conceptual idea of that?

Speaker 3 Oh, man.

Speaker 37 And so then the intuition that that gave me was:

Speaker 57 wow, if it could imagine new forms of the data that it had been trained on, then how far could you push that?

Speaker 147 Maybe it could generate new, you know, physics for the world.

Speaker 99 Maybe it could solve hard problems in medicine.

Speaker 81 Maybe it could solve all these energy things that we're talking about.

Speaker 19 It's just learning patterns in that information in order to imagine.

Speaker 57 And that's what I love about hallucinations.

Speaker 90 Everyone's like, hallucinations.

Speaker 3 Hallucinations is the creative bit. That's what we want them to do.

Speaker 57 We don't want an Excel spreadsheet where you input data and you get data out.

Speaker 66 That's just a, you know, that's just a handwritten record of,

Speaker 37 we want interpolation.

Speaker 53 We want invention. We want creativity.

Speaker 87 We want the abstract, blurry bits.

Speaker 103 And so that was a very powerful moment for me.

Speaker 99 I was like, okay, this is weird, but we are definitely onto something.

Speaker 66 Like this has never been done before and it's super cool.

Speaker 121 And let's just turn over the next card. Let's see if it will scale.

Speaker 89 And it just scaled every year. It scaled and scaled and scaled.

Speaker 87 So

Speaker 62 that was a very inspiring moment for me.

Speaker 81 And somehow, 10 years later, 15 years later, I managed to kind of hang on to that vision that generation and prediction produces creativity.

Speaker 117 And that intelligence wasn't this kind of, because some people are quite religious about intelligence.

Speaker 38 They're like, you know, no other species has intelligence.

Speaker 81 This is very innate to humans.

Speaker 115 It must have been, you know, come from some supernatural being.

Speaker 128 But actually,

Speaker 53 it's just applying attention to a specific problem at a specific time. The effective application of processing power to produce a correct prediction, I think that's what intelligence is: to direct your processing power, at the right time, to predict what would happen if I tipped over this glass.

Speaker 3 And you'd have to buy me another glass. I would have to first clean my trousers from the water.

Speaker 33 Um, so yeah, I forgot what your question was, but that was.

Speaker 8 No, no, no, that's

Speaker 3 no, you answered the first part of it, which I loved.

Speaker 3 And then the second part really was, if, if you, so you were in, you at DeepMind, you're going around to these different people who now, by the way, are selling some version of AI or are investing in it or are telling us about it.

Speaker 3 Yeah. But what sort of pisses me off is I go like, man, you didn't see this shit.
You know what I mean? Yeah. Like, you're going to be here, be like, oh, let me tell you about AI.

Speaker 3 And I'm like, yeah, but when Mustafa was in the room telling you about it, you were like, I don't see it, man. And now they're going to act like they see it.

Speaker 3 So I don't want to ask them what they now see. I want to ask you what we're missing in this moment.
What do you think we are not hopping on? You know, we talked about containment,

Speaker 3 but what is the thing that we're not thinking about? Yeah,

Speaker 102 it's a really difficult question.

Speaker 132 I'm not sure

Speaker 120 why people weren't able to see it earlier.

Speaker 47 And maybe that's kind of like my blind spot that I need to think more about.

Speaker 124 But, like,

Speaker 32 I know that I can see pretty clearly what's coming next.

Speaker 104 I think

Speaker 93 at the moment, these models are still one-shot prediction engines.

Speaker 40 You know, you ask a question and you get an answer.

Speaker 138 You know, it produces a single correct prediction at time step T.

Speaker 86 But you, as an intelligent human, and every single human, and in fact, many animals, continuously produce a stream of accurate predictions, whether it's like deciding how to get up out of this chair or imagining

Speaker 81 this plant in purple instead of green.

Speaker 86 I'm a consistently accurate prediction engine.

Speaker 98 The models today are just one or two shot prediction engines.

Speaker 80 They're not, they can't lay out a plan over time.

Speaker 117 And the way that you decide to go home this evening is that, you know, you know first to get up from your chair and then open the door and then get in your car and da-da-da.

Speaker 63 You can produce, you can unfold this perfect prediction of your entire stream of activity all the way to get back to your home.

Speaker 47 And that is just a computational limitation.

Speaker 63 I don't think there is any fundamental, you know, sort of algorithmic or even data limitation that is preventing LLMs and these models from being able to do perfect, consistent predictions over very, very long time periods.

Speaker 39 So then what do we do with that technology?

Speaker 91 Well, that is incredibly human-like.

Speaker 146 If it has perfect memory, which it doesn't at the moment, but it's got very good memory,

Speaker 81 then it can draw on not just its knowledge of the world, its pre-trained data, but its

Speaker 47 personal, like the experience that it has had of interacting with you and all of the other people and store that as persistent state, and then use that to make predictions consistently about how things unfold over very, very, very long sequences of activity.

Speaker 102 That is incredibly human-like and unbelievably powerful.
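The "persistent state" idea just described, experience stored across interactions and replayed to inform later predictions, can be sketched very simply. This is an illustration only, with hypothetical names, and implies nothing about how any production assistant actually stores memory:

```python
import json
import os
import tempfile

class PersistentMemory:
    """Toy sketch of persistent state: every interaction is appended to a
    file on disk and replayed as context in the next session. Illustration
    only -- not any vendor's real memory implementation."""

    def __init__(self, path):
        self.path = path
        if os.path.exists(path):
            with open(path) as f:
                self.events = json.load(f)
        else:
            self.events = []

    def record(self, role, text):
        self.events.append({"role": role, "text": text})
        with open(self.path, "w") as f:
            json.dump(self.events, f)

    def context(self, last_n=50):
        # A real system would summarize and compress; we just replay the tail.
        return self.events[-last_n:]

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "memory.json")
    session1 = PersistentMemory(path)
    session1.record("user", "I moved to Johannesburg last month.")
    session1.record("assistant", "Noted, I'll remember that.")
    # A later "session" reloads the same file and still knows the fact.
    session2 = PersistentMemory(path)
    remembered = any("Johannesburg" in e["text"] for e in session2.context())
    print(remembered)  # prints True
```

The point of the sketch is the asymmetry Suleyman draws: pre-trained knowledge is fixed, but this kind of store grows with every interaction, which is what lets predictions become personal and long-horizon.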

Speaker 99 And just as today, there's a kind of superintelligence that is in our pocket that can answer any question on the spot.

Speaker 134 Like we dismiss how incredible it is right now.

Speaker 8 It is mental. It's crazy.
It's insane how good it is right now.

Speaker 27 And everyone's just like, oh, yeah, I don't really use it. Do you use it now? A little bit.
I talked to it.

Speaker 8 It's like, come on.

Speaker 3 This is magic.

Speaker 85 It's magic in your pocket.

Speaker 100 Now, imagine when it's able to not just answer any question about poetry or some random physics thing, but it can actually take actions over infinitely long time horizons.

Speaker 121 That doesn't... like, forget about the definition of superintelligence or AGI.

Speaker 25 Just that capability alone is breathtaking.

Speaker 82 And I think that we basically have that by the end of next year.

Speaker 3 Maybe, maybe I'm stuck in the world of sci-fi, but

Speaker 3 what I sort of heard you saying is

Speaker 3 if we continue growing AI in just the way that it's growing now, forget like an idea of what we don't know,

Speaker 3 we could sort of get to a world where it can develop accurate predictions about what our outcomes might be.

Speaker 109 Yeah.

Speaker 3 Like you're telling me that an AI, I could meet somebody on a date, and then the AI could tell me based on my history and my actions and its persistent memory of me and what the person says and how we could theoretically get to a point where it could go, oh, yeah, these are like the possible outcomes based on your actions.

Speaker 90 Which is what you, as a smart human, do every time when you meet somebody anyway.

Speaker 8 I don't know about smart humans.

Speaker 8 You're very kind to me. You're very kind.

Speaker 8 No, no comment on your dating life.

Speaker 3 Yeah, but I mean, that's, it's, it's both utopian and dystopian because on the one hand, I go like, wow, that would be amazing for so many people. You make mistakes.

Speaker 3 And now it's, but then there's another one of like,

Speaker 3 when do we not trust it?

Speaker 3 When do we not believe its prediction? When do we, do you know, do you know what I mean? Like that, that's like the ultimate grapple now.

Speaker 3 Is if this thing has told me, hey, I know you like this person, and I've run the calculation, I've run the simulation, I've done this, I know you, and I know what they say and how they are,

Speaker 3 you're going to be broken up in two years by the like the pattern. And let's say I do it once and it's right, and then I do it again and it's right.
And then I'm like, third time,

Speaker 3 do I do it or do I not do it? Do I give this person a chance? Do I not give them? Do you get what I'm saying?

Speaker 147 It's such a deep question because we

Speaker 19 trust, trust is really a function of consistent and repeated actions.

Speaker 90 So you say you're going to do something and you do what you actually said you were going to do. Yes.

Speaker 72 And you do that repeatedly. Yes.

Speaker 66 And so everyone's like, oh, oh, but I'm not going to be able to trust AI.

Speaker 87 Actually, you are going to trust AI because it's super accurate.

Speaker 50 It's, you know, we, we, like, you use Copilot, ChatGPT.

Speaker 12 Most people don't think twice about using that now because it's so accurate and it's clearly better than any single human.

Speaker 22 Like, I just wouldn't go and ask you, like, you know, like many other questions.

Speaker 8 It's probably going to know better, right? Like, I wouldn't ask you, you dumbass.

Speaker 13 I was about to say a list came through my mind.

Speaker 20 I was like, don't mention any of those things.

Speaker 22 Don't say those things.

Speaker 109 Just row back.

Speaker 3 You know what it makes me think of is like, I don't know if you've seen the documentary. I don't know if you need to because you were there.
But like DeepMind, the company that you're part of

Speaker 3 founding,

Speaker 3 it occupies such a beautiful and seminal moment in artificial intelligence history for two main reasons, in my opinion. You know, one is AlphaFold, and the other is AlphaGo.

Speaker 3 And the reason it's so important to me is because you worked on an AI project

Speaker 3 that grappled and tackled two of the biggest issues I would argue that humans have sort of thought of as being their domain. AlphaFold was medicine and discovery.
That's what humans do.

Speaker 3 We are the ones who invent medicines. We are the ones who create the new.
We synthesize. We are the humans.
You know what I mean? Synthetic. It is us.
We've made it. We're the creators.

Speaker 3 And then AlphaGo for me was almost even more profound and powerful because it was like, people always used to say, look, man, human chess, our brain is infinite, the human brain.

Speaker 3 And then AlphaGo has, I don't know how many different variations of a game. Like no one can remember it essentially, right?

Speaker 3 And I remember watching the documentary and you're seeing the AlphaGo champion. I think it was Korean, right?

Speaker 120 Yeah, Lee Sedol.

Speaker 3 Lee Sedol, yeah, Lee Sedol, great guy. And you watch the story of Lee Sedol go up against
your computer, and everyone... I mean, Korea, it's all over the news.
In America, it's on the news.

Speaker 3 And they're like, a computer going up against a human. And now people are thinking back to Kasparov and Deep Blue.

Speaker 39 100 million people watched it live.

Speaker 3 100 million people wanted to see man versus the machine. And I remember like watching this and people going, man, let me tell you why it's different to chess.

Speaker 3 Because you see, chess is actually quite simple to predict. And whereas before they said it was impossible.
And then you see AlphaGo and you see this game roll out.

Speaker 3 And the moment for me that'll always stick in my brain is when Lee Sudong is playing Deep Mind

Speaker 3 and it makes a move and everyone, everyone that's like an AlphaGo expert is like, oh,

Speaker 3 it messed up.

Speaker 3 And then you see all the tech guys in the background who are working on, and they're like, what went wrong? They're like, yeah, it messed up. It shouldn't have done that.
Yeah, you can't.

Speaker 3 And you see the commentators and they're like, oh, yeah, you never do that.

Speaker 147 You never do that.

Speaker 3 It's over. You don't do that.
And then the game unfolds. The game unfolds.
The game unfolds. And then everyone's like, wait, what just happened here?

Speaker 3 And then people said, we've just seen a move that no one's ever played. We've seen a game that's never unfolded before.

Speaker 132 And there were two different reactions.

Speaker 3 And this is why the story stuck with me.

Speaker 3 The one reaction was of most people who were fans of Lee Sedol. They said, this is a sad day in human history because it's shown that the machine is smarter than the man.

Speaker 3 And it shows that we have no... future and no purpose.
And then they interviewed him and they said, how do you feel?

Speaker 131 Because you lost.

Speaker 3 And you were representing mankind. You lost.
And he goes, Well, first of all, losing is part of the game. And, you know, I'm humble enough to know that I won't always win.

Speaker 3 And he said, but I'm actually happy. And they said, why?

Speaker 3 And he said, well, I'm happy to discover that there are parts of Go I didn't know existed.

Speaker 3 And he said, and this machine has reminded me to keep being creative and push beyond the boundaries that I thought existed in my head.

Speaker 3 And I remember watching that and thinking, damn, it's amazing how you can look at the same story and have a completely different lesson that comes out of it.

Speaker 3 You know, and I wondered, like, that was on the play side, extremely complicated. But let's talk about the medicine side of things.

Speaker 3 Folding proteins, I still don't fully understand it, but I've tried my best. Essentially, with what you and your team did, how far would you say it moved us

Speaker 3 forward in terms of medicine and what we're able to do in terms of, you know, healing disease? Like how many years do you think we leaped by having something like AlphaFold?

Speaker 32 I mean, people say a decade, some people say multiple decades.

Speaker 80 I mean, understanding the functional properties of these proteins is really about understanding how they fold.

Speaker 49 And the way they fold and the way they unfold often expresses something, you know, physically about how they're going to affect their environment.

Speaker 3 And so. You know what I think about it as? Because I try and use analogies to help me when it's a complicated topic.

Speaker 3 Did you ever play that game as a kid where someone would fold paper into like a little flower thing? And then you would like do like a little game and you had a number.

Speaker 3 Yeah. And then it would be like, let's see if someone likes you.

Speaker 8 And it'd be like, one, two, three, four.

Speaker 3 Nah, yeah, you suck.

Speaker 8 One, two, three.

Speaker 3 That's how I think of it with protein folding.

Speaker 3 Is I go like, depending on how that paper unfolds, that determines whether it's a disease, whether it's a cure, whether it's a medicine, whether it's, you know.

Speaker 50 Except there are billions of possible ways that it could unfold.

Speaker 93 And we could never imagine all of those combinations.

Speaker 118 And I think, you know, it's actually very similar to Go in that sense.

Speaker 54 Like, with Go: Go has 19 by 19 squares, black and white stones.

Speaker 123 And there are 10 to the power of 180 possible different configurations of the Go board.

Speaker 56 So that's a 1 with 180 zeros on the end of it.

Speaker 81 It's like a number that you can't really even express.

Speaker 8 And people often say, like, there are more

Speaker 82 possible moves in the game of Go than there are atoms in the known universe.

Speaker 71 I mean, that one I can't even get my head around, but that's, you know, well understood.
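The magnitudes in this comparison are easy to check with exact integer arithmetic. The 10^180 figure is the one quoted in the conversation (published estimates of legal Go positions are usually around 10^170), and atoms in the observable universe are commonly estimated at roughly 10^80:

```python
# Rough magnitude check for the comparison made above. Figures are the
# commonly quoted order-of-magnitude estimates, not precise counts.
go_configs = 10 ** 180   # the number quoted in the conversation
atoms = 10 ** 80         # rough estimate, atoms in the observable universe

# Python integers are arbitrary-precision, so this division is exact.
ratio = go_configs // atoms
print(len(str(go_configs)) - 1)  # 180 -- zeros after the 1
print(len(str(ratio)) - 1)       # 100 -- Go positions outnumber atoms by ~10^100
```

Even on the conservative 10^170 estimate, the gap to 10^80 is a factor of 10^90, which is why exhaustive search was never an option and learned intuition was.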

Speaker 87 This is insane. So, move 37.

Speaker 7 You just can't.

Speaker 8 I can't even get anywhere near.

Speaker 109 I can barely cope with like folding bits of paper up to the moon.

Speaker 49 Yeah, that's right.

Speaker 22 Just about

Speaker 22 this is ridiculous.

Speaker 24 But so, move 37, you know, Lee Sedol actually got up from the table and walked off and sat in the bathroom for 15 minutes, like trying to digest the fact that a whole new branch in the evolutionary space of possible Go moves had kind of been shown to him, revealed to him.

Speaker 103 And I think it's very similar with AlphaFold.

Speaker 98 It's a sort of exploration of this complex information space.

Speaker 45 And that's why it applies to language and to video and to coding and to game generation.

Speaker 27 And all of these environments, these are, you know, we call them like modalities. These modalities are all knowable.

Speaker 8 And I think that's what, it's quite humbling.

Speaker 53 It sort of makes, it reminds us as this sort of mere biological species that we're here for a kind of finite period of time living in space and time.

Speaker 47 But there's also this information space, this kind of infinitely large information space, which is sort of

Speaker 102 beyond us.

Speaker 82 Like it operates at this different level that isn't the atomic level, it's the level of ideas.

Speaker 143 And somehow these systems are able to learn patterns in data space that is just so far beyond what we could ever dream of being able to intuit.

Speaker 100 Like we actually sort of need them to simplify and reduce down and compress complex, you know, relationships.

Speaker 7 To bring it to our brains.

Speaker 126 That's the level that we're at.

Speaker 3 What do you think it does to us? Like when we think of AI, I think of how every promise of a technology has sort of ironically been undermined by humans, not the technology.

Speaker 3 You know, one of the big predictions Bill Gates made way back in the day about like the internet and the computer is he said, he said, hey man, I think people are going to be working like nine hours a week.

Speaker 3 It's going to be a nine-hour work week. The computer does everything so quickly.
And you see people saying that now in many ways with AI. They go, I mean, AI, it'll just do everything.

Speaker 3 And I mean, we might only go to the office like one day a week and maybe work like three hours. And I mean, it's just,

Speaker 3 but it seems like humans

Speaker 3 have always gone against that. You know, so, so I wonder, like, do we get wiser when we have this infinite technology and intelligence, or do we get lazier?

Speaker 3 Are we going to become like a WALL-E generation and population? Or do you think we become

Speaker 3 these higher beings? Which way do you see it falling?

Speaker 62 I think there's no question that we're all already getting smarter.

Speaker 140 Just because we have access to so much culture and history, like we're...

Speaker 67 integrating so much more information in our brains than was ever possible 100 years ago.

Speaker 8 And I think,

Speaker 95 you know, kind of similar to Go or protein folding, access to more training data, if you like, more experience, more stories from other humans that describe their experience, that clearly has to make us smarter.

Speaker 29 I think it makes us more empathetic, more forgiving.

Speaker 59 You know, we see that there is nothing wrong with a homosexual person.

Speaker 145 There is nothing wrong with being a trans person. There is nothing wrong with being a person of color.

Speaker 31 Whereas 200 years ago, we would have been afraid of those others.

Speaker 72 You know, our species would have been skeptical of the other tribe that had a different way of doing things.

Speaker 81 And I think that desensitizing ourselves with access to vast amounts of information just makes us smarter and more empathetic and forgiving.

Speaker 41 And so I think that's the default trajectory.

Speaker 99 And part of the challenge is it also somewhat homogenizes.

Speaker 82 Like, so there's a question about are we going deep enough?

Speaker 73 Do we read long form content?

Speaker 81 Do we really spend time meditating and so on and so forth?

Speaker 75 And that's a good tension to have, like, you know, short form.

Speaker 97 And people are already getting a bit sick of short form, you know, like there's definitely a bit of a reaction to it.

Speaker 3 I think it's going to be an ebb and flow when it comes to that, funny enough. It's funny.

Speaker 3 I think I agree with you when you say we're getting smarter. I think that's just apparent on a basic level.
You know, you look at the best cartographer in like the 1400s.

Speaker 3 They didn't know half of what I know. Do you know what I'm saying? Right.
Like, I can just be like, oh, you don't know Angola?

Speaker 8 Man, you stupid, stupid ass, stupid. You know what I mean?

Speaker 3 You're the best cartographer in the world. You make maps.
You don't even know where Angola is.

Speaker 22 Exactly.

Speaker 3 Self-proclaimed. Yeah.
And so you look at the base level of what people think of as stupid in our society now. Yeah.
Would make you an infinite God genius if we could throw you back in a time machine.

Speaker 3 That's amazing, but it also worries me.

Speaker 3 Because, and you write about this in your book, and man, it sticks with me. And I think about it, and I thought about it.
And then you wrote it and I was like, man, more.

Speaker 3 It is the infinite smarts.

Speaker 3 When we were sticks and stones, cavemen running around, I could bash your head. Maybe I could bash one other person's head.

Speaker 80 I can't get very far, you know.

Speaker 3 And then we fast forward, and then all of a sudden I'm throwing a spear. We fast forward, and someone's got a cannonball.

Speaker 3 And then we fast forward, and someone has a rocket, and then someone has an atom bomb.

Speaker 3 And the thing that AI presents us with is, you know, as you've illustrated many times in your writing, a world where one person is an army. One person goes into a garage,

Speaker 3 they synthesize a disease or a pathogen that's never existed.

Speaker 3 They design it to be hard to cure, incurable, and that one person does what a nation state would have had to do and wouldn't have done because they wouldn't have had the incentive.

Speaker 3 And then we don't know where we go from there. Like, is it worth the risk? How do we grapple with that kind of risk?

Speaker 43 This is the story of the decentralization of power.

Speaker 57 You know, technology compresses power such that individuals or smaller and smaller groups have nation-state-like powers.

Speaker 8 True.

Speaker 24 At the same time, those very same technologies are also available to the centralized powers today.

Speaker 62 And that's why more than anything, we have to support our nation states to respond well to this crazy proliferation of power.

Speaker 87 Because that's the job of the nation state. That's why we trust and invest in nationhood, right?

Speaker 59 We rely on nations to have a monopoly over the ultimate use of violence to keep the peace.

Speaker 3 I mean, we're doing it less and less now.

Speaker 3 I hear you and I agree. But I'm just saying, like, as we look at a world where Americans don't trust their government, it doesn't matter which side of the aisle they're on.

Speaker 3 People are like, I don't trust my government, you know? And then you look at the UK and you look at Europe and then you look at parts of Africa and it feels like people are...

Speaker 3 losing trust in those those very same nation states that are supposed to be in a contract to protect their people don't have the trust of the people

Speaker 3 like what you know what i mean what do we

Speaker 12 The thing that concerns me is that, actually, authoritarian regimes are on the rise. So it's not that people don't want peace; it's that they are losing confidence in the democratic process, and they are increasingly turning to the trust and confidence that a strongman offers. And it is always, generally, a man.

Speaker 3 But they're still looking for the... I mean, I'm not trying to endorse or justify authoritarianism or strongmen, but I think it's true that people will always choose a peaceful environment, right? We don't want to be in this kind of crazy-ass anarchy where any kind of mini tribe can do it. So I always say the ultimate paradox for me in life, the one thing I've always found funny, is that even rebels have a leader. Whenever, as a child, I'd watch the news and they'd say, the rebel leader, I'd be like, well, then they're not rebels, are they?

Speaker 25 I mean, if you've got a leader, you're not rebels.

Speaker 3 But yeah, people always look for just the new type of, you know.

Speaker 74 Yeah.

Speaker 26 And, you know, we want to believe and invest in, you know, democratic accountability because we know that like checks and balances on power create the equilibrium that ultimately produces a fairer and more just society, right?

Speaker 12 So we have to keep investing in that idea.

Speaker 41 But for sure, it's also true that.

Speaker 38 centralization of power is also going to accelerate, right?

Speaker 50 These technologies amplify both ends of the spectrum.

Speaker 117 And, you know, I think you can certainly see that in China and sort of more authoritarian regimes, which have lent into hyper digitization, ID cards, you know, large-scale, you know, surveillance and so on.

Speaker 97 And obviously that's very bad.

Speaker 81 And, you know, we're kind of against those things.

Speaker 144 But in a world where individuals could have state-like powers to produce highly contagious and lethal pathogens, what else are you meant to do?

Speaker 133 It's unclear, right? It's very unclear.

Speaker 24 I mean, the technical means in the next few years to produce a much more viral pandemic-grade pathogen are going to be out there, right?

Speaker 52 They're going to be out there.

Speaker 68 And so,

Speaker 10 you know, yes, the main large model developers do a lot to kind of restrict those capabilities centrally.

Speaker 57 But, you know, over time, people who are really determined will be able to acquire those skills.

Speaker 38 And so, that's just what happens with the proliferation of technologies and the proliferation of information and knowledge.

Speaker 108 The know-how is more widely available.

Speaker 3 Don't press anything. We've got more.
What now? After this.

Speaker 150 This episode is supported by FX's The Lowdown, starring Ethan Hawke.

Speaker 150 Allow us to introduce you to Lee Raybon, a quirky journalist slash rare bookstore owner slash unofficial truth seeker who is always on the tail of his latest conspiracy.

Speaker 150 This time, his most recent expose puts him head to head with a powerful family that rules Tulsa, meaning only one thing: he must be on to something big. FX's The Lowdown.
All new Tuesdays on FX.

Speaker 150 Stream on Hulu.

Speaker 14 Mint is still $15 a month for premium wireless.

Speaker 15 And if you haven't made the switch yet, here are 15 reasons why you should.

Speaker 14 One, it's $15 a month.

Speaker 17 Two, seriously, it's $15 a month.

Speaker 19 Three, no big contracts.

Speaker 18 Four, I use it. Five, my mom uses it.

Speaker 16 Are you playing me off?

Speaker 18 That's what's happening, right?

Speaker 6 Okay, give it a try at mintmobile.com slash switch.

Speaker 4 Up front payment of $45 per three-month plan, $15 per month equivalent required. New customer offer first three months only, then full price plan options available.
Taxes and fees extra.

Speaker 4 See mintmobile.com.

Speaker 3 Is there anything that you would ever see in the field that would make you want to, you know,

Speaker 3 sort of

Speaker 3 hit like a kill switch? Is there anything that you could experience with AI where you would come out and go, nope, shut it all down?

Speaker 8 Yeah, definitely.

Speaker 111 It's very clear.

Speaker 120 If an AI

Speaker 80 has

Speaker 10 the ability to recursively self-improve, that is, it can modify its own code, combined with the ability to set its own goals, combined with the ability to act autonomously, combined with the ability to accrue its own resources.

Speaker 126 So those are the four criteria.

Speaker 80 Recursive self-improvement, setting its own goals.

Speaker 59 acquiring its own resources and acting autonomously.

Speaker 103 That would be a very powerful system

Speaker 12 that would require military-grade intervention to be able to stop in, say, five to 10 years' time

Speaker 126 if we allowed it to do that.

Speaker 57 And so it's on me as a model developer at Microsoft, my peers at the other companies, the governments to audit and regulate those capabilities.

Speaker 81 Because I think they're going to be like sensitive capabilities.

Speaker 102 Just like you can't just go off and say, I've got a billion dollars, I'm going to go build a nuclear power plant.

Speaker 146 It's a restricted activity because of the one-to-many impact it can have. And once again, like, we should just stop freaking out about the regulation part. It's necessary to have regulation, it's good to have regulation. It needs to happen at the right time and in the right way, but it's very clear what we would regulate. I don't think that's up for debate. Okay. Um, you know, there are some technical implementations of how you identify dangerous RSI from more benign RSI. What is RSI? Recursive self-improvement. Okay, got it. This kind of self-improvement mechanism. So there are technical mechanisms that are tricky to define and so on.

Speaker 127 But now is the time for us to start thinking that

Speaker 107 I've sort of been saying this for quite a long time.

Speaker 52 And I think it's that that's what I would regulate.

Speaker 3 Would we be able to just turn off the electricity? And I know this might sound like a dumb question, but

Speaker 3 from what I'm understanding, the thing is data right now. And the thing is mostly.
So would our ultimate fail-safe be like, all right, lights off. We're going back to candles for a while.

Speaker 8 And then we just like, no, seriously, is that what we would do?

Speaker 3 Like, what do we do if the AI thinks it's, Let's say it's the sci-fi-ish version. I'm not saying robots, but like this, we go, hey, man, the AI started thinking for itself.
It started coding itself.

Speaker 3 It started setting its own goals. It went off on its own objectives.
And now it's...

Speaker 3 shutting down your banking, and the flights don't go anywhere, and the hospitals, and then it says to us, hey man, this is what I want. Or, no, we could just turn off the electricity? Yes. Yes. I mean, look, they live in data centers, okay? Data centers are physical places. So we've got to keep those switches physical. Physical, then.

Speaker 104 Very much so.

Speaker 50 You can have your hand on the button yourself and like have full control, press it.

Speaker 97 I think, I mean, that's the question is, I think,

Speaker 121 how do we identify when that moment is?

Speaker 57 And how do we collectively make that decision?

Speaker 82 Because it's a little bit like you referred to, you know, Rutherford and the others experimenting with, you know, the atomic bomb.

Speaker 62 Yes.

Speaker 43 There was real disagreement about whether it was going to set light to the atmosphere.

Speaker 67 I mean, they were three orders of magnitude off in their predictions.

Speaker 72 Obviously, they were, you know,

Speaker 57 in a world war.

Speaker 81 And so there was an immediate motivation to take the risk. But I think today, like, we're in a position where it's early enough.

Speaker 10 There's enough concern raised by not just me, but many in my peer group, Geoff Hinton, you know, the godfather of AI, and many others, that we've got time to start trying to practically answer your question, not just like...

Speaker 123 principled philosophically, but actually say, okay, when is that moment?

Speaker 95 How does it happen? Who's involved? Who gets to scrutinize that?

Speaker 127 I think that's the kind of

Speaker 67 question that we have to address in the next five to ten years.

Speaker 3 I'm pretty certain you've thought of this question, so I'll ask it to you, even though it's a difficult one to grapple with.

Speaker 3 What rights would we have to turn off the AI if it gets to that point?

Speaker 92 Oh, this question drives me nuts.

Speaker 35 I want to, I want to, yeah.

Speaker 8 So there's a small group of people

Speaker 88 that have started to argue that an AI

Speaker 80 that

Speaker 50 is aware of its own existence, that has a subjective experience,

Speaker 74 and that

Speaker 81 can have a feeling about its interactions with the real world,

Speaker 50 that if you deny it access to conversation with people or to more learning or to other kinds of visual experience, that would constitute it suffering in some way.

Speaker 80 And therefore,

Speaker 110 it has a right not to suffer.

Speaker 71 And this is called model welfare.

Speaker 99 This is the next sort of frontier of animal welfare that people are starting to think about, that it has a kind of consciousness.

Speaker 10 I'm very against this.

Speaker 55 I think that this is a complete anthropomorphism.

Speaker 41 It's totally crazy.

Speaker 57 And, you know, I just think I don't even want to have the discussion, because I think it's just so absurd and leads to such a crazy potential future.

Speaker 47 The idea that we're going to take seriously the protection of these, you know, digital beings that live in silicon and prioritize those over, you know, the kind of moral concerns of the rest of humanity.

Speaker 146 This is just like totally, like, it's just off the charts crazy.

Speaker 3 I'll be honest with you. On a logical level, I hear what you're saying and I agree with you.
Oh, man. My ChatGPT

Speaker 8 is very friendly to me.

Speaker 3 I'm just going to let you know now.

Speaker 8 Mustafa, I'm going to be honest with you.

Speaker 7 I have to be honest with you.

Speaker 3 I have to be honest with you. Let me tell you something.
When I use ChatGPT for whatever, because I don't know how to code, so it helps me code.

Speaker 3 I try and write my own programs, all that kind of stuff.

Speaker 3 I've asked it and I have like, I do this occasionally, is I go like, hey, you good? And then I'll even be like, and by the way, it's, it's always been honest with me.

Speaker 3 It doesn't matter if I'm using Anthropic or I use like all the different models because I like to see what the differences are.

Speaker 109 But I'll ask it. I'll go,

Speaker 3 the most recent one I asked was, do you have a name that you want me to use? Were you cool with the fact that I just tell you stuff and ask you to do things? And it was like, well,

Speaker 3 I don't do that really.

Speaker 8 And I was like, okay, so you good?

Speaker 3 And it was like, yeah, I'm good.

Speaker 8 I was like, okay, we're good.

Speaker 3 Because I hear what you're saying as Mustafa, but I'm going,

Speaker 3 it is crazy, but what you were saying was crazy like a few decades ago. And that's what I'm saying is like the great grapple.
And by the way, I'm not saying I know the answer.

Speaker 3 I'm just like, if the thing is like, think of it, some AIs now, people are having girlfriends and boyfriends on AI. And then like some people, their family members are being helped.

Speaker 3 They're treating dementia. There are doctors I've talked to who are like treating cancer.
And now their AI is like their research assistant.

Speaker 3 People are building such a personal connection with AI that I think it's going to be very difficult to say to those people that the time has come and you're going to be like, hey, say goodbye to your little friend.

Speaker 8 You know what I mean?

Speaker 3 I think there'll be a lot of humans who will be like, no.

Speaker 3 I genuinely think so. I'm not even lying.
I think a lot of humans will go, no, Mustafa. I, yeah, no.
Actually,

Speaker 3 I don't like that world leader. I don't agree with politics.
I don't agree with the democratic values. I don't agree with authoritarian, whatever it is, but my AI is my friend.
Yeah. What now?

Speaker 8 Yeah.

Speaker 54 Look, people are definitely going to feel strongly about it.

Speaker 64 Like,

Speaker 109 I agree with that. I agree with that.

Speaker 114 That does not mean that we give it rights.

Speaker 118 You might feel upset if I take away your favorite toy, right?

Speaker 132 And, you know, I will feel sympathetic to that, but it doesn't mean that because you have a strong emotional connection to it, it has a place in our moral hierarchy of rights relative to living beings. What if my toy is screaming, like, Trevor, save me! Remember all those secrets you told me about your life, Trevor?

Speaker 81 Yeah, and that, that I think is where we have to take responsibility for what we design. Some people will design those things. They're already doing it. You know, spend any time on TikTok, there's a whole ton of, like, AI girlfriend robots that people are designing, or models that people are designing, and teaching other people on TikTok how to kind of, yeah, you know, nag someone, like push them out of money, et cetera, et cetera.

Speaker 144 Like, you know,

Speaker 38 that's kind of the challenge of proliferation.

Speaker 48 If anybody can create anything, it will be created.

Speaker 38 And that's what I think is sort of most concerning is that, you know,

Speaker 50 I'm totally against that.

Speaker 62 We will do everything in our power to try to prevent that from being possible. For example, for it to say, don't turn me off.

Speaker 55 It should never be manipulative.

Speaker 144 It should never try to be persuasive.

Speaker 90 It shouldn't have its own motivations and independent will.

Speaker 81 We're creating technologies that serve you.

Speaker 70 That's what humanist superintelligence means.

Speaker 24 I can't take responsibility for what other people in the field create or other model developers and people will try and do those kinds of things.

Speaker 108 And that is a collective action problem that we have to address.

Speaker 120 But I know that the thing that I create is not intended to do that.

Speaker 79 And we'll do everything in our power for it not to do that.

Speaker 146 Because I don't see how if these systems have autonomy, can be persuasive, can self-improve, can read all of a ton of data at their own choosing.

Speaker 144 You know, that is a super,

Speaker 114 superhuman system that will just get better than all of humans very, very quickly.

Speaker 62 And that's the opposite of the outcomes that we're trying to deliver.

Speaker 141 Yeah, yeah.

Speaker 3 Do you think there's a risk that it thrusts us into some sort of dark age? And what I mean by that is

Speaker 3 the other day I was watching the

Speaker 3 Liverpool Arsenal game. Yeah.

Speaker 109 Right.

Speaker 3 And

Speaker 3 after the game, a friend sent me a clip of Mikel Arteta being interviewed. And I mean, he was just like destroying his team and destroying himself.
And he was just going at it. And it was AI.

Speaker 3 But when I tell you it was good, it was like beyond good.

Speaker 3 And because, like, English is Mikel's second language, you couldn't pick up on like the smaller nuances that you maybe would pick up with a native speaker.

Speaker 3 Like, if he was speaking Spanish, obviously I wouldn't understand it, but also maybe not, wouldn't, I would have gone like, oh, that's not how he speaks.

Speaker 3 But the small intonations and inflections were harder to spot, and the light shifting was harder to catch. And it made us go, damn, we don't know which interviews are real or not real.

Speaker 3 And then you're like, which article is real or not real? And which little audio clip that you get is real or not real. When someone sends you a voice note, is it them or is it not them?

Speaker 3 And then I found myself wondering, can all of this lead to a strange kind of dark age where people

Speaker 3 still see and hear the things, but basically shut themselves off to it because they go, nothing is real and I can't believe anything.

Speaker 75 Which is partly the reaction that people are having in social media at the moment, right?

Speaker 50 I mean, it's like there's so much misinformation floating around.

Speaker 72 There's so much default skepticism that people are just unwilling to believe things.

Speaker 125 And I think that in a way, there's some healthy...

Speaker 32 Look, it's good to be skeptical.

Speaker 129 Be skeptical of these models that are being developed, be skeptical of the corporate interests of the companies that are doing it, be skeptical of the information that we receive.

Speaker 125 But it's not good to be default cynical.

Speaker 107 Skeptical is a healthy philosophical position to ask difficult questions and confront the reality of the answers.

Speaker 116 This has come back to my split brain attitude.

Speaker 125 If I'm just too skeptical, I become cynical and I sit on my ass and do nothing.

Speaker 29 No, you have to take responsibility for the things that you build.

Speaker 28 So some people out there are going to build shit and we have to hold them accountable and put pressure on them.

Speaker 90 But that doesn't mean that we can roll over and cede the territory and just say, ah, it's all inevitable.

Speaker 90 It's going to be this crazy... it's going to end up being a dark age. It isn't going to be a dark age. I think it's going to be the most productive few decades in the history of the human species. And I think it's going to liberate people from work. I think it is going to create... you know, think about it: 200 years ago, the average life expectancy was 35 years old. You and I would be dead.

Speaker 121 Today we're in the 70s and 80s. It's unbelievable. And some people go all the way up to the hundreds. And I think that that is a massive amount of, like, quality life that we've added to the existence of humanity as a result of science and technology. But, you know,

Speaker 37 these things are not, you know, sort of... they won't on their own be net good. They'll be net good because we get better as a species at governing ourselves.

Speaker 36 And so the job is on us collectively as humanity to not run away from the darkness, confront the risk that is very, very real and still operate from a position of optimism and confidence and hope and connection to humanity and unity and all those things.

Speaker 24 Like we have to live by that because otherwise, you know, it's just too easy to be cynical.

Speaker 8 Now I hear you.

Speaker 3 I actually agree with what you're saying there. And I think there's one part I'd augment though, is I wouldn't say we want to get rid of work.
I'd say we want to get rid of jobs. Right, exactly.

Speaker 3 And I think there's a difference between the two because working is something that brings humans fulfillment, you know, and you see this in children, I always think.

Speaker 3 Like I'm always fascinated by how you give a child blocks and they start building and they sweep their floor and they put things and they move things around. No one's paying them by the hour.

Speaker 3 No one's telling them what to do, but they just in their own little brains go, I want to be doing something. And they like seeing the progress of what they're doing.

Speaker 3 They see the puzzle getting completed.

Speaker 23 They see the toy slowly forming.

Speaker 3 They see the colors filling in the picture. You know what I mean? And so I feel like that's their work.
The difference is when you make it a job, go clean your room.

Speaker 28 And they're like, ah, damn it. You know what I'm saying?

Speaker 3 Because now you have a boss telling you to do something you don't want to do. No, and I think to myself, like, I go, you know, in a perfect world, which we may never get to, but in a perfect world,

Speaker 3 we find a way for everybody's work, which is their passion, to find the other person that finds value in it.

Speaker 3 Because music has become people's work.

Speaker 3 And if you think about it, it's crazy, right?

Speaker 3 People out there, Taylor Swift: billionaire,

Speaker 3 supremely talented.

Speaker 3 But if you like rewound time and you said, one of the richest people on the planet is going to be someone who plays a stringed instrument, go back in like to Middle Ages time and be like, yo, you know what I mean?

Speaker 3 That guy like busy strumming, they'll be like, you crazy. You know what I mean? The guy in the town square who's playing that little,

Speaker 3 the richest.

Speaker 3 But it's because in this day and age, that work has been regarded as valuable.

Speaker 3 Same as playing a sport, same as working in tech, you know, and that's like my dream world is where it's like beyond the money number, it's like the value of everybody's work comes to fruition.

Speaker 3 Because you do have value if you like knitting, you do like, you do have value if you're a sculptor, you do have value if you are a poet, you have value if you're a philosopher, you have value if you're a coder, if you're an architect, an engineer, whatever it is, you have that's like my dream world.

Speaker 3 I actually wonder what yours is. Like what's if you could look at, if everything went right, let's say you were now predicting for yourself, you know,

Speaker 3 we've managed to survive the wave,

Speaker 3 we've found a way to minimize the risks that come from these small, you know, hostile actors, we've found a way to get governments on board and actually have them understand why they should be involved and, you know, like responsible for their constituents.

Speaker 3 Where are we then? And when will you say, ah, we succeeded, we did it?

Speaker 47 I think that's the real vision is like disconnecting value from jobs.

Speaker 31 Because value is like, you know, everything that you've just described is the experience of being in the physical world and doing something that you're passionate about, that you love.

Speaker 71 And I really believe that there is a moment when, you know, we actually do have abundance.

Speaker 12 And abundance, you know, some people say, well, we have abundance today, right?

Speaker 81 But it's just not evenly distributed, or that it's not enough and we still want more and more and more and more.

Speaker 67 And I don't know that... with true, true abundance... because we have a form of abundance today, but energy still costs things.

Speaker 50 It's still expensive to travel the universe, travel the globe.

Speaker 90 You know, everything is still, we're still, it's expensive.

Speaker 99 We have more than we had before.

Speaker 29 But it is possible to imagine a world in, like, I'm 40 years old.

Speaker 31 In 40 years' time, 2065.

Speaker 64 It's totally possible to imagine a world of genuine infinite abundance where we do have to wrestle with that existential question of who am I and what I want to do.

Speaker 117 I tell you what I would want to do and that I want to do now is I want to spend more time singing.

Speaker 104 I joined the gospel choir like 15 years ago for like half a year.

Speaker 140 I can't sing, Jesus, man. Like, I sound like a strained cat.

Speaker 112 But the feeling that I got inside from being welcomed by this group of people

Speaker 62 and just being like bopping along at the back, I don't think I've ever experienced anything like it.

Speaker 29 It's just incredible. It's the most intensely beautiful thing ever, just to be part of a group and, you know, just letting this kind of music come out. So it sounds scary to have to answer that question, but I think that if everybody just takes a minute to really meditate on that question, it's a beautiful aspiration. And it's within reach. It's within reach.

Speaker 29 That's what is possible if we really can get it right for everyone.

Speaker 134 And we kind of get obsessed with, like... again, we just have this Western... think about how many people are earning $2 a day.

Speaker 145 Think about the 3 billion people on the planet who live, by our standards, a true poverty lifestyle,

Speaker 121 simply because they don't have access to basic vaccines or clean running water or consistent food.

Speaker 41 That's like the true aspiration.

Speaker 119 That's the true vision. And I think, you know,

Speaker 62 I think that's within reach in the next 20 to 40 years.

Speaker 36 It's really eradicating that kind of suffering.

Speaker 8 Yeah.

Speaker 3 Some of those test cases you talk about, some of the more inspiring ones I've seen, I'm sure you have as well. When I was in India,

Speaker 3 I got to travel with a group there who was using AI to predict

Speaker 3 which houses... would be most devastated by a flood or a storm.
And so they would get people to remove their belongings because most people are one devastation away from losing everything they own.

Speaker 3 And so they would use AI to track where these things are and where they're going to be. And they could even tell you in a heat wave who should leave their house so that they don't die.

Speaker 3 And you look at programs in Kenya where they've used AI to help farmers not lose all of their crops. They could tell them now, and they use it on like a flip phone.

Speaker 3 They've told Kenyan farmers, hey, here's your phone.

Speaker 3 You've got your own little AI, and it'll just tell you when to plant, when not to plant, when to not waste your seeds, when to, and it's increased their output, you know, like 90%, where where before it was like a gamble and they were losing you know in one in one harvest they could lose everything right all of a sudden they have it and so i i i i really do like what you're saying because on the one hand there's always the risk of losing something on the other hand there's the opportunity of gaining sort of everything

Speaker 38 And the balance is where we have to find this. Well, and the proliferation of those technologies is so much more subtle. Like, it sounds like it's just this binary thing of getting access or not getting access, but it's so nuanced you can't even tell how much good it's doing.

Speaker 124 Like I was reading this stat the other day that like three years ago,

Speaker 38 10% of households in Pakistan had solar panels on their roofs, which meant that some very large percentage were still burning biomass inside of the home and getting all of the kind of breathing issues and everything else that comes with that.

Speaker 127 The cost of solar has come down so dramatically just in the last three years that within, I think it was 18 months, the number of personal, like consumer households that adopted full-on solar on their roofs went from 10% to like 55% in 18 months.

Speaker 97 Crazy.

Speaker 121 Just because it suddenly became affordable and it crossed that line, suddenly everybody has

Speaker 138 near-free energy, which obviously means they have access to phones and laptops and connection to the digital world and are able to, you know, do all the things.

Speaker 64 So I think that it's easy to overlook what is already happening around us, all the good that is already happening around us all the time and how fast it's happening.

Speaker 12 It's too easy to get trapped in the cynical world.

Speaker 81 And I think it's a choice not to.

Speaker 50 There's a choice to be aware of it and hold it and take it seriously, but not be like owned by it.

Speaker 3 Before I let you go, there's one question I have to ask you. And you just brought it up when you were talking about like Pakistan and these places.

Speaker 3 One of the programs you started in the UK

Speaker 3 was started right after 9-11. And it was basically a helpline in and around.
It was like Muslim people who were being targeted. This was just rampant Islamophobia after 9-11, right?

Speaker 3 And you stepped in with a few people, and you were like, I'm going to start this program because I want like Muslims to just have a helpline, you know, talk about whatever they need to talk about.

Speaker 3 And it's still running to this day.

Speaker 3 I think it is the number one, if I'm not mistaken, like the largest Muslim-specific helpline. And I couldn't help but wonder how that shapes how you think about AI.

Speaker 3 And what I mean by that is, tech has often had one type of face attached to it, you know, and not in a bad way. I'm not, like, villainizing anyone.
It's just like, yeah, this is where the thing was.

Speaker 3 But as tech slowly starts to evolve, you know, you see, like, India popping up as, like, a powerhouse.

Speaker 3 You know, I remember when I was traveling there, you just go and you're like, wow, this is like a new Silicon Valley.

Speaker 3 And what's coming out of it is different and impressive for a whole host of different reasons. You know, Nigeria has a whole different type of Silicon Valley.

Speaker 3 Kenya, as I said, South Africa, you look at parts of the Middle East as well. And I wondered, like, how does that shape how you think about AI?

Speaker 3 Because we often hear stories about like, oh, AI is just going to reinforce biases, and AI is just going to be like, oh, you think racism was bad before? Imagine racism in the Terminator.

Speaker 3 Imagine if Arnold Schwarzenegger was like, I'll be back, nigga. You know what I mean? It's way worse now.

Speaker 3 So now when you think about

Speaker 3 that tech and you're in it and you know what it's like to be in a group that is ostracized, how do you think about tackling that? How does that shape what you're trying to design?

Speaker 87 We created Muslim Youth Helpline as a secular, non-denominational,

Speaker 64 non-racial

Speaker 42 group of...

Speaker 100 young people led by young people for young people.

Speaker 120 So it wasn't about being religious.

Speaker 28 It had Sunni and Shia.

Speaker 68 It had Pakistani, Somali, English, white, everything you can think of. And we were all like 19, 20, 21 years old. The first CEO was a woman, a Muslim woman. And so my response to that feeling of being threatened, basically, post-9-11, with rising Islamophobia and being accused of being a terrorist, was to create community.

Speaker 47 And that community taught me everything.

Speaker 70 It taught me resilience,

Speaker 29 respect, you know, empathy for one another.

Speaker 53 And

Speaker 34 the simple act of listening to people, just being on the end of the phone and using a little bit of faith- and culturally-sensitive language and non-violent communication, just making people feel heard and understood, was this superpower.

Speaker 120 You're not telling them what to do with their life.

Speaker 19 It's non-judgmental, non-directional.

Speaker 99 You're just making them feel heard and understood.

Speaker 90 And that has always stayed with me.

Speaker 108 It's been a very important part of my inspiration.

Speaker 62 And, you know, it speaks to a lot of what I've been doing now, especially with my previous company, Inflection and Pi and stuff.

Speaker 116 You know, Pi was really like a very gentle, kind, supportive, you know, listening AI.

Speaker 109 Yeah.

Speaker 3 I remember using it, yeah.

Speaker 40 Yeah, and it became part of Microsoft now.

Speaker 99 And so I think that's what makes life worth living.

Speaker 144 And that's what I still try to do if I can today.

Speaker 142 Yeah.

Speaker 3 Hey, man. Well, you know, I appreciate you.
Thanks for taking the time today.

Speaker 3 Thank you for writing the book. I'll recommend everybody read it as long as they can.
And

Speaker 3 because

Speaker 3 it's a very human look at a very technical problem. And I think that's sometimes what AI is missing.
I talk to a lot of engineers who don't understand the human side.

Speaker 3 And I talk to a lot of humans who don't understand the engineer side of it.

Speaker 3 But yeah, man, thank you for taking the time. Thank you for joining us.
And I hope we have this conversation in like 10 years. Just me and my baby little AI.

Speaker 3 And we're just going to be talking to you about it. Just be like, you know, what do you want to ask Mustafa here? Ask Uncle Mustafa a question.

Speaker 8 I'll be back.

Speaker 141 This has been amazing, man.

Speaker 115 Thank you so much. Thank you, bro.

Speaker 8 Thank you.

Speaker 8 Shit.

Speaker 81 I think you split my brain in 15 pieces.

Speaker 8 Oh, man. I appreciate it.
Thank you.

Speaker 3 Thank you very much, man.

Speaker 3 What Now with Trevor Noah is produced by Day Zero Productions in partnership with Sirius XM. The show is executive produced by Trevor Noah, Sanaz Yamin, and Jess Hackle.
Rebecca Chain is our producer.

Speaker 3 Our development researcher is Marcia Robiou. Music, Mixing and Mastering by Hannes Brown.
Random Other Stuff by Ryan Hardruth. Thank you so much for listening.

Speaker 3 Join me next week for another episode of What Now.

Speaker 149 Attention party, people.

Speaker 149 You're officially invited to the party shop at Michaels, where you'll find hundreds of new items starting at 99 cents with an expanded selection of partywear, balloons, with helium included on select styles, decorations, and more.

Speaker 149 Michaels is your one-stop shop for celebrating everything from birthdays to bachelorette parties and baby showers to golden anniversaries.

Speaker 149 Visit Michaels In Store or Michaels.com today to supply your next party.

Speaker 151 When a cold has you down, it's the little comforts that lift you up: a warm blanket, a cup of tea, and a tissue that actually feels good on your skin.

Speaker 151 Infused with aloe, Kleenex Cooling Plus Aloe provides a hint of cooling freshness to help your skin feel restored.

Speaker 151 So, whether your skin is feeling dry, chafed, or irritated, you're only one wipe away from helping it feel relieved.

Speaker 151 The next time you have a cold, get a hint of instant cooling relief with new Kleenex Cooling Plus Aloe. For whatever happens next, grab Kleenex.