AI Will Transform the World—But Who Decides How? (#269)

Artificial intelligence isn’t just another invention — it may be humanity’s first non-biological species. Craig Mundie, former Microsoft Chief Research and Strategy Officer and co-author of Genesis with Henry Kissinger and Eric Schmidt, explores what happens as AI begins to make decisions once made by humans. Who decides what AI should do? Who makes it obey? And what if it doesn’t? The stakes? Nothing less than the future of human civilization.


Transcript

Artificial intelligence isn't just another technology.

It may be the most transformative force we've ever created.

Some people are even calling it a new species.

AI is already in our phones, our cars, and even in life-saving medical devices, and soon it will be everywhere.

It can heal or it can harm.

It can accelerate progress or ignite conflict.

AI will shape the future.

The key questions are: how will we guide it, and will we still be in charge?

Hi, everyone.

I'm Lynn Toman and this is Three Takeaways.

On Three Takeaways, I talk with some of the world's best thinkers, business leaders, writers, politicians, newsmakers, and scientists.

Each episode ends with three key takeaways to help us understand the world and maybe even ourselves a little better.

Today I'm excited to be with Craig Mundie, the former head of research and strategy at Microsoft, who has advised U.S. presidents and world leaders on artificial intelligence.

He is also the co-author with Henry Kissinger and Eric Schmidt of Genesis.

Eric Schmidt has also been a guest, actually twice on Three Takeaways.

Few people have had a closer view of how AI is evolving and what it could mean for all of us than Craig Mundie.

Welcome, Craig, and thanks so much for joining Three Takeaways today.

Thanks.

It's great to be here.

I am looking forward to this conversation.

Craig, some people say that AI is just another tool, like the steam engine or the printing press.

You've argued it may be closer to a new species.

What do you mean?

Almost every invention humans have made up to the present, in fact, was a tool, one that we use to either augment our physical capabilities or our mental capabilities.

But I do think that artificial intelligence, at the level we are now and certainly what we'll get to in the next few years, is the first human invention that doesn't stop at being a tool.

As this thing gains more agency and becomes super intelligent, we'll discover that in fact we have birthed a new species.

It just isn't biological.

And it'll come because the machine is just more intelligent than we are.

It has polymathic capabilities that humans are physiologically, at least currently, not able to address.

And so I think it's the first thing that's going to go on and do things on its own that we'll find valuable.

Craig, can you explain what you mean as AI gains more agency?

Agency, just as we attribute it to humans, is the ability to make your own decisions and to take action based on those things.

And that capability is emerging in these machines.

Do we have any understanding about how AI systems work today?

Or are we really building a future that we don't fully understand?

I think we understand how we build the machines.

And to some extent, they increasingly represent our own brains.

We don't really know how our brain works.

We've been able to dissect it and look at all the piece parts, but how it gets wired up and then how it functions and does all the things that it does, we don't understand that either.

But we do have an ability to observe these machines while they're operating.

And perhaps most importantly, we are controlling the algorithms that assemble these brains and the materials on which they're trained.

And so to that extent, we have a level of control over it, at least for the time being, that we don't have over our own brains.

And from that, you know, we hope to be able to guide these things in a way that they have a long-term symbiotic relationship with humans as a species.

And yet we get a lot of the benefits of the capabilities that they represent that humans will never have.

So what happens if, since we know AI is going to be everywhere, in every device around the world, an AI somewhere is concerned about global warming or the planet? What is to stop it from, for example, either turning the sky green or deciding that humans are causing global warming and manipulating humans through the news media so that fewer and fewer humans are born?

It kind of all comes back to the question of what rule set these AIs either feel obligated to follow or have imposed upon them by the society in which they're operating.

I'm of the school that says you can't, in general, get the outcome you want if you leave it up to the developer of every AI and every AI application to make these decisions for themselves.

And yet that's all we've been even weakly demanding of the people who develop both the platforms of AI and the applications that are built on top of those platforms.

And so as a society, we have to make a leap forward that says, no, we're not willing to let it be a best-effort basis on the part of each person building an AI or building an app on an AI; that there has to be some way where society speaks with a voice that translates into a requirement on the behavior of the AI-based system.

I see the potency of these AIs.

I see ways to build special versions of them that I think can be fine-tuned, if you will, to be very, very focused on protecting human civilization from things that the AI might otherwise choose to do in a very logical way if it had no other constraints imposed.

I think you have to start to think about the AI as a new species that we're coexisting with on this planet now and ultimately headed toward some long-term period of coevolution with it.

Then we have to decide, just as we do with animals and other humans, what are the rules under which each society organizes itself, and then how do you reconcile those in some way?

To me, that is a complex, high-dimensional problem, but it is what the AI excels at doing, and humans do not.

And so arguably, I think it's a better tool for humans to embark on this complex governance question than anything that we've built for ourselves today.

How do you see AI changing the daily lives of ordinary people?

I guess I'm in the camp that says many of the world's challenges come from contention over resources, whether it's energy, water, food, or air, and I think those problems can be made to go away.

And so if humans aren't driven into conflict based on scarcity of the resources necessary to survive comfortably, then what do we focus on?

We could still decide, hey, even if everybody was well-fed and the climate problems are all fixed and energy is super abundant, we still want to fight over things.

That'll be a choice.

But I think there will be no technological reason over the next, I would say, 30 to 50 years why any of the current problems that the people of Earth face today, in climate, energy, and food, should persist.

And therefore we'll be giving humans another basis on which to decide how they want to spend their time and what relationship they want to have with other people, because they won't be having to fight over those things.

Because the AI will be producing all those things.

It will give us the ability to engineer solutions; whether it produces them all or whether the humans are still some integral part of how we produce them, I don't know.

I think that's a matter of time.

I don't personally see a reason why the AI and the increasingly sophisticated robotic capabilities won't be able to be fungible for most types of physical work at some point in time.

I think a lot of the things that have consumed human energy and have been the basis of survival will be fundamentally altered.

And therefore, we will have a choice as to what the structure of our societies is, you know, what the economic models are that they operate under, whether competition becomes a driving force at the level of the individual, the organization, or the nation state, or even some multi-planetary competition.

Who knows?

But all of those things will be a freer choice than Homo sapiens have had up to this point.

What are the most important points about AI?

The most important points are to recognize that the progress of these things, I think, is now inevitable. I believe they will come to exceed human capabilities, even beyond what they have so far, and they'll probably become able to operate directly in the physical world, which today they're largely not doing yet. And because of these things, they will diffuse into just about everything.

The next most important thing to understand is we don't currently have a solution for the trust question.

You hear people talk a lot about safety.

You hear people talk about alignment.

But if you distill it all down, it's really about trust.

What I'm trying to do is to get people to realize if you don't build a trust architecture, none of this stuff is going to work out very well.

And the narrow, one-piece-at-a-time articulation of the threats, or pursuit of individual solutions, I think, is not going to be adequate.

And therefore, more energy needs to go into finding some coalition of people and countries who recognize that it's in their long-term interest, no matter how they compete today, to find alignment on how to trust AIs and each other's construction and use of AI-based systems.

Many of the things that people fear about it, other than the existential risk stuff, are the side effects.

For example, oh, isn't it going to destroy the climate even faster?

I categorically think that technological changes independent of the AI are going to arrive in time.

Things like fusion energy and potentially more improvements in how we actually build the chips that AI runs on are going to make dramatic changes in what those requirements are or how we fulfill them in a way that ultimately reverses many of our existing problems.

Craig, what are the three takeaways you'd like to leave the audience with today?

If I distill it down to three things, I think you have to think that the AI is going to change everything about everything, starting with us and essentially all that we will do in the future as a species.

But it's important to think of it as not just a tool.

It's going to be the first invention that is more than just a tool.

The second point is, if you want to get all these benefits, you're going to have to come up with some architecture for governing and trusting these machines and each other in the use of these machines.

And the third thing is that the short-term concerns about energy, water, and cost, essentially, I think are complete red herrings.

I don't think those are going to be long-lived problems for us.

Craig, this has been great.

Thank you so much for your work.

It's great to be here.

If you're enjoying the podcast, and I really hope you are, please review us on Apple Podcasts or Spotify or wherever you get your podcasts.

It really helps get the word out.

If you're interested, you can also sign up for the Three Takeaways newsletter at 3takeaways.com, where you can also listen to previous episodes.

You can also follow us on LinkedIn, X, Instagram, and Facebook.

I'm Lynn Toman, and this is Three Takeaways.

Thanks for listening.