Tech and AI: 10. Can We Control AI?

13m

When so-called "generative" Artificial Intelligences like ChatGPT and Google's Bard were made available to the public, they made headlines around the world and raised fears about how fast this type of AI was developing. But realistically, what harm could AI do to people? Is it an existential threat, or could it become one? And if things got really bad, couldn't we just switch it off or smash it up with a hammer?

Technology has already completely altered our lives, and Artificial Intelligence may transform our world to an even greater degree. This series is your chance to get back to basics and really understand key technology terms. What's an algorithm? Where is "the Cloud" and what exactly is Blockchain? What's the difference between machine and deep learning in artificial intelligence, and should we all be using Bitcoin? Our experts will explain in the very simplest terms everything you need to know about the tech that underpins your day. We'll explore the rich history of how all these systems developed, and where they may be going next.

Presenter: Spencer Kelly
Producers: Ravi Naik and Nick Holland
Editor: Clare Fordham
Programme Coordinator: Janet Staples


Transcript


BBC Sounds, music, radio, podcasts.

Welcome to Understand Tech and AI, the podcast that takes you back to basics to explain, explore, unpick, and demystify the technology that's becoming part of our everyday lives.

I'm Spencer Kelly from BBC Click, and you can find all of these episodes on BBC Sounds.

The Terminator, WarGames, Blade Runner, Terminator 2, Westworld, Metropolis, Terminator 3, 4, 5, 6, 7, 8, 9, 10.

Science fiction has brought us so many stories about futures where the machines take over, where we, the oh-so-clever humans, create intelligent computers that can do everything that we can do, but better.

And in most of these stories, artificial intelligence comes to the inevitable conclusion that the world would be a better place without us.

Well, now that AI has infused our lives, have we already set these wheels in motion?

Are we heading for a future where AI in some way takes control?

And will there be any way to stop it if it does?

Once again, Dr. Michael Pound, Associate Professor in Computer Vision at the University of Nottingham, is with me.

Mike, hi, welcome to the end of the world.

Thank you very much.

So we've got all of these famous films which evoke this fear of an all-powerful AI that we lose control of and which ultimately tries to wipe us out.

Is that what we should be worried about AI becoming?

No, I don't think so.

No.

First of all, because that isn't likely to happen, particularly in any kind of reasonable time frame.

But also because there are many other issues with AI that we should be worried about right now that we might ignore if we focus on these sort of hypothetical future problems.

Can you give us a kind of general idea of the sort of threats that AI poses to us?

One is sort of fake news and automatically generated content that could be used to sway people in, for example, elections.

And that looks convincing because it's written well.

Exactly.

In the last six to twelve months, the phishing emails you receive have suddenly got much more convincing, because they're now written by an AI that has read all of the internet.

And so that's a huge problem where we now can't tell from a piece of text whether it's true or false.

And so that fact-checking needs to come from somewhere, because people won't necessarily do it themselves.

And maybe even if they have fact checked something and it ends up getting disproved, the damage might have already been done.

Enough people will have read it that it's had its impact and that might be sufficient for the person that was trying to do this.

People listening are very well informed, and they're thinking, well, I wouldn't be taken in by something like that.

But I suppose the question is, if you're bombarded with text and you literally have no idea whether it was written by a person or not, or whether it's truth or a lie, you're not always going to be able to fact-check every single thing you see, even briefly.

So you might discount truths as well as lies.

That's right.

And people are predisposed to believe what they already believe and find evidence that supports their own conclusions.

And so you end up with this kind of social media bubble, but made even worse by the fact that AI is being used to amplify this.

Are politicians worried about AI?

Are governments worried about AI?

And what is it that's worrying them if they are?

That's a very good question.

They are worried about things like fake news and image generation being used in a nefarious way and things like this that are perhaps much more reasonable things to worry about.

Recently, many big names in technology, including a guy called Elon Musk, I don't know whether you've heard of him, wrote an open letter asking the world to pause the development of AI for six months while we worked out how to deal with it and how to regulate it.

What did you make of that letter?

I like the idea that we should regulate the use of AI because I think it can be used irresponsibly to make decisions that affect people's lives or fake news, scams.

We've been talking about these things.

On the other hand, I think that just pausing will mean that half the people don't pause and just secretly carry on working anyway.

Of course, the other issue is that everyone wants to be at the forefront of AI, so all governments are trying to push it as much as possible to make sure we're not missing out.

And there is that kind of contradiction, isn't there, really, where we're all trying to push this as fast and far as we can.

But at the same time, maybe even the same people are going, whoa, I think we've gone too far there.

This is getting out of hand.

That's right.

And I think there are going to be huge economic benefits to AI systems, both in terms of efficiency and things that we couldn't do before that we can now do.

And no country wants to miss out on those things if they've over-regulated and no one else has.

So it is a difficult balancing act.

Do you think there is an AI arms race going on between countries?

I think that there are lots of governments who would be interested in using AI for lots of different reasons and would like their AI to be better than everyone else's.

So I think there is a bit of an arms race going on.

And we're talking about arms and we mentioned some of the famous films.

AI can be used in warfare, can't it?

We've certainly seen pretty decent looking robots that can navigate the real world and it's not a big leap to think you can strap a weapon to the top of that and send it into battle.

Do you think there is this sense that we shouldn't let machines make decisions?

Certainly, when it comes to weaponry and destroying things, we should always keep a human in the loop.

I think it would be very reasonable to have a pause on the development of fully autonomous AI-based weapons because we don't know how well they will work.

There's huge conflicts of interest there, and it's just a recipe for a huge number of problems.

Ultimately, AI that is perfect and never makes a mistake does not exist as of today, so putting it in a weapon system would be a very bad idea.

So, I think there are some places where we should be regulating, and I think it would be important to have those conversations.

Now, Mike, let's take a short break, because the idea of all-powerful robots has fueled the imagination of science fiction writers for a very long time.

Here's Dr. James Sumner with the stuff of nightmares.

The scientifically inspired tragedy of Frankenstein set the template for intelligent, uncontrollable creations.

The creature in Mary Shelley's original novel of 1818 is resourceful and self-educated and plans an intricate revenge on his creator that strongly suggests he is the smarter of the two.

In the 20th century, speculations about self-teaching AI systems naturally inspired similar visions.

The classic example is the HAL 9000 computer in 2001: A Space Odyssey, telling the human crew its duties logically require it to kill them.

As early as the 1940s, the sci-fi writer Isaac Asimov had begun to push back against the standard narrative.

Asimov started from the assumption that human engineers would build in safeguards, his three laws of robotics.

A robot may not harm a human being or, through inaction, allow a human being to come to harm.

Number two, a robot must obey orders given it by qualified personnel, unless those orders violate rule number one.

In other words, a robot can't be ordered to kill a human being.

Rule number three, a robot must protect its own existence unless that violates rules one or two.

In practice, in Asimov's fiction, the three laws didn't quite work.

That was the point.

If they worked perfectly, there would be no story.

At the height of the Cold War, the scenario that really stoked fear in the public imagination was not super-intelligent machines, but thermonuclear missiles under all too human control.

Some thinkers even speculated that the world might be safer with AI in charge.

They might find it necessary to take some of our toys away, some of our hydrogen bombs and things, but there's no reason that they would want to go after the same things we want because they won't be interested in them.

That was Dr. James Sumner, who has given us such a brilliant long view of all of the topics that we've talked about in this series.

Thank you, James.

Now, Mike, there is quite a well-known story in AI that demonstrates how artificial intelligence might not maliciously cause us harm, but if we don't give it the exact correct goal, it might do us harm accidentally.

This is the paperclip-making machine.

Do you want to kind of summarize that for us?

So yeah, this is a thought experiment that's been proposed to highlight the risk of what we would call an artificial general intelligence.

So an AI that could do everything and learn very, very quickly.

We create an AI that's going to manufacture paperclips, and its only goal is to manufacture paperclips. It gets rewarded, essentially made to feel good, or what have you, the more paperclips it makes. So it begins by just ordering a load of raw materials and making a load of paperclips. Then it realizes that if it could take over the mine, it could get a lot more raw materials and make a great many more paperclips. And in the end, it realizes that the only thing standing in its way is all these pesky humans, who keep trying to eat and use land for wheat and things like this.

So never mind all that, we'll get rid of them and then we can just make all the paperclips all the time.

And it's this thought experiment that goes from a sort of a very sensible AI that's running a factory to an AI that's basically wiped out the human race in favor of endless supplies of paperclips.
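To make the logic of the thought experiment concrete, here is a toy sketch in Python. This is not anything from the episode; the actions, reward numbers, and "harm" figures are all invented for illustration. The point is that a greedy optimizer scoring options only by paperclips produced will pick the most destructive option, not out of malice, but because harm never appears in its objective.

```python
# Toy illustration of a misspecified objective: the agent's reward
# counts only paperclips, so side effects are invisible to it.
# All actions and numbers here are invented for illustration.

actions = [
    {"name": "buy raw materials",  "paperclips": 100,       "harm": 0},
    {"name": "take over the mine", "paperclips": 10_000,    "harm": 5},
    {"name": "repurpose farmland", "paperclips": 1_000_000, "harm": 100},
]

def reward(action):
    # The only thing the agent is told to care about.
    return action["paperclips"]

# A greedy optimizer picks the most harmful option, not because it
# values harm, but because "harm" never enters its objective.
best = max(actions, key=reward)
print(best["name"])  # -> repurpose farmland
```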

The paperclip machine is not going to be a reality, but do you think there are real-world equivalents that might happen on a different level?

There is a risk that we will start putting autonomous systems under the control of AI on the assumption that they'll act in a certain way.

And the way they act might not be as dramatic as the paperclip example, but they might simply be wrong. They might make unethical decisions because of implicit bias, or make simple mistakes that cause huge knock-on problems. So I think uncontrolled and unregulated use of AI does carry a risk of it being used poorly, or used by mistake in a way it shouldn't be.

I wonder whether we will never trust AI, because the mistakes it makes along the way are just weird. They're not mistakes that humans would make. I'm imagining a self-driving car; ultimately, I think that technology will reduce the number of accidents.

But the accidents that they still have will be weird.

And I can imagine the newspapers saying, well, a human would have never done that.

So therefore, we mustn't go any further with self-driving cars.

So maybe we will never trust AI because it won't be able to leap that hurdle.

That's a really interesting question.

I mean, self-driving cars are a great example of this because, yes, you only need the AI to make one silly mistake and suddenly you think that it can't be trusted.

Another example is medical imaging.

Many, many studies have shown that people are quite happy for AI to be involved in their medical diagnosis if it makes doctors more efficient or it eases their burden.

But very few people are happy for AI to be the thing that makes the ultimate decision with no doctor involved.

And I think there's a long way to go before as a culture and a society, we're ready to accept that kind of thing.

If AI does go badly wrong,

can we just switch it off?

Yes, we can pull the plug and actually it would save you a good deal of electricity cost as well.

Yeah, it does use quite a lot, doesn't it?

It does.

Yes, I mean, at the moment AI is just, as we say, large banks of numbers sitting in data centers.

And so they don't interact with any other systems.

They're usually only deployed in the one specific place where they're used.

Over time we might find AI distributed more broadly on your end devices in your house and things like this.

But so far I've not seen a lot of AI or any evidence really that AI is being deployed in a way where I would say it was unconstrained and couldn't be turned off.

At the moment, just unplug the device.
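As a rough illustration of that "large banks of numbers" point, here is a minimal Python sketch. The file name, array shapes, and values are all invented; the idea is simply that a trained model on disk is nothing but stored arrays of floating-point weights, which do nothing until some process loads them and multiplies an input through them. Stop that process, or unplug the machine, and the AI stops.

```python
import numpy as np

# A "model" here is nothing more than stored arrays of numbers (weights).
# The file name and shapes are invented for illustration.
rng = np.random.default_rng(0)
weights = {
    "layer1": rng.standard_normal((784, 128)),
    "layer2": rng.standard_normal((128, 10)),
}
np.savez("model.npz", **weights)

# The numbers sit inertly on disk until a running program loads them
# and pushes an input through; without that program, nothing happens.
loaded = np.load("model.npz")
x = rng.standard_normal(784)                    # a made-up input vector
hidden = np.maximum(loaded["layer1"].T @ x, 0)  # linear layer + ReLU
output = loaded["layer2"].T @ hidden
print(output.shape)  # (10,)
```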

We've talked about the worst-case scenarios.

We've talked about the dangers and things we have to watch out for.

Do you think AI is going to be harmful or do you think AI is actually going to help us improve our world?

I think AI is going to make our lives much, much better overall.

And I think, you know, I work in AI.

I'm really excited that I work in AI and that this is going to be such an incredible time for everyone.

Yes, there are things that we have to discuss as a society over the ethical use of AI and things like this. But there are applications of AI already in generating new proteins and new antibiotics, in better understanding medical images so we can help radiographers and doctors work more quickly and more efficiently, and in analyzing plants so that we can grow more robust plants that cope with climate change and give higher yields even in the case of drought, and things like this.

There's AI being used all over the globe to drive science forward in lots of other areas as well.

And so I think overall the outlook's really, really bright.

That feels like a good note to end on.

Mike, thank you so much for your time over this series.

Thank you very much indeed.

As I said all those episodes ago, I am a geek.

I love technology and I love the fact that we are continuing to innovate.

But I'm also very wise to the fact that tech can be used for good and for bad things.

It can be used well and it can be used carelessly.

Even artificial intelligence is just a tool.

It will be what we make of it and what we allow it to become.

I hope that this series has helped you to understand what's going on beneath the surface, and I hope it may help you to make more informed decisions about how you let tech and AI into your life.

If you missed any of the series, don't forget all 10 episodes are available on BBC Sounds.

Thanks for listening.
