Artificial Intelligence

43m


Brian Cox and Robin Ince return for a new series of their award-winning science/comedy show. Tonight the Infinite Monkeys are joined on stage by comedian Jo Brand, neuroscientist Anil Seth, and robotics expert Alan Winfield to discuss artificial intelligence. How close are we to creating a truly intelligent machine, how do we define intelligence anyway, and what are the moral and ethical issues that the development of intelligent machines might bring?

Producer: Alexandra Feachem.


Transcript

This BBC podcast is supported by ads outside the UK.

Hello, I'm Robin Ince.

And I'm Brian Cox.

And welcome to the podcast version of the Infinite Monkey Cage, which contains extra material that wasn't considered good enough for the radio.

Enjoy it.

Hello, I'm Robin Ince.

And I'm Brian Cox.

Now, one of the questions I am most commonly asked is: is Professor Brian Cox actually a replicant?

I've seen things you wouldn't believe.

Chemists on fire in a Ford Orion.

Proton beams colliding in the dark under the shores of Lake Geneva.

All those moments will be lost.

Well, actually, they won't be.

They'll probably be on iPlayer and Relentless Speech on BBC4.

But

your hair!

Because it has gone grey, and no one believes that you dye.

People actually think you dye your hair grey to make yourself appear more human.

Authoritative.

Anyway, we decided we start off with a little bit of a blade runner parody, and our producer went, it's too niche, our audience won't get it.

And she was 50% right.

And by the way, Brian is not a replicant because he passed the Voight-Kampff test.

That was too nerdy.

Which is, of course... the Voight-Kampff test is: do you cry during Toy Story 3?

The reason for all this is that today's show is about artificial intelligence.

Could we create conscious machines that think and feel?

What do we mean by the word conscious?

How intelligent are our machines today?

And should we be concerned about the machines of tomorrow?

So I love the way you do that.

That's proper science.

And should we be concerned about the machines of tomorrow?

Clarkson.

No, it's really good.

It wasn't Clarkson at all.

It was very professional.

You thinking you're a beta male all of a sudden.

So to help us understand the science, philosophy and ethics of artificial intelligence, we are joined by three panelists, one of whom is possibly a robot.

You have to work out which one it might be.

And they are.

Hello, I'm Professor Alan Winfield, Bristol Robotics Lab, University of the West of England.

And my favourite robot, in fact I've brought two of them along, is this e-puck, and we do swarm intelligence research with these robots.

I'm Anil Seth, I'm Professor of Cognitive and Computational Neuroscience at the University of Sussex.

And my favourite intelligent robot is continuing the Blade Runner theme: Roy Batty, the replicant.

And not just for his wonderful death monologue, so beautifully paraphrased by Brian, but because I think Roy Batty makes us think about what will happen when AIs really care about themselves.

Very different to his sister Nora.

This is a strange audience, you see, because you don't get the Blade Runner references, but last of the summer wine, you're on

not our usual crowd.

Oh, hello.

I am Doctor, Doctor, Doctor, Doctor, Doctor, Doctor, Doctor, Doctor, Doctor, Doctor Brand.

I indeed have nine honorary doctorates, and this is the only chance I'm ever going to get to show off about it because where else do they ask you to introduce yourself in that manner?

And when anyone ever says to me

on a plane, is there a doctor?

I always say yes.

And a lot of people have died.

But

my favourite robot is HAL from 2001: A Space Odyssey, because he turned.

And this is our panel.

Well, actually, before we get on to the main topic: when we were in the green room, we were talking about how, at the moment, there seems to be a lot of interest in pop culture in artificial intelligence.

It's something that's been in science fiction a lot.

And Channel 4 had a very popular series called Humans.

And when we brought that up, both of you kind of went,

so I thought, just to start off,

what were your problems, in terms of the scientific research you work in, with what you felt about Humans?

My problem essentially is that it's scientifically implausible, because in Humans what you have are super-advanced robots, which in real life probably won't exist for 500 years, parachuted into essentially present-day society, which is crazy.

Anil?

Well, my problem is I only saw the first two episodes.

So that gives me a lack of authority on it.

But I also think that, yeah, I agree with Alan; in contrast to Blade Runner, back to Blade Runner already, where not only did you have these amazing replicant robots, but society had also changed.

So it was more plausible, it was more interesting, it was more dramatically powerful rather than just taking one thing and saying, Okay, is it done?

Now what?

Jo, did you see Humans?

Thankfully, no.

That's that couple of years.

Although I did see a picture in the paper, and it was a very attractive robot, wasn't it?

I was slightly disappointed it wasn't a fat, annoying one.

And that's what would do me very well, making a robot that was slovenly and unpleasant.

But what about Hitchhiker's Guide?

Remember, Marvin the Paranoid Android is a bit slovenly and grumpy.

I haven't done Hitchhiker's Guide either.

Oh my God.

Oh my god!

They're turning!

Two!

Two!

Tell them you love Star Wars.

Just something.

Say you're more of a fan of the Dirk Gently ones.

Can I just say I genuinely haven't seen Star Wars before?

Yeah!

I'm so sorry.

To bring this back on track, how do we define intelligence?

Not as what intelligence tests test.

That's what a lot of people think intelligence is.

What happens when you fill out one of those IQ tests?

Intelligence is basically doing the right thing at the right time, doing it in a flexible way, in a way that helps you keep alive, helps you sustain yourself.

And that there are various different kinds of intelligence beyond that.

There is this intellectual, more rational variety of intelligence which is needed for playing chess, but also for solving complex problems and thinking about the future, thinking about the past.

But there are also things like social intelligence, being able to behave appropriately in front of an audience or with other people.

There's emotional intelligence, being able to understand what other people are feeling as well as what they might be thinking.

And each of these varieties of intelligence, we generally have an experience as a whole, but that doesn't mean they can't be separated and understood separately and perhaps even built separately when we think about AI.

I think William H. Calvin, in his book How Brains Think, said that his definition of intelligence is what we use when we don't know what to do.

So it's something that goes beyond the kind of the hardwired or genetic programming.

Would you see

something in that?

That's interesting, yeah.

A lot of our behaviors, a lot of things that we do, we do do automatically.

There are reactions, there are instincts, there are reflexes.

We don't generally think we need to be smart to do them, but our brains need to be smart to do them.

A lot of what we do is reflexive, is automatic.

We sometimes don't even need to be conscious of these things, but our brains are still doing smart things.

In fact, and we'll get onto this, I'm sure, but it's very, very difficult to program robots to do some of the things that we find very easy, that we don't actually have to think about.

And a lot of the issue with AI is that it started with the stuff that seems very hard for us, like playing chess to beat a grandmaster, thinking that the other stuff, the stuff we don't think about, was going to be really easy.

Alan, would you define a chess-playing computer as intelligent?

I would, yes, absolutely, Brian.

But it's a narrow kind of intelligence.

In fact, we use that exact word, narrow intelligence.

And the kind of everyday intelligence that we're all kind of good at without really realizing it, you know, the intelligence of being able to go and make a cup of tea in someone else's kitchen, for instance, without even thinking about it, that's really, really hard for AI.

So we've now accepted, after 60 years of AI, that the things that we originally thought were easy are actually very hard,

and the things we originally thought were very hard, like playing chess, are actually relatively easy.

And just to clear up the term artificial intelligence, because we were talking about it earlier.

Artificial is a strange word, isn't it?

Because it can be, you know, an artificial table or something, or an artificial... it seems like a substandard word.

Is it the right word, artificial intelligence?

Well, I mean, John McCarthy, who coined the phrase, actually has said that he made a mistake.

He actually, you know, in retrospect, it was not a good idea because it kind of set the level of expectation too high for artificial intelligence.

But we, you know, we're stuck with it.

But you're quite right that, I mean, really, artificial intelligence simply just means

synthesizing, modeling, mimicking natural intelligence.

So, you know, people like Anil, myself, you know, although I'm a professional engineer, I'm actually an amateur biologist, you know, an amateur, lots of other things.

And, you know, we try and understand natural intelligence in order to make artificial intelligence.

You know, I tend to be at the kind of bottom end of the spectrum, very simple animals.

So, doing the right thing at the right time, even the simplest animals have to do that.

Because if they don't, they either starve, get eaten, or miss the opportunity to mate.

It's as simple as that.

Can I just

that definition, doing the right thing at the right time?

I mean, surely doesn't that involve some sort of value judgment?

Because if I stick my foot out and trip my husband up, to me that's doing the right thing, whereas to him, it's not.

No, the answer is no, it doesn't, because, you know, most animals, right down to single-celled organisms, have to do the right thing at the right time.

It may be a simple thing, but they still have to do it.

And it's clear that they don't have the cognitive machinery to have value judgments.

And really, of course, it's only us scientists, looking down on them through a microscope, or, you know, the microscope of robotics, who are in a sense making a judgment about what was right or wrong for that organism.

It doesn't, it either survives or it doesn't.

But I agree with Jo, actually, because even the simplest creatures, they might not have a conscious or an explicit value judgment system that, okay, yeah, it's a good idea to trip my fellow rat down the rat stairs, whatever.

But their evolution has provided them with an innate value system, an innate set of value judgments, which will define doing A rather than B, will make me more likely to survive.

But I think the problem with that definition is it's it's too general.

As you say, it applies to anything, it applies to a worm, it applies to some bacteria.

So

then intelligence becomes a bit vacuous.

So I do think we need something more.

I think we need something that underpins this idea that intelligence is something online, something that organisms do that is over and above what's provided by the immediate constraints and affordances and opportunities of the environment.

And I think, Robin, that's a bit what you said as well: that intelligence is

being more flexible than just what our reactions determine.

I mean, Joe, how do you feel about artificial intelligence?

I mean, do you look forward to a day where a robot can trip over your husband and you can just relax?

I really do.

Well, I kind of look forward to the day when I can just read a book and a robot can do everything that I would have done if I wasn't lying down on the sofa with a massive box of chocolates and an 18-year-old man.

I don't really mean that.

I'm well past that.

I'm not even interested anymore.

We'll be covering that in another episode of this series.

Do not worry.

I won't be back.

Alan, can we go?

Because you've mentioned the e-puck a couple of times there.

If you could just explain to the listeners what

you have.

So, of course, they're not smart enough to be able to move, well, very much on anything other than a perfectly smooth surface.

But in the lab, we have around 50 of these robots.

And yes, they're on the radio.

They've been on the radio before.

And we do essentially experiments where we model the kind of social behaviors that are observed in ants and

social insects.

And using this swarm of robots as a kind of microscope, if you like, to study swarm intelligence, we've been able to figure out things that the ant biologists haven't really worked out.

So here's an example.

So some ants do what we call division of labour.

So, in other words, if there's a few hundred thousand ants in the nest and there's a lot of food out in the environment, more ants will go out to forage.

If there's less food, if there's a famine, then very few ants will go out to forage, and they actually seem to adapt the ratio of foragers to resters or loafers in the nest somehow to

balance, if you like, you know, to track the amount of food in the environment.

So, you know, the question for the biologists is how do they do that?

Well, of course, it's very difficult; you know, doing experiments with real ants is really hard.

I mean, some people do.

So actually, what we do is we make models of that behaviour with a swarm of simple robots: program rules, behaviours that we conjecture are the correct behaviours, and then see if we get the same emergent or self-organizing behaviour.
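
[A minimal sketch, in Python, of the kind of response-threshold division-of-labour model Alan describes. The threshold rule and all parameter values here are illustrative assumptions, not the Bristol lab's actual code.]

    import random

    N_ROBOTS = 50        # size of the swarm
    STEPS = 200          # number of simulation steps
    food = 100.0         # food available in the environment

    # Each robot gets a fixed response threshold: it goes out to forage when the
    # perceived food density exceeds its threshold, otherwise it rests in the nest.
    thresholds = [random.uniform(0.1, 1.0) for _ in range(N_ROBOTS)]

    for step in range(STEPS):
        density = food / 100.0                              # crude "food cue"
        foragers = sum(1 for t in thresholds if density > t)
        # Foragers deplete the food and the environment slowly replenishes it,
        # so the forager/rester ratio ends up tracking the food available.
        food = max(0.0, food - 0.2 * foragers) + 1.0
        if step % 50 == 0:
            print(f"step {step}: {foragers} foraging, {N_ROBOTS - foragers} resting, food = {food:.1f}")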

Can I just say something quickly for the listeners?

If you want to know what they look like, they look like sort of see-through ramekins with lids on and they're lighting up.

And if you're a student and you don't know what a ramekin is, it's an ashtray.

It's interesting, the idea you talk about there of emergent behaviour, particularly in an ant colony, where you get very sophisticated behaviours from the colony, but the individuals are not particularly sophisticated.

I suppose that's an analogue for what we are, in a sense.

Or are we just that?

This is the question, I suppose.

We look at our intelligence.

Is it widely accepted that whatever it is, it's emergent from the hardware that we have, and therefore, in principle, we can imagine intelligence emerging from some sophisticated piece of programming?

I mean, Anil may disagree, but I think that's still an open question as to whether you know the extent to which human intelligence is an emergent property of

our bodies and our minds, or if you like, our brains.

Marvin Minsky, who was one of the main figures in AI, talked about the society of mind, that our cognitive processes, our cognitive architecture is not located in one particular place, it's not a thing by itself, it's the collective behavior of many different, more fine-grained things that our minds and brains do.

So this is kind of an old idea, I think, in AI.

And it's in some sense, it's trivially the case that

our brains are massively complex networks, about ninety billion neurons, with roughly a thousand times more connections.

So, if you counted one each second, it would take you about three million years to finish counting the number of connections in any human brain.
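
[As a quick check of that arithmetic, in Python, assuming roughly ninety billion neurons with about a thousand connections each, as stated above:]

    neurons = 90e9                              # ~90 billion neurons
    connections = neurons * 1_000               # ~a thousand connections per neuron
    seconds_per_year = 60 * 60 * 24 * 365
    years = connections / seconds_per_year      # counting one connection per second
    print(f"{years / 1e6:.1f} million years")   # ~2.9 million years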

They all speak to each other, all these neurons.

It's almost inconceivable that we don't appeal to something like emergence to understand what's going on here.

Different groups of neurons start firing together, they set up patterns within the whole brain that then constrain what individual neurons do.

This is actually where physics might come in because statistical physics can contribute a lot to understanding collective behavior and emergence.

In fact, we still don't really have a good way of measuring how emergent a phenomenon is in a physical system.

And the brain is a physical system, of course.

But I'll tell you what, Anil, you really know how to play Brian because there's always a point in the show where you can see him drifting off.

And as long as you say statistical physics could play a part, he really lights up.

The implications of this view are interesting.

I remember one of the philosophical debates, it's an old one: Searle's Chinese Room.

The picture was when Searle wrote it down that you could have a room full of people

who would enact an algorithm.

So you could put in some question,

and without knowing what was happening at all, they would move around and follow the algorithm, and out would come the answer.

And the suggestion was that that's essentially our brain.

So the first question, I suppose, is that what our brain does?

Is it something you could simulate in the computer?

Is it what we call a universal Turing machine in that sense?

It's an algorithm that we could run in principle on other hardware.

But isn't part of the Chinese room thing as well the fact that what delivers the message doesn't actually understand the message?

So that's quite an important part, isn't it?

The individual parts of the system don't actually understand the message.

Right, so understanding is a property of the whole system.

This may be true of the human too; there are many things that come out of the Chinese Room experiment.

It's most usually associated with this philosophical view of functionalism: is intelligence, is consciousness, is mind something that you can simulate in another system, such that that system will then have that property?

So, you can simulate chess playing in a computer, and that computer is actually playing chess.

It might not be playing chess in the same way that a human plays chess, but it's playing chess.

So, it's not simulating chess, it is actually playing chess.

It's the difference between simulation and instantiation.

On the other hand, if you're simulating a weather system and you're trying to predict whether a hurricane's going to form or not, nobody would expect it to get windy, or wet, inside the computer that's simulating the weather.

So, what about mind?

What about intelligence?

What about consciousness?

Is it something that when you simulate it, and you can because computers are wonderfully powerful and flexible simulation engines, that's what you can use them for?

Is it a case of simulation or is it a case of instantiation?

I think for some things, the answer is pretty clear: that if it's an overt behavior, like chess playing or perhaps like making a cup of coffee, then if the system can do it, it can do it.

But is there anyone at home inside?

Is there any subjective experience going on along with that, as there is for me at least, and for you too?

That's still the open question.

That's the question that functionalism gets at.

What's your guess?

Because, Alan, you were shaking your head and nodding at the same time.

Oh, no, he's broken down.

Get the next Alan.

I knew Alan Mark I wouldn't work properly.

I told you.

Yeah, I mean, if a thing is a simulation of intelligence, it would be absurd to say it's not really intelligent, in my view.

I mean, if you take a good example, Google Translate.

So Google Translate really does translate from one language to another.

Now,

it's perfectly okay and probably right to say, well, there's nobody at home, as it were, inside the vast Google Plex, you know, computing that actually is understanding what's going on.

But it's also true to say that no individual neuron in our brains, or indeed probably entire networks of neurons, know what's going on either.

So, you know, it's a kind of one of those slippery arguments.

I mean,

I used to be a simulationist in the sense that I thought, no, if it's a simulation, it's not the real thing.

I've changed my mind.

So, what is it?

I think it really is.

I think if you make something that behaves as if it's intelligent sufficiently well,

then I think you have to admit it really is intelligent.

So, intelligence then is just the ability to do a specific task in this case.

So, Google... like, thinking of the chess machine, well, I suppose we know that a chess machine has gone to a new level if it gloats when it beats you.

Or if Google, when it translates, when it actually looks back and it notices that through the translation, the grammar, et cetera, is inexact, it feels a level of shame.

And then we start to get... or it throws in a quip at the end of its translation.

So, the intelligence we're talking about is just a specific function, in this case.

It could be a very big function.

Yeah.

What did you feel about that?

Well, I think that sort of programming a machine to play chess is a totally different sort of intelligence, because surely there is a finite number of possibilities and they're rational and logical.

So that, for example, if you're playing chess, you wouldn't necessarily sacrifice a piece unless there was some built-in intelligent reason for doing that, shall we say.

Whereas when you're speaking about human intelligence, humans are kind of very irrational beings.

And so, what are you basing that intelligence on?

Are you basing it on a particular sort of person?

Is that person a man or a woman?

You know, you cannot predict people's reactions when things happen to them, you know.

I mean, just as an example, a friend of mine, when her boyfriend finished with her, we were at a party and she went and jumped in the lake.

Well, how, yeah, she's all right, she didn't drown.

But how would that sort of thing be inbuilt?

Not that I'm saying that robots should particularly go and jump in lakes, but you know, there are so many infinite possibilities if you're trying to program a robot to be like a human.

I think it's just simply

always going to be impossible because you're always going to be missing that final irrational bit of humanity that we all have, which means no one can a hundred percent say what we're gonna do next.

I love the jumping in the lake thing, the idea of the Terminator robot played by Arnold Schwarzenegger, which just suddenly comes up with: what would Virginia Woolf do?

I suppose the question must be, though: if that irrationality, that human behaviour, is not an emergent property based on the laws of physics, essentially operating in this biological shell, then what is it?

Well, excuse me, because I'm going to use the word paradigm, and

for that, I apologise.

But, for example, in science, when you have a scientific paradigm and it moves from one to another to a new paradigm, that leap is normally made for very irrational reasons.

It's a quantum leap and it's a piece of creativity in one person's head.

And what I'm saying is that if you're expecting robots to be intelligent, how are you going to cater for leaps forward in progress that you might want them to help you make if they don't have an inbuilt, irrational piece of something in their brain that enables them to make a quantum leap and be creative?

Don't look at me like I've gone mad.

No, I'm not.

I'm just thinking that is definitely a tenth doctorate.

That is

Creativity turns out to be one of the hardest problems to tackle in AI.

And it's almost the antithesis of the chess-playing thing, because chess is creative when humans play it, but the way a chess computer plays is, as you described, to enumerate a vast number of possibilities and run through them extremely quickly, with some heuristics that you can program in as well.
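
[A minimal sketch, in Python, of that enumerate-and-score approach: a generic minimax search guided by a heuristic evaluation. The game hooks (legal_moves, apply_move, evaluate) are assumed placeholders for illustration, not any real chess engine.]

    def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state)          # heuristic score of the position
        scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                          legal_moves, apply_move, evaluate) for m in moves)
        return max(scores) if maximizing else min(scores)

    # Toy usage: a "game" where each move adds +1 or -1 to a counter, searched 3 plies deep.
    best = minimax(0, 3, True,
                   legal_moves=lambda s: [+1, -1],
                   apply_move=lambda s, m: s + m,
                   evaluate=lambda s: s)
    print(best)   # prints 1, with both sides choosing their best moves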

That's clearly not what people do, which is why, when the first computer beat Garry Kasparov back in '97, it was a big hoo-ha, but that didn't herald the new dawn of robots taking over the world.

Far from it, it just highlighted what the real problems are.

And they are controlling these complex bags of flesh in unpredictable and unreliable environments, given sensors that don't work properly, motor actuators, arms, and legs that fly all over the place, hard to keep control over.

And we do behave irrationally, but we behave predictably irrationally.

We make the right kinds of shortcuts at the right times.

And it's that ecological smartness that we can try to understand in human brains, and not just humans, other animals as well.

And I think that's a big pitfall to avoid: thinking that AI only happens when we create the first artificial human indistinguishable from a real one.

Other animals are extremely smart in their own way.

But there is this flexible deployment of strategies to deal and exploit unpredictability that's core to intelligence.

Alan, it's a super hard problem, and we just don't know how to make artificial creativity.

And curiously, I think in order to crack the problem, I think we have to make robots that make mistakes.

In other words, make them un-robotic, because actually, that's

how we create, essentially.

And I'm reminded, so Arthur Koestler wrote an amazing book called The Act of Creation.

And one of the things he said in that book is that actually the act of telling a joke is very similar to the act of creation.

He said, really, the way to think about it is that it's the difference, small difference, between ha-ha and aha.

And, you know, actually, I think we probably... I don't see any reason in principle that we couldn't make a robot that is genuinely creative, but it would be a very, very different kind of thing to what we currently think of as robots.

And it may not be, you know, entirely satisfactory.

So this kind of crazy, random, stochastic robot that crashes around and makes mistakes, that may well be the price for it being creative.

But how much of artificial intelligence research is concerned with understanding the human mind, with understanding how we think?

And how much of it is separate to that?

I suppose the question would be: if we do create something that we define as being intelligent some way, or conscious, let's say, will it have been because we copied ourselves, or will it have emerged in a different way?

So, there's always two

ways to look at an enterprise like AI.

You can look at it from an engineering perspective, where you want to just build a system to do something that you want it to do, to be smart.

Or you can build a system to help understand how natural systems work.

I'm a neuroscientist, so that's the way I look at AI: build computer simulations of the brain as a way of understanding how our natural intelligence, our natural cognition, and perhaps our natural consciousness arises.

There's an interaction between the two, of course.

So, the more you understand about how we work as people and how other animals work, we'll be able to isolate the principles that may allow us to build more effective and useful machines.

There's a good analogy here with the history of flight.

If you just blithely copy something and think that, okay, I want to build something to do X, so a natural system does X, I'll just copy it.

This generally doesn't work.

So people who tried to build flying machines initially started by building things that flapped.

That didn't really work.

But it helped, in a way, get a hold of the laws of aerodynamics, which enabled people to build things that did fly, and they're a little bit like birds, but not entirely like birds.

I think the same thing will happen with AI and psychology, cognitive science, that simulations will help us understand what really matters, and we may end up building things that are not direct replications of humans or even other animals, but that work on the essential principles that we're striving to understand.

But this really brings up the question of it's not just the brain, it's the body and the environment.

So we have this problem with AI, that sometimes we think it's just the brain.

You can take the brain out of the body, put it in a vat, whatever, replicate it in a computer and put it on a hard drive somewhere.

Even if you've captured the individual connections of your brain, of my brain, of any brain, those change over time in ways that depend completely on being in this particular body, in this particular environment.

And you realize that you actually have to replicate the whole thing, the body, the environment, and the way all these things interact together.

And then you realize it's a much harder problem and it's not just something you can simulate inside a computer.

So your consciousness is not just contained in your head, is what you're saying.

It's a product of interaction between your body, the environment, the brain.

Well, in one way, it is.

I mean, in one way, well, I think at least, in one way, the fact that you are conscious right now of what you're conscious of is a property of your brain, the state of your brain right now.

However, to get that brain into that state right now depends on your body, depends on the environment, depends on your social environment, depends on your past history, and so on.

So, right now, it's a property of the brain, but not in general.

But isn't it that you need input? If you have no input whatsoever, if there is no experience... everything that a creature learns, it learns over time, through experience.

So if you are without any experience whatsoever, how does anything build?

How does, you know, personality, understanding, or intelligence build?

Well, I think that one group of robots that would be quite easy to build would be an England rugby team

because there's obviously not many connections going on there.

I mean, my interest really is:

are you trying to build an intelligent robot that's like a human and reacts like a human?

No, because, I think, what sort of robot do you build if you do it?

Do you build a Mother Teresa robot or do you build a Donald Trump robot?

You know, it's

what sort of robot do you want?

And it would be impossible, I think, to pin down what the most appropriate emotional responses are for a robot that could do the best things for society, as it were.

So, in that case, does that mean you're trying to build a robot that is able to carry out an almost infinite number of tasks but not turn on you or jump in the lake?

Well, my personal view is that we shouldn't be building humanoid robots at all.

I think there are good ethical reasons for not building humanoid, and especially android, robots; in other words, those that are really high-fidelity, human-looking robots.

And, you know, why do I think it's a bad idea?

Well, I mean, there are lots of reasons.

Firstly, it's like a giant vanity project for humanity.

It's great in science fiction, but actually, you know, 99.99 percent of all the real-world things, the useful things that don't put people out of jobs, that we'd like robots to do in the world, do not need a

human-like android body.

If you want a driverless car, it would be absurd to build a humanoid robot and get it to get into the car and

drive it like that.

But

in this discussion, as Jo said earlier,

we're talking about what would we want these things to do that we create.

But of course, I suppose ultimately the question is:

could we end up creating something that is, let's say, intelligent or conscious?

Maybe I shouldn't mix those two terms, but could we end up creating it accidentally, if indeed intelligence is an emergent property, and therefore be in a position where we can't continue to simply control this thing and say: I've built this thing, it's going to do this job, and this job, and this job.

Do we get to a level of sophistication where it emerges into this being that, I suppose, you'd have to give rights to, et cetera?

Exactly.

I mean, it's a deeply interesting question.

So this is the emergence of what you might call artificial subjectivity.

So, in other words, the moment that the AI, in a sense, wakes up.

It's no longer a zombie.

And there are some serious philosophers, Thomas Metzinger, for instance, a German philosopher, who has written a lot about, as it were, the ethics of artificial suffering.

So, the question is: if you've built this fabulous simulation of intelligence and you switch it on, how do you know that that thing is, firstly, experiencing artificial subjectivity, in other words, a kind of phenomenal self?

And what do you do about it if it is?

Because, I mean, I'm very persuaded by Metzinger's arguments: there's a very strong probability that that thing that you've just switched on is not happy. Seriously.

And so you've got the, you know, you've got the essence.

What possibility?

What percentage-ish possibility do you mean?

I don't really understand that, how you can ascribe an emotion to a machine.

Why is there a big possibility it's not happy?

Well, stuff may well emerge that we just didn't expect.

There's a fundamental asymmetry to this question.

If we accidentally build a robot that's happy and finds joy and everything, well, no problem, really.

But if we accidentally build a robot that can suffer, that's something we need to be much more worried about.

I mean, we face this in not just with robots, but with other animals, of course, as well.

And Jeremy Bentham, the philosopher, put this back in the 18th century.

The question is not whether they can reason, not whether they can talk, but whether they can suffer.

And that's the fundamental question that prescribes our behaviour towards things that are not ourselves.

And I've been thinking, for various reasons, about fish last week, and do fish...

Fish and chips? Well, that's next week, yeah.

Chips definitely suffer when I get my hands on them.

But do fish feel pain?

We tend not to think about that for precisely the reasons that we tend to inhabit this uncanny valley as well.

Fish are sufficiently different from us when we see them swimming about that we don't tend to ascribe any of the states that we ourselves experience.

Yet fish have pain receptors, they have brains, they're quite different, their behaviors are quite simple.

But the ability to feel pain and to suffer is arguably the most fundamental of all emotional states because it's about self-preservation.

It's not about anticipatory regret or shame or guilt, which require much higher levels of reasoning and sophistication and social intelligence.

So, there are very good reasons to at least pre-emptively think about this: if we understand how human pain, and more generally suffering, arises, the general mechanisms, we wouldn't necessarily just want to build them and assume that they're just going to be simulations.

This is quite problematic, isn't it?

Because what you're suggesting is that we're not really in full control of this research.

I suppose not in the sense that we're going to create something that's going to destroy us,

but in a sense, as you said, there are emergent properties.

And as we get more sophisticated at building more and more sophisticated systems, then these issues may arise.

So, do you think, therefore, you need a philosophical framework before you proceed with the research at that level?

I think we do.

I think we need an ethical framework underpinned by philosophy, and I'm serious about this.

I think that we're rapidly approaching the point where robotics and AI research needs to have ethical approval and continuous ethical monitoring, if you like, in exactly the same way that we do right now for human-subject research, clinical and medical research.

I like to think we need a worry budget.

I mean, we can worry about things, but there's a finite amount of worrying any of us can do.

There are these very catastrophic but not in principle impossible things that might happen, like accidentally building a robot that can suffer.

Might be as rare as accidentally building a 747, but it might happen.

There's building robots that might then build other smart robots and enslave us all into the depths of their singularity.

That might happen, but it's very rare.

But there are many other things that we should worry about much more.

This is the thing: it's not AI we need to worry about.

That's not the problem.

It's real stupidity that's the problem.

And we can already interface with... well, look at Robin.

But one of the limitations of our minds is seeing the consequences of our actions at a distance.

Our minds have evolved to deal with the very local in time and space and small groups.

It's very hard for us to really feel, and this gets back to emotions again, the consequences of our actions over great distances of time and space.

So, one thing I'm, for example, very concerned about is the use of AI in military drones, where people can make actions happen at a distance.

Well, I was just going to say that, because if you think about it in some ways, what emerges is going to be down to who's making it, you know.

And let's say someone managed to build a robot army that was indestructible and used it for sort of nefarious purposes.

And to me, the big problem is humans and what they're like, really, because they have the power to create these things.

And we know that there's a pretty sizable minority of humans who want to do damage to other people in the world and

aren't nice like you two and want to protect sad robots, you know.

It's true, though.

It's true.

I can see a lot of people who would go, right, let's have a robot army and let's invade Europe or whatever it was.

And that's the issue.

So it's kind of it.

It's the whole internet debate as well, isn't it?

The internet is a marvellous thing, but you've got good and evil on it.

And is the evil too evil for it to exist?

And should we backpedal?

Which, of course, we can't now.

But I think

as we go forward, the same thing potentially might happen with robots as well, which I think is scary and it's down to human nature.

Absolutely.

And for me, this is why we need public engagement in robotics.

We need public debate, because we as a supposedly democratic society should decide not only the robots that we want in our lives and in society, but the ones that we do not want.

You know, that's really important.

And in order to make that decision, you know, we need to all understand what's going on.

I mean, I will just give a little wave of a flag for the international campaign for robot arms control.

It exists.

Find the website and you can sign up.

It's a really good movement against this very worrying militarisation of autonomous robotics.

The potential for AI to be a good thing, though, I think is more apparent when you think beyond just robots.

Robots are very important but very difficult.

AI is much more general than that, and we're already seeing some enormous benefits of AI in society.

So search engines are very useful for us.

That's AI working behind search engines.

Driverless cars are just around the corner.

Medical diagnosis is an area where better decisions are being made using machines that are able to do pattern recognition in a way that complements humans and doesn't replace them.

And it's when we focus on what AI systems can do that complement rather than replace that I think we can see major benefits to society.

Driverless cars, I mean, they stand to reduce road traffic deaths almost to zero.

Why is it that so many science fiction filmmakers in the 70s and 80s, and on TV as well, decided that all robots would be a little bit prissy and camp?

Was it, if you look at, you know, C-3PO, Zen from Blake's 7, K9 from Doctor Who?

And it's like, why did they decide that, oh, no, don't do that, which is also very similar to Richard Dawkins, but it's like,

Dawkins is one.

I knew it.

But that is, but that was an interesting kind of.

Why do you think that is? Do you think there is anything in that idea, that they go, let's try and make sure this is non-threatening... anyway.

I'm not saying there shouldn't be.

I'm not saying that's, I think that's great as well.

So we asked our audience,

don't you do it, Brian?

I've got a robot that does my digging now.

I can't believe it.

And he's broken the shovel.

Anyway, so the.

You see, you worry me because your skin doesn't age.

Why would that be?

Like Ash from Alien.

So we asked the audience: tonight's show is about artificial intelligence.

What are you most looking forward to when robots become our masters?

The inevitable retribution that comes from having always opened the microwave one second before the timer goes off.

I'm looking forward to having only robot friends and then turning them off when I've had enough.

Us getting ill and robot doctors telling them to turn us off and then on again.

The invention of a Brian Cox bot to travel around the country shouting, everything is physics.

They're getting more awful lately.

Having a sexy Brian Cox android,

a Brian Cox robot,

giving a sex bot a damn.

Good lord, no, I'm not reading that.

Racy audience.

Well, I hope those answers have helped you know where the public want you to go.

Thank you very much to our guests, Anil Seth, Alan Winfield, and Jo Brand.

I don't think we have actually got time for a listener's email.

Let's do that another time.

We have had plenty of emails, and if you do have any questions, do send them in.

We'd love receiving them.

There is just time, though, for a quick trail because we've been recording a couple of specials about general relativity today, and they'll be out three months ago.

That's space-time

and scheduling.

Space-time and scheduling.

A minor perturbation in the metric.

Scheduling is a minor perturbation in the metric.

You're not going to work for the Radio Times, are you?

Thank you very much for listening.

Goodbye.

That was the Infinite Monkey Cage podcast.

I hope you enjoyed it.

Did you spot the 15 minutes that was cut out for radio?

Anyway, there's a competition in itself.

What do you think?

It should be more than 15 minutes.

Shut up.

It's your fault.

You downloaded it.

Anyway, there's other scientific programs also that you can listen to.

Yeah, there's that one with Jimmy Alkaseltzer.

Life Scientific.

There's Adam Rutherford.

His dad discovered the atomic nucleus.

That's Inside Science.

All in the Mind with Claudia Hammond.

Richard Hammond's sister.

Richard Hammond's sister.

Thank you very much, Brian.

And also, Frontiers, a selection of science documentaries on many, many different subjects.

These are some of the science programs that you can listen to.
