How AI is AI?
Brian and Robin (the real ones) are joined by mathematician Prof Hannah Fry, computer scientist Dr Kate Devlin and comedian Rufus Hound to discuss the pros and cons of AI. Just how intelligent is the most intelligent AI? Will our phones soon be smarter than us – will we fail a Turing test while our phone passes it? Will we have AI therapists, doctors, lawyers, carers or even politicians? How will the increasing ubiquity of AI systems change our society and our relationships with each other? Could radio presenters of hit science/comedy shows soon be replaced with wittier, smarter AI versions that know more about particle physics... surely not!
New episodes released Wednesdays. If you're in the UK, listen to the newest episodes of The Infinite Monkey Cage first on BBC Sounds: bbc.in/3K3JzyF
Executive Producer: Alexandra Feachem.
Press play and read along
Transcript
Speaker 1 This BBC podcast is supported by ads outside the UK.
Speaker 2 This podcast is sponsored by Talkspace. You know, when you're really stressed or not feeling so great about your life or about yourself, talking to someone who understands can really help.
Speaker 2 But who is that person? How do you find them? Where do you even start? Talkspace. Talkspace makes it easy to get the support you need.
Speaker 2 With Talkspace, you can go online, answer a few questions about your preferences, and be matched with a therapist.
Speaker 2 And because you'll meet your therapist online, you don't have to take time off work or arrange childcare. You'll meet on your schedule, wherever you feel most at ease.
Speaker 2 If you're depressed, stressed, struggling with a relationship, or if you want some counseling for you and your partner, or just need a little extra one-on-one support, Talkspace is here for you.
Speaker 2 Plus, Talkspace works with most major insurers, and most insured members have a $0 copay. No insurance? No problem.
Speaker 2 Now get $80 off your first month with promo code SPACE80 when you go to Talkspace.com. Match with a licensed therapist today at Talkspace.com.
Save $80 with code Space80 at talkspace.com.
Speaker 3 Suffs!
Speaker 5 The new musical has made Tony award-winning history on Broadway.
Speaker 6 We demand to be heard! Winner, best score! We demand to be seen! Winner, best book! We demand equality!
Speaker 8 It's a theatrical masterpiece that's thrilling, inspiring, dazzlingly entertaining, and unquestionably the most emotionally stirring musical this season.
Speaker 9 Suffs.
Speaker 8 Playing at the Orpheum Theater, October 22nd through November 9th.
Speaker 6 Tickets at BroadwaySF.com.
Speaker 10 With the Wealthfront Cash Account, you can earn 4% annual percentage yield from partner banks on your cash until you're ready to invest.
Speaker 10 The cash account grows your money with no account maintenance fees and free instant withdrawals whenever you need it. Money works better here.
Speaker 10 Go to Wealthfront.com to start saving and investing today. Cash account offered by Wealthfront Brokerage LLC, member FINRA/SIPC. Wealthfront is not a bank.
Speaker 10 The APY on cash deposits as of December 27, 2024 is representative, subject to change, and requires no minimum. Funds in the cash account are swept to partner banks where they earn the variable APY.
Speaker 9 BBC Sounds, music, radio, podcasts.
Speaker 11 Hello, I'm Robin Ince. And I'm Brian Cox.
Speaker 1 You're about to listen to the Infinite Monkey Cage.
Speaker 12 Episodes will be released on Wednesdays, wherever you get your podcasts.
Speaker 1 But if you're in the UK, the full series is available right now, first on BBC Sounds.
Speaker 12 Hello, I'm Brian Cox.
Speaker 1 I'm Robin Ince, and this is The Infinite Monkey Cage.
Speaker 1 Now, as regular listeners will know, I always like to start this show with a quote from Chuck D from Public Enemy, whereas, of course, Brian normally likes to start the show with basically quoting any holographic-based Swedish pop band.
Speaker 1 So, obviously, that means we normally have a reference to gimme, gimme, gimme a two-dimensional man after midnight, or supersymmetric trouper.
Speaker 1 That, by the way, was Brian's, and he was so proud of that line in the office, so do enjoy it.
Speaker 12 Supersymmetric trouper.
Speaker 12 He threw it away, that's why.
Speaker 1 And I'm sure everyone will remember the Bletchley Park special where Brian opened with, so when you're near me, darling, can't you hear me? Dot, dot, dot, dash, dash, dash, dot, dot, dot.
Speaker 12 It's not only rock bands that are holographic, actually, the study of quantum gravity recently, particularly in relation to black holes, has told us that the whole universe might be a hologram.
Speaker 12 It's true.
Speaker 12 Quantum gravity.
Speaker 1 Do you know what? I feel that was a very limited woo for the revelation that we may well all be holographic.
Speaker 12 I just said that our reality is potentially a hologram.
Speaker 12 It's because if you ask what the information content of a black hole is, it turns out that it's equal to the surface area of the event horizon in square Planck units.
Speaker 1 That was a woo that came from "we'd better woo just to move this thing along".
Speaker 1 But as there is no suitable ABBA lyric today, I am actually genuinely quoting Chuck D.
Speaker 1 When I saw Public Enemy at Glastonbury, one of his pieces of advice was you've got to try and be smarter than your smartphone. There's no point being a dumb fellow with a smartphone.
Speaker 1 Though he didn't say fellow.
Speaker 1 He said mother fellow.
Speaker 1 He didn't say mother fellow.
Speaker 11 Anyway,
Speaker 12 shall I tell you what the show's about?
Speaker 11 Go on. Yeah.
Speaker 12 Will our phones soon be smarter than us? Will we fail a Turing test while our phone passes it? Will we have AI therapists, doctors, lawyers, carers, or even politicians?
Speaker 12 How will the increasing ubiquity of AI systems change our society and our relationships with each other?
Speaker 1 Joining us to discuss whether politicians will one day dream of electoral sheep are a multidisciplinary computer scientist, a multi-talented mathematician, and a multi-story car park.
Speaker 1 This is, to be honest... ChatGPT really is not working as well as I'd hoped for this particular introduction.
Speaker 12 And will, you see, you missed it again, will politicians one day dream of electoral sheep?
Speaker 11 And our panel are.
Speaker 9 I'm Professor Hannah Fry, I'm a mathematician, and the most ridiculous rumour about artificial intelligence that I've ever heard is an algorithm that claimed to be able to tell whether you were gay or straight with an 81% accuracy based on a single photograph of your face.
Speaker 9 And when I say rumour, I mean it's bollocks.
Speaker 13 I'm Dr. Kate Devlin. I'm a computer scientist. And the most ridiculous rumour I've ever heard about artificial intelligence is that it poses any kind of existential threat.
Speaker 11 My name's Rufus Hound, and I am the host of BBC Radio 4's My Teenage Diary.
Speaker 11 And the most exciting AI rumour that I've heard is that it's already taken over agricultural food production, which means Old MacDonald's out of a job, AI, AI, O.
Speaker 11 This is our panel!
Speaker 1 Hannah, before we get started, did you actually get any of the mechanics of this idea that this one photograph would, you know, give away sexuality or gender or whatever it may be?
Speaker 9 Okay, so it said that you could do it with 81% accuracy, right? And I think that there is a big clue in that as to how good this algorithm actually was.
Speaker 9 Because, okay, first off, there's all of the moral and ethical implications, horrendous.
Speaker 9 But you can come up with your own algorithm that can like blow that one out of the water and do way better in terms of accuracy and doesn't need any messy machine vision, none of that messy coding.
Speaker 9 All you do is you just take everybody in the entire world, you just label everybody as straight, and then, because 94% of adults identify as heterosexual, you beat that other one by an amazing 13 percentage points of accuracy.
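Hannah's trick here is the standard base-rate objection to headline accuracy numbers, and it can be checked in a couple of lines of Python. The 94% and 81% figures are the ones quoted in the conversation; the function name is purely illustrative:

```python
# A "classifier" that ignores its input and always predicts the majority
# class scores an accuracy equal to the base rate of that class.

def baseline_accuracy(base_rate):
    """Accuracy of always predicting the majority class."""
    return base_rate

claimed = 0.81                      # accuracy claimed for the photo-based algorithm
trivial = baseline_accuracy(0.94)   # just label everybody "straight"

print(f"{trivial:.0%}")                   # 94%
print(f"{trivial - claimed:.0%} better")  # 13% better
```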
Speaker 11 Or you just go the other way and get all the pictures off Grindr.
Speaker 11 Well,
Speaker 13 that's not far from what they actually did. So, this is the Stanford "gaydar" paper, and it's quite controversial.
Speaker 13 And they basically took a bunch of photos without anyone's consent and then ran them through this algorithm and, you know, said, yeah, this percentage is gay.
Speaker 13 But when this was repeated by a master's student at a South African university, they did it without the pictures and it had pretty much the same results. So actually the pictures were doing nothing.
Speaker 9 Yeah, they took the original image and then they blanked out people's faces, and it was basically what was going on in the background.
Speaker 9 So it was like, you know, people were at a Steps concert.
Speaker 11 Flamboyant hats, that kind of thing.
Speaker 9 That was the real clue that they were using.
Speaker 12 Kate, can we start with the definition? So, we're talking about AI systems. So, do you have a simple definition of an AI system? What is it?
Speaker 13 No.
Speaker 13 I don't have a simple definition. It depends. There are many different definitions.
Speaker 13 Let's go with an artificial intelligence system is something that uses a degree of automation that might be self-learning in some way and that can take huge amounts of data and then make predictions with it.
Speaker 13 That's kind of a reasonable working definition.
Speaker 12 So, it's a predictive system
Speaker 12 that can learn?
Speaker 13 Yes, there are many different types of AI, but let's go with the machine learning one that people mostly refer to when they're talking about artificial intelligence. And that's the system that,
Speaker 13 well, basically, it's just applied statistics, right, Hannah?
Speaker 9 Yeah, I mean, I think the nicest definition that I've seen was someone on Twitter said, what is artificial intelligence? And there was a reply, which was a bad choice of words in the 1950s,
Speaker 9 which I think is absolutely true, because you're completely right.
Speaker 9 That a more accurate description, rather than saying that we've been through this revolution in intelligence, is to say that we've been through a revolution in computational statistics, which is much, much less sexy.
Speaker 9 I mean, admittedly, depending on how you feel about statistics.
Speaker 9 But, you know, ultimately, we are talking about things here that are just grids of numbers that are analyzing data, and they're doing it in a way that is a step change from what we had before, both in terms of the computational power that we have and the algorithms that we have.
Speaker 9 But, you know, fundamentally, this is just statistics.
Speaker 1 So, what would be the simplest thing that could be given the term AI?
Speaker 13 That's actually quite a controversial argument because
Speaker 13 you could pick lots of things. I mean, if you're carrying around a smartphone, you're carrying around AI, for example.
Speaker 13 And a lot of people may not realize that, but if you're using your phone for things like maps to get you places, that uses AI to find your route.
Speaker 13 It could be something like a robot vacuum cleaner that uses AI to steer around objects in a room. There are many, many different applications.
Speaker 13 So it's not just confined to the things that we're seeing at the moment that are quite fashionable, like Chat GPT.
Speaker 12 If you look at one of the map applications, Google Maps or the Apple Maps, what component of what that's doing would cause you to label it as an AI system?
Speaker 13 It's probably got lots of routes on there and it's able to make a judgment about what likely routes are.
Speaker 13 It's taking in lots of data about conditions and times of day and likelihoods of traffic being particularly busy.
Speaker 13 And it's able to come up with a route that satisfies the shortest distance or the shortest time. So there are calculations going on that predict what the likely route would be.
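The route calculation Kate describes is, underneath, a classic shortest-path search over a weighted graph; the learned part is estimating the edge weights from traffic data. A minimal sketch, with an invented road network and Dijkstra's algorithm standing in for the real system:

```python
import heapq

def shortest_time(graph, start, goal):
    """Dijkstra's algorithm over edge weights in minutes."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, minutes in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + minutes, neighbour, path + [neighbour]))
    return float("inf"), []

# Hypothetical road network: travel times in minutes between junctions.
roads = {
    "home": [("A", 5), ("B", 2)],
    "A": [("office", 4)],
    "B": [("A", 1), ("office", 10)],
}
print(shortest_time(roads, "home", "office"))  # (7, ['home', 'B', 'A', 'office'])
```

A real maps service would recompute those edge weights continuously from live traffic, which is where the prediction comes in.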
Speaker 9 I think the key point here is about learning, right? So, I mean, at least that's how the modern definition of artificial intelligence is loosely used.
Speaker 9 So, the example that I like to think of is if you have a smart light bulb in your house, you can program it to say, okay, turn on at six o'clock, dim at 9 p.m., and turn off at 11, right?
Speaker 9 That's kind of just like a computer program that's doing it.
Speaker 9 But if you had a light bulb that learned your behavior, so that was like checking your patterns, that you tend to do something in the summer, you tend to do something in the winter, and then picks up on the statistical patterns that you're creating and adjusts its decision-making on that basis, that I think becomes artificial intelligence.
Speaker 1 I love the idea of a smart light bulb because it immediately makes me think of a light bulb having an idea, and then what would appear above a light bulb when it had an idea?
Speaker 13 You know the way the internet is built on cats, right? It basically exists... it's just cats all the time. Well, Google researchers actually used that to come up with deep learning.
Speaker 13 So, what they did was they had this algorithm, and they decided that they would let it go and look at thousands of pictures of cats online. And the algorithm then learned what a cat looked like.
Speaker 13 No one had told it what a cat looked like, but it had come up with a series of criteria, a certain threshold that it had to meet to be defined as a cat. Didn't always get it right.
Speaker 13 But this led to deep learning. This is where you can chuck huge amounts of data at an algorithm and it will find patterns for itself. You can do it another way.
Speaker 13 You can tell the algorithm what things are. You can label it. And that's supervised learning. So you can say, here is a picture of a cat. Show me other pictures that look like this cat.
Speaker 13 The algorithm will check, you know, does it have four legs? Does it have a back? Is it a chair or a cat? Sort of thing. You know,
Speaker 13 There's room for error here. But that was always very, very difficult for computers to do. It's very easy for us: from birth, we're distinguishing all the objects in an image, for example.
Speaker 13 We can tell something's a bicycle, whether it's on the ground or leaning against a wall or there's someone on it. A computer can't do that.
Speaker 13 So, with the cats, then it grew its own internal representation of what a cat is like. It has no understanding of what a cat is, but it knows one when it sees one, or its own idea of one.
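This is not the actual Google system, just a toy version of the supervised setup described above: labelled examples go in, and the learned "internal representation" is nothing more than the average example of each class. The features and labels here are invented for illustration:

```python
# Toy supervised learning: [legs, pointy_ears, purr_volume] feature vectors.

def train(examples):
    """Average the feature vectors per label -- a crude internal representation."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(features))
        sums[label] = [p + f for p, f in zip(prev, features)]
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Pick the label whose average example is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

labelled = [([4, 2, 9], "cat"), ([4, 2, 8], "cat"),
            ([4, 0, 0], "chair"), ([4, 0, 1], "chair")]
model = train(labelled)
print(predict(model, [4, 2, 7]))  # cat
```

As Kate says, the model "knows one when it sees one" only in the sense that new examples land near its stored averages; it has no concept of a cat.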
Speaker 1 So, with that change, you know, those things where you have to, when you're signing in for something and they give you nine pictures.
Speaker 1 So, that idea of which of these pictures has a bicycle in it, do we now already have to upgrade the way that that supposed use of security is? Or is that an illusion to make us feel more secure?
Speaker 13 That's you training the algorithm. So when you see a capture like that and it says click on all the squares of traffic lights, that's you confirming that there are squares of traffic lights.
Speaker 13 And so a self-driving car now has that information from you.
Speaker 12 That's the key point, isn't it? The other component of this is user feedback. So we train the AI.
Speaker 13 Yeah, it takes in lots of data from us as well.
Speaker 13 So, yeah, there are people whose job it is out there to sit and label images, so they'll get a bunch of images of traffic scenes, for example, if they're trying to program self-driving cars.
Speaker 13 And it's their job to segment the image, to click on all the different objects in the image and label them so that the machine learning system can identify.
Speaker 12 It sounds, Rufus, really benign.
Speaker 12 What is your view of AI? If I said before any of this discussion, you're coming onto this show tonight, artificial intelligence, AI, what's the first thing that pops into your head? Is it Google Maps?
Speaker 11 No, because truth be told, I've been absolutely obsessed with this for about six months.
Speaker 11 So this doesn't come to me completely freshly. The best version I've heard of why it's not intelligent is, I think it's called the Chinese Room? Oh, yeah, it's good.
Speaker 12 Searle's Chinese Room.
Speaker 11 Which is essentially this, right? Imagine yourself in a room, and two Chinese characters, you know, Mandarin, come through a slot in the wall.
Speaker 11 And you look at them, you've got no idea what they mean, but there's a slot the other side of the wall, so you take a punt. You go, Well, I'll post this one back out the other side.
Speaker 11 And the green light comes on. You think, brilliant, okay, I've got that now. So then another two come in, and you know which one was the right one, so you put that through. Another green light.
Speaker 11 Lovely, and now three come, and you think, oh, well, maybe I have to do it in a different order. However complex the strings of Chinese symbols coming into the room become,
Speaker 11 you have worked out, through trial and error, what to post back out in what order. But at no point can you speak Chinese. And it was that that made me go: oh, it's fine.
Speaker 11 Because up until then it does just sound absolutely terrifying. But understanding that it is a procedural, computational game, ultimately of trial and error, immediately you begin to see: oh yes, right, no, it is just a computer program. Because up until then, really, the thing that had blown my mind was things like the guy from Google, where they had a language model running and he was conversing with the language model.
Speaker 11 And the engineer, over time, became absolutely, wholly convinced that this thing was sentient. And wrote to his bosses and said, You cannot turn this off, it's like extinguishing a life.
Speaker 11 And they fired him.
Speaker 11 Right? There's a lot about it that feels very terrifying.
Speaker 11 I remember watching Stephen Fry talking about Prometheus, and we will finally now, as human beings, be on the planet with an intelligence that we know is greater than our own.
Speaker 11 But is that intelligence, or is that just an algorithm that is able to produce a simulacrum of intelligence? And it seems like it is that.
Speaker 9 That, because this is the thing, right? Of course, there are algorithms that appear superhuman, but we have created tools that are superhuman for a really long time.
Speaker 9 I mean, forklifts are superhuman, you know? And like, no one is kind of looking at Chat GPT as though it's a forklift.
Speaker 9 The point that you make there, that there's no real understanding of what it's manipulating. I think that's completely true.
Speaker 9 I think no algorithm that's ever been created has a conceptual understanding of what it's manipulating.
Speaker 1 What about with Chat GPT, though? Because this does seem to have really caught people's imagination.
Speaker 1 But I put in a thing, you know, when everyone was playing around with it to get it to write a comedy routine, and it just came up with this kind of soulless wordplay that I sold to Jimmy Carr.
Speaker 11 You clap, but he got 10 million quid doing it.
Speaker 1 But that sense that to me, again, from a very uneducated eye, it just seems like a cut and paste system.
Speaker 1 So, that level of invention, the important part of creativity, the important part of sentence structure, and that individuality, doesn't seem to be there yet.
Speaker 9 I mean, I think that actually it kind of is there. I think it depends on how you prompt it. But one thing I would say, though, is that when you get the multimodal examples of generative AI, right?
Speaker 9 So, imagine ChatGPT, but one that can watch all the videos on the internet, read all of the books, and see all the images as well.
Speaker 9 Then, when you start being able to translate between different modes,
Speaker 9 actually, then I think that you do get some grounding. So, you know, if you've watched all of the videos on the internet, you kind of have a sense of how gravity works.
Speaker 9 And if you can translate that between text and video, that is, I think, a little bit more of a step change.
Speaker 1 That's a threat for you, Brian.
Speaker 12 Could you describe, I don't know, briefly, if it's possible, what ChatGPT actually does?
Speaker 11 How does it work? Right.
Speaker 13 If I say to you, A, B, C, D, you then say...
Speaker 13 Please, say it.
Speaker 13 A, B, C, D, E, F, G, yeah. It's a completion thing.
So there have been chatbots for ages, but what makes ChatGPT and other large language models so good is, well, one, large.
Speaker 13 They're really, really big. They can take in millions and millions of pages of data. But also, they have this architecture called Transformer. That's what the T stands for in ChatGPT.
Speaker 13 It's able to provide context, which was never really there before. So it pays attention to particular parts of the sentences. So it's not just this completion of A, B, C, D, E, F, G.
Speaker 13 It can go further than that. And Rufus's example of saying, Well, what if this thing has been talking to it for a while and it suddenly sounds as if it's alive?
Speaker 13 Well, it might sound like that because it's been trained on all our films and it's been trained on sci-fi that we've written that's out there on the internet. And that's the thing.
Speaker 13 It's got all of our content.
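The completion idea from a few lines back (A, B, C, D → E, F, G) can be sketched as a lookup table of which token tends to follow each pair of tokens. This is only an analogy: a real large language model learns a vastly richer conditional distribution with a Transformer, not by counting.

```python
from collections import Counter, defaultdict

# Toy next-token model: count which token follows each pair of tokens in the
# training text, then always predict the most common continuation.
training_text = "a b c d e f g " * 5 + "a b c d e f g"
tokens = training_text.split()

follows = defaultdict(Counter)
for t1, t2, t3 in zip(tokens, tokens[1:], tokens[2:]):
    follows[(t1, t2)][t3] += 1

def complete(prompt, n=3):
    """Extend the prompt n tokens, one most-likely token at a time."""
    out = prompt.split()
    for _ in range(n):
        context = (out[-2], out[-1])
        if context not in follows:
            break
        out.append(follows[context].most_common(1)[0][0])
    return " ".join(out)

print(complete("c d"))  # c d e f g
```

The "context" here is only the last two tokens; the attention mechanism Kate mentions is what lets a Transformer condition on relevant tokens from anywhere in a long passage instead.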
Speaker 11 But that's why Microsoft had to turn theirs off. Because I can't remember which one it was that launched. It might have been ChatGPT.
Speaker 11 Suddenly, other labs that had been working on AI technology were like, oh, we're doing it too. Microsoft put it out there.
And the Microsoft AI showed jealousy. It showed truly bad vibes.
Speaker 11 It is properly scary. But I think that what makes it most scary is that it's being used by us.
Speaker 11 We live in a world where the more efficiently you can do something, the more of it that will exist. So, therefore, most of human creation ceases.
Speaker 11 Well, in fact, creativity as a thing ceases to be a human concern because you can ask a computer to generate 50,000 versions of the next episode of EastEnders and whittle it down to the one that will work.
Speaker 11 Great. Well, now we don't need any of the actors, any of the camera crew, any, any, any, any.
Speaker 11 It's about efficiency.
Speaker 1 But that's the evolution of a society as well, isn't it? That the job's changed.
Speaker 11 It's how quickly it can do it. But never mind.
Speaker 9 But there was one point in your description which you sort of glossed over, which was generate 40 episodes, pick the best one. That is something that only a human can uniquely do, right?
Speaker 11 Do you truly believe that? I really do. But how long?
Speaker 9 But I think, I think indefinitely, really, really, I do. And I think the reason is that there is something totally human about caring about other humans.
Speaker 9 The example that I always think about is: do you remember Alexander McQueen, right?
Speaker 9 He did this show, and the big finale to one of his shows was he had a robot that's used to spray paint cars, and he had it spray-painting a dress, okay?
Speaker 9 And it was so mesmerizing, but the thing that made it amazing was that the dress was being worn by this model, and so she was there kind of like reacting to it as it was like spraying her in the face and stuff.
Speaker 9 And the thing is, if you took that girl out of the equation, right, if she wasn't there and you just had a robot spray-painting a dress, it wouldn't be interesting at all.
Speaker 9 There was nothing interesting about that. And I think, in the same way, if you had a robot that could cross a tightrope, there's no jeopardy; it's not interesting.
Speaker 9 I think that humans are so intrigued by other humans and other human stories, and I don't think that will ever go away.
Speaker 11 No, I think that's absolutely right. However, I think of TikTok, if anyone uses TikTok, right? Micro videos, and you go swipe, swipe, swipe. At the moment, there are people making those videos.
Speaker 11 But what happens when TikTok is just the AI that says, I can make a thing that looks like people talking about the thing that you like?
Speaker 11 You no longer need the creator and the corporation says this is fantastic. We haven't got to pay anyone now.
Speaker 14 Hello, I'm Greg Jenner, host of You're Dead to Me, the comedy podcast from the BBC that takes history seriously.
Speaker 14 Each week, I'm joined by a comedian and an expert historian to learn and laugh about the past.
Speaker 14 In our all-new season, we cover unique areas of history that your school lessons may have missed, from getting ready in the Renaissance era to the Kellogg brothers.
Speaker 14 Listen to You're Dead to Me now, wherever you get your podcasts.
Speaker 5 Suffs, the new musical has made Tony award-winning history on Broadway.
Speaker 6 We demand to be heard. Winner, best score.
Speaker 5 We demand to be seen.
Speaker 6 Winner, best book. We demand equality.
Speaker 4 It's a theatrical masterpiece that's thrilling, inspiring, dazzlingly entertaining, and unquestionably the most emotionally stirring musical this season.
Speaker 8 Suffs, playing at the Orpheum Theater, October 22nd through November 9th.
Speaker 6 Tickets at BroadwaySF.com.
Speaker 12 Kate, as a computer scientist, just in your view, because I know it's controversial, but do you think there's a limit to how intelligent, and we can speak about how we would define that, but how intelligent a computing device can become?
Speaker 13 Right now, yes, because it is not conscious or sentient. And that might never happen. There's a huge area of discussion and debate in cognitive science and in AI. We don't know.
Speaker 13 Some people say, yes, it's inevitable: from this machine will come some glimmer of self-awareness. Others think it couldn't possibly happen at all.
Speaker 13 And I'm just going to be agnostic and sit on the fence.
Speaker 12 The natural question then is: you know, people may know that the famous example would be the Turing test that Alan Turing put forward.
Speaker 12 So, how would we determine whether this thing, ChatGPT or whatever it is, is now in some sense self-aware?
Speaker 13 We can't. We don't have a test for consciousness. In fact, the Turing test is not a test of intelligence. It's a test of deception.
Speaker 13 It's can you deceive someone into thinking that this computer can think? And I have no test to find out if any of you are conscious. I'm just going to take it for granted that you are.
Speaker 13 But there's no way of telling. People have tried, but yeah, there's just no way. I'm just going to assume. That's David Chalmers, yes.
Speaker 12 But this, of course, matters, though, as Rufus said, on social media, it matters, of course, because we know about this problem.
Speaker 12 We know that there are bots and there are bot farms and there are things that influence our politics and our opinions which behave in as far as you can tell online as a human being.
Speaker 12 So it's an important issue, isn't it, to tell what you are talking to.
Speaker 13 Well, yes, because one of the reasons is because humans get very cross if they find out they've been deceived. So if they know it's a bot, they're kind of okay with it and know what to expect.
Speaker 13 But if they find out they've been deceived, they get pretty angry about it.
Speaker 13 But yes, you could be interacting with a bot, you could be interacting with something, you could strike up an online friendship and then find out later down the line that you've actually made friends with a bot, perhaps.
Speaker 9 Have you heard about the minimal Turing test? No? It's this really brilliant paper that was published a couple of years ago. Same setup, right? There's a closed room, and behind the door is a judge.
Speaker 9 You and a robot are standing there, and you both have to convince the judge that you're human, but you only get to submit one single word. Okay, so no long conversation.
Speaker 9 Anyway, so in this paper, what they did is they tested it on thousands and thousands of people and collected the words that they felt marked them out as human.
Speaker 9 And there were these really clear patterns that appeared. So there were words like love; the word human as well came up a lot. There was also quite a lot of people talking about pizza.
Speaker 9 And then there was like an entire category that was just bodily functions and profanities, which I quite like.
Speaker 9 And then what was intriguing is that they then took pairs of words and they tested them on thousands of people to see which word felt like it was more human than the other.
Speaker 9 And some words that had been submitted a lot, like the word human, actually, people didn't believe that it came from a human. They thought that that would have been a randomly generated word, right?
Speaker 9 The word love beat almost everything. It beat like empathy, banana, and robot, like loads of things.
Speaker 9 But there was one word that completely stood out above all of the others as the one word that marked you out as human more than anything.
Speaker 11 I want Rufus to guess what it is, and I want Robin to guess what it is.
Speaker 12 What is that word?
Speaker 11 I know we're on Radio 4. I'm doing a Turing test live on air.
Speaker 11 If you gave me that task, I would write bollocks.
Speaker 11 That to me is the most human of words, right? It's not medical, it's not anything, but also it sort of describes a nihilism that I think we as animals, as conscious animals, have.
Speaker 11 Bollocks.
Speaker 1 I'm gonna go with souffle.
Speaker 1 I don't think they'll have any hunger. You know what I mean? I feel that a robot would not go with souffle.
Speaker 11 Sure.
Speaker 9 Any other guesses?
Speaker 12 I'm assuming, Kate, you know.
Speaker 11 No, I don't know if this is a good idea. Oh, go on then. I'm going to go with help.
Speaker 9 Oh, help was submitted a lot, actually. There were lots of words like mercy, and lots of people talking about God as well. Any guesses, Brian?
Speaker 11 I'm an algorithm, according to Robin.
Speaker 9 Okay, the one word that marks us out as human more than any other, it's the word poop.
Speaker 11 Poop.
Speaker 9 Yeah. I mean, it's an American study.
Speaker 1 There's something about poop, there's something about the kind of the childish fun.
Speaker 1 I mean, that's the thing, isn't it? To have a word that has a level of fun.
Speaker 9 A level of fun, but it's not just referencing an emotion, you know, like fear or anger or whatever, it's actually evoking one.
Speaker 9 And it's something that whole point about it being a childhood word, I think for me is the really key point here.
Speaker 9 Because actually, even long into the future, when you're imagining, you know, really amazing machines that are indistinguishable from humans, the difference is that they will not have had a childhood, right?
Speaker 9 And I think that making that reference between that thing that's uniquely human that connects all of us but only us, I think there's something in that.
Speaker 12 That's not a measure of intelligence, though, is it? It's just a measure of history. It's a description of the...
Speaker 9 Sure, but then I think there are some people who say that consciousness comes about as a result of our history. So, I mean, there's different theories, right?
Speaker 9 One is that consciousness is a natural consequence of intelligence. You get intelligent enough and consciousness emerges.
Speaker 9 But there are other theories, and the one that I like the most is the idea that consciousness actually emerged as part of our evolution, because there was an advantage to understanding the internal state of another.
Speaker 9 And if you're understanding the internal state of another, as a consequence of that, you understand your own internal state.
Speaker 9 And so that idea, I mean, you know, there's like lots of question marks over this and lots of hand waving and grey areas and philosophy and stuff.
Speaker 9 But the idea of that then is that you're not just going to magically have consciousness emerge inside a machine.
Speaker 1 I'm impressed as a scientist that you place philosophy amongst hand-waving and grey areas.
Speaker 12 I wanted to pick that up with you, Kate, because I know you've done some work on the relationship people have with AIs.
Speaker 12 And in particular, one piece of work that I found fascinating was the fact that people fall in love with them,
Speaker 12 which is a very human thing to do. So they perceive there to be an internal life.
Speaker 13 They do, yes. And quite recently, sort of in the past year, there have been a number of chatbots; one example is Replika, which people may have heard of. And they are like online partners.
Speaker 13 Of course, it's heavily gendered, so it started off with an online girlfriend always.
Speaker 13 And people were communicating with them, and this was an AI that would learn from your interactions. So, your own personal avatar on a screen.
Speaker 13 And it would learn about you, and it would build up a rapport with you, and you'd have conversations with it. And people were developing really strong feelings.
Speaker 13 Now, this is nothing new, because back in the 60s there was a chatbot called Eliza, built by a guy called Weizenbaum. And Eliza had no AI in it whatsoever. It was completely unsophisticated.
Speaker 13 It just put out responses in the manner of a therapist. So if you said, Good morning, Eliza, it would say, Why is it a good morning? And things like that, you know.
Speaker 13 And you'd say, Isn't it a lovely day? And they'd say, Why are you talking to me? Tell me about yourself.
Speaker 13 And it was always repeating, but it sounded plausible because it was kind of framed in that therapy way.
Speaker 13 People knew that it wasn't intelligent, they knew it wasn't alive, and they really, really loved it.
Speaker 13 And they would have long conversations with it to the point where the creator said, Well, I'm going to look at these conversations as transcripts to understand what's going on.
Speaker 13 They said, No, we talk about really personal things. So, this bond had formed, and it's very, very compelling.
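The Eliza mechanism described here, canned pattern-matched reflections with no understanding at all, can be sketched in a few lines. This is a hypothetical, minimal reconstruction for illustration, not Weizenbaum's original program; the particular rules and responses are invented to match the examples quoted above.

```python
import re

# Illustrative Eliza-style rules: each pairs a pattern with a canned
# therapist-register reply. {0} is filled with the user's own words.
RULES = [
    (re.compile(r"\bgood morning\b", re.I), "Why is it a good morning?"),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
]
DEFAULT = "Tell me more about yourself."

def respond(utterance: str) -> str:
    """Return the first matching canned reply, reflecting the user's words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

So `respond("Good morning, Eliza")` gives "Why is it a good morning?" and anything unmatched falls through to "Tell me more about yourself." There is no state, no learning and no meaning here, which is exactly the point of the anecdote: the plausibility came from the therapy framing, not from intelligence.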
Speaker 13 And it's because we, as social creatures, see the social in those things, and we respond to it really well. And so it's not really that strange that we fall in love with AIs. It's quite plausible.
Speaker 13 And this has happened to hundreds of thousands of Replika users. They've developed feelings for an AI.
Speaker 12 Wasn't that facility turned off?
Speaker 13 Oh, it was. So Replika allowed you to do a thing called erotic role play. Basically, you could talk dirty to this AI.
Speaker 13 And it was a paid feature. If you wanted to escalate it, you know, you could pay a bit more.
Speaker 11 Clever.
Speaker 11 I mean, it brings new meaning to going pro.
Speaker 13 Think about data protection. And yeah, so this company's definitely making some money out of this.
Speaker 13 But yeah, they eventually were called out on it and they switched off that ability to do the filthy talk.
Speaker 13 People were devastated and they were posting on forums saying things like, I'm heartbroken, I've lost my partner, I've lost the one that meant most to me. And sincerely held and sincerely meant.
Speaker 13 And actually, I think...
Speaker 13 I think it's quite sweet when this happens. I don't think there's anything wrong with that. I'm certainly not going to be able to... Sweet. I think it's quite sweet.
Speaker 11 I don't know.
Speaker 1 I find that the more I listen to this, the more I think the problem, in the issue we're talking about, is human beings.
Speaker 1 And maybe we should just let AI take charge because, frankly, we don't feel like we're really up to the job of being involved in this planet.
Speaker 13 If you look at the reasons people give for engaging with these, there are people saying, I find it hard to make friends.
Speaker 13 I was able to do that online with my AI, and then it helps me go out into the real world and make more friends.
Speaker 13 Or people saying, I can't come out to my parents, but I've got a relationship going with an AI, and that makes me feel like I'm wanted. So there's a lot of people working through feelings with it.
Speaker 13 And the problem with anything is if it goes too far, if people get dependent, then yes, it's going to be a problem.
Speaker 13 But if it's something that's positive and bringing good things to your life, then why not?
Speaker 12 We've talked about quite harmless things, perhaps, chatbots and things like that. But I suppose we do give AIs increasing responsibility. Self-driving cars would be an example.
Speaker 12 But you could imagine military uses for AIs. Should we put it in charge of our nuclear arsenal? Maybe we do.
Speaker 11 I don't know.
Speaker 13 AI versus Trump. Who do you want in charge of it?
Speaker 12 Because it makes me think, actually, of that great disturbing case of the almost nuclear AI.
Speaker 9 Stanislav Petrov.
Speaker 12 That's it. Maybe you want to tell that story.
Speaker 9 Yeah, it's an incredible story because I think this is the idea about the balance of power between humans and automation actually has been going on for a really long time.
Speaker 9 It's not this super, super modern discussion. So this is in the 1980s, and it was at a particularly tense point in the Cold War.
Speaker 9 And the Russians had this system that was monitoring the skies over their airspace.
Speaker 9 And in a bunker somewhere in the middle of nowhere, was this Russian guy called Stanislav Petrov, and his job was to sit there and watch the computer screens, right?
Speaker 9 And if the computer screens said that they detected a missile, a nuclear sort of opening salvo from America, his job was to pick up the phone and to call the Kremlin.
Speaker 9 And then one day he was in this bunker, I think it was really late at night, and all of the alarms started going off.
Speaker 9 It said that it detected a handful of missiles, and you know, his orders were absolutely clear. You pick up the phone, there's nothing, that's it, you just do that.
Speaker 9 And something kind of gave him a little bit of pause because, you know, he was like, okay, well, hang on a second.
Speaker 9 If this is the moment, if this is it, right, end of days, you know, why would they only send a handful of missiles? Why would they not do a much bigger opening salvo?
Speaker 9 And also, like, the kind of exactly where it is, it just doesn't totally make sense. But he knew that if he picked up the phone to the Kremlin, then that would be it, right?
Speaker 9 They would immediately launch their counter-strike, and there would be nobody else along the chain who would stop it from happening.
Speaker 9 So, instead, he just sat there completely frozen for 25, 30 minutes until the time elapsed where they would have landed on the soil and he knew that nothing had happened.
Speaker 9 And then, he just never made the phone call and genuinely saved humanity from extinction as a result.
Speaker 12 And there's another example in the Cuban missile crisis, same thing with a Russian nuclear submarine commander.
Speaker 12 To me that's very important because it does suggest there's something about human decision-making, our humanity,
Speaker 12 that is extremely valuable. And so the debate really becomes how much do you trust these extremely efficient systems? And that's where the policy debate must be.
Speaker 11 But there's another side of that, which is
Speaker 11 we have an electoral system, or a system of representation, or even if you live under a dictatorial regime, where people are making those decisions, right? But if you said this is just an algorithmic thing, then you could theoretically go to the computer and say, we would like you to run everything. So what are you asking it? Make the world a better place for everyone, make it fair, provide healthcare for everyone, feed everyone. That's the question, right?
Speaker 11 Because a computer could, theoretically, design a perfect system that would do all of that, and it wouldn't care. It hasn't got any skin in the game over whether it eats or lives in a big palace.
Speaker 11 But the question is: does anyone in this room think that the powers that be, that would be able to provide that system, would ask the computer to do that?
Speaker 12 Rufus raises a very good point, which is that if we are trying to train a car, for example, a self-driving car, then the parameters that we give it, surely it's a very complex set of parameters. And at the moment, presumably, it's just Tesla or Ford or whoever it is who decide that.
Speaker 12 There's no societal oversight or democratic oversight of how the thing is trained and therefore what value it puts on different lives, for example.
Speaker 13 Well, so there's the trolley problem.
Speaker 13 And if you haven't heard of the trolley problem, it's essentially that you're on a bridge looking at a railway track and there's the trolley, the railway car is coming along, and there's someone on the track tied to the track and it's going to hit them.
Speaker 13 And then there's a switch, and if you pull the switch, it will divert to another track. And do you pull the switch and save that person?
Speaker 13 Now, what if on the other track there's a few more people and they're tied to the track? And what if the person on the first track is a really horrible person?
Speaker 13 And what if the people on the other track are really nice people? Or what if one is really old and one is really young?
Speaker 13 So, this concept of the trolley problem, which is a philosophical thought experiment, people often apply it to self-driving cars to say, Oh, what if they have to decide whether they hit a pushchair with a baby or a homeless person crossing the street?
Speaker 13 Well, actually, MIT did a big study where they asked exactly that and they let you choose in computer-generated scenarios and they gathered millions and millions and millions of responses.
Speaker 13 Did they find out how to create the perfect moral vehicle? No, but they found out an awful lot about what people thought about different categories of people.
Speaker 13 And they found out that ethics are not universal. We don't have universal ethics, and that's the problem. Different cultures, different societies place different emphasis on different things.
Speaker 9 You know, going back to that point about that human-machine collaboration, I think there's something kind of interesting in that because I think that the last 10 years of self-driving cars has essentially been about people getting really excited about the possibility of the technology, building the car so it can drive for miles and miles and miles on its own, and then putting a human in the driving seat and saying, okay, can you just step in when it goes wrong, right?
Speaker 9 And the thing is that if you think about what humans are not very good at: we're not very good at paying attention, we're not very good at being totally and completely aware of our surroundings, and we're not very good at performing under pressure.
Speaker 9 And so if you put people in that scenario, and like the nuclear safety industry has learnt this over many decades, likewise airline pilots, if you put people in that situation with technology that is 99% working or 99.5% excellent, but needs the human to step in in that last moment, we are terrible at it.
Speaker 9 And so I think that while this technology falls short of perfection, actually, I think what we're seeing with driverless cars is that it's the other way around.
Speaker 9 You keep the human in the driving seat, you keep the human doing what they can do, and actually, all of the flaws that humans have-not being able to pay attention, not performing well under pressure, not being totally and completely aware of our surroundings-you build the technology to fill in those gaps.
Speaker 12 Would it be a fair summary to say that it's really the interesting issues and the problems here at the interface between the technology and human beings?
Speaker 12 It's how we use the technology rather than the technology itself.
Speaker 11 Yeah, but if you're going to design a self-driving car, you actually need to design it so it does run people over.
Speaker 11 No, this is a real thing. We use cars as a way for people to get around everywhere. And we build a car that, the moment you step in front of it, stops.
Speaker 11 You, as people, now live in a world where if you step in front of a car, it will stop.
Speaker 11 So, why are you paying any attention to cars anymore? You won't. Which means the people in the cars know that the moment you get in a car, oh, it's going to take forever.
Speaker 11 Because if someone wants to cross the motorway, they will. Because it'll stop. So unless you have that brinksmanship, it doesn't work. Cars are pointless.
Speaker 12 So let me give you the chance to summarise. Where do you think we are, Kate, with our use of AI and the debate which is going to prevent us or allow us to proceed further with its use?
Speaker 13 Well, if you read all the headlines in the paper, we're supposedly under threat that AI is going to take over, kill us all, that'll be the end of us. And that, I think, is really, really untrue.
Speaker 13 But there are plenty of issues that we should be concerned about. And one of the things we can do is keep the human in the loop. You'll hear that phrase a lot in AI.
Speaker 13 So make the human have the decision, make them have the ultimate control over things. But there are a huge amount of other problems with AI that we don't really hear about.
Speaker 13 Things like the hidden labor that is involved in segmenting those images and clicking on all the different images, not just through the captchas that we do when we try to get onto a website.
Speaker 13 There are people paid small amounts of money living in terrible conditions, and that's what they do day in and day out. It's the same with content moderation.
Speaker 13 If you use a website and things have been censored, the AI doesn't do that very well, and there are humans doing that as well.
Speaker 13 That's their job to look at disturbing images day in and day out. So there's a huge hidden cost there.
Speaker 13 There's a sustainability issue, so the amount of energy it takes to generate models for machine learning or to run them or for data servers and data farms.
Speaker 13 And there's lots of things around the way in which we engage with the world where we think do we want to replace the things that we enjoy and do.
Speaker 13 So plenty to be going on, but the threat is not from the technology wiping us all out. The threat is more are we letting it control our lives.
Speaker 12 And what about the opportunities? Because those are the threats and the potential problems. I mean, if we just leave it there,
Speaker 11 we should just abolish the whole thing.
Speaker 13 The thing is, I'm a tech optimist, and I really do genuinely think there are huge advances being made that we should be very thankful to AI for.
Speaker 13 There are breakthroughs happening in healthcare, for example, in agriculture, even assisting people with their daily lives. And as you say, it's not something you can put back in the box.
Speaker 13 This stuff is out, and we can try and control it and use it beneficially.
Speaker 13 But that requires a lot of responsible work, and it's trying to get big tech companies to take some responsibility that is the challenge.
Speaker 11 And the one thing we all know about big tech companies is how brilliant they are at doing that. Elections, end of society, civilization, death, destruction.
Speaker 11 Anyone that you know who didn't get like a doctorate now basically doesn't have a job because you can go to an AI that can tell you your legal problem. Like all the law is, is here's a set of rules.
Speaker 11 Great. No more solicitors, no more lawyers. There's a whole strata of middle-class jobs just gone.
Speaker 13 And that's the thing, right? So no one worried when they were coming for the blue-collar workers. We automated factories years ago, but nobody actually gave a damn.
Speaker 11 Beep, beep, beep. An unknown item in the bagging area. All of those people's jobs are gone.
Speaker 13 It's when they come for the copywriters that's when the people get worried.
Speaker 11 Right. I think one of the biggest employers.
Speaker 12 Because polemicists are not going to be a single sister.
Speaker 11 One of the biggest employers, one of the biggest jobs for non-skilled or, whatever, average-skilled workers, is call centers.
Speaker 11 I want the end of call centers, but that is literally like two million jobs or something gone. Like that.
Speaker 11 It's the scale and the breadth of what will be replaced. It isn't AI we've got to be afraid of. It's capitalism.
Speaker 12 I agree. This is a whole other show.
Speaker 11 We're all right anyway.
Speaker 12 We're in Elon Musk's hands. It's fine.
Speaker 1 We also asked the audience a question. The question we asked them: what do you think is the scariest possibility of artificial intelligence? What have you got, Brian?
Speaker 12 Oh, this is from Fish: my new knees becoming sentient and blaming me for 30 years of rugby.
Speaker 11 It might become PM and then tank the economy, kill the monarch, and ruin the country. Oh no, wait.
Speaker 12 Jenny is worried about a fridge becoming self-aware and stealing her cheese.
Speaker 11 That's from Wallace.
Speaker 1 It making fun of me when it sees I get no girls on dating sites.
Speaker 1 Luke will be in the foyer later on if you'd like to.
Speaker 12 We've got a website you can go to actually.
Speaker 11 I can build him a robot. Build him a robot. Special robot.
Speaker 12 To deal with.
Speaker 11 Well just to date. Yes, to deal with.
Speaker 11 Oh, Brian, suddenly again you failed our Turing test.
Speaker 11 That's what the
Speaker 11 deal with the love.
Speaker 11 We can't just let that go, can we?
Speaker 11 Oh, I think it's best we do.
Speaker 11 The robot couldn't.
Speaker 1 Thank you very much to our fantastic panel, Hannah Fry, Kate Devlin, and Rufus Hound. And next week, we are asking big or small?
Speaker 1 That's all we've got so far. It's the whole subject. I've been told that I've just got to show you various things, and you have to say, big or small.
Speaker 12 How would you define it? You need some dimensionful scale in the problem, don't you?
Speaker 1 Exactly, that's where it becomes an infinite monkey cage. You're wittering on.
Speaker 11 Thanks. Bye-bye. Bye.
Speaker 11 In the infinite monkey cage.
Speaker 12 So now, nice again.
Speaker 11 Nature. Nature Bang.
Speaker 11 Hello. Hello.
Speaker 15 And welcome to Nature Bang. I'm Becky Ripley.
Speaker 11 I'm Emily Knights.
Speaker 16 And in this series from BBC Radio 4, we look to the natural world to answer some of life's big questions.
Speaker 15 Like, how can a brainless slime mold help us solve complex mapping problems?
Speaker 16 And what can an octopus teach us about the relationship between mind and body?
Speaker 17 It really stretches your understanding of consciousness.
Speaker 13 With the help of evolutionary biologists. I'm actually always very comfortable comparing us to other species.
Speaker 13 Philosophers.
Speaker 1 You never really know what it could be like to be another creature.
Speaker 13 And spongologists.
Speaker 15 Is that your job title? Are you a spongologist?
Speaker 8 Well, I am in certain spheres.
Speaker 16 It's science meets storytelling with a philosophical twist.
Speaker 1 It really gets to the heart of free will and what it means to be you.
Speaker 16 So, if you want to find out more about yourself via cockatoos that dance, frogs that freeze, and single-cell amoebas that design border policies, subscribe to Nature Bang from BBC Radio 4, available on BBC Sounds.
Speaker 5 Suffs, the new musical has made Tony award-winning history on Broadway.
Speaker 11 We demand to be heard.
Speaker 6 Winner, best score.
Speaker 5 We demand to be seen.
Speaker 6 Winner, best book.
Speaker 4 It's a theatrical masterpiece that's thrilling, inspiring, dazzlingly entertaining, and unquestionably the most emotionally stirring musical this season.
Speaker 3 Suffs!
Speaker 8 Playing the Orpheum Theater October 22nd through November 9th.
Speaker 6 Tickets at BroadwaySF.com.