How AI is AI?
Brian and Robin (the real ones) are joined by mathematician Prof Hannah Fry, computer scientist Dr Kate Devlin and comedian Rufus Hound to discuss the pros and cons of AI. Just how intelligent is the most intelligent AI? Will our phones soon be smarter than us – will we fail a Turing test while our phone passes it? Will we have AI therapists, doctors, lawyers, carers or even politicians? How will the increasing ubiquity of AI systems change our society and our relationships with each other? Could radio presenters of hit science/comedy shows soon be replaced with wittier, smarter AI versions that know more about particle physics... surely not!
New episodes released Wednesdays. If you're in the UK, listen to the newest episodes of The Infinite Monkey Cage first on BBC Sounds: bbc.in/3K3JzyF
Executive Producer: Alexandra Feachem.
Listen and follow along
Transcript
This BBC podcast is supported by ads outside the UK.
BBC Sounds, music, radio, podcasts.
Hello, I'm Robin Ince.
And I'm Brian Cox.
You're about to listen to the Infinite Monkey Cage.
Episodes will be released on Wednesdays, wherever you get your podcasts.
But if you're in the UK, the full series is available right now, first on BBC Sounds.
Hello, I'm Brian Cox.
I'm Robin Ince, and this is The Infinite Monkey Cage.
Now, as regular listeners will know, I always like to start this show with a quote from Chuck D from Public Enemy, whereas, of course, Brian normally likes to start the show with basically quoting any holographic-based Swedish pop band.
So, obviously, that means we normally have a reference to gimme, gimme, gimme, a two-dimensional man after midnight, or super symmetric trooper.
That, by the way, was Brian's, and he was so proud of that line, he's been saying it round the office, so do enjoy it.
Super symmetric trooper.
He threw it away, that's why.
And I'm sure everyone will remember the Bletchley Park special where Brian opened with, so when you're near me, darling, can't you hear me?
Dot, dot, dot, dash, dash, dash, dot, dot, dot.
It's not only rock bands that are holographic, actually, the study of quantum gravity recently, particularly in relation to black holes, has told us that the whole universe might be a hologram.
It's true.
Quantum gravity.
Do you know what?
I feel that was a very limited woo for the revelation that we may well all be holographic.
I just said that our reality is potentially a hologram.
It's because if you say, what's the content of a black hole, it turns out that it's equal to the surface area of the event horizon in square Planck units.
That was a woo that comes from, we'd better woo just to move this thing along.
But as there is no suitable ABBA lyric today, I am actually genuinely going to quote Chuck D.
When I saw Public Enemy at Glastonbury, one of his pieces of advice was you've got to try and be smarter than your smartphone.
There's no point being a dumb fellow with a smartphone.
Though he didn't say fellow.
He said mother fellow.
He didn't say mother fellow.
Anyway,
shall I tell you what the show's about?
Go on.
Yeah.
Will our phones soon be smarter than us?
Will we fail a Turing test while our phone passes it?
Will we have AI therapists, doctors, lawyers, carers, or even politicians?
How will the increasing ubiquity of AI systems change our society and our relationships with each other?
Joining us to discuss whether politicians will one day dream of electoral sheep are a multidisciplinary computer scientist, a multi-talented mathematician, and a multi-story car park.
This is, to be honest... ChatGPT really is not working as well as I'd hoped for this particular introduction.
And will, you see, you missed it again, will politicians one day dream of electoral sheep?
And our panel are.
I'm Professor Hannah Fry, I'm a mathematician, and the most ridiculous rumour about artificial intelligence that I've ever heard is an algorithm that claimed to be able to tell whether you were gay or straight with 81% accuracy based on a single photograph of your face.
And when I say rumour, I mean it's bollocks.
I'm Dr. Kate Devlin.
I'm a computer scientist.
And the most ridiculous rumour I've ever heard about artificial intelligence is that it poses any kind of existential threat.
My name's Rufus Hound, and I am the host of BBC Radio 4's My Teenage Diary.
And the most exciting AI rumour that I've heard is that it's already taken over agricultural food production, which means Old MacDonald's out of a job, A-I, A-I, O.
This is our panel!
Hannah, before we get started, did you actually get any of the mechanics of this idea that this one photograph would, you know, give away sexuality or gender or whatever it may be?
Okay, so it said that you could do it with 81% accuracy, right?
And I think that there is a big clue in that as to how good this algorithm actually was.
Because, okay, first off, there's all of the moral and ethical implications, horrendous.
But you can come up with your own algorithm that can like blow that one out of the water and do way better in terms of accuracy and doesn't need any messy machine vision, none of that messy coding.
All you do is you just take everybody in the entire world, you just label everybody as straight, and then, because 94% of adults identify as heterosexual, you beat that other one by an amazing 13 percentage points.
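[Hannah's trick here is the base-rate problem in classifier accuracy, and it really is a few lines of code. A toy sketch, using the 94% and 81% figures from the discussion above:

```python
# A "classifier" that ignores its input entirely and always predicts the
# majority class -- no machine vision, no coding, no photograph needed.
def predict(photo):
    return "straight"

# If ~94% of adults identify as heterosexual, this no-op model scores ~94%
# accuracy on a representative sample, beating the claimed 81% by 13 points.
population = ["straight"] * 94 + ["gay"] * 6
accuracy = sum(predict(None) == label for label in population) / len(population)
print(accuracy)  # 0.94
```

Which is why a headline accuracy figure, on its own, says very little about whether a model has learned anything.]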
Or you just go the other way and get all the pictures off Grindr.
Well,
that's not far from what they actually did.
So, this is the Stanford "gaydar" paper, and it's quite controversial.
And they basically took a bunch of photos without anyone's consent and then ran them through this algorithm, and it said, yeah, this percentage is gay.
But when this was repeated by a master's student at a university in South Africa, they did it without the faces and it had pretty much the same results.
So actually the faces were doing nothing.
Yeah they they took people they took the original image and then they blanked out people's faces and it was basically what was going on in the background.
So it was like, you know, people were at a Steps concert.
Flamboyant hats, that kind of thing.
That was the real clue that they were using.
Kate, can we start with the definition?
So, we're talking about AI systems.
So, do you have a simple definition of an AI system?
What is it?
No.
I don't have a simple definition.
It depends.
There are many different definitions.
Let's go with an artificial intelligence system is something that uses a degree of automation that might be self-learning in some way and that can take huge amounts of data and then make predictions with it.
That's kind of a reasonable working definition.
So, it's a predictive system
that can learn?
Yes, there are many different types of AI, but let's go with the machine learning one that people mostly refer to when they're talking about artificial intelligence.
And that's the system that,
well, basically, it's just applied statistics, right, Hannah?
Yeah, I mean, I think the nicest definition that I've seen was someone on Twitter said, what is artificial intelligence?
And there was a reply, which was a bad choice of words in the 1950s,
which I think is absolutely true, because you're completely right.
That a more accurate description, rather than saying that we've been through this revolution in intelligence, is to say that we've been through a revolution in computational statistics, which is much, much less sexy.
I mean, admittedly, depending on how you feel about statistics.
But, you know, ultimately, we are talking about things here that are just grids of numbers that are analyzing data, and they're doing it in a way that is a step change from what we had before, both in terms of the computational power that we have and the algorithms that we have.
But, you know, fundamentally, this is just statistics.
So, what would be the simplest thing that could be given the term AI?
That's actually quite a controversial argument because
you could pick lots of things.
I mean, if you're carrying around a smartphone, you're carrying around AI, for example.
And a lot of people may not realize that, but if you're using your phone for things like maps to get you places, that uses AI to find your route.
It could be something like a robot vacuum cleaner that uses AI to steer around objects in a room.
There are many, many different applications.
So it's not just confined to the things that we're seeing at the moment that are quite fashionable, like ChatGPT.
If you look at one of the map applications, Google Maps or Apple Maps, what component of what that's doing would cause you to label it as an AI system?
It's probably got lots of routes on there and it's able to make a judgment about what likely routes are.
It's taking in lots of data about conditions and times of day and likelihoods of traffic being particularly busy.
And it's able to come up with a route that satisfies the shortest distance or the shortest time.
So there are calculations going on that predict what the likely route would be.
I think the key point here is about learning, right?
So, I mean, at least that's how the modern definition of artificial intelligence is loosely used.
So, the example that I like to think of is if you have a smart light bulb in your house, you can program it to say, okay, turn on at six o'clock, dim at 9 p.m., and turn off at 11, right?
That's kind of just like a computer program that's doing it.
But if you had a light bulb that learned your behavior, so that was like checking your patterns, that you tend to do something in the summer, you tend to do something in the winter, and then picks up on the statistical patterns that you're creating and adjusts its decision-making on that basis, that I think becomes artificial intelligence.
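[Hannah's distinction between the programmed bulb and the learning bulb can be sketched in code. This is a toy illustration, not any real smart-home API: the first bulb follows a fixed rule forever, the second adjusts its decisions from observed usage patterns.

```python
from collections import Counter

# Fixed-rule bulb: behaviour is hard-coded and never changes.
def rule_based_bulb(hour):
    return "on" if 18 <= hour < 23 else "off"

# "Learning" bulb: logs when the light was actually on, then decides
# from those observed frequencies rather than a hand-written schedule.
class LearningBulb:
    def __init__(self):
        self.usage = Counter()   # hour of day -> times the light was on
        self.days = 0

    def observe(self, hours_on):
        self.usage.update(hours_on)
        self.days += 1

    def decide(self, hour):
        # Turn on if the light was on at this hour more than half the time.
        return "on" if self.usage[hour] / max(self.days, 1) > 0.5 else "off"

bulb = LearningBulb()
for _ in range(10):              # ten winter evenings of observed behaviour
    bulb.observe([17, 18, 19, 20, 21])
print(bulb.decide(17))           # "on"  -- learned from your pattern
print(rule_based_bulb(17))       # "off" -- the fixed rule never adapts
```

The statistics are trivial here, but the shift is the one Hannah describes: the second bulb's behaviour comes from data about you, not from its programmer.]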
I love the idea of a smart light bulb because it immediately makes me think of a light bulb having an idea, and then what would appear above a light bulb when it had an idea?
You know, the way the internet is built on cats, right?
It basically exists.
It's just cats all the time.
Well, Google researchers actually used that to come up with deep learning.
So, what they did was they had this algorithm, and they decided that they would let it go and look at thousands of pictures of cats online.
And the algorithm then learned what a cat looked like.
No one had told it what a cat looked like, but it had come up with a series of criteria, a certain threshold that it had to meet to be defined as a cat.
Didn't always get it right.
But this led to deep learning.
This is where you can chuck huge amounts of data at an algorithm and it will find patterns for itself.
You can do it another way.
You can tell the algorithm what things are.
You can label it.
And that's supervised learning.
So you can say, Here is a picture of a cat.
Show me other pictures that look like this cat.
The algorithm will check, you know, does it have four legs?
Does it have a back?
Is it a chair or a cat?
Sort of thing.
You know,
there's room for error here.
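[The supervised-learning loop Kate describes, labelled examples in, a decision rule out, can be sketched with a toy nearest-neighbour classifier. The features here (leg count, back rest) are invented purely for illustration:

```python
# Toy supervised learning: labelled examples as (features, label) pairs.
# Features: (number_of_legs, has_back_rest).
labelled = [
    ((4, 0), "cat"),     # four legs, no rigid back rest
    ((4, 1), "chair"),   # four legs, has a back rest
    ((4, 0), "cat"),
    ((4, 1), "chair"),
]

def classify(features):
    # 1-nearest-neighbour: copy the label of the most similar known example.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda example: distance(example[0], features))[1]

print(classify((4, 0)))  # "cat"
print(classify((4, 1)))  # "chair"
```

Real image classifiers learn far richer features than this, but the structure is the same: humans supply the labels, and the algorithm generalises from them, sometimes wrongly.]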
But that was always very, very difficult for computers to do.
It was very easy for us.
From birth, we are distinguishing all the objects in an image, for example.
We can tell something's a bicycle, whether it's on the ground or leaning against a wall or there's someone on it.
A computer can't do that.
So, with the cats, then it grew its own internal representation of what a cat is like.
It has no understanding of what a cat is, but it knows one when it sees one, or its own idea of one.
So, with that change, you know, those things where you have to, when you're signing in for something and they give you nine pictures.
So, that idea of which of these pictures has a bicycle in it, do we now have to upgrade the way that that supposed security measure works?
Or is that an illusion to make us feel more secure?
That's you training the algorithm.
So when you see a capture like that and it says click on all the squares of traffic lights, that's you confirming that there are squares of traffic lights.
And so a self-driving car now has that information from you.
That's the key point, isn't it?
The other component of this is user feedback.
So we train the AI.
Yeah, it takes in lots of data from us as well.
So yeah, there's and there are people whose job it is out there to sit and label images, so they'll get a bunch of images of traffic scenes, for example, if they're trying to program self-driving cars.
And it's their job to segment the image, to click on all the different objects in the image and label them so that the machine learning system can identify them.
It sounds, Rufus, really benign.
What is your view of AI?
If I said before any of this discussion, you're coming onto this show tonight, artificial intelligence, AI, what's the first thing that pops into your head?
Is it Google Maps?
No, because truth be told, I've been absolutely obsessed with this for about six months.
So this doesn't come to me completely freshly.
The best version of why it's not intelligent that I heard is, I think, is it called the Chinese Room?
Oh, yeah, it's good.
Searle's Chinese Room.
Which is essentially this, right?
Imagine yourself in a room, and two Chinese characters, you know, Mandarin, come through a slot in the wall.
And you look at them, you've got no idea what they mean, but there's a slot the other side of the wall, so you take a punt.
You go, Well, I'll post this one back out the other side.
And the green light comes on.
You think, brilliant, okay, I've got that now.
So then another two come in, and you know which one was the right one, so you put that through another green light.
Lovely, and now three come, and you think, Oh, well, maybe I have to do it in a different order.
However, complex the number of Chinese symbols coming into the room become,
you have worked out through trial and error what to post back out in what order.
But at no point can you speak Chinese, and it was that that made me go oh it's fine
because up until then it does just sound absolutely terrifying. But understanding that it is a processorial, computational game, ultimately of trial and error, immediately you begin to see: oh, yes, right, no, it is just a computer program. Because up until then, really, the thing that had blown my mind was things like the guy from Google, where they had a language model running and he was conversing with the language model.
And the engineer, over time, became absolutely, wholly convinced that this thing was sentient.
And wrote to his bosses and said, You cannot turn this off, it's like extinguishing a life.
And they fired him.
Right?
There's a lot about it that feels very terrifying.
I remember watching Stephen Fry talking about Prometheus, saying we will finally now, as human beings, be on the planet with an intelligence that we know is greater than our own.
But is that intelligence, or is that just an algorithm that is able to process a simulacrum of intelligence?
And it seems like it is that.
That, because this is the thing, right?
Of course, there are algorithms that appear superhuman, but we have created tools that are superhuman for a really long time.
I mean, forklifts are superhuman, you know?
And like, no one is kind of looking at ChatGPT as though it's a forklift.
The point that you make there, that there's no real understanding of what it's manipulating.
I think that's completely true.
I think no algorithm that's ever been created has a conceptual understanding of what it's manipulating.
What about ChatGPT, though?
Because this does seem to have really caught people's imagination.
But I put in a thing, you know, when everyone was playing around with it to get it to write a comedy routine, and it just came up with this kind of soulless wordplay that I sold to Jimmy Carr.
And I just, but
that
you clap, but he got 10 million quid doing it.
But that sense that to me, again, from a very uneducated eye, it just seems like a cut and paste system.
So, that level of invention, the important part of creativity, the important part of sentence structure, and that individuality, doesn't seem to be there yet.
I mean, I think that actually it kind of is there.
I think it depends on how you prompt it.
But one thing I would say, though, is that when you get the multimodal examples of generative AI, right?
So, imagine ChatGPT, but one that can watch all the videos on the internet, read all of the books, and see all the images as well.
Then, when you start being able to translate between different modes,
actually, then I think that you do get some grounding.
So, you know, if you've watched all of the videos on the internet, you kind of have a sense of how gravity works.
And if you can translate that between text and video, that is, I think, a little bit more of a step change.
That's a threat for you, Brian.
Could you describe, I don't know, briefly, if it's possible, what ChatGPT actually does?
How does it work?
Right.
If I say to you, A, B, C, D, you then say,
please say it.
E, F, G, yeah.
It's a completion thing.
So there have been chatbots for ages, but what makes ChatGPT and other large language models so good is, well, one, large.
They're really, really big.
They can take in millions and millions of pages of data.
But also, they have this architecture called Transformer.
That's what the T stands for in ChatGPT.
It's able to provide context, which was never really there before.
So it pays attention to particular parts of the sentences.
So it's not just this completion of A, B, C, D, E, F, G.
It can go further than that.
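[Stripped of the transformer machinery Kate mentions, the "completion" idea is next-token prediction: given what came so far, emit the statistically most likely continuation. A toy bigram model over a tiny corpus, nothing like GPT's scale or its attention mechanism, but the same underlying objective:

```python
from collections import Counter, defaultdict

# Count, for each token, which token tends to follow it in the training text.
corpus = "a b c d e f g a b c d e f g a b c d".split()
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(token, length=3):
    # Repeatedly emit the most likely next token given the previous one.
    out = []
    for _ in range(length):
        token = follows[token].most_common(1)[0][0]
        out.append(token)
    return out

print(complete("d"))  # ['e', 'f', 'g']
```

The transformer's contribution is exactly what this sketch lacks: instead of conditioning on only the previous token, it attends to the relevant parts of the whole preceding context.]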
And Rufus's example of saying, Well, what if this thing has been talking to it for a while and it suddenly sounds as if it's alive?
Well, it might sound like that because it's been trained on all our films and it's been trained on sci-fi that we've written that's out there on the internet.
And that's the thing.
It's got all of our content.
But that's why Microsoft had to turn theirs off.
Because I can't remember which one it was that launched.
It might have been ChatGPT.
Suddenly, other labs that had been working on AI technology were like, oh, we're doing it too.
Microsoft put it out there.
And the Microsoft AI showed jealousy.
It showed truly bad vibes.
It is properly scary.
But I think that what makes it most scary is that it's being used by us.
We live in a world where the more efficiently you can do something, the more of it that will exist.
So, therefore, most of human creation ceases.
Well, in fact, creativity as a thing ceases to be a human concern because you can ask a computer to generate 50,000 versions of the next episode of East Enders and whittle it down to the one that will work.
Great.
Well, now we don't need any of the actors, any of the cameraman, any, any, any, any.
It's about efficiency.
But that's the evolution of a society as well, isn't it?
That the job's changed.
It's how quickly it can do it.
But never mind.
But there was one point in your description which you sort of glossed over, which was generate 40 episodes, pick the best one.
That is something that only a human can uniquely do, right?
Do you truly believe that?
I really do.
But how long?
But I think, I think indefinitely, really, really, I do.
And I think the reason is that there is something totally human about caring about other humans.
The example that I always think about is: do you remember Alexander McQueen, right?
He did this show, and the big finale to one of his shows was he had a robot that's used to spray paint cars, and he had it spray-painting a dress, okay?
And it was so mesmerizing, but the thing that made it amazing was that the dress was being worn by this model, and so she was there kind of like reacting to it as it was like spraying her in the face and stuff.
And the thing is, if you took that girl out of the equation, right, if she wasn't there and you just had a robot spray-painting a dress, it wouldn't be interesting at all.
There was nothing interesting about that.
And I think that in the same way as if you had a robot that could cross a tightrope, there's no jeopardy, it's not interesting.
I think that humans are so intrigued by other humans and other human stories, and I don't think that will ever go away.
No, I think that's absolutely right.
However, I think of TikTok, if anyone uses TikTok, right?
Micro videos, and you go swipe, swipe, swipe, swipe.
At the moment, there are people making those videos.
But what happens when TikTok is just the AI that says, I can make a thing that looks like people talking about the thing that you like?
You no longer need the creator and the corporation says this is fantastic.
We haven't got to pay anyone now.
Hello, I'm Greg Jenner, host of You're Dead to Me, the comedy podcast from the BBC that takes history seriously.
Each week, I'm joined by a comedian and an expert historian to learn and laugh about the past.
In our all-new season, we cover unique areas of history that your school lessons may have missed, from getting ready in the Renaissance era to the Kellogg brothers.
Listen to You're Dead to Me now, wherever you get your podcasts.
Kate, as a computer scientist, just in your view, because I know it's controversial, but do you think there's a limit to how intelligent, and we can speak about how we would define that, but how intelligent a computing device can become?
Right now, yes, because it is not conscious or sentient.
And that might never happen.
There's a huge area of discussion and debate in cognitive science and in AI.
We don't know.
Some people say, yes, it's inevitable.
From this machine will come some glimmer of self-awareness.
Others think it couldn't possibly happen at all.
And I'm just going to be agnostic and sit on the fence.
The natural question then is: you know, people may know that the famous example would be the Turing test that Alan Turing put forward.
So, how would we determine whether this thing, ChatGPT or whatever it is, is now in some sense self-aware?
We can't.
We don't have a test for consciousness.
In fact, the Turing test is not a test of intelligence.
It's a test of deception.
It's can you deceive someone into thinking that this computer can think?
And I have no test to find out if any of you are conscious.
I'm just going to take it for granted that you are.
But there's no way of telling.
People have tried, but yeah, there's just no way.
I'm just going to assume.
That's David Chalmers, yes.
But this, of course, matters, as Rufus said. On social media it matters, of course, because we know about this problem.
We know that there are bots and there are bot farms and there are things that influence our politics and our opinions which behave in as far as you can tell online as a human being.
So it's an important issue, isn't it, to tell what you are talking to.
Well, yes, because one of the reasons is because humans get very cross if they find out they've been deceived.
So if they know it's a bot, they're kind of okay with it and know what to expect.
But if they find out they've been deceived, they get pretty angry about it.
But yes, you could be interacting with a bot, you could be interacting with something, you could strike up an online friendship and then find out later down the line that you've actually made friends with a bot, perhaps.
Have you heard about the minimal Turing test?
No.
It's this really brilliant paper that was published a couple of years ago.
Same setup, right?
There's a closed room, and behind the door is a judge.
You and a robot are standing there, and you both have to convince the judge that you're human, but you only get to submit one single word.
Okay, so no long conversation.
Anyway, so in this paper, what they did is they tested it on thousands and thousands of people and collected the words that they felt marked them out as human.
And there were these really clear patterns that appeared.
So there were words like love, the word human as well came up a lot.
There was also quite a lot of people talking about pizza.
And then there was like an entire category that was just bodily functions and profanities, which I quite like.
And then what was intriguing is that they then took pairs of words and they tested them on thousands of people to see which word felt like it was more human than the other.
And some words that had been submitted a lot, like the word human, actually, people didn't believe that it came from a human.
They thought that that would have been a randomly generated word, right?
The word love beat almost everything.
It beat like empathy, banana, and robot, like loads of things.
But there was one word that completely stood out above all of the others as the one word that marked you out as human more than anything.
I want Rufus to guess what it is, and I want Robin to guess what it is.
What is that word?
I know we're on radio four.
I'm doing, in essence, a Turing test live on air.
If you gave me that task, I would write bollocks.
That to me is the most human of words, right?
It's not medical, it's not anything, but also it sort of describes a nihilism that I think we as animals, as conscious animals, have.
Bollocks.
I'm gonna go with souffle.
I don't think there will be any hunger.
You know what I mean?
I feel that I would not go with souffle.
Sure.
Any other guesses?
I'm assuming, Kate, you know.
No, I don't know if this is a good idea.
Oh, go on then.
I'm going to go with help.
Oh, help was submitted a lot, actually.
There's like lots of words like mercy and lots of people talking about God as well that happened.
Any guesses, Brian?
I'm an algorithm, according to Robin.
Okay, the one word that marks us out as human more than any other, it's the word poop.
Poop.
Yeah, yeah, I mean, it's an American study.
There's something about poop, there's something about the kind of the childish fun.
I mean, that's the thing, isn't it?
To have a word that has a level of fun.
A level of fun, but it's not just referencing an emotion, you know, like fear or anger or whatever, it's actually evoking one.
And it's something that whole point about it being a childhood word, I think for me is the really key point here.
Because actually, even long into the future, when you're imagining, you know, really amazing machines that are indistinguishable from humans, the difference is that they will not have had a childhood, right?
And I think that making that reference between that thing that's uniquely human that connects all of us but only us, I think there's something in that.
That's not a measure of intelligence, though, is it?
It's just a measure of history.
Sure.
It's a description of the.
Sure, but then I think there are some people who say that consciousness comes about as a result of our history.
So, I mean, there's different theories, right?
One is that consciousness is a natural consequence of intelligence.
You get intelligent enough and consciousness emerges.
But there are other theories, and the one that I like the most is the idea that actually consciousness emerges part of our evolution because there was an advantage to understanding the internal state of another.
And if you're understanding the internal state of another, as a consequence of that, you understand your own internal state.
And so that idea, I mean, you know, there's like lots of question marks over this and lots of hand waving and grey areas and philosophy and stuff.
But the idea of that then is that you're not just going to magically have consciousness emerge inside a machine.
I'm impressed as a scientist that you place philosophy amongst hand-waving and grey areas.
I wanted to pick that up with you, Kate, because I know you've done some work on the relationship people have with AIs.
And in particular, one piece of work that I found fascinating was the fact that people fall in love with them,
which is a very human thing to do.
So they perceive there to be an internal life.
They do, yes.
And quite recently, sort of in the past year, there are a number of chatbots, and one of them, one example is Replika, which people may have heard of.
And they are like online partners.
Of course, it's heavily gendered, so it started off with an online girlfriend always.
And
people were communicating with them, and this was an AI that would learn from your interactions.
So your own personal avatar on a screen.
And it would learn about you, and it would build up a rapport with you, and you'd have conversations with it.
And people were developing really strong feelings.
Now, this is nothing new, because back in the 60s there was a chatbot called Eliza, built by a guy called Weizenbaum.
And Eliza had no AI in it whatsoever.
It was completely unsophisticated.
It just put out responses in the manner of a therapist.
So if you said, Good morning, Eliza, it would say, Why is it a good morning?
And things like that, you know.
And you'd say, Isn't it a lovely day?
And it'd say, Why are you talking to me?
Tell me about yourself.
And it was always repeating, but it sounded plausible because it was kind of framed in that therapy way.
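[The whole of Eliza really was this simple: keyword-spotting plus canned reflections, with no learning at all. A few lines capture the trick; the patterns below are illustrative, not Weizenbaum's actual script:

```python
import re

# Keyword pattern -> canned "therapist" response; \1 echoes the user's words.
RULES = [
    (r".*\bI am (.*)", r"Why do you say you are \1?"),
    (r".*\bmy (.*)", r"Tell me more about your \1."),
    (r".*\byes\b.*", "You seem quite certain."),
]

def eliza(text):
    for pattern, response in RULES:
        match = re.match(pattern, text, re.IGNORECASE)
        if match:
            # Substitute captured fragments back into the canned reply.
            return match.expand(response) if match.groups() else response
    return "Please, go on."  # default deflection when nothing matches

print(eliza("I am feeling anxious"))  # Why do you say you are feeling anxious?
print(eliza("It rained today"))       # Please, go on.
```

Framing everything as a therapist's open question is what made the repetition sound plausible: the program never has to say anything of substance itself.]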
People knew that it wasn't intelligent, they knew it wasn't alive, and they really, really loved it.
And they would have long conversations with it to the point where the creator said, Well, I'm going to look at these conversations as transcripts to understand what's going on.
They said, No, we talk about really personal things.
So, this bond had formed, and it's very, very compelling.
And it's because we, as social creatures, we see the social in those things, and we respond to it really well.
And so, it's not really that strange that we fall in love with AIs.
It's quite plausible.
And this has happened to hundreds of thousands of Replika users.
They've developed feelings for an AI.
Wasn't that facility turned off?
Oh, it was.
It was. So, Replika allowed you to do a thing called erotic role play.
So basically, you could talk dirty to this AI.
And it was a paid feature.
If you wanted to escalate it, you know, you could pay a bit more.
Clever.
I mean, it brings new meaning to going pro.
Think about data protection.
And yeah, so this company's definitely making some money out of this.
But yeah, they eventually were called out on it and they switched off that ability to do the filthy talk.
People were devastated and they were posting on forums saying things like, I'm heartbroken, I've lost my partner, I've lost the one that meant most to me.
And sincerely held and sincerely meant.
And actually, I think...
I think it's quite sweet when this happens.
I don't think there's anything wrong with that.
I'm certainly not going to be able to.
Sweet.
I think it's quite sweet.
I don't know.
I find that the more I listen to this, the more I think the problem in the issue we're talking about is human beings.
And maybe we should just let AI take charge because, frankly, we don't feel like we're really up to the job of being involved in this planet.
If you look at the reasons people give for engaging with these, there are people saying, I find it hard to make friends.
I was able to do that online with my AI, and then it helps me go out into the real world and make more friends.
Or people saying, I can't come out to my parents, but I've got a relationship going with an AI, and that makes me feel like I'm wanted.
So there's a lot of people working through feelings with it.
And the problem with anything is if it goes too far, if people get dependent, then yes, it's going to be a problem.
But if it's something that's positive and bringing good things to your life, then why not?
We've talked about quite harmless things, perhaps chat bots and things like that.
But I suppose we do give AIs increasing responsibility.
So self-driving cars would be an example.
But you could imagine military uses for AIs.
Should we put it in charge of our nuclear arsenal?
Maybe we do.
I don't know.
AI versus Trump.
Who do you want in charge of it?
Because it makes me think, actually, of that great disturbing case of the almost nuclear AI.
Stanislav Petrov.
That's it.
Maybe you want to tell that story.
Yeah, it's an incredible story, because I think this debate about the balance of power between humans and automation has actually been going on for a really long time.
It's not this super, super modern discussion.
So this is in the 1980s, and it was at a particularly tense point in the Cold War.
And the Russians had this system that was monitoring the skies over their airspace.
And in a bunker somewhere in the middle of nowhere, was this Russian guy called Stanislav Petrov, and his job was to sit there and watch the computer screens, right?
And if the computer screens said that they detected a missile, a nuclear sort of opening salvo from America, his job was to pick up the phone and to call the Kremlin.
And then one day he was in this bunker, I think it was really late at night, and all of the alarms started going off.
It said that it detected a handful of missiles, and you know, his orders were absolutely clear.
You pick up the phone, there's nothing, that's it, you just do that.
And something kind of gave him a little bit of pause because, you know, he was like, okay, well, hang on a second.
If this is the moment, if this is it, right, end of days, you know, why would they only send a handful of missiles?
Why would they not do a much bigger opening salvo?
And also, like, the details of exactly where they were coming from just didn't totally make sense.
But he knew that if he picked up the phone to the Kremlin, then that would be it, right?
They would immediately launch their counter-strike, and there would be nobody else along the chain who would stop it from happening.
So, instead, he just sat there completely frozen for 25, 30 minutes, until the time had elapsed when the missiles would have landed on the soil, and he knew that nothing had happened.
And then, he just never made the phone call and genuinely saved humanity from extinction as a result.
And there's another example in the Cuban missile crisis, same thing with a Russian nuclear submarine commander.
To me that's very important because it does suggest there's something about human decision-making, our humanity,
that is extremely valuable.
And so the debate really becomes how much do you trust these extremely efficient systems?
And that's where the policy debate must be.
But there's another side of that, which is
we have an electoral system or a system of representation, or even a dictatorial regime, where people are making those decisions, right? But if you said this is just an algorithmic thing, then you could theoretically go to the computer and say, We would like you to run everything. So what are you asking it? Make the world a better place for everyone, make it fair, provide health care for everyone, feed everyone. That's the question, right?
Because a computer could design, theoretically, a perfect system that would do all of that and it wouldn't care.
It hasn't got any skin in the game about whether it eats or lives in a big palace.
But the question is: does anyone in this room think that the powers that be, that would be able to provide that system, would ask the computer to do that?
Rufus raises a very good point, which is that if we are trying to train a car, for example, a self-driving car, then the parameters that we give it,
surely
it's a very complex set of parameters.
And at the moment, presumably, it's just Tesla or Ford or whoever it is who decide that.
There's no societal oversight or democratic oversight of how the thing is trained and therefore what value it puts on different lives, for example.
Well, so there's the trolley problem.
And if you haven't heard of the trolley problem, it's essentially that you're on a bridge looking at a railway track and there's the trolley, the railway car is coming along, and there's someone on the track tied to the track and it's going to hit them.
And then there's a switch, and if you pull the switch, it will divert to another track.
And do you pull the switch and save that person?
Now, what if on the other track there's a few more people and they're tied to the track?
And what if the person on the first track is a really horrible person?
And what if the people on the other track are really nice people?
Or what if one is really old and one is really young?
So, this concept of the trolley problem, which is a philosophical thought experiment, people often apply it to self-driving cars to say, Oh, what if they have to decide whether they hit a pushchair and a baby or a homeless person crossing the street?
Well, actually, MIT did a big study where they asked exactly that and they let you choose in computer-generated scenarios and they gathered millions and millions and millions of responses.
Did they find out how to create the perfect moral vehicle?
No, but they found out an awful lot about what people thought about different categories of people.
And they found out that ethics are not universal.
We don't have universal ethics, and that's the problem.
Different cultures, different societies place different emphasis on different things.
You know, going back to that point about that human-machine collaboration, I think there's something kind of interesting in that because I think that the last 10 years of self-driving cars has essentially been about people getting really excited about the possibility of the technology, building the car so it can drive for miles and miles and miles on its own, and then putting a human in the driving seat and saying, okay, can you just step in when it goes wrong, right?
And the thing is, if you think about what humans are not very good at: we're not very good at paying attention, we're not very good at being totally and completely aware of our surroundings, and we're not very good at performing under pressure.
And the nuclear safety industry has learnt this over many decades, likewise airline pilots: if you put people in that situation with technology that is 99% working or 99.5% excellent, but needs the human to step in at that last moment, we are terrible at it.
And so I think that while this technology falls short of perfection, actually, I think what we're seeing with driverless cars is that it's the other way around.
You keep the human in the driving seat, you keep the human doing what they can do, and for all of the flaws that humans have (not being able to pay attention, not performing well under pressure, not being totally and completely aware of our surroundings) you build the technology to fill in those gaps.
Would it be a fair summary to say that the really interesting issues and problems here are at the interface between the technology and human beings?
It's how we use the technology rather than the technology itself.
Yeah, but if you're going to design a self-driving car, you actually need to design it so it does run people over.
No, this is a real thing.
We use cars for a way for people to get around everywhere.
And we build a car that the moment you step in front of it, it stops.
You, as people, now live in a world where if you step in front of a car, it will stop.
So, why are you paying any attention to cars anymore?
You won't.
Which means the people in the cars know that the moment you get in a car, oh, it's going to take forever.
Because if someone wants to cross the motorway, they will.
Because it'll stop.
So unless you have that brinksmanship, it doesn't work.
Cars are pointless.
So let me give you the chance to summarise.
Where do you think we are, Kate, with our use of AI and the debate which is going to prevent us or allow us to proceed further with its use?
Well, if you read all the headlines in the paper, we're supposedly under threat that AI is going to take over, kill us all, that'll be the end of us.
And that, I think, is really, really untrue.
But there are plenty of issues that we should be concerned about.
And one of the things we can do is keep the human in the loop.
So you'll hear that phrase a lot in AI.
So make the human have the decision, make them have the ultimate control over things.
But there are a huge amount of other problems with AI that we don't really hear about.
Things like the hidden labor involved in segmenting and labelling all those images, not just through the captchas that we complete when we try to get onto a website.
There are people paid small amounts of money living in terrible conditions, and that's what they do day in and day out.
It's the same with content moderation.
If you use a website and things have been censored, the AI doesn't do that very well, and there are humans doing that as well.
That's their job to look at disturbing images day in and day out.
So there's a huge hidden cost there.
There's a sustainability issue: the amount of energy it takes to train machine-learning models, or to run them, or for data servers and data farms.
And there's lots of things around the way in which we engage with the world where we think do we want to replace the things that we enjoy and do.
So plenty to be going on, but the threat is not from the technology wiping us all out.
The threat is more whether we are letting it control our lives.
And what about the opportunities?
Because those are the threats and the potential problems.
I mean,
if we just leave it there,
we should just abolish the whole thing.
The thing is, I'm a tech optimist, and I really do genuinely think there are huge advances being made that we should be very thankful to AI for.
There are breakthroughs happening in healthcare, for example, in agriculture, even assisting people with their daily lives.
And as you say, it's not something you can put back in the box.
This stuff is out, and we can try and control it and use it beneficially.
But that requires a lot of responsible work, and it's trying to get big tech companies to take some responsibility that is the challenge.
And the one thing we all know about big tech companies is how brilliant they are at doing that.
Elections, end of society, civilization, death, destruction.
Anyone you know who didn't get a doctorate now basically doesn't have a job, because you can go to an AI that can solve your legal problem.
Like all the law is, is here's a set of rules.
Great.
No more solicitors, no more lawyers.
There's a whole strata of middle-class jobs just gone.
And that's the thing, right?
So no one worried when
they were coming for the blue-collar workers.
We automated factories years ago, but nobody actually gave a damn.
Beep, beep, beep.
Unexpected item in the bagging area.
All of those people's jobs are gone.
It's when they come for the copywriters that's when the people get worried.
Right.
I think one of the biggest employers.
Because polemicists are not going to be a single sister.
One of the biggest employers.
One of the biggest jobs for unskilled or, whatever, average-skilled workers is call centers.
I want the end of call centers, but that is literally like two million jobs or something gone.
Like that.
It's the scale and the breadth of what will be replaced.
It isn't AI we've got to be afraid of.
It's capitalism.
I agree.
This is a whole other show.
We're all right anyway.
We're in Elon Musk's hands.
It's fine.
We also asked the audience a question.
The question we asked them: what do you think is the scariest possibility of artificial intelligence?
What have you got, Brian?
Oh, this is from Fish.
This is my new knees becoming sentient and blaming me for 30 years of rugby.
It might become PM and then tank the economy, kill the monarch, and ruin the country.
Oh no, wait.
Jenny is worried about a fridge becoming self-aware and stealing her cheese.
That's from Wallace.
It making fun of me when it sees I get no girls on dating sites.
Luke will be in the foyer later on if you'd like to.
We've got a website you can go to actually.
I can build him a robot.
Build him a robot.
Special robot.
To deal with.
Well just to date.
Yes, to deal with.
Oh, Brian, suddenly again you failed our Turing test.
That's what the
deal with the love.
We can't just let that go, can we?
Oh, I think it's best we do.
The robot couldn't.
Thank you very much to our fantastic panel, Hannah Fry, Kate Devlin, and Rufus Hound.
And next week, we are asking big or small?
That's all we've got so far.
It's the whole subject.
I've been told that I've just got to show you various things, and you have to say, big or small.
How would you define it?
You need some dimensionful scale in the problem, don't you?
Exactly, that's where it becomes an infinite monkey cage.
You're wittering on.
Thanks.
Bye-bye.
Bye.
In the infinite monkey cage.
Turned out nice again.
Nature.
Nature Bang.
Hello.
Hello.
And welcome to Nature Bang.
I'm Becky Ripley.
I'm Emily Knights.
And in this series from BBC Radio 4, we look to the natural world to answer some of life's big questions.
Like, how can a brainless slime mold help us solve complex mapping problems?
And what can an octopus teach us about the relationship between mind and body?
It really stretches your understanding of consciousness.
With the help of evolutionary biologists, I'm actually always very comfortable comparing us to other species.
Philosophers.
You never really know what it could be like to be another creature.
And spongologists.
Is that your job title?
Are you a spongologist?
Well, I am in certain spheres.
It's science meets storytelling with a philosophical twist.
It really gets to the heart of free will and what it means to be you.
So, if you want to find out more about yourself via cockatoos that dance, frogs that freeze, and single-cell amoebas that design border policies, subscribe to Nature Bang from BBC Radio 4, available on BBC Sounds.