Rutherford and Fry on Living with AI: AI in the Economy
The refrain ‘robots will take your job’ is one heard with increased frequency, but how quickly is automation of the labour force really happening and would it really be such a bad thing if many jobs were powered by artificial intelligence?
In this third episode, inspired by this year’s BBC Reith lectures from AI expert Stuart Russell, Adam Rutherford and Hannah Fry - together with expert guests - imagine what the future of work might look like. Will the move towards increased use of artificial intelligence in areas like healthcare, customer service and manufacturing see jobs disappear or will it simply create new ones we cannot yet imagine?
Economists are divided on what the effects of machines doing our jobs will be. Some argue it could lead to wide-scale unemployment, or skilled workers being forced into lower-skilled jobs. Others believe this might be an opportunity to reshape our socio-economic systems into ones where workers are freed from tedious, repetitive jobs and instead have more leisure time to pursue their own interests and find meaning outside of work. Will we all one day receive a universal basic income and stop asking each other what we do for work when we meet someone new?
Producer - Melanie Brown
Assistant Producer - Ilan Goodman
Listen and follow along
Transcript
Hello, I'm Dr.
Adam Ratherford.
I spend my time writing and talking about science.
And today I'm going to tell you about artificial intelligence.
Wow, Adam, you said all of that without moving your lips.
That's because I didn't say any of it at all.
And that was an AI version of me that has been trained to sound exactly like me, pretty much, and say kind of the things that I might say in a Radio 4 programme.
Except for pronouncing your name correctly, apparently.
I don't think you're in danger of losing your job just yet, but that is what today's Rutherford and Fry is all about.
Will artificial intelligence render work obsolete?
Well, this was the theme of the third of Stuart Russell's Reith lectures for the BBC.
And in this series, Hannah and I are picking apart each of his themes.
So, so far, we've looked at the emergence of AI in general and its role in future warfare.
Today, we're asking, how is AI going to change the way we work?
Will AI help us to do more work faster?
Or will we ever be free to smell the roses instead of grinding out a nine-to-five to make ends meet?
Well, today we've got two experts to hold our hands.
Daniel Susskind is a fellow in economics at Oxford University and author of A World Without Work.
And Julie Shah is professor in the Department of Aeronautics and Astronautics at MIT, where she leads the Interactive Robotics Group.
So welcome to you both.
Julie, let's start with you.
Stuart Russell there was keen to point out that this sort of threat of technology taking over, taking our jobs, has been described since as early as Aristotle in the classical era.
So given how long we've been talking about this, do we think that the threat of technology replacing our jobs is real?
Well, automation and AI, as we increasingly use it for automation, it performs tasks, it doesn't perform jobs.
And so there are many ways that it changes our work for the better and for the worse.
But we're a long way, in my view, from these technologies fully eating away at the full spectrum of what humans bring to many of their jobs.
Daniel, tell me about the economist's argument here.
Do new technologies tend to make economies grow and maybe even increase living standards or do they end up causing unemployment?
They can do both.
If you look at modern economic growth, which began 300 or so years ago, what's driven that is technological progress.
If we think of the economy as a pie, technological progress has made that economic pie far, far bigger over the last few centuries.
But it also has this other side, which is that that same technological progress can push some people out of work.
The problem is that there are jobs for people to do, but for various reasons people can't take up those jobs.
Sometimes because they don't have the right skills.
Sometimes it's that people just don't live in the right place.
I think there's also, interestingly, issues of identity.
People have particular conceptions of themselves and they want to stay out of work until the right job comes up in order to protect that identity.
Thinking about the way that things have unfolded in the past, do you think that what's about to happen is going to be fundamentally different?
In the short run, the next 10, 15 years, I don't think so.
I think as we move further into the 21st century, I do think that if we get the sorts of technologies that Stuart Russell and his colleagues are writing and worrying about, then the implications for the world of work will be quite different.
Julie, give us an example of areas of our lives where we're interacting with AI in the world of work and we don't even necessarily know that it's there.
One of the funny things about AI is that we don't call it AI once it's working for us in our everyday lives at work.
But it is everywhere: from transportation, with the sophisticated algorithms used at places like FedEx to deliver packages and organize work, to healthcare.
The majority of your car may be built by robots.
If there were some jobs that no longer existed, Daniel, I don't know, I'm thinking about bomb disposal or cleaning sewers.
Would that really be a bad thing?
Are there some jobs that we do want technology to take?
I think absolutely.
Clearly that's the case.
Dangerous jobs, boring jobs, routine jobs.
You know, we could point to lots of jobs that it would be pretty good if technology did.
I think the challenge at the moment is that it's not just those jobs that technology is starting to encroach on.
It's also taking on some of the interesting work as well.
Things that require creativity or judgment or even empathy, all these things that people think are really important and really valuable in a job.
Well, that's an interesting point actually, because beyond just the obvious jobs, I suppose, of repetitive tasks and dangerous jobs, how far does this idea extend?
Here is Stuart Russell.
Let's imagine that technology creates a twin of every person.
And your twin shows up to your job, whether it's your current job or one of those wonderful new jobs that will be created.
Your twin is a bit more cheerful, a bit less hungover, and willing to work for nothing.
How many of you would still have a job?
Well, this is an important idea, but surely there is no more important job on earth than a radio presenter.
Surely, surely.
Which you can't do hungover, at least you can't do it well.
So we, rather selfishly, we wanted to know if our jobs are at risk.
Now, Adam's voice at the beginning of this episode was generated by a startup company called Oxolo.
Let me just play you that clip and you should remember that it's not just the voice that's generated by the AI here, but the text too.
Artificial intelligence is a computer program that can do a job that a human can do.
That's it, that's all it is.
It's not a robot that can think like a human.
It's not a computer that can feel like a human.
It's not a computer that can be like a human.
It's a computer program that can do a job that a human can do.
And that's a very important distinction.
Because if you think of artificial intelligence as a human-like computer, you're going to be disappointed.
If you think of artificial intelligence as a computer program that can do a job that a human can do, you're going to be excited.
Because artificial intelligence is already here.
Just when you think it was over, it keeps on going.
Yeah, okay, so my first question is, am I that boring?
I would say they need to turn it down, but only by about five.
Okay, okay.
It is clearly identifiable as my voice.
I'm not sure it quite gets the intonation.
It doesn't sound human, does it?
No, and it doesn't sound smooth either.
Those were words that formed grammatically correct sentences.
I wouldn't say it made a lot of sense.
What did you think, Daniel?
You don't know us as well.
You don't have to listen to me drone on about stuff as often as Hannah does, but how convincing was that?
I think one of the ideas there was very interesting.
The idea that these systems and machines don't have to copy the way that we sound or the way that we think and reason in order to perform tasks and activities that we do.
And this, I think, is the big change that's happened in the world of artificial intelligence in the last few years.
Being able to build systems that can outperform human beings, not by copying them, but by doing it in very different ways.
So I suppose, you know, it sounds a bit funny and some of it doesn't quite make sense, but that's because we're judging it by the wrong benchmark.
We're comparing it to a human being.
And instead, we should just be asking, how capable is it?
I should tell you, though, let me just introduce Sidney Otten, who is the head of AI from Oxolo, who explains how this system works at the moment and also how they're planning on using this technology commercially.
At Oxolo, we create a platform for AI-based entertainment.
That is, we bring together all the different technologies with computer vision and essentially generate avatars acting in scenes that can also chat with you.
In order to create the AI Adam, we had to work on two parts.
Firstly, we had to create a text that would sound like it was written by Adam, and secondly, we had to create a voice that would sound like Adam.
You can clearly recognize that it's Adam speaking, although here and there it sounds a little computer-ish and robotic, but that is something that, given more data and time, can be resolved.
So I would say when it comes to creating speech, the technology is ready to be used.
And when it comes to creating text, simple topics that don't require fine details and a lot of knowledge are not a problem at all.
Replacing radio moderators with AI technology is certainly a vision that has come closer in recent years.
I believe that it will be possible in the next five years, but right now it's probably better to ask Adam to moderate the radio series.
Well, I'm very grateful for that endorsement from Sidney there.
Julie, what do you think about that five year prediction?
I mean, we're talking about generating text and the voice.
Are my jobs secure for five years?
Yes, he's put you on notice.
There you go.
You have five years.
You should make the most of it and enjoy your job, I suppose.
Do you think that's realistic?
I'm an AI researcher and a roboticist, and so the way we're able to produce these results is by leveraging large sources of data.
Humans are ultimately structuring the machine's understanding of the world through the producing and sourcing of this data.
And what can easily be missed is that context really matters.
You know, what's appropriate in one context is different than what's appropriate in another context.
And that's where it becomes really hard to sort of impart or teach machines this type of nuance.
The AI does not have a conceptual understanding of what it's saying, whereas in theory, I do.
Exactly.
And that makes you able to move between your work environment, your home environment, and bring your knowledge and your experience and contribute in a productive and appropriate manner.
And the machine is really kind of faking it.
It's kind of acting, right?
And it may work well in one setting and it's going to work less well in another setting.
Okay, well it sounds then, overall, Adam, like your job is safe for a while, a few years.
But the fact that we have these capabilities, it means, I guess, that it's not just routine and repetitive jobs that we are talking about AI being able to replace.
Here's Stuart Russell again.
In the coming decade, I think we'll see real advances in language understanding.
Machines will be able to interpret the content of human communication sufficiently well to automate many short interaction tasks, customer service, insurance claims, and so on.
The low-level programming jobs and computer-based clerical tasks, typical of outsourced work, are also likely to disappear as the technology of robotic process automation advances.
So, Daniel, where is it in the short-term future where we might see AI encroaching upon the jobs that we might not expect?
I think the majority of jobs are likely to be affected by automation in some way.
And let me explain why I think that.
There was a study done by McKinsey and Company, the consulting firm, a few years ago.
And they looked at 820 jobs in America.
And what they found was that only 5% of those jobs could be fully automated, given existing technologies.
But what they also found was that 60% of those jobs were made up of individual tasks and activities of which 30% or more could be automated.
In other words, when you look at the actual tasks that make up jobs, it turns out that the majority of jobs actually have lots of tasks and activities that can be automated.
So I'm thinking about the sort of tier of work that we don't necessarily associate with being fundamentally mundane or fundamentally dangerous.
Lawyers or accountants, clerical jobs.
There are elements to those jobs that AI might be able to contribute to.
That's right.
And it's also not just the routine activities in those jobs.
There are systems that can make medical diagnoses as accurately as leading doctors.
There are systems that can help you file your tax return as efficiently and as effectively as a human accountant.
You know, in all these different jobs, even our most expert jobs, it's often possible to find tasks that we might not have thought could be automated, but increasingly can be.
Julie, let me ask you the same question, but much more about robotics.
Are there tasks that we expect will end up being replaced by robots, or where, in the very near future, we will see humans and robots working much more closely together?
Many small aspects of the work that we do today, both in cognitive tasks and also in physical tasks, if you think about on an assembly line or in a warehouse, can be augmented or supported or done by robots or artificial intelligence today.
And there's tremendous opportunity for improving human capability and well-being by lending that additional machine support.
But the challenge is the integration challenge.
What do you mean about this idea of integrating human and robots together?
Traditionally, robots were in cages, they were separated from people, and now we have these collaborative robots, these safe robots that you can really deploy right alongside a person.
There are relatively few settings in which you could take a little task that could be done by a robot today, separate it, pull it out, cage it, structure it, and give it to a robot.
And one of the reasons it's so challenging to do that is because if any little aspect of how you do that work changes, you have to reprogram that robot.
And that's a time-consuming and challenging task.
As we develop artificial intelligence that's able to understand humans as more than just obstacles, but can understand what a human partner is thinking, anticipate what they'll do next, then a robot can become more flexible and sort of jump in as a human teammate would and pick up some of the work that's being done by a person.
Bomb disposal is a really great example because it's clearly a task or a job that we would like automation to do more of.
When you deploy a robot today to do that task, an operator has to painstakingly step the robot through the task, motion by motion.
And it's very time consuming and very cognitively taxing for a person to do that.
But we have seen success in these applications in deploying something called supervised autonomy.
It's the ability, through this intelligent interface, for a person to impart their understanding of the world, like "this object will be safe to nudge or move", but what's really critical is that the system needs to be supervised. So in simulation, the system will show the movement and ask, did this look right? And then the operator has the final authority to approve before the machine takes that action.
So these are ways in which, in a very fine-grained, integrated fashion, you can leverage the relative strengths of humans and intelligent machines to accomplish what we would prefer not to do or can't do alone.
Okay, well, we want to jump forward into the far future, because one of the themes that Stuart Russell's lectures tackle is the inevitable emergence of AGI, artificial general intelligence, where computer systems are capable of thought and possibly even creativity, regardless of the specific task in hand.
Now, people have been thinking about the impact this would have on economies and our working lives for a long time.
And we sifted through the colossal BBC archive and we found a film made in the dim and distant past of 1963, which imagined a future, actually the future they were imagining was 1988, where no one had to work at all.
Let's have a listen to this.
It is 0830 hours, September the 14th, 1988.
These are children of our time.
They should live to be a hundred.
They may colonize the stars.
They will not toil.
They need never be unhappy.
But this morning, as every morning, there is a problem.
How to spend a golden lifetime.
What to do with so much time.
Yeah, I was born in 1984, so I am one of those kids.
And I can categorically say that that is not how my life has turned out.
No, it didn't quite work out like that.
So that was a clip from the 1963 documentary, Time on Our Hands.
The optimism.
We are having those same conversations now, though, Daniel.
What do you think that our future will look like once we eventually get to this AGI?
It is a view that computer scientists working at the coalface of these technologies think is plausible: that sometime in the future, and not necessarily the distant future, we're going to have technologies that are able to do all the tasks that we do better than us.
And it's a remarkable thought.
The thing that comes to my mind, alongside the sort of extraordinary prosperity that it would bring, given the importance of technological progress in driving economic growth, is that questions about automation are going to move from being questions about what can and cannot be automated, sort of technical questions, into moral questions about what should and should not be automated.
You mentioned there though that idea of incredible prosperity.
Is that incredible prosperity for everybody?
If you go back to the turn of the first century AD and take the global economic pie and divide it up into equal slices for everyone in the world, most people get a few hundred dollars.
Almost everyone lives on or around the poverty line.
And if you roll forward a thousand years, roughly the same is true.
But what's happened over the last 300 years is that that economic pie has exploded in size.
So global GDP per head, the value of those individual slices of the pie, is already about $11,000.
In principle, we are collectively more prosperous and have the potential to become more prosperous than ever before.
The challenge, though, and you hinted at it there, is, well, how do we make sure everyone gets a slice of that pie if our traditional way of doing so, paying people for the work that they do, isn't available to us anymore?
Julie, as the technologist in this conversation, what is your take on the emergence of artificial general intelligence and whether that will significantly contribute to us being freed up to explore the stars, or whatever that documentary was saying?
Like any technology, there's a question of how we apply it.
Will this result in a society in which there is true shared prosperity?
We haven't seen that over the past forty years in the introduction of automation.
It has differentially impacted those without college-level educations.
And so, you know, even today, we have these concerns of how we take technologies and how we ensure that the benefits seen from them are shared.
And it's not a question we have to be asking for some far-off point in the future when AGI exists.
It's a question for us really today and right now.
Well, I guess there's also another important point here about the limits of what type of replacements are even possible.
Here's what Stuart Russell had to say.
The inevitable answer seems to be that people will be engaged in supplying interpersonal services that can be provided or which we prefer to be provided only by humans.
That is, if we can no longer supply routine physical labour and routine mental labour, we can still supply our humanity.
We will need to become good at being human.
Become good at being human.
Julie, do you think that there are some things that only humans are able to do?
I think it's easy to underestimate how important being a human is to many, many tasks that seem rote.
And I'll give you one very specific example.
I was touring a facility where it was someone's job to load up stock, to put it into an oven, to heat it.
It's a very hard job.
You have to lift these heavy objects, put them in this crate, put the crate into the oven.
And they came through and said, we'd really, really like a robot to help support this job.
And I said, oh, well, you know, so why haven't you done that?
They said, well, the challenge is we only have one person in the entire facility that can actually do this job.
Like, well, why is that?
Well, if it's not done in this just perfect way, then when it's heated, you know, there's these defects on the object as it cures.
We really do need this incredibly human capacity to learn, to problem solve, to innovate.
And this is true for many, many jobs that we as humans might call rote jobs, but for a machine, it's really just an intractable problem still.
That's not to say that we can't have machine support in these jobs.
This idea of augmenting or complementarity is really important to thinking about how we develop technologies that can create the future that we want to see.
Better jobs, better productivity.
A future that holds us as humans, in all the ways we bring ourselves as humans to our work.
I guess in many ways that example is really describing something that is an incredibly complex problem that is just too difficult to solve with a machine at the moment.
But I wonder too about the things that are maybe more uniquely human, like in the care sector, for instance; getting a hug from a robot is probably never going to be quite the same thing as getting one from a loved one.
Absolutely, absolutely.
I mean, how much of what we create in the world really consists of things that machines will not be able to replace? These are examples where machines clearly fall short.
It doesn't mean that they can't play a helpful and useful role, but there's no true substitute for another human being or a connection with another living creature, even a pet.
But that doesn't mean that we shouldn't be working to figure out the right ways we can leverage these technologies to be able to address these societal challenges.
I don't think anyone would disagree with that, Julie, and the whole interpersonal, you know, the actual real human touch that cannot be replaced.
But I want us to talk from a sort of societal perspective as well.
I feel like we're in danger of talking about a very specific Western rich country version of the future.
And it all feels a little bit like we're in danger of, well, ignoring the fact that the type of future we're talking about can only exist in wealthy countries.
This is the great challenge, I think, if these sorts of futures come about.
The great economic challenge becomes less one of solving the traditional economic problem, which is how we make sure the economic pie is large enough for everyone to live on, and instead it becomes a problem of distribution.
How on earth do we share out that pie?
Particularly if we can't rely upon the world of work to do it.
And that's true within a country like the one we're sitting in in the UK, but it's also even more acute, even more problematic in other countries.
There are quite a lot of big questions and big challenges.
Do we have an idea of what it is that we should be aiming for, rather than just making these kinds of incremental little improvements and hoping that we end up with something good at the end?
A project that I've been involved in recently with the World Economic Forum and Stuart Russell, in fact, was designed to do exactly this.
We got together a group of not only economists but philosophers and computer scientists and writers and just a huge range of different people to sit down and argue with one another about what sort of positive future we want.
And what was fascinating was that it was very productive, but it was also very fractious.
There was huge disagreement about where we ought to be going.
Because it's a conversation about values and what does a good society look like?
What does a good balance between work and life look like?
And those are questions that we can only really reach answers to together.
This raises an idea that's been bouncing around my head the whole time we've been having this conversation, which is that there's an inherent value to work.
There is a moral and ethical value to paying someone for a job of work.
And we seem to be having this conversation predicated on the idea that we're going to free up all this time, we're all going to become novelists or paint landscapes or sit around doing nothing.
But actually, I like working and sometimes I like doing mundane jobs as well whilst listening to Radio 4, scrubbing the floor.
So there is a value to work and that needs to be part of the conversation as well.
I think that's right.
You know, many people say that work isn't just a source of an income, but it's also a source of meaning and purpose.
When you start to dig a little bit deeper and look at the relationship between work and meaning, it's not actually as clear-cut as you might commonly think.
I mean, today, for instance, it's certainly true that some people do get a lot of meaning and purpose from their work, but there are many people who would rather not work if they didn't have to.
And so one of the key questions within this whole conversation that we all need to be having around the table is: are we asking AI to help us free up time so that we can work on our own terms rather than do the jobs that we are forced to do?
Julie, let me ask you this.
Paint us a picture, really, of what you think we should be aiming for.
We're presupposing in this conversation that the goal is for technology to replace aspects of our work that we don't want to be doing today.
But there's an equally desirable future where technology supports us in being able to fully leverage the capabilities, the value we bring as humans, in all aspects of the work that we do today.
My research team collaborated with a team of social scientists to conduct interviews and site visits of manufacturers across Europe and in the U.S.
And one of the questions that we asked was, as you're introducing these new technologies, is your ultimate goal here in introducing these technologies a lights-out factory?
We heard a variety of answers.
In some cases, the answer was yes.
We're moving towards a lights-out factory where there's no human involved at all.
But equally often, we heard that the goal was not a lights-out factory.
And just to quote from one of those interviews: a lights-out factory, why would we want that?
A factory without humans is a factory that's not innovating.
It's we as humans that drive the learning curve on an assembly line, that figure out how to do work better for us or more efficiently.
And once we automate, we cut off that unique capability that we as humans bring to do things better, to explore new ideas.
But it doesn't have to be that way.
Developing AI that is able to facilitate communication, that's able to leverage expertise from us, that's able to be quote-unquote programmed or driven by literally anybody in their job, not just PhDs in machine learning, can really be transformative and flip the switch from work that we don't want to be doing to work that we feel truly leverages us as full human beings.
Well, on that point of the unique things that humans can bring, that brings us to a close of what this unique set of humans is bringing for today.
In the final episode of Living with AI, we are going to look at what is known as the alignment problem.
We know what we want from AI, but AI doesn't necessarily have quite the same values as us.
And then it occurred to me, we have to build AI systems that know they don't know the true objective, even though it's what they must pursue.
Yes, so thank you to our two guests, Daniel Susskind from Oxford University and Julie Shah from MIT.
I'm Adam Rutherford.
And I'm Hannah Fry.
See you next time.