Future of Work: AI

25m
In the third and final part of our Future of work series, Kara and Scott chat with Susan Athey, who teaches The Economics of Technology at Stanford Graduate School of Business. They take a deep dive into AI, discussing how it will impact work as we know it, and whether all the doom and gloom is justified.
Follow Susan at @Susan_Athey.
Follow us on Instagram and Threads at @pivotpodcastofficial.
Follow us on TikTok at @pivotpodcast.
Send us your questions by calling us at 855-51-PIVOT, or at nymag.com/pivot.
Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript

Support for the show comes from Saks Fifth Avenue.

Saks Fifth Avenue makes it easy to shop for your personal style.

Follow us here, and you can invest in some new arrivals that you'll want to wear again and again, like a relaxed product blazer and Gucci loafers, which can take you from work to the weekend.

Shopping from Saks feels totally customized, from the in-store stylist to a visit to Saks.com, where they can show you things that fit your style and taste.

They'll even let you know when arrivals from your favorite designers are in, or when that Brunello Cucinelli sweater you've been eyeing is back in stock.

So, if you're like me and you need shopping to be personalized and easy, head to Saks Fifth Avenue for the best fall arrivals and style inspiration.

Hi, everyone.

This is Pivot from New York Magazine and the Vox Media Podcast Network.

I'm Kara Swisher.

And I'm Scott Galloway.

And this is our special three-part series on the future of work where we look at the business and technology trends that will shape the workforce, employment, and the very nature of work.

Today, we're going to do a deep dive on AI and how it will impact work as we know it.

Some numbers to get us started: 56% of U.S. workers report using generative AI to complete work tasks, according to a survey from The Conference Board.

22% of U.S. workers worry that technology will make their jobs obsolete, according to Gallup.

Only 26% of companies have established AI policies.

We're going to talk through all this with our guest, Susan Athey.

Susan is a professor of the economics of technology at Stanford Graduate School of Business.

She's also the chief economist of the antitrust division at the U.S. Department of Justice, but she's talking to us today about her work at Stanford, because there's a lot going on at the Justice Department.

Welcome, Susan.

Hi, it's great to be here.

All right, first of all, there's a lot of doom and gloom talk out there when it comes to how the workforce will transform due to AI.

I think it's the single biggest question I get.

I'm on a book tour right now.

People ask about it, and they're worried about it.

They don't have a lot of information, but there is some merit to it.

A recent survey found that 44% of companies expect some layoffs to occur in 2024 due to new AI capabilities.

As Scott always talks about, CEOs are always looking for efficiencies, and it makes sense if they can find them.

Where do you fall?

Are we too worried or not worried enough?

I happen to be in the middle.

I think Scott is probably in the middle too, but where do you fall?

Yeah, I think I'm also in the middle.

I think there's a lot of hot takes that are pretty extreme.

So, at one end, there's utopia, where our biggest problem is how to find meaning when we don't need to work anymore.

And, you know, drones are dropping things at our doorstep, and we're 3D printing our food. But at the other end, there's some kind of dystopia, and there are a lot of different versions of that dystopia, actually: robot wars or other things. But even if we imagine, sort of, peacetime, you may have a highly capital-intensive world, an economy where a few get rich and the rest become irrelevant. If you don't need workers, they lack political power, and if we don't do a good job with redistribution, there's mass unemployment, which then, of course, can lead to unrest.

So, you know, those are pretty extreme, although science fiction kind of gives you some ideas of what to imagine.

But my own view is much more in the middle.

For one thing, the utopia is a bit unrealistic because it leaves out the economics and politics of how everything gets done.

Like how do resources get allocated and who actually governs us?

But the dystopia doesn't seem imminent either because there's so many bottlenecks and constraints on the path to universal adoption of a new technology.

We still have to fax stuff.

And so there's just a lot of frictions on the way to being able to achieve mass adoption.

So I'm really more focused on short-term worries.

Like how do we help people make the transition as certain jobs are likely to be displaced and how to include more people in prosperity?

Nice to meet you, Susan.

So when you think about technology and its impact, or new technologies, whether it's automation or different agri-farming technologies, it generally follows the following curve.

There's some short-term job destruction, and then those efficiencies and that capital or those profits are redeployed, and you end up typically with more, with net job growth.

And I don't see why this technology would be any different.

What's different here?

Shouldn't this result in net job growth eventually?

So, I mean, it's certainly possible that you can end up in a more capital-intensive economy.

And so, you know, there's no necessary reason that it has to go one way or the other.

But in the end, you know, there's a lot of things where humans can be productive and they even can be productive in a world of robots.

So if you think about taking care of older people or taking care of younger people, you know, that's a nice example because it tends to sort of scale with the size of the population.

And we're going to have young people and we're going to have old people.

And there really can be a pretty low productivity ratio of humans looking after other humans.

So that just suggests to me that the marginal product of humans isn't going to go down.

And in fact, you know, we may find ways to augment humans so that they can be productive longer or be productive with less knowledge.

So for example, for elder care, more focused on the well-being side of things rather than just the physical care.

So I think I agree with you that we just can't necessarily imagine what the new equilibrium looks like, but it seems hard for me to imagine that there aren't things for humans to do.

And also, until the AI solves electricity and figures out how to make a lot more chips, we will still have constraints on the resources that are used on computing.

Leaning into Scott's question, what are, say, the positive features it would bring to the workforce that we're overlooking?

There's been a lot of AI professionals who are doing the doom scrolling.

So, one thing is that it can be stressful in a job if you're worried about making a mistake.

Also, certain physical aspects of a job can be challenging.

And a lot of jobs require a lot of training to avoid making mistakes.

One thing that AI can do is it can help you be good at a job with a little bit less upfront training.

And it can also avoid mistakes.

Yeah.

One reason that seniors want to stop doing their jobs is if they start having some memory problems or worry about overlooking things; that's very stressful for them.

So they don't want to do a bad job or hurt someone in their job.

Like they want to save work.

They might want to do child care, but then they're worried they might, you know, forget something or forget the kid in the car.

Like the child, which could be a good thing.

Exactly.

But the same with elder care.

That requires the same kind of attention to detail.

But in the end, if drugs can be dispensed automatically and if you have some kind of safety monitoring going on, then it can be possible to really reduce or eliminate some of those risks and then give people the chance to not just do something that's a fun hobby that keeps them busy, but something that really contributes to society and helps keep them engaged.

So an aide, you're talking about it like an aide.

Now, you run a lab at Stanford with the thesis that technology can benefit humans.

Tell us about that work and as it relates to AI and the workforce.

First of all, AI is a general purpose technology.

So what that means really broadly is that the same innovations can be used for lots of different purposes.

And it could be distributed around the world and shared at relatively low cost.

So the premise of our lab is that we will collaborate with social impact organizations.

The organizations have a relationship with some end consumers.

They could be patients in a hospital.

They could be people who need counseling.

They could be workers who need transitioning or students learning.

The organization has that relationship with the end consumers.

And then we try to take advantage of all the students we have at Stanford and all of the great technological capabilities that we have to build things for those social impact applications.

And the idea is, once you build them, they can be shared and spread.

So, more broadly, when you just think about it as someone who understands the capabilities of AI: people have outlined so many potential threats.

It becomes sentient, income inequality, job destruction.

There's just so many different threats that people have outlined.

Is there one threat, Susan, that you think is the most ominous that we should be focusing on from a regulatory standpoint and that academics should be modeling out?

What should we be doing to potentially prevent a tragedy that comes if there's a threat here?

So I think there's a number of individual tactical threats, and then there's some that are maybe a bit more systemic.

So starting with the jobs, we have never been good as a society at following through with the redistribution that can make everybody better off.

So international trade is a great example.

Econ textbooks tell us why that's good, why that can make everybody better off.

But often, if people are attached to a location, individual humans are made much worse off.

And there's a lot of evidence about the negative impacts of job displacement.

I've done some of this research myself.

And often the people in the worst locations and the worst industries, if they lose a factory, they can be very badly off even 10 years later, while the people who are more educated and mobile are able to move locations or find new jobs.

So what we see here is that, over the last 10 or 15 years, we haven't seen as much productivity benefit of all of the computing advances in the numbers, but we have seen a lot of firms lay the infrastructure.

So they are in the cloud now.

They're using software as a service.

And that makes it much faster for them to adopt a certain technology if it comes quickly.

So think about human customer service agents.

If there's software as a service and firms are already kind of plugged into that software as a service for their current humans, you could see many firms all at once kind of replacing their humans.

And so those people, especially if there are areas of the country where they put the call centers because labor was cheap, because there weren't a lot of other jobs, or in certain countries specializing in this, they could get hit hard all at once.

And that can be very disruptive.

And in the political environments we have, not just in the U.S. but around the world, it can be very difficult to do the redistribution that might help everybody share in the benefits, because somewhere else there may be lots of benefits, although they might be accruing to a smaller number of people if it's a capital-intensive replacement.

It might be the software engineers who are making the improvements.

So that's like an economy-wide thing.

And then there's a bunch of tactical issues as well.

I think one, you know, the disinformation, misinformation is a really big issue.

We need people to be invested in democracy in order to, you know, go through any transition.

We need people to think about hard problems.

And if we have hard problems with trade-offs, you know, and people are just kind of being polarized in the process, we won't be able to have the kinds of societal discussions we need.

And then, of course, there's all the security threats.

Those are also, you know, a bit scary.

And we've never been very good at investing to prevent problems.

We're good at reacting, but some of these problems may come so fast that we are kind of left flat-footed.

So we're going to go on a quick break.

And when we come back, we'll talk more about AI's impact on the workforce, including which industries will be most transformed in the next five years due to AI.

Commercial payments at Fifth Third Bank are experienced and reliable, but they're also constantly innovating.

It might seem contradictory to have decades of experience but also be on the cutting edge of the industry, but Fifth Third does just that.

They don't believe in being just one way for your business because your business has more than just one need.

Like needing your payments to be done on time, safely, and without any bumps today, but also needing to know you won't be hitting any bumps tomorrow.

That's why they handle over $17 trillion in payments smoothly and effectively every year, and were also named one of America's most innovative companies by Fortune magazine.

After all, that's what commercial payments are all about.

Steady, reliable expertise that keeps money flowing in and out like clockwork.

So Fifth Third does that.

But commercial payments are also about building new and disruptive solutions.

So Fifth Third does that too.

That's commercial payments, a Fifth Third better.

Scott, we're back with our special series on the future of work.

We're talking to Stanford professor Susan Athey.

So, we're going to talk about healthcare in a second, because I think it's probably the one that's going to be most transformed.

But, what industries do you think off the top will be most transformed?

So, one thing to look at, first of all, is just the industry of AI.

And we have a lot up in the air right now about how that's going to shake out.

So we do need to be aware of how concentrated that industry is going to be and whether there's going to be a good environment for startups to be able to create services.

And we even see it in my lab at Stanford.

We're building services that can be used for social impact, but that requires tools.

AI is a business.

AI is a business, but AI, of course, transforms everything around it.

We are seeing some of the earliest adopters being, say, software-as-a-service firms that serve a lot of customers.

And so the ones that get ahead in AI can have a higher market share.

And all of the things, all the infrastructure around it will be transformed as well.

Right.

So that's a business.

So healthcare is a hot topic, obviously, when it comes to AI disruption.

The market for AI in healthcare is projected to reach over $170 billion by 2029.

But 60% of Americans say they would be uncomfortable with a provider relying on AI.

Speaking of hot topics, you did a trial using digital counseling to help patients choose contraceptive methods.

Talk about the overall project, and then what you did there and how it went.

So in the developing country context, it can be very difficult to recruit enough nurses that have a lot of education and experience.

So this was a digital assistant for the nurses to guide patients through a counseling session.

It made sure that the patients were able to express their concerns about side effects as well as their desires.

And then it provided a ranking of options.

And we compared a method where the app provided a ranking versus one where the patients just led the discussion.

And we found that when the app provided a ranking that was responsive to what the patients wanted, the patients spent more time evaluating the options and had higher satisfaction and were more educated about their options.

But interestingly, the nurses also liked it and they felt that they learned from using the application because it's not very satisfying to counsel people when you're not sure you're giving them the very best information for them.

And it can be hard to hold in your head all the different combinations of side effects and concerns people have.

And so the application sort of helped them do a better job helping their patients.

And so I think that's a general trend.

And again, in the developing context, or in places where resources are tight, you can potentially get better information to people.

But crucially, there was still a human in the loop who could interpret all of it and could answer questions and help people feel comfortable with the information that they were getting.

And also the patient was more engaged as a result of this and participated more in their decision making.

And I think that's a much more likely short-term thing, because it's really hard to get technology to, you know, avoid the errors.

And if you're in a high-stakes environment like healthcare, errors can still be very costly.

So having an assistant to the provider to make better choices is something that seems imminently likely.

So somewhere there's going to be a lot of change.

And what about for teachers?

The stats are pretty staggering.

60% of educators use AI in their classrooms already.

You specifically experimented with teaching children to read using news feeds, no less.

Tell us about that.

Yeah, so we worked with an educational application that was sort of like a Netflix or a TikTok for stories for reading.

And when we first started working with them, they had humans curate the news feed to pick stories they thought would be interesting.

But we built a recommendation system based on the students' past behavior and found 50% increases in the number of stories the students read.

And then we also used gamification to try to get them excited about it at the beginning and show that the students continued reading afterwards.

And what I take from that is, like, look, the commercial sector has figured out how to get you hooked, you know, but a lot of that is detrimental.

It's doom scrolling.

It doesn't really make you happy.

But we can potentially use those same kinds of tactics for good, for education, and to help you develop positive habits.

So you touched on K through 12, Professor.

I think a bunch of us have been talking about and waiting for the impending disruption of higher ed.

And I've just been shocked how resilient and static it is.

I mean, you walk into a, I don't know, I think it's the same at GSB, but you walk into Stern and, I mean, you could be walking in in 2000 or 2023.

The classroom environment just hasn't changed that much.

The curriculum hasn't changed that much.

Do you see any sort of disruption coming?

Or is this all, I mean, it just feels like so far it's been this fortress where the walls hold onto the business model.

Do you think that AI is going to change higher ed?

So I had a colleague that actually fine-tuned an AI on all of his course materials.

And so it could answer questions about the course materials, but only, well, as much as possible, focused on the course materials.

And he said that it cut way down on the email questions during that semester of the course.

So I do think that there's a lot of ability to get sort of basic, repetitive questions answered in a more customized way.

Also, I led a study at GSB of how AI could be used, and we felt like we're generally in the early stages.

There's been a lot of experimentation in terms of, you know, how do you integrate it rather than outlaw it, while still getting people to learn the concepts.

One example of that is coding syntax.

So there's some people who are CS students and they need to learn to code, but we need MBA students, we need business people, to be able to think about how coding works, because there's going to be digitization in every single industry going forward.

But MBAs aren't super excited about learning lots of syntax and you waste all your time just teaching details.

And now the code reviews can help you with the syntax.

And so that can help people move much faster and get to the interesting stuff, the thinking part, and not spend as much time on the syntax part.

But there's a downside because, you know, they can also more easily just skip over it without thinking at all.

And so I think we're going to be really challenged as professors to really change the way we assess students so that we ensure that they still get the conceptual part, but don't get bogged down with the part that may be less important in the future.

Like missing commas and where the curly braces go is just not where it's at, you know, basically starting now.

What do you advise students in terms of trying to prepare for an AI future?

Outside of taking courses in AI and spending time with different LLMs, do you think it's going to impact the way, I don't know, or the skills we emphasize in terms of preparing for a more AI-enabled future?

I think logic is going to be very important.

The AI is good when you break down a task into a part that a robot would be especially good at.

It's often like a repetitive task, and it often involves something where you can measure success easily.

And measuring success is hard.

And so it's often going to be the case that we can put AI on something where we can measure success.

Thinking about how to measure success and thinking beyond short-term clicking type measures about what success really looks like requires a lot of logical thinking.

It also requires thinking about what sometimes people call second-order effects or equilibrium effects.

So, you know, for example, suppose I helped people get jobs by having them make portfolios.

Like if everyone made those same portfolios, maybe they wouldn't be so effective in getting jobs because part of what was signaling that you were a good worker was that you figured out to make the portfolio.

But if everybody does it, it no longer has the signaling value.

That's kind of equilibrium thinking that is required to anticipate what happens when you put things in.

And also in just being creative in terms of how do you measure success if like clicks and eyeball scrolls and stuff is not enough to understand success.

So that kind of thinking, you know, it becomes more important, not less important.

That's a complement to AI; it's not substituted by AI.

All right.

So in summary, should the average worker be worried about AI taking their job?

I mean, people must ask you this all the time, right?

You just say it depends, or what do you say to them?

Beyond the students, beyond people who are currently like, I'm getting replaced.

I mean, I think if your job is to, you know, create images and sell them, or, you know, write ad copy, or send repetitive emails to your customers and hand-write them, you know, that doesn't seem like that's going to last very long.

Now, what takes its place may be managing systems that do those things, or measuring systems that do those things. But there may be fewer of those jobs. The people who are in those jobs may be more productive. But then, as I mentioned earlier, jobs may open up that previously had big barriers to entry, big training requirements, while those jobs might become, you know, more possible for people to transition into. But as I mentioned before, the big concern is that we are terrible at transitions. We are terrible at helping people through transitions, especially at the lower end of the income distribution.

Yes, we are.

So what is possible is not the same as what we're going to actually choose to do.

And it could in fact affect people on the higher end, lawyers.

Absolutely.

I mean, you see this also already, just research, document searches.

But, you know, we used to have lots of paralegals, you know, go through stacks of documents.

Bates stamping.

Yeah.

But now paralegals do keyword searches.

That's going to get changed.

But actually, we still use paralegals.

It's interesting, though, we may use people in different ways.

And some new lawyers are not getting the experience of reading the documents, of just reading documents by hand.

You can get by, by only doing keyword searching and never just picking up the documents one by one and sort of seeing where your creativity takes you.

Yep, that's a really good point.

You still have to read it.

Oh, not really, as long as you can get a summary of it by AI.

Anyway, Susan, thank you so much.

We really appreciate it.

Okay, Scott, that's it for the final part of our three-part series on the future of work.

Read us out.

Today's show was produced by Lara Naaman, Zoe Marcus, and Taylor Griffin.

Ernie Indradat engineered this episode.

Thanks also to Drew Burroughs and Mil Severio.

Nishat Kurwa is Vox Media's executive producer of audio.

Make sure you subscribe to the show wherever you listen to podcasts.

Thanks for listening to Pivot from New York Magazine and Vox Media.

You can subscribe to the magazine at nymag.com/pod.

We'll be back next week for another breakdown of all things tech and business.

This month on Explain It to Me, we're talking about all things wellness.

We spend nearly $2 trillion on things that are supposed to make us well.

Collagen smoothies and cold plunges, Pilates classes, and fitness trackers.

But what does it actually mean to be well?

Why do we want that so badly?

And is all this money really making us healthier and happier?

That's this month on Explain It To Me, presented by Pure Leaf.