Decoder: Barack Obama on AI, Free Speech, and the Future of the Internet
Kara and Scott will return next week!
Support for the show comes from Saks Fifth Avenue.
Saks Fifth Avenue makes it easy to shop for your personal style.
Fall is here, and you can invest in some new arrivals that you'll want to wear again and again, like a relaxed blazer and Gucci loafers, which can take you from work to the weekend.
Shopping from Saks feels totally customized, from the in-store stylist to a visit to Saks.com, where they can show you things that fit your style and taste.
They'll even let you know when arrivals from your favorite designers are in, or when that Brunello Cucinelli sweater you've been eyeing is back in stock.
So, if you're like me and you need shopping to be personalized and easy, head to Saks Fifth Avenue for the best fall arrivals and style inspiration.
So your AI agents, they make the team that uses them more productive, right?
But if they aren't connected to other agents or your data or your existing workflows, how productive can they really make your teams?
Any business can add AI agents.
IBM connects your agents across your company to change how you do business.
Let's create smarter business. IBM.
Hi, everyone.
I'm Kara Swisher.
And I'm Scott Galloway.
Today, we're bringing you a special episode of Decoder, Nilay Patel's recent conversation with former President Barack Obama.
The two talked about AI and the role that free speech and government regulation play, topics we've been talking about a lot on Pivot lately.
And while this is a serious conversation, there's a lighter moment when President Obama is asked about his iPhone home screen and which four apps he has in that dock at the bottom.
Scott, I'm curious.
What are your four apps?
Anything interesting?
Well, hold on.
YouPorn and Caviar, but let me get this.
You get the head of diversity and inclusion from the Forestry Service, and Nilay gets Obama.
I know.
I had Obama.
I interviewed Obama.
My favorite apps.
I hate to admit it, but I love this new app called Wheely.
And I use Caviar when I'm in New York.
Wheely, it's kind of a high-end Uber.
It's like douche uber, duber.
Duber.
Okay.
Yeah.
What else do I use?
I guess that's about it.
I don't use...
Oh, Spotify.
I love Spotify.
I have mail, messages, phone, and settings on my four apps at the bottom.
That's what I use.
Oh, you mean at the bottom, where the dock actually is.
Okay.
That's different.
But I like that Wheely.
And I use those.
Those are good ones.
Those are nice ones too.
And my stock market app.
I don't even know what that's called.
Does it even have a name?
It's Stocks.
Stocks.
Stocks.
Yeah, I don't use Twitter. It's back in some folder.
Now I have news apps.
I have my news apps and threads actually near the top too.
Anyway, and weather, obviously.
CNN.
I'm sorry, CNN.
CNN.
Okay, good.
Well, good, good, good, good, good.
Anyway, enjoy the episode, everyone, and happy Thanksgiving.
Are you doing Thanksgiving in London, Scott?
No, I'm actually doing it in Florida.
I'm really excited.
I'm going to go see friends.
It's going to be great.
Some weather, some warm weather.
Sunny.
Anyway, while Scott's sunning himself on the Florida beaches, we'll be back next week with more Pivot.
Hello, and welcome to Decoder.
I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems.
We've got a good one today.
I'm talking to former President Barack Obama about AI, social networks, and how to think about democracy as both of those things collide.
I sat down with Obama last week at his offices in Washington, D.C., just hours after President Biden signed a sweeping executive order about AI.
That order covers quite a bit, from labeling AI-generated content to coming up with safety protocols for the companies working on the most advanced AI models.
You'll hear Obama say he's been talking to the Biden administration and leaders across the tech industry about AI and how best to regulate it.
And he has particularly relevant experience here, since President Obama is one of the most deepfaked people in the entire world.
You'll also hear him say that he joined our show because he wanted to reach you, the decoder audience, and get you all thinking about these problems.
One of Obama's worries is that the government needs insight and expertise to properly regulate AI.
And you'll hear him make a pitch for why people with that expertise should take a tour of duty in the government to make sure we get these things right.
We're going to get right into it, but some notes before we start.
My idea here was to talk to Obama, the constitutional law professor, more than Obama, the politician, so this one got wonky fast.
You'll hear him mention Nazis in Skokie.
That's a reference to a famous Supreme Court case from the 70s where the ACLU argued that the town of Skokie, Illinois banning a Nazi group from marching was a violation of the First Amendment.
You'll hear me get excited about a case called Red Lion v. FCC, a 1969 Supreme Court decision that said the government could impose something called the fairness doctrine on radio and television broadcasters, because the public owns the airwaves and can thus set requirements on how they're used.
There's no similar framework for cable TV or the internet, which don't use public airwaves.
And that makes them much harder, if not impossible, to regulate.
Obama says he disagrees with the idea that social networks are something called common carriers that have to distribute all information equally.
That's an idea floated most notably by Justice Clarence Thomas in a 2021 concurrence, and which forms the basis of laws regulating social media in Texas and Florida.
Those laws are currently headed to the Supreme Court for review.
Lastly, Obama says he talked to a tech executive who told him the best comparison to AI's impact on the world would be electricity.
And you'll hear me say that I have to guess who it is.
So here's my guess.
It's Google CEO Sundar Pichai, who has been saying AI is more profound than electricity or fire since 2018.
But that's my guess.
You all take a listen.
Let me know what you think.
Oh, and one more thing.
I definitely asked Obama what apps were on his home screen.
I mean, come on.
You would have done the same thing.
Okay, President Barack Obama.
Here we go.
President Barack Obama, you're the 44th President of the United States.
We're here at the Obama Foundation.
Welcome to Decoder.
It is great to be here.
Thank you for having me.
I am really excited to talk to you.
There's a lot to talk about.
We are here on the occasion of President Biden signing an executive order about AI.
I would describe this order as sweeping.
I think it's over 100 pages long.
There's a lot of ideas in it.
Everything from regulating biosynthesis with AI.
There's some safety regulations in there.
It mandates something called red teaming, transparency, watermarking.
These feel like new challenges, like very new challenges for the government's relationship with technology.
I want to start with a decoder question.
What is your framework for thinking about these challenges and how you evaluate them?
This is something that I've been interested in for a while.
So back in 2015, 2016, as we were watching the landscape transformed by social media and the information revolution impacting every aspect of our lives, I started getting into conversations about artificial intelligence and this next phase, this next wave that might be coming.
And I think one of the lessons that we got from the transformation of our media landscape was that incredible innovation, incredible promise, incredible good can come out of it, but there are a bunch of unintended consequences, and we have to be maybe a little more intentional about how our democracies interact with what is primarily being generated out of the private sector. And what rules of the road are we setting up? And how can we make sure that we maximize the good and maybe minimize some of the bad?
So I commissioned my science guy, John Holdren, along with John Podesta, who had been a former chief of staff and worked on climate change issues, to pull together some experts to figure this out. And we issued a big report in my last year.
The interesting thing even then was people felt this was an enormously promising technology, but that we might be overhyping how quickly it was going to come. And as we've seen just in the last year or two, even those who are developing these large language models, who are in the weeds with these programs, are starting to realize this thing is moving faster and is potentially even more powerful than we originally imagined.
So my framework, in conversations with government officials, the private sector, academics, the framework I emerged with is that this is going to be a transformative technology. It's already, in all kinds of small ways but very broadly, changing the shape of our economy in some ways.
Even our search engines, basic stuff that we take for granted is already operating under some AI principles, but this is going to be turbocharged.
It's going to impact how we make stuff, how we deliver services, how we get information, and the potential for us to have enormous medical breakthroughs, the potential for us to be able to provide individualized tutoring for kids in remote areas, the potential for us to solve some of our energy challenges and deal with greenhouse gases. This could unlock amazing innovation, but it can also do some harm.
We can end up with powerful AI models in the hands of somebody in a basement who develops a new smallpox variant, or non-state actors who suddenly, because of a powerful AI tool, can hack into critical infrastructure. Or, maybe less dramatically, AI infiltrating the lives of our children in ways that we didn't intend, the way social media has in some cases.
So what that means, then, is that I think the government, as an expression of our democracy, needs to be aware of what's going on. Those who are developing these frontier systems need to be transparent. I don't believe that we should try to put the genie back in the bottle and be anti-tech, because of all the enormous potential. But I think we should put some guardrails around some risks that we can anticipate, and have enough flexibility that it doesn't destroy innovation, but also is guiding and steering this technology in a way that maximizes not just individual company profits, but also the public good.
So let me make the comparison for you.
I would say the problem in tech regulation for the past 15 years has been social media.
How do we regulate social media?
How do we get more good stuff, less bad stuff, make sure the really bad stuff is illegal?
You came to the presidency on the back of social media.
I was the first digital president.
You had a Blackberry.
I remember people were very excited about your Blackberry.
I wrote a story about your iPad.
That was... transformative. The idea was that young people are going to take to the political environment, they're going to use these tools, we're going to change America.
You can make an argument I wouldn't have been elected had it not been for social networks.
Now we're on the other side of that.
There's another guy who got elected on the back of social networks.
There was another movement in America that has been very negative on the back of that election.
We have basically failed to regulate social networks, I would say.
There's no comprehensive privacy bill, even.
There was already a framework for regulating media in this country.
We could apply a lot of what we knew about how to have good media to social networks.
There are some First Amendment questions in there, what have you, important ones, but there was an existing framework.
With AI, it's we're going to tell computers to do stuff and they're going to go do it.
Right.
We hope.
We have no framework for that.
We hope they do what we think we're telling them to do. We also have, you know, we ask computers a question and they might just confidently lie to us, or help us lie at scale. There is no framework for that.
What do you think you can pull from the sort of failure to regulate social media into this new environment such that we get it right this time? Or is this going to do anything at all?
Well, this is part of the reason why I think what the Biden administration did today in putting out the EO, the work they've done, is so important. Not because it's the end point, but because it's really the beginning of building out a framework. When you mentioned how this executive order has a bunch of different stuff in it, what that reflects is we don't know all the problems that are going to arise out of this. We don't know all the promising potential of AI, but we're starting to put together sort of the foundations for what we hope will be a smart framework for dealing with it.
In some cases, what AI is going to do is to accelerate advances in, let's say, medicine. You know, we've already seen, for example, with things like protein folding, the breakthroughs that can take place that would not have happened had it not been for some of these AI tools. And we want to make sure that that's done safely. We want to make sure that it's done responsibly.
And it may be that we already have some laws in place that can manage that.
There may be some novel developments in AI where an existing agency and existing law just doesn't work.
If we're dealing with the alignment problem, and we want to make sure that some of these large language models, where even the developers aren't entirely confident about what these models are doing, what the computer is thinking or doing, well, in that case, we're going to have to figure out what are the red teaming, what are the testing regimens.
And in talking to the companies themselves, they will acknowledge that their safety protocols and their testing regimens, et cetera, may not be where they need to be.
And I think it's entirely appropriate then for us to plant a flag and say: all right, frontier companies, you need to disclose what your safety protocols are to make sure that we don't have rogue programs going off and hacking into our financial system, for example.
Tell us what tests you're using.
Make sure that we have some independent verification that right now this stuff is working.
But that framework can't be a fixed framework because these models are developing so quickly that oversight and any regulatory framework is going to have to be flexible and it's going to have to be nimble.
And by the way, it's also going to require some really smart people who understand how these programs and these models are working, not just in the companies themselves, but also in the nonprofit sector and in government. Which is why I was glad to see that the Biden administration, as part of the executive order, is specifically calling on a bunch of hotshot young people who are interested in AI to do a stint outside of the companies themselves and go work for government for a while, go work with some of the research institutes that are popping up at places like the Harvard lab or the Stanford AI center and some other nonprofits. Because we're going to need to make sure that everybody can have confidence that whatever journey we're on here with AI, it's not just being driven by a few people without any kind of interaction or voice from ordinary folks, regular people who are going to be using these products and impacted by these products.
We have to take a quick break.
When we're back, President Obama and I talk about how regulation should shape the future of AI.
As a founder, you're moving fast towards product market fit, your next round, or your first big enterprise deal.
But with AI accelerating how quickly startups build and ship, security expectations are also coming in faster, and those expectations are higher than ever.
Getting security and compliance right can unlock growth or stall it if you wait too long.
Vanta is a trust management platform that helps businesses automate security and compliance across more than 35 frameworks like SOC 2, ISO 27001, HIPAA, and more.
With deep integrations and automated workflows built for fast-moving teams, Vanta gets you audit ready fast and keeps you secure with continuous monitoring as your models, infrastructure, and customers evolve.
That's why fast-growing startups like LangChain, Writer, and Cursor have all trusted Vanta to build a scalable compliance foundation from the start.
Go to vanta.com slash vox to save $1,000 today through the Vanta for Startups program and join over 10,000 ambitious companies already scaling with Vanta.
That's vanta.com slash vox to save $1,000 for a limited time.
Support for Pivot comes from LinkedIn.
From talking about sports to discussing the latest movies, everyone is looking for a real connection to the people around them.
But it's not just person to person, it's the same connection that's needed in business.
And it can be the hardest part about B2B marketing, finding the right people, making the right connections.
But instead of spending hours and hours scavenging social media feeds, you can just tap LinkedIn ads to reach the right professionals.
According to LinkedIn, they have grown to a network of over 1 billion professionals, making it stand apart from other ad buys.
You can target your buyers by job title, industry, company role, seniority, skills, and company revenue, giving you all the professionals you need to reach in one place.
So you can stop wasting budget on the wrong audience and start targeting the right professionals only on LinkedIn ads.
LinkedIn will even give you $100 credit on your next campaign so you can try it for yourself.
Just go to linkedin.com slash pivot pod.
That's linkedin.com slash pivot pod.
Terms and conditions apply.
Only on LinkedIn ads.
We're back with President Barack Obama talking about the importance of AI regulation.
There's ordinary folks, and there's the people who are building it who need to go help write regulations.
And there's a split there.
The conventional wisdom in the valley for years is the government is too slow.
It doesn't understand technology.
And by the time it actually writes a functional rule, the technology it was aiming to regulate will be obsolete.
This is markedly different, right?
The AI doomers are the ones asking for regulation the most.
The big companies have asked for regulation.
Sam Altman has toured the capitals of the world, politely asking to be regulated.
Why do you think there's such a fervor for that regulation?
Is it just incumbents wanting to cement their position?
Well, look, you're raising an important point, which is, and rightly, there's some suspicion, I think, among some people that, yeah, these companies want regulation because they want to lock out competition. And as you know, historically, sort of a central principle of tech culture has been open source. We want everything out there. Everybody's able to play with models and applications and create new products. And that's how innovation happens. Here, regulation starts looking like, well, maybe we start having closed systems, and the big frontier companies, the Microsofts, the Googles, the OpenAIs, the Anthropics, they're going to somehow lock us out.
But in my conversations with the tech leaders on this, I think there is, for the first time, some genuine humility, because they are seeing the power that these models may have. I talked to one executive, and look, there's no shortage of hyperbole in the tech world, right? But this is a pretty sober guy, like an adult who's seen a bunch of these cycles and been through boom and bust. And I asked him, I said, well, when you say this technology you think is going to be transformative, give me sort of some analogy. He said, you know, I sat with my team and we talked about it. And after going around and around, what we decided was maybe the best analogy was electricity. And I thought, well, yeah, electricity, that was a pretty big deal.
Yeah.
And if that's the case, I think what they recognize is it's in their own commercial self-interest that there's not some big screw-up on this, if, in fact, it is as transformative as they expect it to be. Having some rules, some protections that create a competitive field, allow everybody to participate, come up with new products, compete on price, compete on functionality, but that none of us are taking such big risks that the whole thing blows up in our faces.
Yeah.
I do think that there is sincere concern that if we just have an unfettered race to the bottom, this could end up choking off the goose that might be laying a bunch of golden eggs.
There is the view in the valley, though, that any constraint on technology is bad.
Yeah, and I just disagree.
Any caution, any principle where you might slow down, is the enemy of progress, and the net good is better if we just race ahead as fast as possible.
In fairness, that's not just in the valley.
That's in every business I know.
It's not like Wall Street loves regulation.
It's not as if manufacturers are really keen for government to micromanage how they produce goods.
But one of the things that we've learned through the industrial age and the information age over the last century is that you can over-regulate. You can over-bureaucratize things. But if you have smart regulations that set some basic goals and standards, making sure you're not creating products that are unsafe to consumers, making sure that if you're selling food, people who go in the grocery store can trust that they're not going to die from salmonella or E. coli, making sure that if somebody buys a car, the brakes work, making sure that if I take my electric whatever and I plug it into a socket anywhere, any place in the country, it's not going to shock me and blow up in my face. It turns out all those various rules and standards actually create marketplaces and are good for business. And innovation then develops around those rules.
So it's not an argument I buy. I think part of what happens in the tech community is the sense that we're smarter than everybody else, and these people slowing us down are impeding rapid progress.
And I, you know, when you look at the history of innovation, it turns out that having some smart guideposts around which innovation takes place not only doesn't slow things down, in some cases, it actually raises standards and accelerates progress.
There were a bunch of folks who said, look, you're going to kill the automobile if you put airbags in there.
Well, it turns out actually people figured out, you know what, we can actually put airbags in there and make them safer.
And over time, the costs go down and everybody is better off.
There's a really difficult part in this EO about provenance.
Yeah.
Watermarking content, making sure people can see it's AI generated.
You are among the most deepfaked people in the world.
Well, because what I realized when I left office was I'd probably been filmed and recorded more than any human in history, just because I happened to be the first president when the smartphone came out.
I'm assuming you have some very deep personal feelings about being deepfaked in this way.
There's a big First Amendment issue here, right?
I can use Photoshop one way, and the government doesn't say I have to put a label on it.
I use it a slightly different way.
The government's going to show up and tell Adobe, you've got to put a label on this.
How do you square that circle?
Well, it's very challenging.
I think this is going to be an iterative process.
I don't think you're going to be able to create a blanket rule.
But the truth is, that's been how our governance of information, media, speech, that's how it's developed for a couple hundred years now.
With each new technology, we have to adapt and figure out some new rules of the road.
So let's take my example: a deepfake of me that is used for political satire, or just because somebody doesn't like me and they want to deepfake me. I was the president of the United States, and there are some pretty formidable rules that have been set up to protect people making fun of public figures.
I'm a public figure.
And what you are doing to me as a public figure is different than what you do to a 13-year-old girl, a freshman in high school.
So we're going to treat that differently.
And that's okay.
We should have different rules for public figures than we do for private citizens.
We should have different rules for what is clearly sort of political commentary and satire versus cyberbullying or...
Where do you think those rules land?
Do they land on individuals?
Do they land on the people making the tools like Adobe or Google?
Do they land on the distribution networks like Facebook?
Yeah, my suspicion is how responsibility is allocated is something we're going to have to sort out. But I think the key thing to understand is, and look, I taught constitutional law, I'm close to a First Amendment absolutist in the sense that I generally don't believe that even offensive speech, mean speech, et cetera, should be regulated, certainly not by the government. And I'm even game to argue that on social media platforms, et cetera, the default position should be free speech rather than censorship.
But keep in mind, we've never had completely free speech, right?
We have laws against child pornography.
We have laws against human trafficking.
We have laws against certain kinds of speech that we deem to be really harmful to the public health and welfare.
The courts, when they evaluate that, come up with a whole bunch of time, place, and manner restrictions that may be acceptable in some cases and aren't acceptable in others. You get a bunch of case law that develops. There's arguments about it in the public square.
We may disagree.
Should Nazis be able to protest in Skokie? Well, you know, that's a tough one, but we can figure this out.
And that, I think, is how this is going to develop.
I do believe that the platforms themselves are more than just common carriers like the phone company. They're not passive. There's always some content moderation taking place. And so once that line has been crossed, it's perfectly reasonable for the broader society to say, well, we don't want to just leave that entirely to a private company. I think we need to at least know how you're making those decisions, what things you might be amplifying through your algorithm and what things you aren't. And it may be that what you're doing isn't illegal, but we should at least be able to know how some of these decisions are made.
I think it's going to be that kind of process that takes place.
What I don't agree with is the large tech platforms suggesting somehow that we want to be treated entirely as a common carrier, like we're just a neutral conduit for every view, right?
Yeah.
But on the other hand, we know you're selling advertising based on the idea that you're making a bunch of decisions about your product.
Well, this is very challenging, right? If you say they're a common carrier, then you are in fact regulating them. You're saying they can't make any decisions. If you say they are exercising editorial control, they are protected by the First Amendment, and then regulations get very, very difficult.
It feels like, even with AI, when we talk about content generation with AI or with social networks, we run right into the First Amendment over and over again. And most of our approaches, and this is what I worry about, is we try to get around it so we can make some speech regulations without saying we're going to make some speech regulations. Copyright law is the most effective speech regulation on the internet, because everyone will agree: okay, Disney owns that, bring it down.
Well, because there's property involved in it. There's money involved.
There's money. Maybe less property than money, but there's definitely money.
IP, and hence money.
Yeah.
Well, look, here's my general view.
Yeah. But do you worry that we're making fake speech regulations without actually talking about the balance of equities that you're describing?
I think that we need to have, and AI, I think, is going to force this, a much more robust public conversation around these rules, and agree to some broad principles to guide us.
And the problem is right now, let's face it, it's gotten so caught up in partisanship, partly because of the last election, partly because of COVID and vax and anti-vax proponents, that we've lost sight of our ability to just come up with some principles that don't advantage one party or another or one position or another, but do reflect our broad adherence to democracy.
But the point, I guess, I'm emphasizing here is this is not the first time we've had to do this.
We had to do this when radio emerged.
We had to do this when television emerged.
It was easier to do back then, in part because you had three or five companies, and, you know, the public through the government technically owned the airwaves. And so you could make these decisions.
No, no, this is a square on my bingo card.
If I could get to the Red Lion case with you, I've won.
Right.
There was a framework here that said the government owns the airwaves.
It's going to allocate them to people in some way, and we can make some decisions, and that is an effective and appropriate thing.
That was the hook.
Can you bring that to the internet?
I think you have to find a different kind of hook.
Sure.
But ultimately, the idea that the public and the government own the airwaves was really just another way of saying: this affects everybody, and so we should all have a say in how this operates. We believe in capitalism, and we don't mind you making a bunch of money through the innovation and the products that you're creating and the content that you're putting out there, but we want to have some say in what our kids are watching or how things are being advertised, et cetera.
If you were the president now... I was with my family last night, and the idea came up that the Chinese TikTok teaches kids to be scientists and doctors, while in our TikTok the algorithm is different, and that we should have a regulation like China has. All the parents around the table said, yeah, we're super into that. We should do that. How would you write a rule like that? Is it even possible with our First Amendment?
Well, look, for a long time, let's say under television, there were requirements around children's television. It kept on getting watered down to the point where anything qualified as children's television, right? We had a fairness doctrine that made sure that there was some balance in terms of how views were presented. And I'm not arguing good or bad on either of those things. I'm simply making the point that we've done it before. And there was no sense that somehow that was anti-democratic, or that it was squashing innovation. It was just an understanding that we live in a democracy, so we set up rules so that the democracy works better rather than worse. And everybody has some say in it.
The idea behind the First Amendment is we're going to have a marketplace of ideas where these ideas battle it out. And ultimately, we can all judge better ideas versus worse ideas.
And I deeply believe in that core principle.
We are going to have to adapt to the fact that now there is so much content, there are so few regulators, and everybody can throw any idea out there, even if it's sexist, racist, violent, et cetera. And that makes it a little bit harder than it was when we only had three TV stations or a handful of radio stations or what have you.
But the principle still applies, which is: how do we create a deliberative process where the average citizen can hear a bunch of different viewpoints and then say, you know what, here's what I agree with, here's what I don't agree with. And hopefully, through that process, we get better outcomes.
We need to take another break.
When we return, we'll talk to President Obama about what happens when AI and social media collide.
Support for this show comes from Robinhood.
Wouldn't it be great to manage your portfolio on one platform?
With Robinhood, not only can you trade individual stocks and ETFs, you can also seamlessly buy and sell crypto at low costs.
Trade all in one place.
Get started now on Robinhood.
Trading crypto involves significant risk.
Crypto trading is offered through an account with Robinhood Crypto LLC.
Robinhood Crypto is licensed to engage in virtual currency business activity by the New York State Department of Financial Services.
Crypto held through Robinhood Crypto is not FDIC insured or SIPC protected.
Investing involves risk, including loss of principal.
Securities trading is offered through an account with Robinhood Financial LLC, member SIPC, a registered broker-dealer.
Support for the show comes from Saks Fifth Avenue.
Saks Fifth Avenue makes it easy to shop for your personal style.
Fall is here, and you can invest in some new arrivals that you'll want to wear again and again, like a relaxed blazer and Gucci loafers, which can take you from work to the weekend.
Shopping from Saks feels totally customized, from the in-store stylist to a visit to Saks.com, where they can show you things that fit your style and taste.
They'll even let you know when arrivals from your favorite designers are in, or when that Brunello Cucinelli sweater you've been eyeing is back in stock.
So, if you're like me and you need shopping to be personalized and easy, head to Saks Fifth Avenue for the best fall arrivals and style inspiration.
We're back with President Barack Obama, and we're ready to dive into what generative AI means for free speech and the internet.
Let me crash the two themes of our conversation together, AI and the social platforms.
Meta just had earnings.
Mark Zuckerberg was on the earnings call, and he said, for our feed apps, Instagram, Facebook, Threads, for the feed apps, I think that over time, more of the content that people consume is either going to be generated or edited by AI.
So he envisions a world in which social networks are showing people perhaps exactly what they want to see, inside of their preferences, much like advertising that keeps them engaged. Should we regulate that away? Should we tell them to stop? Should we embrace this as a way to show people more content that they're willing to see, that might expand their worldview?
This is something I've been wrestling with for a while.
I gave a speech about misinformation and our information silos at Stanford last year. I am concerned about business models that just feed people exactly what they already believe and agree with, all designed to sell them stuff. Do I think that's great for democracy? No. Do I think that that's something that the government itself can regulate? I'm skeptical that you can come up with perfect regulations there.
What I actually think probably needs to happen, though, is that we need to think about different platforms and different models, different business models, so that it may be that I'm perfectly happy to have AI mediate how I buy jeans online, right? That could be very efficient. I'm perfectly happy with it. If it's a shopping app or a thread, fine. When we're talking about political discourse, when we're talking about culture, et cetera, can we create other places for people to go that broaden their perspective, make them curious about how other people are seeing the world, where they actually learn something as opposed to just reinforcing their existing biases?
But I don't think that's something that government is going to be able to sort of legislate. I think that's something that consumers, interacting with companies, are going to have to discover, and find alternatives.
The interesting thing is, look, I'm obviously not 12 years old. I didn't grow up with my thumbs on these screens. So I'm an old-ass, you know, 62-year-old guy who sometimes can't really work all the apps on my phone. But I do have two daughters who are in their 20s. And it's interesting the degree to which, at a certain point, they have found almost every app, social media app, thread, getting kind of boring after a while. It gets old, precisely because all it's doing is telling them what they already know, or what the program thinks they want to know or want to see. So you're not surprised anymore. You're not discovering anything anymore. You're not learning anymore.
So I think there's a promise to how we can, well, there's a market. Let's put it that way. I think there's a market for products that don't just do that.
It's the same reason why, you know, people have asked me, around AI, are there still going to be artists around, and singers, and actors, or is it all going to be computer-generated stuff? And my answer is: for elevator music, AI is going to work fine.
A bunch of elevator musicians just freaked out, dude.
For the average, even a legal brief, or let's say a research memo in a law firm, AI can probably do as good a job as a second-year law associate.
Certainly as good a job as I ever did.
Exactly.
But, you know, Bob Dylan or Steve... that's one thing. That is different.
And the reason is because part of the human experience, part of the human genius is it's almost a mutation.
It's not predictable.
It's messy.
It's new.
It's different.
It's rough.
It's weird.
That is the stuff that ultimately taps into something deeper in us.
And I think there is going to be a market for that.
So, in addition to being the former president, you are a best-selling author. You have a production company with your wife. You're in the IP business, which is why you think it's property.
It's good.
I appreciate that.
The thing that will stop AI in its tracks in this moment is copyright lawsuits, right? You ask a generative AI model to spit out a Barack Obama speech, and it will do it to some level of passability, probably C-plus. That's my estimation.
It'd be one of my worst speeches.
It might sound like you. But you fire a cannon of C-plus content at any business model on the internet, and you upend it.
But there are a lot of authors, musicians, artists now suing the companies, saying it's not fair use to train on our data, to just ingest all of it.
Where do you stand on that?
As an author, do you think it's appropriate for them to ingest this much content?
Set me aside for a second, because, you know, Michelle and I, we've already sold a lot of books and we're doing fine, so I'm not overly stressed about it personally.
But this is what I think President Biden's executive order speaks to, although there's a lot more work that has to be done on this, and copyright is just one element of it. If AI turns out to be as pervasive and as powerful as its proponents expect, and I have to say, the more I look into it, the more I think it is going to be that disruptive, we are going to have to think about not just intellectual property. We're going to have to think about jobs and the economy differently. And not all these problems are going to be solved inside of industry.
So what do I mean by that?
I think with respect to copyright law, you will see people with legitimate claims financing lawsuits and litigation, and through the courts and various other regulatory mechanisms, people who are creating content are going to figure out ways to get paid and to protect the stuff they create. It may impede the development of large language models for a while, but over the long term, I don't think it'll be more than a speed bump.
The broader question is going to be: what happens when 10% of existing jobs now definitively can be done better by some large language model or other variant of AI? Are we going to have to re-examine how we educate our kids and what jobs are going to be available?
And the truth of the matter is that during my presidency, there was, I think, a little bit of naivete, where people would say, you know, the answer to lifting people out of poverty and making sure they have high enough wages is we're going to retrain them and we're going to educate them, and they should all become coders, because that's the future. Well, if AI is coding better than all but the very best coders, if ChatGPT can generate a research memo better than the third- or fourth-year associate, maybe not the partner who's got a particular expertise or judgment, now what are you telling young people coming up?
I think we're going to have to start having conversations about how do we pay those jobs that can't be done by AI. How do we pay those better? Healthcare, nursing, teaching, childcare, art, things that are really important to our lives but maybe commercially, historically, have not paid as well.
Are we going to have to think about the length of the work week and how we share jobs? Are we going to have to think about the fact that more people choose to operate like independent contractors, but where are they getting their health care from, and where are they getting their retirement from, right? Those are the kinds of conversations that I think we're going to have to start having to deal with.
And that's why I'm glad that President Biden's EO begins that conversation. Again, I can't emphasize this enough, because I think you'll see some people saying, well, we still don't have tough regulations, where's the teeth in this, we're not forcing these big companies to do XYZ as quickly as we should. This administration understands, and I've certainly emphasized in conversations with them, that this is just the start. And this is going to unfold over the next two, three, four, five years.
And by the way, it's going to be unfolding internationally. There's going to be a conference this week in England around international safety standards on AI. Vice President Harris is going to be attending. I think that's a good thing, because part of the challenge here is we're going to have to have some cross-border frameworks and regulations and standards and norms. That's part of what makes this different and harder to manage than the advent of radio and television, because the internet by definition is a worldwide phenomenon.
I got to ask, have you used these tools?
Have you had the aha moment where the computer's talking to you?
Have you generated a picture of yourself?
I have used some of these tools during the course of these conversations and this research. And it's...
Has Bing flirted with you yet? It flirts with everybody, I think.
Bing didn't flirt with me. But, you know, the way they're designed, and I've actually raised this with some of the designers, in some cases they're designed to anthropomorphize, to make it feel like you are talking to a human, right? It's like, can we pass the Turing test? That's a specific objective, because it makes it seem more magical.
And in some cases, it improves function, but in some cases, it just makes it cooler.
And so there's a little pizzazz there, and people are interested in it.
I have to tell you that, generally speaking, though, the way I think about AI is as a tool, not a buddy. And I think part of what we're going to need to do as these models get more powerful, and this is where I do think government can help, is also just educating the public on what these models can do and what they can't do. These are really powerful extensions of yourself, and tools, but also reflections of yourself. And so don't get confused and think that somehow what you're seeing in the mirror is some other consciousness. A lot of times, this is just feeding back to you.
I just wanted Bing to flirt with you. This is what I felt personally, very deeply.
All right, last question.
I need to know this.
It's very important to me.
What are the four apps in your iPhone dock?
Four apps at the bottom.
I've got Safari.
Key.
I've got my text, you know, the green box.
You're a blue bubble.
Do you give people any crap for being a green bubble?
No, no,
I'm okay.
All right.
I've got my email and I have my music.
That's it.
So it's like the stock set.
Yeah.
You know, if you asked the ones that I probably go to more than I should, I might have to put, like, Words With Friends on there, where I think I waste a lot of time. And maybe my NBA League Pass.
It's pretty good.
It's pretty good.
But, you know, I try not to overdo it on the bus.
League Pass is just one click above the dock. That's what I'm getting out of this.
That's exactly it.
President Obama, thank you so much for being on Decoder.
I really appreciate this conversation.
I really enjoyed it.
And I want to emphasize once again, because you've got an audience that understands this stuff, cares about it, is involved in it and working at it: if you are interested in helping to shape all these amazing questions that are going to be coming up, go to AI.gov and see if there are opportunities for you, fresh out of school, or you might be an experienced tech coder who's done fine, bought the house, got everything set up, and says, you know what, I want to do something for the common good. Sign up. This is part of what we set up during my presidency, the U.S. Digital Service.
It's remarkable how many really high-level folks decided that, for six months, for a year, for two years, devoting themselves to questions that are bigger than just, you know, what the latest app or video game was turned out to be really important to them and meaningful to them. And attracting that kind of talent into this field, with that perspective, I think is going to be vital.
Yeah, sounds like it.
All right.
It's great to talk to you.
Thanks so much.
You bet.
I'd like to thank President Barack Obama for taking the time to join Decoder, and I'd like to thank you for listening.
I hope you enjoyed it.
Here's some news.
Next year, we're planning to bring you more episodes of Decoder every week.
And so I'd love to hear what you want us to do more of.
You can email us at decoder@theverge.com.
I really do read every email.
Or you can hit me up directly on threads.
I'm at Reckless1280.
We also have a TikTok.
You can check it out.
It's at DecoderPod.
It's a lot of fun.
I have been told I need to start a TikTok account so I can start replying to the comments.
I'm going to do it.
If you like Decoder, please share it with your friends and subscribe wherever you get your podcasts.
If you really like the show, hit us with that five-star review.
Decoder is a production of The Verge and part of the Vox Media Podcast Network.
Today's episode was produced by Kate Cox and Nick Statt.
It was edited by Callie Wright.
The Decoder music is by Breakmaster Cylinder and our executive producer is Eleanor Donovan.
We'll see you next time.
This month on Explain It To Me, we're talking about all things wellness.
We spend nearly $2 trillion on things that are supposed to make us well.
Collagen smoothies and cold plunges, Pilates classes and fitness trackers.
But what does it actually mean to be well?
Why do we want that so badly?
And is all this money really making us healthier and happier?
That's this month on Explain It to Me, presented by Pureleaf.
Bundle and save with Expedia. You were made to follow your favorite band from the front row. We were made to quietly save you more. Expedia, made to travel. Savings vary and are subject to availability. Flight-inclusive packages are ATOL protected.