The Computer Scientist

With all the hype and hysteria around AI, it’s important to remember that AI is still just a tool. As powerful as it is, it is not a promise of dystopia or utopia.

Host Garry Kasparov is joined by cognitive scientist Gary Marcus. They agree that on its own, AI is no more good or evil than any other piece of technology and that humans, not machines, hold the monopoly on evil. They discuss what we all need to do to make sure that these powerful new tools don’t further harm our precarious democratic systems.

Get more from your favorite Atlantic voices when you subscribe. You’ll enjoy unlimited access to Pulitzer-winning journalism, from clear-eyed analysis and insight on breaking news to fascinating explorations of our world. Subscribe today at TheAtlantic.com/listener.

Garry chairs the Renew Democracy Initiative, publisher of The Next Move.


Transcript



In 1985, at the tender age of 22, I played against 32 chess computers at the same time in Hamburg, West Germany.

Believe it or not, I beat all 32 of them.

Those were the golden days for me.

Computers were weak and my hair was strong.

But just 12 years later, in 1997, I was in New York City fighting for my chess life against just one machine, a $10 million IBM supercomputer nicknamed Deep Blue.

It was actually a rematch.

I like to remind people that I beat the machine the year before in Philadelphia.

And this battle became the most famous human-machine competition in history.

Newsweek's cover called it "The Brain's Last Stand." No pressure.

It was my own John Henry moment, but I lived to tell the tale.

A flurry of books compared the computer's victory to the Wright brothers' first flight and the moon landing.

Hyperbole, of course, but not out of place at all in the history of our love-hate relationship with so-called intelligent machines.

So, are we repeating that cycle of hype and hysteria?

Of course, today's artificial intelligence is far more advanced than any chess machine.

Large language models like ChatGPT can perform complex tasks in areas as diverse as law and art, and of course helping our kids cheat on their homework.

But are these machines intelligent?

Are they approaching so-called AGI or artificial general intelligence that matches or surpasses humans?

And what will happen when they do if they do?

The most important thing is to remember that AI is still just a tool.

As powerful and fascinating as it is, it is not a promise of dystopia or utopia.

It is no more good or evil than any other technology.

What matters is how we use it, for good or for bad.

From The Atlantic, this is Autocracy in America.

I'm Garry Kasparov.

My guest is Gary Marcus.

He is a cognitive scientist whose work in artificial intelligence goes back many decades.

He is not a cheerleader for AI.

Anything but.

In fact, his most recent book is called Taming Silicon Valley: How We Can Ensure That AI Works for Us.

He and I agree that humans, not machines, hold a monopoly on evil.

And we talk about what humans must do to make sure that the power of artificial intelligence doesn't do harm to our already fragile democratic systems.

Gary Marcus, welcome to our show.

This is the Gary Show.

You are an expert on artificial intelligence, and you have worked on it for many decades, starting at a very young age. So before we talk about AI, I have to ask you: back then, in 1997, who were you rooting for?

Who was I rooting for?

Me or Deep Blue. But be honest, please. No bad blood, you know.

In 1997, I had become disenchanted with AI, and I don't think I really cared that much.

I knew that eventually a chess machine was going to win.

I had actually played Deep Blue's predecessor, Deep Thought, and it had kicked my ass, even I think with its opening book turned off or some humiliating thing like that.

Not that I'm a great chess player, but you know, I saw the writing on the wall, so I wasn't really rooting, I was just watching as a scientist to see, like, okay, when do we sort this out?

And at the same time, I was like, yeah, but that's chess, and you can brute force it, and that's not really what human intelligence is about.

So I honestly didn't care that much.

You said brute force.

With all the progress being made, would you say that machines are still relying almost exclusively on brute force, or do we see some transformation from simple quantity into quality?

I mean I hate to say it's a complicated answer, but it's a complicated answer.

It is a complicated answer.

I wouldn't ask you a simple question.

I figured that.

In some ways, we've made real progress since then, and in some ways we haven't.

The kind of brute force that Deep Blue used is different from the kind of brute force that we're using now.

You know, the brute force that beat you was able to look at an insane number of positions essentially simultaneously and go several moves deep and so forth.

And large language models don't actually look ahead at all.

Large language models can't play chess at all.

They make illegal moves.

They're not very good.

But what they do have is a vast amount of data.

If you have more data, you have a more representative sample.

So, like, if you take a poll of voters, the more voters you have, the more accurate the poll is.

So, they have a very large sample of human writing.

In fact, the entire internet.

And they have a whole bunch of data that they've transcribed from video and so forth.

So, they have more than all of the written text on the internet.

That's an insane amount of data.
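The poll analogy above can be made concrete with a few lines of code. Here is a minimal Python sketch, with entirely made-up numbers, showing that larger samples track the true proportion more tightly:

```python
# Minimal sketch of the polling analogy: simulate polls of increasing
# size against a made-up "true" level of support and watch the error shrink.
import random

random.seed(0)
TRUE_SUPPORT = 0.52  # hypothetical fraction of voters backing a candidate

for n in (100, 1_000, 10_000, 100_000):
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(n))
    estimate = hits / n
    print(f"sample {n:>6}: estimate {estimate:.3f}, "
          f"error {abs(estimate - TRUE_SUPPORT):.3f}")
```

The error shrinks roughly with the square root of the sample size, which is the statistical sense in which more data yields a more representative picture.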

And what they're doing every time they answer a question is they're trying to approximate what was said in this context before.

They don't have a deep understanding of the context.

They just have the words.

They don't really understand what's going on.

But that deep pile of data allows them to present an illusion of intelligence.

I wouldn't actually call it intelligence.

It does depend on what your definition of the term is.

But what I would say is it's still brute force.
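What "approximating what was said in this context before" means can be sketched in toy form. The snippet below is a deliberately tiny bigram model over an invented scrap of text, nothing like a real large language model in scale, but the same in spirit: it predicts each next word purely from counts of what followed that word in its training text, with no model of the world behind the words.

```python
# Toy bigram "language model": predict the next word purely from counts
# of what followed each word in the training text. No understanding,
# just stored patterns.
from collections import Counter, defaultdict

corpus = ("the queen takes the knight and the queen wins "
          "the knight takes the pawn").split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def next_word(word: str) -> str:
    # Pick the most frequent continuation seen in training.
    return follows[word].most_common(1)[0][0]

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = next_word(word)
print()
```

The output reads like plausible chess commentary, but there is no board, no pieces, and no rules behind it, only stored patterns.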

So let me come back to chess for a second.

If you ask a large language model, even a recent one, to play chess, it will often make illegal moves.

That's something that a six-year-old child won't do.

And I don't know when you learned chess, I can't remember, but you were probably quite young.

So I'm guessing you were four or something like that.

Five, five and a half.

Five.

When you were five and a half, you know, pretty much immediately you understood the rules.

So basically, you probably never made illegal moves in your chess career starting when you were a little child.

And o3 was making them this weekend.

I asked a friend to go try it out.

And when you were five and a half, you'd only seen, whatever, one game, two games, maybe ten.

There are millions of games, maybe tens of millions or hundreds of millions that are available in the training data.

And Lord knows they use any training data they can get.

So there's a massive amount of data.

The rules are there.

Wikipedia has a rules-of-chess entry.

That's in there.

All of that stuff's in there.

And yet still it will make illegal moves, like have a queen jump over a knight to take the other queen.

Making mistakes, not mistakes, actually violating the rules.

So again, just tell us...

How come, why?

Because the rules are written, and technically they can extract all the information that is available, and they're still making illegal moves.

Yeah, and in fact, if you ask them verbally, they will report the rules.

They will repeat the rules because, given the way they create text based on other text, the rules are in there.

So, I actually tried this.

I asked it, I said, Can a queen jump over a knight?

And it says, No, in chess, a queen cannot jump over any piece, including a knight.

So, it can verbalize that, but when it actually comes to playing the game, it doesn't have an internal model of what's going on.

So, even though it has enough training data that it can actually repeat what the rules are, it can't use those in the service of the game because it doesn't have the right abstract representation of what happens dynamically over time in the game.
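The gap Marcus describes, between reciting the rules and having a working model of the board, is easy to see in code. Here is a minimal sketch using the python-chess library (a real library; the specific move is just the example from the conversation): an explicit board model rejects the queen-jumps-over-everything move in a few lines, which is exactly the check a pure language model lacks.

```python
# Minimal sketch: an explicit internal model of the board makes the
# "queen jumps over pieces" move trivially detectable as illegal.
# Requires the python-chess library (pip install chess).
import chess

board = chess.Board()  # standard starting position, White to move

# The move described above: the queen on d1 leaping straight to d8.
jump = chess.Move.from_uci("d1d8")

if jump in board.legal_moves:
    board.push(jump)
else:
    print("Illegal: the queen cannot jump over intervening pieces.")
```

One obvious workaround is to wrap a chatbot's proposed moves in a legality check like this, but then it is the wrapper, not the language model, that holds the abstract representation of the game.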

Yes, it's very interesting because it seems to me that, you know, what you are telling us is that, you know, machines know the rules because rules are written.

But, you know, they still don't know what can or cannot be done unless it's explicitly written.

Correct?

Well, I mean, it's worse than that.

I mean, the rules are explicitly written, but there's another sense of knowing the rules, which is that we actually understand what a queen is, what a knight is, what a rook is, what a piece is.

And it never understands anything.

It's one of the most profound illusions of our time that most people witness these things and attribute an understanding to them that they don't really have.

Okay, so now I think our audience understands why you're often called an AI skeptic.

But I believe AI realist is better, because I also share your overall view of the future of AI and human-machine collaboration.

Let me just drop in.

I love that you called me an AI realist rather than a skeptic.

I share that.

And I always say AI is not a magic wand, but it's also not a Terminator.

It's not a harbinger of utopia or dystopia.

It's a technology.

It doesn't buy you a ticket to heaven, but it doesn't open the gates of hell.

So let's be realistic.

Yeah, so let me talk about the realism first and then the gates of hell.

So on the realism side, I think you and I have a lot in common.

We're both realists, both politically and scientifically.

We both just want to understand what the truth is and how that's going to affect society and so forth.

I mean, the fact is, I would like AI to work really well.

I actually love AI.

People call me an AI hater.

I don't hate AI.

But at the same time, to make something good, you have to look at the limitations realistically.

So that's the first part.

Is it going to open the gates to heaven or hell?

That's actually an open question, right?

AI is a dual-use technology, like nuclear weapons, right?

Can be used for good, can be used for evil.

And when you have a dual-use technology on the table, you have to do your best to try to channel it to good.

But look, I also keep repeating that humans still have a monopoly on evil.

I think we can set aside the fact that every technology can be used for good or bad depending on who is going to use it. And I think that the greatest threat coming from the AI world is potentially this technology being controlled and used by those who want to do us harm.

Mostly agree with you there.

First of all, neither of us is that worried about the machines becoming deliberately malicious.

I don't think the chance of that is zero, but I don't think it's very high.

I agree we should be worrying about malicious humans and what they might do with AI, which I think is a huge, huge concern.

We also have to worry, because of the kind of AI that we have now, that it will just do really bad things by accident, because it's so poorly connected to the world.

It doesn't understand what truth is, it can't follow the rules of chess, etc.

It can just accidentally do really bad things.

And so we have to worry about, I think, the accidents and the misuse, maybe less about the malice.

Now, let me ask a very primitive question, a question with no scientific background.

While analyzing our chess decisions, we always say, okay, this part is made through calculation, this one through recognition of patterns.

Now, in your view, what percentage of the decisions or suggestions made by AI is based on calculation, and what percentage is attained through understanding? I don't want to use the word intuition, but recognition of patterns. So, say, strategy versus simple tactical calculation?

So, first thing, I should clarify something, which is that there are different kinds of AI out in the world.

So, for example, a GPS navigation system is all what I would call calculation and no intuition.

It simply has a vast table of locations, the routes you can take between them, the typical times for each segment, and so forth.

All calculation, nothing I would describe as pattern recognition.

I would still call it AI.

It's not a sexy piece of AI and it's not what most people talk about when they talk about AI right now.
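For contrast, the "all calculation" style Marcus attributes to GPS navigation looks like this in miniature. The sketch below runs Dijkstra's shortest-path algorithm over a made-up table of places and travel times; every name and number is invented for illustration.

```python
# Minimal sketch of "all calculation, no intuition": exhaustive
# shortest-route search over a table of places and travel times,
# the classic Dijkstra's algorithm behind GPS-style routing.
import heapq

ROADS = {  # made-up travel times in minutes
    "home":   {"bridge": 10, "tunnel": 15},
    "bridge": {"home": 10, "office": 12},
    "tunnel": {"home": 15, "office": 5},
    "office": {"bridge": 12, "tunnel": 5},
}

def shortest_time(start: str, goal: str) -> float:
    best = {start: 0}
    queue = [(0, start)]  # (cost so far, location)
    while queue:
        cost, here = heapq.heappop(queue)
        if here == goal:
            return cost
        for there, minutes in ROADS[here].items():
            new_cost = cost + minutes
            if new_cost < best.get(there, float("inf")):
                best[there] = new_cost
                heapq.heappush(queue, (new_cost, there))
    return float("inf")

print(shortest_time("home", "office"))  # prints 20: home, tunnel, office
```

Nothing here resembles pattern recognition: the answer falls out of bookkeeping over a lookup table, which is why it counts as AI but not the kind anyone calls sexy.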

Most people are talking about chatbots like ChatGPT.

When Deep Blue beat you, that was all calculation.

Maybe you could argue there was a tiny bit of pattern recognition.

Stockfish is now kind of a merger of the two.

It's kind of a hybrid system, which I think is the right way to go.

The things that are popular mostly aren't hybrids, although they're increasingly kind of sneaking some hybrid stuff in the back door.

I would say they're not doing any calculation at all.

I would say that they're all pattern recognition.

A pure large language model is all pattern recognition with no deep conceptual understanding and no deep representations at all.

There's no deep understanding even of what it means to jump a piece or to make an illegal move.

None of that is really there.

So everything it does is really pattern recognition.

When it does play chess, it's recognizing other games.

There's an asterisk around this, which is they can do a little bit of analogy in certain contexts.

So it's not pure memorization, it's not pure regurgitation, but it comes close to that.

And it's never kind of deep and conceptual.

So before we move into politics, I will just, you know, give you some statements, and you tell me if I'm right or whether they have to be corrected.

So this infrastructure and this whole industry has not solved the alignment problem.

Not even close.

The alignment problem means making machines do what you want them to do, or things that are compatible with human values.

And already we saw a great example, which is chess.

You tell it, I want to play chess, here are the rules of chess, and it can't even stick to that.

Now you get to something harder, like don't cause harm to humans, where it's much more complicated to even define what harm means, and so forth.

They can't do that at all.

There is no real progress, I would say, in the alignment problem.

Adding more data doesn't help that much with the alignment problem.

There's another thing called reinforcement learning.

It helps a little, but we have nothing like a real solution to alignment.

Okay.

So the bottom line is that simply adding information or just, you know, cleaning this human data and just building the skyscrapers of this data doesn't help very much.

So we reached a plateau.

So the idea that we simply keep piling up more and more data and will transform this quantity into a quality to move to the next level, it doesn't work, because there is no evidence this kind of superintelligence is going to happen tomorrow or in the foreseeable future.

It's not going to work.

We will get to superintelligence eventually, but not by just feeding the beast with more data.

You know, I thought what you were going to ask me was, is this field intellectually honest?

And my answer is not anymore.

AI used to be an intellectually honest field, at least for the most part.

And now you just have people hyping stuff, praying.

There's actually a great phrase I heard: prompt and pray.

Like, you write a prompt and pray you get the right answer.

And it's just, it's not reasonable to suppose that these things are actually going to get us to AGI.

But the whole field is built on that these days.

We'll be right back.


Okay, so to support your reputation as an AI realist and not stay just on the negative side, because you already said enough there, and I couldn't agree more with everything you just said, we have to support the reputation of those who believe that AI still brings something good into this world.

So how do we benefit from AI's interference with, or infusion into, virtually every aspect of our life?

So I think it's a multi-part answer because AI affects so many parts of our life.

Right now, the best AI for helping people, in my opinion, is not the chatbot. The best piece of AI right now, I think, is AlphaFold, which is a very specialized system that does one thing and only one thing: it takes the amino acids in a protein and figures out what their 3D structure is likely to be. It may help with drug discovery; lots of people are trying that out. That seems like a genuinely useful piece of AI and should be a model.

But I would say of the big AI companies, DeepMind is the only one seriously pursuing AI for science at scale.

Most people are just like, well, can I throw a chatbot at something?

And mostly that's not going to lead to that much advance, as opposed to creating special-purpose solutions.

I think we have to be intellectually honest about the limitations of this generation of AI and build better versions of AI and introduce new ideas and foster them.

And right now we're in this place where the oxygen is being sucked out of the room, as Emily Bender once said, and nobody can really pursue anything else.

Like, all the venture funding goes to large language models and so forth.

There's the research side of it.

There's the finding-the-right-tools-for-the-job side of it.

There's also a legal side of it, which is if we want AI to be net benefit to society, we have to figure out how to use it safely and how to use it fairly and justly.

If we don't, which is what's happening right now in the United States, where we're doing nothing, then of course there's going to be lots of negative consequences.

Negative consequences.

I think the one place where we all feel these negative consequences is politics or things related to politics like propaganda and simply just sharing information.

That's where AI plays a massive role, because, again, we saw these various forms of AI being used to influence the elections, and it seems unstoppable now.

So just briefly, what do you think? Can anything be done, or have we just entered the era of these information wars that will be run by these chatbots, where the sheer power behind them could at one point decide the results of any election?

This is a place where I may be an AI optimist, although not short term.

So I genuinely believe that in principle, we can build AI that could do fact-checking automatically faster than people.

And I think we need that.

Right now, it's sort of politically hot, so nobody even wants to touch it.

But I think in the long run, that's what we need to do.

Think about like the 1890s with all the yellow journalism of people like Hearst and so forth.

All bullshit.

Some people think it led to war based on false facts, and that led to fact-checking being a thing.

And we may return to that because I think people are going to get disgusted by how much bullshit they are immersed in.

And I think in principle, not current AI, but future AI could actually do that at scale faster than people.

And I think that could be part of the solution eventually.

Part of it is political will.

And right now we lack it.

Like, the far right has so politicized the notion of truth that it is hard to get people to even talk about it.

But I think the pendulum will swing back someday.

Whether that happens in the United States is a very complicated situation right now.

But I think the world at large is not going to be satisfied with the state of affairs where you can't trust anything.

Dictators love it.

It's great for them.

That's why it was the Russian propaganda model.

You know, Putin loves the idea that nobody knows what to believe, and so you just kind of have to go along with what he makes you do.

But it seems to me that the political moment, definitely in this country, also in Europe, is right now not very friendly to this notion of fact-checking.

Very unfriendly.

People believe what they want to believe, and unfortunately, fake news has this element of sensationalism that always attracts attention.

And I think lies become weapons on both sides.

There are some blatant lies, and there are some more, you know, covert lies.

But at the end of the day, I think no meaningful political force in this country now is interested in defending the truth, defending pure, correct, fact-checked data, because it may, and most likely will, interfere with their political agenda.

And also, the facts always lose in the battle for public opinion these days against fake news.

I mean, my one moment of optimism is that we saw this before, in the 1890s, and eventually people got fed up.

It's not going to happen soon, though.

Right now, people are complacent and apathetic, and they have given up on truth.

I could also be wrong in my rare moment of optimism.

I think that things are going to get so bad that people will resist.

But I mean, that's an open question.

At least once in history, people did get fed up with that state of affairs.

It is also true what you're saying.

Lies do tend to travel faster than truth.

And that's part of what happened in the social media era: that whole thing got accelerated, right?

The social media companies don't care about truth, and they realized they would make more money by trafficking in fake narratives.

And that's part of why we are where we are now.

Yeah, you mentioned a couple of times the 1890s and the early 20th century as one of those moments of transition. So what about, let's say, the mid-20th century, with the booming sci-fi book industry? It had many, many stories about the future influence of technology: technology dominating our society, technology interfering with democracy.

The great writers, you know, predicted that at one point we would have to deal with this direct challenge of technology in the hands of the few to influence the opinion of the many.

Are we now at this point?

I keep thinking about a rewrite, like a word-for-word remake of 1984, which I think was written in the late '40s.

You know, we are exactly where Orwell warned us we could end up, but with technology that makes it worse.

Large language models can be, I don't know, call them super-persuaders.

They can persuade people of stuff without people even realizing they're being influenced.

And the more data you collect on someone, the easier that job becomes.

And so we are exactly living in the world that Orwell warned us about.

Okay.

So let's talk about tech bros.

So they believe that all-powerful technology could actually help to improve society, because society has too many problems, and those problems cannot be resolved any other way but to lead the public, to educate the public, to control the public mind to cure them.

Is this threat real and is it doable?

Because some people even say that, oh, it may lead us to something called techno-fascism, where, while preserving all the elements of representative democracy, we will end up in some kind of dystopian society in which the few in charge of massive data will make election results predictable and bend them in their favor.

I mean, that's exactly what's happening in the United States right now: techno-fascism.

You know, the intent appears to be to replace most federal workers with AI, which is going to have all the problems that we talked about.

The intent is to surveil people, to get massive amounts of data, and to put it all together in one place accessible to, you know, a small oligarchy. I mean, that's just what they're doing. This is not science fiction that could happen in 10 years. This is essentially the active thing that is happening right now, that has, you know, been happening for the last few months.

Question: so, is it inevitable? How can society at large resist this pressure from this new techno-oligarchy that has all the money, that has control of the technology? And again, let's be honest, most of the public cares more about convenience than about the security of their devices.

I mean, for instance, it's known that people want these devices, this new technology, to bring some short-term benefits.

iPhones are the opiate of the people.

Exactly.

So because of our reliance on these new devices, we are willing to use the simplest passwords, because setting up a complicated password is too time-consuming. So again, we ignore even threats to our personal data.

So can we rally enough people to meet this threat?

I think the default path is what you described.

I would add privacy to it.

So people have given up on privacy.

They won't do the basic things on security.

And they have given up an enormous amount of power.

And that power hasn't even just gone to the government.

Power has really gone to the tech companies who have enormous influence over the government.

And unless people get out of their apathy, that's certainly where the United States is likely to stay.

It's only if there is mass action, and if people realize what has happened to them, that this will change.

There were huge protests specifically directed towards Elon Musk, and he was kind of, as far as I can tell, pushed aside.

Those protests were somewhat effective in mitigating some of the more egregious things that he tried to do.

So he's at least kind of not at center stage anymore.

But short of that, I think the default is the sort of dark world that we're talking about that reminds me a lot of contemporary Russia, where few people have most of the power, most people have essentially no power.

And to a surprisingly large degree, people just consent to that.

Giving up their freedom, giving up their privacy, maybe giving up their independence of thought as these systems start to shape their thoughts.

To me, that's extremely dark, but not everybody seems to understand what's going on.

And unless more people understand what's going on, this is where we're going to stay.

Yes.

So, to wrap it up, can you give us just some glimpse of hope?

Any idea how we can fight back by using the enormous power that AI and all these devices give us?

Because we are many, we are millions, and they are few, though they're a very powerful few.

So what's the best bet for us to take our future back into our own hands, and also to make sure that the political institutions of the United States, of the Great Republic, will survive past its 250th anniversary, which will be celebrated next year?

I think our powers are the same as they always were, but we're not using them.

So we have powers like striking.

We could have a general strike.

You know, strikes, boycotts.

You know, we could all say, look, we're not going to use generative AI unless you solve some of these problems. Right now, the people making generative AI are sticking the public with all of the costs: the cost to the information ecosphere, all the enormous climate costs of these systems. Like, they're just sticking everything to the public. And we could say, that's not cool. You know, we would love to have AI, but make it better, make it so it's reliable, so it's not, you know, destroying the environment, massacring the environment, and then we'll use your stuff.

Right now, we'll boycott it.

So we could say, hey, we're not going to do this anymore.

You know, we'll come back to your tools later.

They're nice, but I think we could live without them.

You know, they save some time and that's cool.

But are you sure, Gary?

I mean, let's be realistic.

So I hate to pour cold water on your concept of our hot resistance.

But do you seriously think that people today, I mean, starting with students, will stop using ChatGPT?

I think it's very unlikely.

But the reality is that the students are adding to the revenue streams and user numbers massively; students are a huge part of it.

They're adding to the valuations of the companies.

They're giving companies power.

And what the companies are trying to do is to keep those students from ever getting jobs.

And the companies probably are going to succeed in that, right?

The people who are losing their jobs first are students.

The students graduating are entering this world where junior workers aren't getting hired as much, and probably in part because of AI.

In some ways, they're the most screwed by all of this.

And they have given birth to this monster, because they drive the subscriptions up.

So, you know, OpenAI can raise all of this money because a lot of people are using it.

A large fraction, I don't know the exact numbers, are students using them to write their term papers.

If students just stopped doing that, it would actually undermine OpenAI.

It might lead to the whole thing collapsing.

And that would actually change what their employment prospects are like.

Yeah, I'm very skeptical about them.

I'm skeptical about it too.

So is it fair to say that regarding AI, short term you are pessimistic, you have very uneasy feelings, midterm you're optimistic, and long term you're bullish?

No, it's more agnostic.

It's like, I think this could work out, but we have to get off of our asses if we want it to work out.

We may reach some point where people in the U.S. do fight back.

We have more of an expectation historically of having certain kinds of freedoms than I think the Russian people do.

And so it could turn around.

And to the extent that it makes me an optimist to think it could turn around, yeah.

But generally, I like the metaphor that we're kind of on a knife's edge and we have choice.

It's important to realize that we still have choice.

It's not all over yet.

We still have some power to get ourselves on a positive AI track.

But it is not the default.

It is not where we're likely to go unless we really do stand up for our rights.

So it's not the most optimistic forecast, but at least it's a call for action.

But we could, we could take action.

Exactly.

We are America, and we still can, and we should.

Our fate rests 100% on political will.

Gary Marcus, thank you very much for this most enlightening conversation.

Thank you so much for the conversation.

This episode of Autocracy in America was produced by Arlene O'Reillo.

Our editor is Dave Shaw.

Original music and mix by Rob Smierciak.

Fact-checking by Ena Alvarado.

Special thanks to Paulina Kasparov and Mig Greengard.

Claudine Ebeid is the executive producer of Atlantic audio.

Andrea Valdez is our managing editor.

I'm Garry Kasparov.

See you back here next week.
