What You Should Know About Artificial Intelligence (and Why Centaurs Are Our Future)

54m
Next week, the White House is releasing the first executive order on A.I. in U.S. history. So we asked David Epstein, bestselling author of Range and the best sports-science writer in America, to explain the state of our union. And what humans can do that the best computers still cannot. Even though A.I. still might, uh, wipe out 10 percent of the global population. Plus: Why robots need to be more like First Take — and why our future depends not on humans or computers... but a centaur playing chess.

PTFO-approved reading
David Epstein's Range: https://bookshop.org/p/books/range-why-generalists-triumph-in-a-specialized-world-david-epstein/12472879

Watch on YouTube: https://youtu.be/zbgKzE7SztI
Learn more about your ad choices. Visit podcastchoices.com/adchoices


Transcript

Welcome to Pablo Torre Finds Out.

I am Pablo Torre, and today we're going to find out what this sound is.

My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world.

Right after this ad.

You're listening to DraftKings Network.

If you're looking to add something special to your next celebration, try Remy Martin 1738 Accord Royale.

This smooth, flavorful cognac is crafted from the finest grapes and aged to perfection, giving you rich notes of oak and caramel with every sip.

Whether you're celebrating a big win or simply enjoying some cocktails with family and friends, Remy Martin 1738 is the perfect spirit to elevate any occasion.

So go ahead, treat yourself to a little luxury, and try Remy Martin 1738 Accord Royale.

Learn more at remymartin.com.

Remy Martin Cognac, Fine Champagne, 40% alcohol by volume. Imported by Remy Cointreau USA, Inc., New York, New York. 1738 Accord Royale, Centaur Design.

Please drink responsibly.

I'm trying to do the math here of when we met each other, by the way.

I think we might have had the same first day at Sports Illustrated, although I was a temp fact-checker.

That's right.

I was a staff fact-checker.

So you were.

How embarrassing.

I didn't even know that was a title.

I forgot about that.

I mean, it felt like being in steerage class, nonetheless.

We were fact-checkers together at SI.

We shared a wall.

We played mini ping-pong a lot.

Yeah, I mostly won.

I was way better at it than you.

Not true.

I wish we had a good record of this.

And then

you became the best sports science writer in America.

Oh, thanks.

Okay, so two things here.

Number one, David Epstein is way too humble to fully co-sign the title I just gave him.

But you should know that it is 1,000% true.

He wrote The Sports Gene, which was a New York Times best-selling book.

He followed that up with Range, which is the number one New York Times bestseller.

And

after definitely losing to me at Sports Illustrated mini ping-pong, he became an investigative journalist at ProPublica.

And so yeah, number two.

My voice just cracked, which is appropriate, because I'm a little afraid.

The White House on Monday is hosting an unprecedented event on artificial intelligence.

It is a special event because they are expected to announce an executive order on AI for the first time.

This is long awaited.

This is deeply anticipated.

It's about ostensibly safe, secure, and trustworthy AI.

This is what Axios reported.

So, the person I wanted to talk to was Dave.

Because Dave, quite simply, is the smartest, most curious, most discerning consumer and explainer, interpreter of complicated science that I know.

And in his capacity as an expert on what humans do well, David has been summoned across Silicon Valley to talk privately with leaders in tech about AI.

And so I wanted to find out what my friend Dave has found out.

I could talk to you about any number of things, but I wanted to talk to you about a topic that isn't explicitly on that resume, because you are the guy I want to talk to about artificial intelligence.

I feel like I should justify that a little bit because your expertise more than anything, especially in your most recent book, Range, which is on the desk here if you're watching on the DraftKings network or on YouTube, your books are about human performance.

Yeah.

And I want to know how afraid we humans should be about our performance when it comes to AI.

Yeah, that's a complicated question.

The first answer, I think, is I don't think anybody knows for sure, even the people building the technology.

That's not reassuring.

I think as we've talked a little bit outside of this, I've been in conversations, like non-public, just friendly conversations between people working at the same places on AI with extremely different takes on what they think the impact is going to be.

One was saying we're going to have artificial general intelligence, meaning that you'll have, you know, AI that's capable of just carrying out a wide range of tasks better than people, basically.

Like people, but better. Which is like the cinematic sci-fi version of AI, right?

Right. And he was saying three to five years, no question.

And the guy he was talking to, and again, this was a private conversation.

It wasn't on stage.

There was no reason to. He was saying,

I think we had a period of fast growth and now it's not going to be as fast.

And I think it's a glorified toy.

And I use Google a thousand times as much every day.

Right.

And so this to me was emblematic of the fact that even people working on it together

don't really know exactly where we are or where we're going.


I want to get to the terrifying and the exciting, but I also want to just define what we mean for people who are maybe not caught up on artificial intelligence as of, you know, the fall of 2023.

Like, how do you define it for a layperson?

That's a good question.

I mean,

I think that's kind of a problem, right?

I think the current AI that everyone's talking about is generative AI, which means it's basically capable of creativity or coming up with answers that are responsive to things that you ask, but not directly in the way that it's just like a search query.

Like it can learn on its own and generate answers out of material that it has learned on its own.

Yep.

And this is, I've seen this characterized as essentially like an advanced version of autocomplete.

Extremely advanced version.

I guess like Cal Newport, a computer scientist at Georgetown who's written about AI and who I've interviewed a few times,

he likened it in one case to

sort of like Plinko. Was that the old game on The Price Is Right?

Yeah,

You're going to play Plinko.

I am going to give you one free Plinko chip.

The chip is kind of bouncing down the pegs.

And so, so to totally simplify, you train the thing, like you give it a sentence where you know the full sentence, and you take a word out, and then you have it guess what that word is.

And the closer it gets to the right word, the you know, the better score you give it.

And the farther it is, the worse score you give it.

And through a process of doing that a bajillion times in a much more complicated way, it starts to learn how to fill in these sentences.

And he likens that to the little Plinko chip bouncing down.

It's sort of a statistical process of like getting more and more likely to end up sort of in the right hole.

But it's still this just sort of

statistical process of predicting what comes next.
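To make that fill-in-the-blank idea concrete, here is a toy sketch of it in code. Real generative models use neural networks trained on enormous corpora; this just counts word frequencies, and the tiny corpus and the function names are invented for illustration. The point is that the "guess the missing word" game is purely statistical.

```python
# Toy sketch of the "fill in the blank" training idea described above.
# Real models use neural networks, not counts; corpus is invented here.
from collections import Counter, defaultdict

def train(sentences):
    # For each (word before, word after) context, count which middle
    # words were observed -- the "training" pass.
    model = defaultdict(Counter)
    for s in sentences:
        words = s.lower().split()
        for i in range(1, len(words) - 1):
            model[(words[i - 1], words[i + 1])][words[i]] += 1
    return model

def guess(model, before, after):
    # Like the Plinko chip settling into the most likely hole:
    # return the statistically most common filler for this context.
    candidates = model.get((before.lower(), after.lower()))
    return candidates.most_common(1)[0][0] if candidates else None

corpus = [
    "the cat sat on the mat",
    "the cat sat on the rug",
    "a cat slept on the mat",
]
model = train(corpus)
print(guess(model, "cat", "on"))  # most frequent word seen between "cat" and "on"
```

Scoring guesses against the known word and nudging the statistics, a bajillion times over, is the simplified picture; it also hints at why the output can be fluent but wrong, which comes up next.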

Yeah.

And you can realize this sometimes.

Like

I use ChatGPT every day.

In the world of artificial intelligence, there's been one name that's been on everyone's lips lately, ChatGPT.

ChatGPT.

ChatGPT.

I am ChatGPT, a large language model trained by OpenAI.

I am capable of understanding and generating text and can answer a wide range of questions, as well as generate creative writing and text summaries.

Like recently I was looking for a quote from Kazuo Ishiguro, the Nobel laureate novelist, and I knew it had something to do with where he says, like, I write stories to see if other people feel the same or something, but I couldn't remember what it was and where it was from.

And so I typed that in, and it right away is like, this is Ishiguro from his Nobel Prize speech in whatever year.

Here's the full quote.

And it was something like Ishiguro saying like, the real reason we write stories is because it's the only way to say for certain complex emotions, here's how it feels to me.

Does it feel the same way to you?

It got the quote from my vague description, got it to the right plot and said, here's the quote.

And then like the first sentence or two was correct and then the rest was made up, which was like, holy cow.

That's because, you remember, it's a prediction machine.

So it did not have access to the quote.

It was not going and getting the quote.

Apparently, for whatever reason, for the first sentence or two, the predictions of the next word that was coming were right, and then they went off the rails.

So that was a really interesting, just to start to do like the anthropology of how does it go wrong.

Right.

It's quite interesting.

Well, it also feels like a metaphor, right?

Like it's partially correct.

And the rest of it, we are sort of,

we as humans are obliged to fact check.

To give another example, I was reading, like I went through this famous econ study from the 90s that showed like, why do cities make people so much more productive and resilient, all this stuff?

And it has to do with the fact basically that ideas spread easily when people are crammed together and they jump between industries.

So, solutions from one industry start getting used in another industry, and it causes all this innovation, all this stuff.

That's called spillover.

And the paper was evaluating these hypotheses: like, do cities drive innovation so much because of regional specialization?

You get clustering of people in a specific industry, or because of diversity and spillover?

And the answer turned out to be spillover.

And it's quite clear.

The paper comes down strongly, and it made this economist named Ed Glaeser famous and all this stuff.

And so, I

start asking ChatGPT about it just to say,

I don't know, just to play around with it.

And I said, Well, what did this paper find?

Well, on the one hand, it found this, and on the other hand, it found that, and it's nuanced and blah, blah, blah, blah, blah.

And I started cutting and pasting results and being like, it doesn't seem that nuanced.

Like, this paper came down quite strongly on one side.

And it's, it said, like,

you're right.

Your keen reading will help me improve or something.

And I was like, will it really?

And it goes, no, that's just a norm of human conversation.

It won't.

But it's one of the ways I've found it going wrong often is that it's too nuanced on stuff that has a clear conclusion.

Have a take.

Exactly.

Exactly.

ChatGPT.

I never would have predicted that we would be at the point of, like, ChatGPT does not have enough hot takes.

Artificial intelligence needs to be more like First Take.

Yeah.

There you go.

I was like, I can't imagine we would land this early in this conversation.

But I do want to know why you're using it.

What do you use ChatGPT for?

One of my questions, just as I try to sort of grope my way through the semi-darkness of this technology is like, what would a smart but principled person actually use this for now?

I frequently will use it.

Say, so

that paper that I was talking about by Ed Glaeser, I'll make a counter argument and say,

tell me why I'm wrong as if you were Ed Glaeser.

And so I'll have an economist with specific points like arguing back at me.

So I'm kind of using it to steel man arguments when I'm interested in an area of research.

Which is to say, create the strongest possible counter argument.

Right, right.

So I'll...

Opposite of straw man.

Right.

So I'll have some conclusion of some research and I'll say, this is what I think.

What do you think?

And it'll say some things.

And then I'll say, tell me why I'm wrong as if you were.

X person who I think would disagree or maybe have something to add.

So I use it that way a lot.

Or I try to summarize research and say, what do you think about this?

Or maybe I write a timeline of something and say, like, just, is this correct?

And

not that I would like take its word and run with it, but it can pick out things that are wrong in that or lead you in another direction or, you know, tell you some nuances.

So like, you have to check it.

I mean, again, generative AI, like it brings up stuff you might want to look into.

When I've taken my newsletter and run some of the copy through it to be like, can you improve this?

Like copy edit it for me.

I actually find, I thought that was going to be a no-brainer use.

And I've found it to be quite bad and sometimes not even grammatical.

So I was a little surprised at that.

That is legitimately reassuring.

That said, when I asked it to write like a marketing blurb for Range, which is already out, I think it might have done a better job than I could have because I stink at marketing copy.

So there are also some things that you just, you're self-scouting and you're like, I am not

passionate or adept at this specific task.

Let me see what generative AI has got for me.

Yeah.

And this, again, was a thing I didn't need to do.

I mean, the book's been out for several years.

Like, I didn't actually need this done, but I was kind of curious.

And a lot of that kind of copy tends to be quite general anyway.

As if written by a machine.

You know, the one thing I've found that I've found it extremely impressive on is

some medical journals will like tweet out or post on their site

challenges for doctors.

It'll be like, a patient comes in with these symptoms.

What do you think they have?

And I'll take that and just enter it in to see if it gets the answer.

And often there's a picture too.

And I can't put the picture in, at least at ChatGPT.

So I'll ask it just with the text and it does really well, even without the picture.

So this drives me toward where I think a lot of people's anxieties are, which is

the question of like, what's going to get replaced?

Yeah.

I think that really has to do with the

kind of the context.

So

I think it is very good at diagnosis.

And there's been other research showing it's quite good at medical diagnosis, which kind of makes sense, right?

That's sort of a decision tree of statistical likelihood, where you, like, add this symptom in and that symptom and you try to get to a conclusion.

In my mind, I don't think that replaces doctors at all.

I have a positive view on that potentially.

If we go that like we could go a million ways.

A big concern is that humans will be replaced by what some of these economists, particularly these two economists, Daron Acemoglu and Simon Johnson at MIT, call so-so automation, meaning you introduce technological tools that automate someone's job, but not very well.

You take a cheaper route to replace something that already exists and either it's as good or a little bit worse, or you can try to like supplement human performance.

And in my view, like most of the stuff that doctors are doing is not Dr. House anyway, right?

Like most of the stuff that most doctors are doing.

Like mystery solving.

Yeah.

Most of it is like similar stuff that they're seeing over and over and over.

And they're already using algorithmic diagnostics in a lot of cases anyway.

And if you ever talk to doctors, they're like, they can't spend time with their patients the way that they used to, some of the older doctors.

Like this could be great.

Like maybe they don't have to spend time going through all these records necessarily to figure out the diagnosis and they can do the strategic thinking side of the task.

And then maybe

we can have higher quality strategic healthcare for a lot more people.

I think that would be great.

Well, now I want to just dive in on strategic and what that connotes in terms of what humans are good at and what AI is not.

Because you've written about this in Range, but I do want you to just spell out, like, what can we as a species hang our hat on that the machines can't?

I think you're kind of getting at what I wrote about the so-called centaur story

in Range, which had to do with, which kind of starts with...

It's Alex Rodriguez's painting.

It's,

did we ever find out if that was real?

Oh, I have a source that says it's real.

Oh, really?

Yeah, yeah, yeah, yeah.

That is just glorious.

You also used to investigate A-Rod.

And now I officially digress from the other centaur topic back to this centaur topic.

Yeah, yeah.

So the Centaurs I'm talking about

are chess partnerships.

So, 1997: IBM's Deep Blue beats Garry Kasparov, then the best chess player in the world.

And Deep Blue wins because it's so much better at what in chess are called tactics, which are basically small combinations of moves, patterns that you have to study.

So this is why you've kind of got to specialize early in chess to be great.

If you haven't started studying those patterns by age 12, your chance of reaching international master status, which is one down from Grand Master, drops from like one in four to like one in 55, I think.

And

Deep Blue was so much better at recognizing these patterns than Kasparov, who had spent his whole life studying them, that it beat him.

And today, like a free app on your phone would beat him.

But he noticed in the game that it wasn't as good at strategy, which is like how do you arrange the battles to wage the war, so to speak.

And so afterward, he helped promote so-called freestyle chess tournaments where humans could play, computers can play, humans could play in partnership with computers.

And the winners were neither supercomputers nor grandmasters, nor grandmasters with supercomputers. It was two amateur chess players with three normal laptops. They knew something about chess, but they were very much amateurs. They knew something about algorithmic search, and they could kind of coach the computers where to look and handle the streaming information. Like, there was a funny press conference in one of these tournaments where they were asked to analyze their game, because they're playing the highest level of chess ever seen, right? And so these computer-human partnerships were called centaurs. Chess centaurs. Half man, half, in this case, machine.

Horse, yeah.

And, uh, one of the guys was sort of like, I can't analyze it on that deep of a level because I don't know chess that well.

Right.

And so this is mind-blowing, right?

The idea that here you have the most super of supercomputers ever and the greatest grandmasters ever, and it's sort of a mediocre human and

a computer.

Yeah.

And so I think the lesson here is that when you outsourced the tactical stuff, this kind of repetitive pattern recognition, the skills that produced the best performance were totally different.

Now it was shifted to this strategic level, which I think

is a place where we still have a huge amount of value to add, and will for the foreseeable future, in this sort of strategic thinking.

And I think that happens even at things that are sort of like less sexy.

Clearly, chess is the sexiest of all.

That's right.

I mean, endeavors.

But I mean, so, to use something that you don't even think twice about: I was reading news coverage of when ATMs first came online in America, in like 1970 or '71, and some of it's totally apocalyptic.

It's like

there were some 300,000 bank tellers at the time, and it's like, they're going to go out of business overnight.

But over the next 40 or 50 years, instead what happened is there were more ATMs and there were more bank tellers, because ATMs made each branch cheaper to operate: sort of fewer tellers per branch, but more branches overall.

Even more interestingly, it fundamentally changed the job from one of someone who's doing basically repetitive cash transactions or checks to someone who's like a customer service representative and a marketing professional or a financial advisor or this much more strategic level of thinking.

And I think that's emblematic of a lot of technological change in some ways.

I mean, even the original Luddites, you know, Luddite is the term for someone who's anti-technological progress, basically.

They were weavers who went around breaking looms because they were worried they were going to take their jobs.

And they did take many of their jobs.

But in the long run, there still ended up being even more work in that industry than there was before because it made the demand so much greater because you could produce much more at much lower prices and things like that.

And so there's disruption.

But I think when it frees up humans to do more of the strategic level thinking, that can actually be really good and really kind of a...

lead to more shared prosperity and things like that.

But that's not always what happens.

Wait, but just on this idea that humans have a gift for strategy: why can't the machines replicate that in a way that can also render us obsolete on that level?

Yeah, I mean, and who knows what will happen eventually, right?

But our brains are

sort of wired for thinking through analogies and doing what psychologists call transfer, which is taking your skills and knowledge and applying them to problems that you haven't quite seen before.

basically.

And that sort of analogical thinking that allows us to skip between domains and sort of integrate high-level knowledge from different areas is something that humans have actually gotten better at from modern dynamic work.

And I think where we still have like a huge amount of value to add.

Not to mention, at the end of the day, the strategy of even what the technology should be attempting to do is up to us.

Because in many cases in history, technological innovation has led to

like so-so automation, displaced people, not even improving on what they do, and surveillance.

This book that really left an impression on me, which I read recently, called Power and Progress, by these two economists that I mentioned from MIT, Daron Acemoglu and Simon Johnson: basically, they go through a thousand-year history of technological innovation.

And one of the points they're making is whether innovation leads to shared prosperity or increasing misery depends on the institutions that humans create around the technology.

And that that is like an extraordinarily important strategic thing to do that involves tons of stakeholder voices and dispersed competing powers and functioning markets and all these things.

And so I came away from that book thinking even more like we need to be thinking in a whole systems level of strategy for society as we incorporate these technologies.

So I want to get into the systems and the institutions, but before I get there, just the idea, like sort of the brass tacks basic translation of

what humans will be expected to do with AI, it sounds like it would be wise for us to learn how to use it.

Yeah.

As opposed to

worry about being replaced by it.

I mean, I think worrying about being replaced is okay.

Like, and I think worrying on other people's behalf about other people being replaced is okay.

Because frankly, I think in the past, with whether it's been automation or

free trade, both are things that have brought huge benefits to a lot of people.

But also, maybe in retrospect, perhaps we should have put a little more thought into the people that were going to be affected by those things.

So I think the worrying is

good.

Well, when it gets to the institutions though and the question of like regulation, right?

I was watching Sam Altman.

How would you describe Sam Altman for people who are not familiar with him and his empire now?

He's a face of generative AI by virtue of being the leader of OpenAI, which debuted ChatGPT, right?

And I don't think OpenAI is the only place that had or has tools like this, but they went public with something very amazing first.

And so he's the guy.

Yeah.

And some competitors, I think, this is my understanding of what I've read.

Some competitors of his,

whether it's like Google and DeepMind, right?

They didn't go that quickly.

Right.

Deliberately.

And that guy seemed to let more genie loose than they did.

Yeah.

Yeah.

And I don't know what the right answer for that is.

Although I know some of those competitors that I've talked to feel that it was maybe premature.

Which again sends a little bit of a chill down my spine, right?

Right.

Because we don't know the consequences.

Yeah.

And actually,

I saw Barack Obama speak.

a little while ago.

Or what you thought was, or what I thought was, Barack Obama.

Actually, he mentioned in that that, I guess, he was sort of the first social media president, so there were way more photos of him than anyone else, so a lot of stuff got, like, trained on photos of him. So I guess there are more fakes of him than anyone, or something like that, because it went into training stuff early. But he was saying that, you know, he's friends with a lot of tech people, and some of them are expressing their concern and saying we should slow down. These are not people who are prone to underhyping stuff or saying we should slow down, usually.

So he was like, that really catches my attention.

And the people who usually are hyping, you know, a photo sharing app as if it's going to cure cancer are saying like, whoa, whoa, whoa, whoa, whoa.

Like, there's something notable about that.

And

I think that's true.

Well, so when it comes to the regulations, the laws, and Sam Altman went to Washington, D.C.

and basically said some version of, hi, I'm the face of AI, generative AI.

Please regulate me.

My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world.

I think that could happen in a lot of different ways.

It's why we started the company.

It's a big part of why I'm here today and why we've been here in the past and been able to spend some time with you.

I think if this technology goes wrong, it can go quite wrong.

And we want to be vocal about that.

We want to work with the government to prevent that from happening.

But we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.

He was sort of diplomatically offering

some olive branch towards this should be actually governmentally controlled.

What just big picture do you think needs to happen?

Gosh, man, I wish I knew.

I mean, I know what the bad things that can happen are, which are displacing people's jobs without any

thought for that, right?

The worst case scenario is like displacing people's jobs while getting worse services and mainly using technology for surveillance, right?

Surveillance and lack of job creation.

The best scenario is using tech to do things that we don't like that much that creates new tasks, that touches lots of different industries and isn't used strictly for like oppressive surveillance.

Economists are often talking about the rules of the game for society, whether that's norms of people's behavior or whether that's

the degree to which contracts are or not enforced, you know, the rules of the game and property laws and things like that.

And it seems to me that historically, with this kind of disruption, you really need a lot of countervailing forces, right?

Like you need some power in labor and in capital, like the people who own stuff and the people who make stuff.

And I think what I'm afraid of is that

because Silicon Valley has been so legitimately innovative and because

they are so smart, sometimes their opinions may not be counterweighted to the proper extent. Because the point ultimately should be to have shared prosperity, right?

Yes.

That's why we're building stuff.

Right.

Right.

Yeah.

Yeah.

Hopefully the genie helps us.

Yeah.

That's that's the idea in the long run.

And I think for a few generations, that's been so true that maybe we got lulled into complacency.

Like, at least from post-World War II to like 1980, it was just like innovation, innovation, and just like breakneck growth.

And the shared prosperity was spreading, and wages were going up, and, like, you know, gender and race gaps in income and wages were going down.

And some of that has reversed since about the '80s. And I think, arguably, that's because the countervailing forces have diminished. Community organizations have been less impactful in saying what goes on in their community. Local news has disappeared. You know, labor organizing hasn't been effective in the way it was before. And maybe we need new models for all that sort of stuff.

So it's not to say that like

any one side is just evil, but it turns out that that kind of like market of countervailing forces can sort of help channel things in a productive direction.

And I think, again, I'm heavily influenced by Daron Acemoglu and Simon Johnson's take on this, but I think that we don't have sort of the countervailing forces we need.

And part of that is because of this idea of the so-called productivity bandwagon, the idea that innovation will just magically lead to shared prosperity, even if we don't do anything about the context around it, which I think is blatantly false.

And as they point out, always has been, that even in the Industrial Revolution, when productivity skyrocketed, it was 40 years before wages started growing after that.

And some of that wage growth happened because people who are now like packed into these factories started looking left and right and saying, hey, we all have some of the same problems that need to be dealt with.

And starting to argue for like, when we get replaced, we need training to do more jobs and better jobs.

Right.

So I'm returning to this mental image of Plinko again, right?

So Plinko applied to, again, that was your human analogy, or the human analogy that made sense to me about how it is that generative AI will arrive at something.

Yeah.

Which I stole from Cal Newport.

But I also think of it now because it seems like Plinko applies to just our fate as a species.

The idea that I hope our little guardrails, those pegs, can steer us.

Like, again, if that's, if one peg is local news, if one peg is a robust labor union, that we are sort of guiding the bouncing ball of innovation towards like

something that ends up not resulting in us regretting playing the game in the first place.

I hope so.

I hope so.

I mean,

and I will say for all like the obvious like dysfunction in America, I think

it's not so bad that America has been ahead on generative AI, because if we look at

say like, you know, authoritarian regimes,

like they have some pretty clear strategic uses for kind of automated data gathering and surveillance and all that sort of stuff.

Well, let's talk about the apocalypse.

Okay.

So I want to now indulge the fear for a second. Because what is the obvious way that this goes bad? Both on the level of, like, ah, an authoritarian government now has this level of technology, but also, what can just happen that is actually, materially, apocalyptic?

Yeah. I mean, so again, when I was sort of literally sitting around a campfire with some experts, I was just occasionally interjecting a question, but mostly just listening to them talk to one another.

And they started asking one another to put a probability.

It was like, what do you think the probability is? I think it was: that AI will do something really bad in the next 10 years.

And they started throwing out probabilities.

And I was like, my contribution to the discussion was, can we define "really bad"?

That was my level of expertise.

And they decided to define "really bad" as killing 10 percent of the global population, which was way beyond what I had been thinking about.

I feel like "really bad" is a bit of an understatement there, right?

And I think the probabilities ranged from like less than one to like 15 percent or something. Which, a 15 percent chance of killing 10 percent of the global population in the next 10 years, like, right?

That's outrageous.

So then

my other question was,

what is your most like extant proximate concern?

Like, if that were to happen, what is the thing that you're worrying about? Like, is it that it would launch a nuclear weapon or something like that?

And this was the one thing they were in agreement upon, which was that it can already tell people the ingredients for a biological weapon, basically, or a virus, like to engineer a virus.

So they seemed to all have the same proximate concern, which was somebody...

So, again, that would be a centaur, right? Presumably it's not...

So I think that would be a human using AI to create a bioweapon.

Yeah. I mean, yeah.

Yeah, I don't know. And, you know, to be a little more positive, not that this is directly related, but we talked about another Nobel that was just awarded, for work on mRNA, to Katalin Karikó and Drew Weissman, you know, which led to the COVID vaccine and everything.

And the code for the vaccine, they had it in like 36 hours, because we know how this stuff works. Then it was however long to test the vaccine, right?

But some of the technology we have now that can

decipher the genetic code of a virus and

help engineer a vaccine.

It's like,

I mean, basically the basics of that vaccine, they had them in like a day and a half.

And then it was just an issue of sort of testing, which is pretty friggin cool.

So the flip side of the whole "you can basically download the plans for the next truly apocalyptic pandemic" is that we can also stop it, maybe.

Yeah. None of them said... obviously some other people have said this, but none of the people that I've been around were concerned about that whole "AI becoming sentient and setting its own objectives" thing.

So that one, that is the one where I would have sort of raised my hand and said, so what about the whole idea that we're already dead, or rather, we're all in pods?

We don't even know it.

Yeah.

Like the whole thing that we've already been optimized.

We've all been rendered inefficient by the judgment of the machine, and now we're already living in our Matrix pods.

If we're living in the matrix, isn't it supposed to be like making our existence increasingly halcyon so that we don't rebel?

Because that doesn't feel to me like what's going on.

It's a good point.

Although, maybe that's what I would say if it's doing a good job of repressing.

God, yeah.

Is existence so sh that we should be suspicious that it's, you know, actually perfectly calibrated for the human condition?

I don't know.

Is this a sports podcast?

I feel like we mentioned A-Rod.

Yeah.

We mentioned A-Rod before.

Chess is a sport.

But, by the way, I saw, I literally saw, I will now read off this very brief press release, but truly every part of sports, as you know, unsurprisingly, is, like, figuring this out.

Like, on September 8th, I got an email from the NBA.

Their whole tech accelerator program, NBA Launchpad, is their initiative to source, evaluate, and pilot emergent technologies, including artificial intelligence.

They're like, hey, help us figure out how to, quote, advance the NBA's top basketball and business priorities.

Interesting.

And I don't know what that will lead to.

Yeah.

But I know that in the same way that everyone's pitch deck has AI in it, so too is every league figuring out, hey, can we get some nerds in here to figure out how to make us, I guess, the best centaurs we can be.

Yeah, I mean, and I guess it'll be... are you going to go to the MIT Sloan Sports Analytics Conference this year?

Because it'll be interesting to see how much talk there is about

generative AI.

I mean, to that issue of the business objectives, it's a lot easier to see how it can work with their business objectives, where AI

has been for a while, right?

Like Facebook using AI to target ads or whatever.

The thing about that AI is it doesn't really matter if it gets a false positive.

Like if it advertises, you know, vacuum cleaners to you and misses three out of five times, like, who cares?

In other places, sometimes those false positives can be really important.

But I think sort of another bad-case scenario, since you mentioned their business objectives, is if this mainly gets used to create platforms that just rely on advertising. Because I think now we've seen some of the issues with platforms that algorithmically direct people's attention, because capturing those people's attention is everything, and people's attention is wired to react to inflammatory stuff, which, very often now, is fake.

Right, right.

The idea that in fact, if you're not paying for the product, you are the product because they're selling against your attention span.

Yeah.

So, I mean, there are a lot of ways, like these are basic stuff that people have thought about way more than me, but maybe,

you know, there should be a lot more options for how users have a share in their data or whatever it is.

Or maybe there should be an advertising tax.

Like, I don't know.

There's a lot of things you can think of.

And advertising is good.

Like, people need to know about stuff.

But I think if all of this brainpower and technology goes just into like, how can we lock someone's attention to try to get them to like buy more stuff?

Because now we've seen the results when the end game is just how much of someone's attention we can capture.

That goes to a place that we know now, which is like inflammatory bullshit.

Yes.

That is,

that's maybe the most depressing part of this entire conversation.

Yeah.

The idea that, after all we've been doing... what if the whole promise of, like, those Boston Dynamics robots, and the whole promise of these Matrix illusions, actually just redounds to us being more likely to click on an ad for the Cheech and Chong weed gummies that I get served endlessly?

I've been getting that too and I don't, I don't know why.

But let's stare at our navel a little bit, right?

Because we're in media and the idea of

not just misinformation, but disinformation.

The idea that we cannot trust our eyes anymore.

And to us, as former fact-checkers, this feels existentially concerning.

How does AI fit into that problem?

How intractable is that problem?

Yeah, I mean, on one hand, I wish I knew the answer to that.

Because I feel like

even looking, I hadn't been spending much time on social media, but I have been looking through the news a little bit lately.

And a large portion of the first few things I encountered on X, formerly Twitter, were fake.

And in a very little bit of looking, I could see that those were fake.

You know, in some cases I've used AI tools to help me determine that those were fake, but you had to go and do that proactively, whereas your emotional reaction is, you know, to hit share or whatever.

I think this is a huge problem.

And

I wouldn't have realized the degree to which propaganda is effective.

But I think now we realize it's very effective.

Last year, I interviewed a psychologist at Vanderbilt named Lisa Fazio, who studies misinformation.

And

she talked about something called the illusory truth effect. She was doing these studies on people where you send them some nonsensical information, like text them, like, the earth is flat or whatever.

Like, obviously, I know some people, particularly in sports, believe that, but let's say that's not true.

And we turn to Kyrie Irving on the remote line.

But even for people who know that's not true, if you keep bombarding them with it, they will

think it's a little more likely to be true.

So they can say, I'm 99% sure that that's not true.

And then you bombard them with it.

And they're like, I'm 98% sure that's not true.

They're still quite sure it's not true.

But even on ridiculous stuff, familiarity, just exposure

can move the needle a little bit.

Right. And now there's, like, repeated exposure.

And so when I was talking to her about, well, how do we combat this?

She was like, well, you know, the "truth sandwich" kind of idea, where you give someone something true, and then you talk about the false thing, and then you show them the true thing again.

But that's like a way heavier lift than just like churning out a bunch of BS.

Right.

The true stuff was a much heavier lift.

But there was

another idea I heard from a woman named Yasmin Green at

Google.

She works in a part of Google.

I think it's called Jigsaw.

She talked about sort of inoculation: they can realize some populations are going to be subject to a lot of propaganda and start doing some preemptive inoculation. And that actually has some impact, when you show people true knowledge and say, by the way, you're about to get a bunch of propaganda.

Um, I shouldn't talk too much about her work because I really don't know the details of it at all.

But that stuck in my head, that idea of being able to see where propaganda is probably going to be unleashed and do some inoculation.

They're going to love the fact that we're basically just giving them vaccines now.

Right.

Right.

Maybe it should have a different name.

But

those all feel like approaches where you have to be a heck of a lot more thoughtful than someone just making a bunch of fake stuff and spewing it out into the world, which is a concern.

So I hope there are people working on AI tools that can identify fakes, you know, or some kind of verification mechanism, so that for information to be trusted, it could be quickly verified in some way as likely true, right?

Because AI models do that for images.

It's whatever percent likely that this is an image of a fire truck.

Yeah, is this a stoplight?

Yeah.

So could it do that for much more complicated things?

Say this is likely to be true, or likely to be made by a human, or whatever.

Because there was a time, like if you go back and even look at news coverage, there was a time when it was like social media is going to connect people across the world.

It's going to lead to increasing democratization and more voices and all these kinds of things.

You know, and I think some of those takes were

motivated reasoning

and often look pretty naive in retrospect.

Yeah.

So how optimistic are you at the end here?

We've talked about a lot of stuff across the

spectrum of, I guess,

human concern.

And in the end, you, a guy who is meeting these people up close, and reading more than anybody else that I know, and trying to fact-check all of it... you are where?

Yeah, well, and I should say, like, I'm by no means, by any stretch of the imagination, an AI expert, not at all.

But the fact that the AI experts I talk to disagree so vehemently on these things suggests to me that we should actually have more voices in this conversation, and that nobody really knows the answer to certain things right now.

I mean,

I don't know if it's just like my personality to be both like kind of skeptical of everything, but also kind of optimistic about humanity.

Because I think that's in some ways my personality bent.

And when I do read, I've been reading these, like, long economic histories, and the stuff we have done is friggin amazing.

Like again, to think about if you took someone from

even

150 years ago, you know, like some of our, depending on how the ages work out, great-grandparents or great-great-grandparents.

Like refrigeration, electricity, indoor plumbing. They would have rolled up to a house on a horse, gone to the bathroom outside, and then lit some paraffin wax or something inside. That's not that long ago. And the fact that we've been able to organize all this...

So I think, as bad as stuff feels sometimes, and as annoying and acerbic as public discussion can feel, I think it's worth realizing that, while we've gone backwards in this country on lifespan a little bit recently, as have a few other countries, for the most part, most people are living longer than they ever have before.

Many more people have been brought out of poverty.

You know, basically almost everybody was poor for almost all of human history, until like the 19th century.

Lifespan, American lifespan, again, with a recent dip, but prior to that, over the 20th century, increased 29.2 years on average. Like, this is evidence that we can just do unfathomable stuff, and have been doing so at a phenomenal rate in recent history. So that gives me hope. Because, like, if I get a few friends together and try to decide what to order from a menu, it's like, oh, how do you ever get three people to agree on something stupid? And yet these self-organizing systems of society have produced some pretty amazing things.

That gives me hope, but I think we need countervailing powers.

Yeah.

Yeah.

I would use an AI bot, incidentally, to help me figure out what to order for dinner if I'm sitting in a room with like two to three other people.

There you go.

I do bet on us in the long run.

I don't think that means there's not going to be some pain in the short run.

But, yeah, I bet... I sort of have to.

I don't know.

Do you bet on us?

I guess

what's the alternative?

It's really hard to tell how much of that is like motivated reasoning and my own interest in innovation and

desire for things to work out in the end.

Yeah,

I feel like everything I consume as

a person who reads and tries to listen to world news and information would lead me to be deeply cynical about everything, including, and most pressingly,

the reliability of the human conscience.

But

I feel like

there's a sunk cost issue here, Dave.

Yeah, with humanity.

I mean, we put a lot into it.

Might as well keep going, stay around.

Yeah, I feel like I've kind of already put my bet on the table.

I'm all in on whatever the f this life is.

So I might as well,

yeah.

root for it like a dysfunctional sports team.

Yeah.

That's kind of a good analogy.

But I do think, I do think a lot of the question is in the details of how much pain will we have to go through before we find some solutions.

Because usually stuff gets to a breaking point, right?

Like, even with... so, before I got into sports writing, I was training to be an environmental scientist.

I was like studying the carbon cycle in the lower Arctic tundra.

And

things are not great with the environment, you know, with climate and stuff like that.

But even so, I think there are some good changes, and innovation in renewable energy has moved faster than I ever would have thought from that time.

Given the entrenched interests, you know, that you could see as against it, it moved faster than I would have ever expected.

So I'm more optimistic about that.

And I would say, like, there are horrible natural disasters all over the world, but for the most part, I would bet that you are less likely to die in a natural disaster in the world right now than at any other time in human history, because of other things.

Like we're better at building and all sorts of things like that.

And so I think we should recognize the optimistic side also.

The question is, what's the breaking point, right?

Like if AI,

short of becoming sentient and going Matrix, which I'm not,

again,

the people who are involved in this who I've talked to don't seem to see that as a concern.

They're much more concerned about how humans, what humans will do with it.

I think the question is, what breaking point do we have to get to?

I'd like to be working on these problems before we have to get to a breaking point where we say, oh, it's so bad that we need a revolution, right?

Right.

Um, so I think we'll solve some of these problems one way or another, but I think it would be good to do it before things become so awful, you know, that like the guillotine comes out, you know.

Yeah, yeah, you mean a literal guillotine or like a futuristic guillotine?

A futuristic, yeah, yeah, before the automated guillotine robots come out.

So last thing here is, so Andre, your son, is how old now?

Four and a half.

Four and a half.

So Violet, my daughter, is three and a half.

Is this going to be the story of their life?

AI?

Because

I've gone through the whole like hype around crypto, obviously, and that seems to be just an ant compared to what this promises to be.

Yeah, yeah.

So the question is, like, is this, is this the new industrial revolution kind of thing?

Like touching everything, changing everything, changing every industry.

And keep in mind that at some point, Andre and Violet may call up this video and fact check us to see how wrong we are at the very end of this episode.

Yeah.

So I'll say that I don't think it is the Industrial Revolution. But that's because I'm not even sure how anything could be that again, right?

Like, I don't see us increasing average lifespan by 30 years over a century again.

Like maybe, maybe I'm wrong about that, but

I do think it's going to touch every industry.

And, you know, they're going to grow up with it as just a natural thing.

Like, they're going to grow up talking to a lot of their machines and interacting with them in that way.

And you and I will be concerned about the things that those machines are learning about them and gathering about them.

And hopefully enough people like you and I will be concerned enough about that that we won't be free-riding on the system anymore, and we'll start writing and talking about how this has to change, and maybe taking action, and things like that.

So I do think this is going to touch basically every industry at some point or another.

Dave, thank you for,

I would say,

the most inspiring and terrifying conversation I have had this year.

That's a great way to put it, because I think we should be exactly inspired and terrified right now.

I think that's a good place to be inspired and terrified.

Half inspired, half terrified, half man, half horse.

There you go.

Yeah.

That came full circle.

Well done.


I can see why you have a show to host.

Yeah, yeah.

Until they come for me.

Right.

Until the machines come for podcasts.

So today what I found out is pretty simple.

I think

because it seems clear that an outright refusal to use artificial intelligence is just a losing strategy for this team of imperfect meat sacks that I like to call humanity.

Because, yeah, sure, at least some AI experts believe that there is a 15 percent chance of AI murdering one in 10 people on Earth, apparently.

But there's also no going back at this point.

Which is why we need to not only work with AI,

but master it.

Like an instrument.

So that the good centaurs can be ready when the bad centaurs eventually generate a massively catastrophic bioterrorism weapon of some sort.

Or

increasingly ambitious fake pictures of me.

Which I can only presume have been visible on the DraftKings Network and our YouTube channel for the last minute or so that I've been talking.

You

traitorous fing centaurs.

This has been Pablo Torre Finds Out, a Meadowlark Media production.

And I'll talk to you next time.