Why Artificial Intelligence Is More “Big Bang Theory” Than Big Bang
Further reading:
How A.I. Could Transform Baseball Forever
Transcript
Welcome to Pablo Torre Finds Out.
I am Pablo Torre, and today we're going to find out what this sound is.
There's no kill switch on anything.
I mean, I don't want to scare you, but like,
there's no single switch to shut off any of these things.
Right after this ad.
You're listening to DraftKings Network.
If you're looking to add something special to your next celebration, try Rémy Martin 1738 Accord Royal.
This smooth, flavorful cognac is crafted from the finest grapes and aged to perfection, giving you rich notes of oak and caramel with every sip.
Whether you're celebrating a big win or simply enjoying some cocktails with family and friends, Rémy Martin 1738 is the perfect spirit to elevate any occasion.
So go ahead, treat yourself to a little luxury, and try Rémy Martin 1738 Accord Royal.
Learn more at remymartin.com.
Rémy Martin Cognac Fine Champagne, 40% alcohol by volume, imported by Rémy Cointreau USA, Inc., New York, New York. 1738 Accord Royal, Centaure design.
Please drink responsibly.
So the reason today's episode is about artificial intelligence is because pretty much everything seems like it is on the table right now.
Up to and including the possibility that we're already floating around in, like, Matrix-style amniotic fluid and living in a simulation.
But even the more grounded takes, like the one that Microsoft's head of AI gave as a TED talk last week, sound like this.
It is clear that we are at an inflection point in the history of humanity.
On our current trajectory, we're headed towards the emergence of something that we are all struggling to describe.
And yet, we cannot control what we don't understand.
And so I really wanted to understand where the f we are with artificial intelligence right now.
Not by talking to a tech CEO or one of these doomer prophets, but to my friend Josh Tyrangiel, a journalist now at the Washington Post who has an especially sharp understanding of power and of capitalism's winners and losers.
A sense that he'd honed as editor of Bloomberg Businessweek, where he had interviewed Tim Cook and Barack Obama.
And then also as the guy who launched the TV show Vice News Tonight for HBO, which earned, by the way, an incomprehensible 41 Emmy nominations.
And last year, Josh was sitting around wondering what he should do next.
And so the Post called me and basically was like, hey, would you be interested in being a columnist on AI?
And the main attraction aside from like being interested in the subject is like, I'm a tourist.
Like I'm a licensed tourist.
I'm allowed to go with this credential, the Washington Post, which still has some meaning, into all these places and be like, hey,
show me your cathedrals.
Feed me your local food.
Tell me what's going on here.
And more often than not, they'll do it.
Yeah.
Tell me where I might get mugged also.
But it sometimes helps to tell people you're a tourist, right?
Because if you show up with like a crappy wheelie bag and a big smile, they're like, uh, sure, we'll talk.
What harm could you possibly do?
Let's talk.
Let's show him the new model.
Let's, so it's somewhat strategic, but it's also somewhat true.
It's like, I don't claim to know everything about AI.
I'm genuinely interested in how people present it.
And what I tend to keep finding, and this is true for AI
and for banking and for media, is that you know every trade is a conspiracy against the layperson in some way.
And if you can find the conspiracy, you can also tend to find what's real.
I want to be a proxy here for truly
not just tourists, but morons.
Like, help me understand this thing that I've committed to believing is the most important technology since what?
Since the smartphone, since high-speed internet, since how are you characterizing this in your head?
Okay, so let me just start with one very important thing.
Artificial intelligence has no definition, like none, right?
And I have heard from people who've said, oh,
well, of course it does.
It's like, really?
Like, point it to me.
And so because it has no definition, it's the perfect thing for people who want to sell, right?
Because nobody knows what it is.
Everybody thinks it has this incredible power.
And so you could package anything as AI, right?
There's AI inside and people are like, oh, oh, oh.
So many pitch decks, I imagine, got edited in the last year and a half.
Yeah.
And truly just with like, you know, find and replace, right?
Yes.
And so there is no definition.
It is very powerful when used properly, but the most important thing to know is AI is software.
There's a great quote from the magician Penn Jillette, right?
He talks about how magic is sometimes just having worked an unfathomable number of hours on something people just wouldn't expect.
Right.
And that's basically AI.
AI is an unfathomable amount of computing devoted to a task you simply wouldn't expect.
So when you type something into ChatGPT and you're like, now do it like Snoop Dogg, all it's doing is devoting this insane amount of computing power to probabilistically giving you a result.
To add a couple of izzles.
To add some izzles.
And you're like, oh, oh my God, Snoop Dogg is in there.
And it's just endless amounts of computing power.
Right.
So like bring it down to its basics.
That's what it is.
But that computing power is the real star.
And that's why like companies like NVIDIA, which make these incredible chips, are the ones that are now trillion-dollar companies.
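That "probabilistically giving you a result" can be made concrete with a toy sketch. This is not how any real model is implemented, just the core idea: score every candidate next token, then pick one at random in proportion to its score. The vocabulary and probabilities below are invented for illustration:

```python
import random

def sample_next_token(probabilities):
    """Pick the next token in proportion to the model's probability estimates."""
    tokens = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical probabilities a Snoop-flavored model might assign to the
# next token after the prompt "fo shizzle my" (made up for illustration).
next_token_probs = {"nizzle": 0.90, "homie": 0.07, "spreadsheet": 0.03}

token = sample_next_token(next_token_probs)
```

Run it a few times and the output varies, which is exactly the point: the "Snoop Dogg is in there" effect is just weighted dice, rolled once per token, at enormous scale.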
Right.
So the winners here, let's just frame it from, I want to also sportsify this whenever possible, but the people who are winning right now, the chip makers.
Yeah.
The people who actually make the hardware that allows the computers to run all of these probabilistic decisions are
killing it.
And the other people who are killing it are the ones who are making the data centers and the
power centers to power the data centers.
So like if you make energy, you're doing great.
If you make chips, you're doing great.
If you make like kind of okay AI software that's like a half step better than your competitor, you're probably not going to make it.
Like it's just too competitive out there.
Um, the other people who are winning are, you know, I hate to say it, but, like, people who figured out that AI is pretty useful.
Like, it's, I use it in my own life in ways that I feel very reluctant to boast about, but like, I don't.
And so, tell me what you're doing.
So, my editor is like, well, do you use AI when you write?
I'm like,
yeah.
He's like, what?
Like, not to write it.
Like, I don't, because it's you traitor.
Right.
I use it because it can unf*** things, right? Like, it's really good at logical sequencing. It's not great at style.
If you ask an AI to write something for you, you will get the best tenth-grade version of a bad Snoop Dogg impression. Or, like, it'll kill it on the ninth-grade Great Gatsby essay, but that's not all that useful to me or to you.
But what it's really good at is structuring data, right? That's what it does. So if, like, I'm struggling on a paragraph in the middle of a column and I'm like, what?
I've been working on this for a half an hour and it's just getting more and more tied in knots.
I'll cut and paste it and say, put this in logical order and bullet points.
And like three out of four times, I'm like, oh, oh yeah, great.
Thank you.
And then I just move on and start to write and I'm back on track.
And so.
For people who figured out little hacks about the way AI can supplement their lives, like you're winning because it's not very expensive.
Everybody's trying to get you to use it.
And so much like the early internet, there were a bunch of people who like seem to suddenly have moved into the left lane on life's highway.
And you're like, yeah, I don't know if I'm down with that.
But then they get really far ahead.
Like that is happening.
Like I'm seeing individuals do that.
We've hit the over on when the apocalypse is going to be mentioned in a conversation about AI.
So where are you, apocalyptically speaking?
So I've yet to see where the apocalypse comes in.
Like, so when I entered the chat, right, when I started writing about this, when I started reporting on it about a year ago, a bunch of the AI makers had signed this open letter, basically saying, we would like governments to intercede with our work to make sure that there's no extinction risk.
Tonight, a stark warning that artificial intelligence could lead to the extinction of humanity.
It comes from dozens of industry leaders, including the CEO of ChatGPT creator OpenAI.
The experts signed the statement, which says mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war.
I'm sorry, what?
Wait, you're making it.
You're going to keep making it, and you want to compete with each other, but you want to avoid extinction?
And so I was like, okay, well, either this is
fantastic marketing.
Because like our product is so powerful, it could cause human extinction.
Or like there's something real.
I've yet to
be persuaded about the extinction level risk.
I think there's a lot of other risks, but like it's a tool.
It's a tool like a lot of other tools human beings have created and have used to destroy lots of human values throughout civilization.
I don't really see that as a primary risk right now.
Really?
Okay.
So I enter this through the lens of the two characters that I think maybe people are most familiar with when it comes to AI, which is Sam Altman, who's the head of OpenAI, who I want you to help explain and describe here.
And also Elon Musk, who, and this is sort of the story of their relationship, has been a doomsayer, weirdly and conspicuously, and maybe understandably.
AI is
perhaps
more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it has the potential, however small one may regard that probability, but it is non-trivial, it has the potential of civilizational destruction.
And so just to rewind, OpenAI, which is the maker of ChatGPT, was started as a nonprofit with a tremendous amount of investment from Elon Musk.
So, you know, this is like classic Marvel stuff, right?
Like
Elon and Sam start out together.
We're going to make AI for the good of humanity.
It's all going to be nonprofit.
And somewhere along the way, motives change and you know, there has been a falling out that I think has been well chronicled, right?
And Musk became discontent.
He started criticizing the organization, saying that it fell behind and is no longer competitive.
He then offered to become the president of OpenAI to save it.
When Altman and others refused, Musk left, withdrawing.
Sam, who I've spoken with and is a very nice guy,
he doesn't believe that AI is harmless, but he also thinks that the way to make sure that it is maximized is, surprisingly, through OpenAI and its responsible development of artificial intelligence.
I definitely grew up with Elon as a hero of mine.
You know, despite him being a jerk on Twitter or whatever, I'm happy he exists in the world, but
I wish he would
do more to look at the hard work we're doing to get this stuff right.
Responsible development in their world means we're going to be the stewards of the model, meaning we're not going to open source this so everyone can work on it.
And we're going to exercise what are called temperature controls so that it's not racist and it's not sexist.
You can actually just dial down the temperature parameter so that it's, you know, less creative, less wild, more normy.
Right.
The racism terror alert scale.
Right.
They can toggle between the colors.
Let's just narrow that down.
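Those temperature controls have a precise meaning in sampling. The sketch below is a generic illustration, not any company's actual code: temperature rescales the model's raw scores before they become probabilities. Low temperature concentrates probability on the likeliest token (more "normy"); high temperature flattens the distribution (more "wild"). The logits here are made up:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores into probabilities, rescaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                        # raw scores for three tokens
cold = softmax_with_temperature(logits, 0.2)    # near-deterministic
hot = softmax_with_temperature(logits, 2.0)     # flatter, more random
```

With these numbers, `cold` puts almost all of the probability on the first token, while `hot` spreads it out, which is the whole "less creative" dial in one line of arithmetic.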
And then on the other side, you have Elon, who I think genuinely believes that there is some risk here.
But also, Elon had a falling out with the company that is furthest ahead and decided to start his own AI company called xAI.
The company's goal is to focus on truth-seeking and to understand the true nature of AI.
Musk has said on several occasions that AI should be paused and that the sector needs regulation.
Elon benefits tremendously if by raising the alarm, everyone else's progress slows down.
But it is impossible to disentangle that sort of like pure capitalism that is coursing through everyone's blood from the also a genuine enthusiasm for technology.
But when I say pure capitalism, I want to put a number on it that is a real number from multiple firms.
We're talking about upwards of 30 trillion dollars of added economic value over the next 10 years, potentially.
But there are all these conflicting incentives.
And that's actually what's the most fun about observing it is like, it is the human carnival.
Like AI has brought out all of this like wicked behavior, but also incredible ingenuity in people.
But so the question of how does one
invite responsibility?
Where's the government in this?
Slow?
I mean, a little slow off the mark, to be honest.
Like what's interesting is the EU,
European Union,
for years has been famous for like being very bad at regulating technology.
And so what would end up happening is, like, all the best internet companies are in America or in China.
The EU has almost none, largely because they would just penalize them.
So the EU five years ago really got started on AI.
And through this one guy, who is a Romanian member of the European Parliament, who's a lawyer,
really smart guy, he like found a way to thread the needle.
And what they decided to do is don't regulate the technology, regulate the output of the technology.
Imagine living in an America where Meta, Google, like none of the biggest stock market movers existed in the United States and you got no value out of it.
That's basically all of Europe, right?
Like
the top five stocks that have moved the American stock market over the last 15 years are all internet tech stocks.
Europe has none.
And so the FOMO of AI has been enormous for them.
And I think this guy, Dragoș Tudorache, who's the Romanian guy, basically was like, okay, we can't do this again.
It's true that there are many who compare the arrival of artificial intelligence with the first industrial revolution, with the arrival of the engine.
And it's true that if we look at the impact that artificial intelligence has and has the potential to have, it is such a profound impact that it's going to have.
It's going to change the value chains in our economies.
It's going to change the way we interact in society, with the way we interact among each other as individuals, and it's also going to change the way we do politics, the way our democracies are structured and function.
So it is true, it is an enormous potential.
We have to figure out a way to do smart regulation that actually encourages some development.
And at the very last minute, there's one, well, there's really two European AI companies, but there's one AI company called Mistral, which is French.
And they too were like, yo, don't do this again.
Like, we need to be a player.
And so they didn't want to miss out on another 20 years of like a boom economy.
And they moved pretty quickly, but they're also methodical.
Like, they took their time.
They just, they started early.
And so they passed a regulatory regime a couple months ago.
It's really smart.
And they did it with not a ton of fanfare.
And meanwhile, in the United States.
I didn't know about that.
Yeah.
And about a year ago in the United States, Chuck Schumer started making a bunch of noise about like, well, this is a big deal.
We better get everybody in here.
The potential societal benefits from AI are astounding.
From medical advances in innovative materials to fusion energy and so much more.
But we also must recognize that AI poses monstrously complex challenges.
We're going to have, like, the Last Supper of last suppers with everybody.
And he did it.
He got everybody to come to a table in Washington and there's a great picture.
But they're, like, four and a half years behind at that point, and there was a lot of noise about, oh, we're going to figure this out. And it's been really quiet since. The EU kind of, like, slipped in there. It establishes differences. Like, it says, hey, if you're using AI
to sort your wardrobe, that's probably a low-level risk. Like, extinction isn't happening because you're like, does this Brandy top work with these jeans?
Like, you're probably good.
Speak for yourself.
But at the top level, yeah, if there's risk, you're going to have to comply with a regulatory regime and it looks like this.
And here's how we work it.
So they've done it.
And now there's a lot of talk in the U.S.
about, like, well, is there a way we just sort of quietly adopt this humbly and just get on it?
The smart thing too is like for Europe, which is famously sort of anti-tech.
Yeah.
They're really saying like, come do business over here.
And we're good.
We, we, we got you.
We're not going to penalize you too much.
And companies see it and they're like, oh, this makes sense.
So, yeah, they got out in front of it. But I guess the doomerism in me is now noticing: wait a minute, okay, but there's the profit motive. There is the economic incentive to be maybe more risk-seeking than a neutrally incentivized government might be.
Oh, listen.
All of your fears about AI are largely going to be fears about capitalism. Like, make no mistake. This is, you know, a lot of people think, oh, AI, it's a big technological or scientific advance.
And in the United States, we just all default to like, oh, it's a moonshot.
And we think back to NASA and in the 1960s, at one point, the United States government, the federal budget, 4.5% of the budget was going to NASA.
Just insane.
This is not a federal project.
AI is not a federal project.
It's all private.
It's all massive technology companies, most of whom are pretty entrenched because it takes a ton of money to, as we talked about, like to do that much computing, you already have to have a lot of money.
The infrastructure is so massive.
So it is, you cannot disentangle AI from capitalism.
You just can't.
Okay, I'm going to give you one more twist here that's really delightful, right?
Because you're right, that the person running the company suddenly becomes an enormous proxy for whether you trust the company.
Right.
So Sam Altman argues, we need these models to be closed.
You should not be able to look inside our model because if you can, and you are an amateur bomb maker, why what would you do?
Right.
Right.
I googled OpenAI, and now you're telling me that, oh, wait, this is closed.
Enter Mark Zuckerberg.
Mark Zuckerberg's also got a product called Llama, and Llama is behind OpenAI, right?
And Mark Zuckerberg isn't like the most famously ethical tech CEO in our history.
He is the person who in 2016 said it was...
insane to argue that Facebook had any
influence on a famous election.
During a conference Thursday, he said the idea that fake news on Facebook influenced the election in any way is a pretty crazy idea.
Mark Zuckerberg, who is behind on Llama and who has this sort of checkered ethical history, maybe not the most forthcoming to members of Congress in the United States, enters the AI chat and says, oh no,
closing your model, why that is terrible.
We're going to open our model.
And so they open up Llama so anybody can see inside it, which, on the one hand, you're like, well, okay, open-sourcing it.
That seems very transparent.
And on the other, again, capitalism: it's the fastest way to catch up to OpenAI, because all of a sudden people are pouring in and testing and improving your model.
And so the most important takeaway is that, yes, you really need to evaluate the person running your AI company.
But the second most important takeaway is none of them are pure.
So how does Sam Altman feel to you as a trustworthy steward of all of this shit?
So I sat with Sam Altman, talked to him a couple times, and I like him.
He's smart.
He's kind of funny.
He'd fit right in this room.
You know, he talks about the fact that like, man, he'd love to spend his time thinking about the consequences of AI, but right now he's thinking about why this engineer won't talk to this engineer, right?
He's very relatable.
And to his credit,
he knows that one person shouldn't be responsible for AI.
And at every opportunity, he invites scrutiny of his decision making publicly and his morality, which is exactly what you would do if you wanted to run the largest AI company in the world, right?
Exactly.
Our own destruction.
And so that's why this is so hard to read and also kind of hilarious is like
each move has a counter move.
All of those moves together continue to advance these companies forward.
And it's this, it's a very rare moment when you get this in an industry, right?
Like it's a new industry funded with huge amounts of money, concentrated in the hands of truly six or eight people.
And so getting to know them individually, if you're wanting to know what's happening, it's a really handy way to understand what's going on.
Right.
And meanwhile, you have, like, I presume, I mean, the equivalent of, or literally, Chuck Grassley being like, how can I see my kids' photos? In, like, a congressional oversight committee meeting?
I will say, everyone I've spoken to in the Valley says openly, there's like 12 to 15 people in Congress who they decide
get it, right?
Which is not terrible.
Although when you add up the numbers of people in Congress, it's kind of befell.
I would have taken the under.
Yeah.
They say that there's 12 to 15 people who actually engage with them and get it.
Everybody out there loves Gina Raimondo, who's the Secretary of Commerce and
who's very smart and very business friendly.
She's like a Rhode Island Democrat, but is also Mitt Romney at the same time, which is a very strange combination.
My daughter does dressage.
She does Olympic horse dancing.
But very smart and also very practical, right?
And so they feel like they can talk to her and she represents the interests well and she's very clear on where the no lines are.
Some of these CEOs will also tell me that when they do have one-on-one meetings with some of the senators who may be less in touch with technology and less knowledgeable about their phones, that they are just nodding and hoping that the meeting ends as soon as possible, because there is no chance they can say something the other person is going to understand.
What, in your perspective, would be the thing that you wish the government was doing differently?
Yeah, I guess there's two things.
On the one hand, there are practical uses of artificial intelligence right now that would make our lives better.
And so I'm a,
Pablo, I'm a patriot.
I believe in America.
That's right.
And so I did a piece.
I was looking at Operation Warp Speed, which is one of those things that may sound very familiar.
It's how we got COVID vaccines in six months from zero, right?
And so I was just reading up on it.
And in May, we had nothing.
And in the fall, we had vaccines.
And I'm not going to say that the reason was AI.
There was a lot of stuff that went into it.
But basically, Gustave Perna, who was the general put in charge of it, he shows up in DC and he's like, I have three colonels, no budget, no plan.
He sits in front of a bunch of consultants, all of whom are like, oh, well, we can do this.
And nobody even understands the problem.
And basically he sits down with Palantir, which is a company co-founded by Peter Thiel and a guy named Alex Karp.
Yep.
They come in and they say, look,
we can show you, we can make a digital twin of the United States for you.
At this point, I am watching this movie and I'm like, those are the super villains, by the way.
Everything I know otherwise is these are military industrial complex guys.
I don't trust them.
They are military industrial complex guys.
You shouldn't trust them.
You shouldn't trust anybody.
There's no one I just, there's no one in this entire podcast you should trust.
Okay.
There's no one.
Not even Snoop Dogg.
Not even Snoop Dogg.
Because look,
Snoop Dogg has a history.
We know Snoop Dogg has a story.
Snoop told me he was quitting weed and it turned out to be just an advertisement for a new weed product.
I'm still not over that.
I wonder if Sam is in on that with him.
It all, it all goes to the bottom.
It's all coherent.
So these guys sit down and they're like, look,
in a military parlance, you need to see yourself, which means you need to know, where are my troops?
Where is the enemy?
How am I positioning myself?
And to give you an example, in order to produce and get vaccines in an arm,
you need to know the state of plastic production in the United States at all times, because you could make a bunch of vaccine.
Where are you storing it?
How are you getting it into vials?
So they were able, in a matter of weeks, to get all of the data that he needed onto a dashboard so he could run the operation to get vaccines in arms.
And the only reason we can do that is because it was an emergency.
So all the things in government that would slow such a project down ordinarily magically disappear.
And in war, that happens all the time.
And so the Department of Defense is actually, even though they're incredibly slow, they're pretty fast at getting AI into the battlefield in all sorts of ways that may be ethically challenging, but in general are like pretty effective.
So I would love it if the United States could begin to tell the story of how AI can improve people's lives, because it would make it a little bit less scary.
The other thing it would do is create a market for positive uses of AI.
And the more we can say, oh yeah, I really wanted to do X, Y, and Z, the better off we're gonna be.
I mean, simple example.
We all drive around with smartphones to navigate dumb traffic lights.
Maybe we should have smart traffic lights.
That wouldn't be that hard to make a grid that actually understands.
Simple example.
There's an old person crossing the street.
We can see it.
You know, visual recognition can see it.
Hey, don't change the light.
Right.
Wait till they get to the other side.
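That pedestrian example amounts to a very small piece of control logic. A hypothetical sketch (the function, states, and inputs are invented; a real system would sit behind an actual computer-vision detector):

```python
# Hypothetical smart-traffic-light rule: never end the walk phase while a
# pedestrian is still detected in the crosswalk, even if the timer expired.
# `pedestrian_in_crosswalk` stands in for the output of a vision model.
def next_light_state(current_state, pedestrian_in_crosswalk, timer_expired):
    if current_state == "walk":
        if pedestrian_in_crosswalk:
            return "walk"  # hold the light until the crosswalk is clear
        if timer_expired:
            return "dont_walk"
    elif current_state == "dont_walk" and timer_expired:
        return "walk"
    return current_state
```

The AI part is only the detector; the "smart" behavior is a few lines of ordinary software wired to it.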
So there's lots of ways that the tech, the tech, which exists right now, could be integrated into our lives to improve it.
So that would be one thing.
The other thing is to get a regulatory regime in place as quickly as possible, because the more it's stabilized,
the easier it's going to be to enforce.
And right now, enforcement, we haven't even really talked about that.
Like,
what do we do?
Because we don't even have a digital privacy law, Pablo.
I assume we're going to put handcuffs on robots.
I mean, maybe.
But we literally, we haven't even solved the first problem of the internet, right?
In 1996, we passed a law, the Telecommunications Act, which basically said, like, well,
you can publish stuff on the internet, but you're probably not a publisher.
You're a platform.
And so this created the era that we all now joyfully live in, in which basically anybody could say anything defamatory or otherwise, but definitely the person who published it is not responsible.
So we don't, we haven't solved that first problem, and there's no hope yet that we will.
And so like, I would like to see a simple digital privacy act so that, you know, Taylor Swift or anybody else shouldn't be subjected to deep fakes.
Like, oh, maybe your voice should belong to you.
Maybe there should be a criminal penalty if that is replicated and distributed without your consent.
Because then you really could put people in handcuffs.
And by the time you put three, five, 10 people in handcuffs, the word does tend to get out.
Right.
The question of does the internet get to feel like
a place governed by law, as opposed to the exact opposite, which is its promise and also the hell that we all live in, remains seemingly also an open question.
Yeah.
And the AI is just the accelerant of all of the worst and best of this stuff, which is why like.
Am I worried about the existential risk?
I am worried about the existential risk of human beings.
This is an incredibly powerful tool that we invented, much like a bunch of other tools that we have invented from wheels and fire and poisons.
Can we just govern it like a little bit smarter?
Because if we can, I'm not really worried about existential risk.
Now I'm just like, could we just show some progress?
But in the movie that I think people have been playing in their heads, it's as simple as, it's as complicated as everything we've been talking about.
It's as simple as, can we unplug this thing?
Is there a kill switch?
There's no kill switch on anything.
I mean, I don't want to scare you, but like,
there's no single switch to shut off any of these things.
We build systems.
Like, that's what civilizations are.
And so we started as human beings, as a group of little molecules, and we built a little system and no shutoff on that.
The jugular vein.
There is no one kill switch.
Yeah, like you need the, you need the hockey skate.
No, there's no equivalent.
And so AI systems have been around for a long time.
Like not as chatbots, but they're built into the internet.
They're built into your phone.
There's no kill switch.
These are all circuits intertwined in infinity.
So like, no, you can't turn it off.
And so it sounds like what you're suggesting is that we have been focusing so much on the upside, the seemingly infinite ceiling towards which all of us are, uh, yeah, pointing our pitch decks and our fears and anxieties. And in reality, what you're saying is what?
Every new technology, people always focus on the very, very worst and the very, very best, right?
So when TV came along, everybody's like, this is going to be America's classroom, right?
Or this is going to rot their minds.
And where it kind of ended up is like, tonight at nine on CBS, the equalizer with Queen Latifah.
And my hunch is that that's probably what's going to happen with AI.
Like all things being normal, you're going to lift the floor on a bunch of people's performances at their work, at a bunch of interactions with government and civil society, production processes.
Is there going to be some like peak TV?
Yeah, there's going to be some amazing stuff.
There's going to be some real bad s***?
Yeah.
But over time, human beings tend to take amazing things and make them pretty mid.
Your argument is that...
Artificial intelligence is actually, in ways that we need to really come to finally admit, a lot like Young Sheldon.
It could just be that the young Sheldon secret is the secret, right?
It's like, good enough.
Yeah.
Good enough.
Yeah.
This brings me to the field that I ostensibly work in, which is sports, because sports,
to be very blunt about it, is unlike reality in ways that are tempting for people who are obsessed with math and optimization.
Yeah.
And also, like, look, I grew up in Baltimore, and the signal event of my public life as a child was the movement of the Baltimore Colts.
They were robbed at night.
It was, it was so embarrassing, right?
And so I became a just devoted Baltimore Orioles fan, like a huge O's fan.
Yeah, a lot of texts about the Orioles, which is truly a one-way enthusiasm.
Yeah, I've noticed, by the way, I've noticed.
So I've always loved sports.
When I was in high school, like a lot of kids, I went to a school that had senior projects.
And so I called the head groundskeeper, this guy named Pat Santarone, because back then we had the white pages, where you could basically find anybody in Baltimore.
I called him and I was like, look,
I could come work for you for free for March.
And he was like, yeah, okay, just show up tomorrow.
And then I got boots and a uniform.
And so when you look at sports through the lens of your current job,
how much optimizing is left?
Like we've come so far post-Moneyball into, you know, I've moderated panels at the Sloan Sports Analytics Conference year after year after year to the point where I've become numb and bored by it in many ways.
And so what's new here?
So that was the question I asked, which is, you know, what we've seen in the initial studies of AI in the workplace is like, they're great at making your really bad performers average quickly.
So if you're a consultant and you're bad at writing decks,
your decks get better real quick.
If you're a lawyer and you're bad at writing briefs, your briefs get better.
It doesn't help the top very much, but it can really lift the floor.
So I started poking around to just figure out like, huh, I wonder if that's happening in sports.
I spoke to Daryl Morey, who is, of course, the dark prince of Sloan analytics.
And Daryl was like, yeah, there's not in the models, there's not that much more room for AI to help us because we have optimized pretty much everything.
Now, Daryl, true to his reputation, did say, well, now I am working on one thing that I can't tell you about that I'm pretty excited about.
He's finally going to cure James Harden of his strip club addiction using artificial intelligence.
Well, that actually might work.
But Daryl at least conceded to the premise of like, there's not that much room.
He told me to talk to Shane Battier, a famous Duke alum.
Also had a second act after his career.
The Heat hire him.
And for a couple of years, he is in charge of analytics and communicating those analytics to players.
So I spoke to Shane Battier.
And Battier is like, look, the challenges are very different in basketball versus baseball.
So in baseball, I know if I have a certain launch angle and a certain exit velocity, which is, you know, physics-based, it's a home run in every park from here to Tokyo, right?
It's literally a physics problem.
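The "physics problem" Battier is pointing at can be sketched with a toy projectile model. This is an illustration only, not Statcast's actual batted-ball model: it ignores drag, spin, and altitude, all of which matter enormously in real parks, and the numbers are purely for intuition.

```python
import math

def carry_distance_ft(exit_velo_mph: float, launch_angle_deg: float) -> float:
    """Drag-free projectile range in feet. A toy model: real batted balls
    carry much less because of air resistance, and spin changes the flight."""
    v = exit_velo_mph * 5280 / 3600      # mph to feet per second
    theta = math.radians(launch_angle_deg)
    g = 32.174                           # gravity, ft/s^2
    return v ** 2 * math.sin(2 * theta) / g

# A 105 mph ball at a 28 degree launch angle clears 600 ft in a vacuum,
# far past any fence, which is the sense in which the same contact is a
# home run "in every park from here to Tokyo."
print(round(carry_distance_ft(105, 28)))
```

The point is only that the inputs (exit velocity, launch angle) deterministically bound the outcome, which is exactly what makes baseball so legible to models.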
But what Battier pointed out is like, you know, the NBA is full of essentially non-repeatable motions leading to a repeatable motion, which is shooting.
Right.
Right.
Baseball is just repeatable motion.
The two main things you do are throw a baseball and hit a baseball.
And so where AI can really improve that is you've got statistics on one side, which can tell you the outcome of any action.
And you have biometrics on the other, which can show you how you get to an action.
And AI can merge those things.
So Shane Battier is the person who's like, yeah, baseball, man, that's where it's all going to come together.
Would the incentives in baseball line up with the math perfectly?
There is, like, a fundamental... when I talk to the analytics people on the basketball side, there is almost this jealousy of baseball because of the way in which it's so turn-based, it is so atomized, to the point where every decision feels like you can isolate it.
Yeah, it's like, I mean, I don't know, for your audience, how many of them play blackjack, but it's basically like the interaction between the pitcher and the hitter is between a dealer and a player.
And the card comes out and you know exactly what to do based on the card.
It's not to say it's easy, but there's an actual book on what the best response would be.
Exactly.
And so not only is there a book, now there's a market.
And this is where Moneyball, you know, the success of that book is just infuriating to anyone who's ever written because it is amazing.
It's my favorite book and also the thing I also am most jealous of.
It's incredible.
It's like we're 21 years in and it's still like.
Still relevant.
Not only relevant, but like newly relevant, because the first version of its relevance was showing off how management, Billy Beane and the Athletics, could find a new way to value and exploit labor, which they did, right?
Famously, Kevin Youkilis, the Greek god of walks.
And so for several years, a bunch of general managers who are too stupid to notice are getting their asses kicked by the A's.
All because Billy Beane and that team found a new way to value and exploit labor.
Right.
On base percentage.
Exactly.
Phase two.
So now, 20 years on, everybody's onto the game.
Phase two is actually labor because now they've seen how management values baseball players.
They value power and they value strikeouts.
And it turns out.
You can engineer your swing or your arm to do those things.
AI is pretty important to figuring out how to do that.
This sounds geeky, but it's really not.
We all know what statistics are.
That's the outcome.
On the way in, there's all this video capture and motion capture and sensor data that happens through cameras, which doesn't really naturally speak to statistics.
But AI is this great translator of different data.
And so now you can go to a Driveline, you can go to your own team, and they can capture the data.
And on an iPad, you can see what you're supposed to do to hit with more power.
And if you hit for more power, you get more dollars.
And so all of the baseball labor market is now really attuned to something that's driven by AI.
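The merge Josh describes, outcome statistics on one side and biometrics on the other, is at bottom a join keyed on the player. Here's a minimal sketch with invented field names and numbers; real pipelines like Driveline's work from high-speed camera and sensor streams, not hand-typed dictionaries.

```python
# Toy join of outcome stats and biometric captures, keyed by player.
# All field names and values are invented for illustration.
outcomes = {
    "hitter_a": {"avg_exit_velo_mph": 88.0},
    "hitter_b": {"avg_exit_velo_mph": 94.5},
}
biometrics = {
    "hitter_a": {"hip_shoulder_sep_deg": 22.0, "bat_speed_mph": 68.0},
    "hitter_b": {"hip_shoulder_sep_deg": 34.0, "bat_speed_mph": 75.0},
}

# Merge: one row per player combining "what happened" (statistics)
# with "how you got there" (motion capture).
merged = {
    player: {**outcomes[player], **biometrics[player]}
    for player in outcomes.keys() & biometrics.keys()
}

for player, row in sorted(merged.items()):
    print(player, row)
```

Once the two sides live in one table, you can ask which mechanical features predict the paid-for outcomes, which is the "translator" role being described.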
All right, so if you're not familiar with Driveline Baseball, which Josh just mentioned there, it's a player development company that now works with thousands of professional players and more than 40 all-stars.
It was started by a former software engineer and blogger named Kyle Boddy, who also, by the way, got radicalized because he read Moneyball.
And Kyle's big insight here with Driveline is that they could use AI to translate this high-speed camera data at thousands of frames a second, and they could turn that into actionable real-life techniques to increase fastball velocity, for instance, or bat speed.
But another key thing for these players, their clients, which I did not realize until I read Josh's article about this, is that Driveline, specifically and proudly, is independent from these players' bosses.
Imagine you're in your own workplace and you've discovered that for a little bit of money, you can get really good and you're going to get a big raise.
And your employer knows it.
And they're like, oh, well, why don't we work together on that?
Well, you might, right?
Like they're like, oh, we'll pay for it.
And that sounds great.
But if they know how you got good, they also know how to train someone else to get good.
They also know how you might get bad.
And so having this independent training place that protects your biometric data, which belongs to you, is not a terrible idea.
Now,
the players that I spoke to are all very paranoid about their data, to the point that it's actually in the last collective bargaining agreement.
There's a little thing that says the players own their data.
Which is smart.
Very smart.
And also born of the paranoia and hatred between baseball players and baseball owners.
Right.
These guys who have to go sometimes in arbitration to a neutral judge to say, I believe I am worth this.
And their employer, their boss, who helped them ostensibly get to whatever that statistical benchmark is, says, actually, this guy sucks for these reasons.
They're having their actual numbers perpetually used against them.
Not to mention the last 20 years have been about the one book we just discussed called Moneyball, in which management is ruthlessly trying to find cheaper edges to use players.
So the players are really paranoid about the data.
So it makes sense to spend $20,000, particularly when you're making a lot more than $20,000, to go to a place that can teach you all this stuff, show you how your arm works, show you where the launch angle is, will give you updates, pretty good service.
And then you can go back and yes, the teams have some data on you, but you're protecting your own.
Now, after my piece published, I heard from two baseball executives.
One of whom said, these guys are fing crazy.
Like, what do they think we have that we would use against them?
Right.
And the other who said, yeah, that's probably pretty smart.
One of those executives was like, look,
I know you believe that money is important to me, but allow me to explain why it's not.
Right.
But, okay, so the biggest change right now that people are underrating seems to be that athletes, in baseball specifically, are already keyed in to the ways in which biometric data and their own personal analysis of themselves is something that should be proprietary, and not open and accessible to your employer. Which feels like a warning to everyone about that, whether you're an athlete or not.
I think it's coming for everyone in some way, maybe not in the same way that baseball is, but like I have some friends who are lawyers, right?
One of the things that's amazing about the law is like, you know, there's no analysis of juries.
There's no analysis of opening statements.
Like if you knew certain questions led to certain responses across the whole history, and remember, there are court reporters.
So we have transcripts.
Yes.
AI can do natural language processing.
Oh, I love this.
Can you imagine reverse engineering an outcome from all of that data?
Right.
So your firm might want to invest in that.
Right.
And you as a lawyer would definitely want to know it.
Right.
Opening statement analytics.
It should happen.
It's also so funny to me. I mean, Pablo, this is so far ahead of where we even are.
There's a big story in New York right now about a cop who's a crooked cop and a bunch of the people he put away got together and became lawyers.
And what they discovered through their own analysis of a bunch of the other arrests was that this cop had hired a prostitute to testify in nine separate murder cases that he had prosecuted, that he had brought to the court.
So like, forget natural language processing of opening statements.
How about just frequency of expert witnesses across the court system?
Just the stuff that's hiding in a giant, essentially drawer full of paper.
Exactly.
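The check Josh describes, spotting one witness recurring across supposedly unrelated cases, is at bottom a frequency count over case records. A minimal sketch with invented case names and witnesses (any real version would have to parse actual court filings first):

```python
from collections import Counter

# Invented case records; each lists the witnesses who testified.
cases = [
    {"case": "People v. A", "witnesses": ["W. Smith", "J. Doe"]},
    {"case": "People v. B", "witnesses": ["J. Doe"]},
    {"case": "People v. C", "witnesses": ["J. Doe", "R. Lee"]},
    {"case": "People v. D", "witnesses": ["R. Lee"]},
]

# Count how many cases each witness appears in.
counts = Counter(w for c in cases for w in c["witnesses"])

# Flag anyone who shows up suspiciously often across separate cases.
threshold = 3
flagged = [w for w, n in counts.items() if n >= threshold]
print(flagged)  # prints ['J. Doe'], who appears in 3 of the 4 cases
```

That's the whole trick: the pattern was always in the records, it just needed to be counted rather than read one file at a time.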
So imagine the ways in which that could be brought to your workplace.
And then what's your relationship to your employer?
So if you're in a law firm, you both benefit from winning, theoretically.
It doesn't work like that for everybody.
And so in the places where your interests diverge, owning your performance data is going to be a thing.
Like I'm quite confident.
And that performance data is coming for everyone.
That brings up, I think, the fan concern here.
Again, let's think about the consumer, the people who are
both the story and yet generally ignored when it comes to how we write the movie in our heads about what's happening here.
The trade-off fundamentally, and this is, I think, a larger theory that I have about sports and modernity and late capitalism.
The trade-off is between efficiency and entertainment.
And so, you're describing the ways in which optimizing is still possible, in which these edges can be gained, even post-Moneyball.
And the question is: Do you think it's going to get more fun
for us watching this?
I think it'll be just as fun as it's always been.
That's the weird part. Like, a lot of it will be demystified, but you still have to hit the ball, right?
You still have to look insane as you round third base.
You still have to scream at somebody.
Like, I do actually think the fun will still be there and that understanding doesn't necessarily negate our ability to enjoy things.
There's certainly elements where like unregulated, it's going to get out of control.
Like baseball was unwatchable.
Oh, yeah, they changed it.
All of these laws, so to speak, to make people do things that are entertaining again.
Yeah.
But I think that's also because those people were lazy and disagreeable, right?
Like
if you're the commissioner of baseball and you just think, well, people like baseball no matter what, you're out of a job, man.
Like, nobody has a right to exist.
And so I think what it's going to lead to is, like, more activism, more awareness that things change more quickly.
But if you move in step with it, these are still games we're going to enjoy.
I think it will expose the phonies pretty quick though.
Like if you're not aware of what's happening inside your game and you're not willing to take chances, you're going to lose.
And so like, I look at somebody like Adam Silver, where if you, if I told you 10 years ago, all right, well, there's going to be a play-in tournament.
And by the way, LeBron's not guaranteed to get in.
Steph Curry might get out like the first night.
Oh, there's going to be an in-season tournament.
Oh, and they might switch the way games are refereed in the middle of the season and tell no one.
First of all, David Stern would have broken a table with his hand.
And you'd be like, no, come on.
But they've been responsive, right?
And so like.
They've had to.
They've had to adapt because of market pressures.
Exactly.
And you got to give them credit for doing it with like no problems.
They know that's the new rule of the game.
Even the NFL, which is like, you know,
a little shady sometimes, they've been adaptive.
They've moved quickly.
And that's everything from streaming to this crazy new kickoff rule.
It goes to, I mean, in the NFL, there has actually been more fun, because the analytics have suggested being more risk-seeking: go for two, you know, like, go for it.
Go for two, and also adopt this absolutely bonkers Thanksgiving-family-game kickoff rule where it's got to be between, like, I don't even know what that rule is.
Right.
But I am a fan because I think what they've figured out is like, these things are going to change swiftly because the edge is going to force us to change them.
And if you don't, the game won't be fun.
So I think that the proper management of this is going to ensure that you and I have a really good time yelling at the TV for many years to come.
And that is because to bring it full circle here, the primary motive here is money.
Yes.
In so many words, again,
you're long on capitalism.
I'm long on humans.
I think capitalism is a great invention for the most part, and that it has spurred a lot of innovation.
And we are living better because capitalism forced people to invent things and make changes to the world.
But ultimately, like, I'm long on humans finding ways to adapt technology to make their lives better, however they define better.
And that even though there's six or eight people really controlling the AI world right now,
you know, nothing's more stubborn than a person who just wants to watch The Equalizer.
And so it's like the anti-Mike Tyson, like, you know, Mike Tyson's famous quote, everybody has a plan until they get punched in the face.
In tech, it's like, yeah, everybody has a grand idea until they meet the guy who's like, is he going to get my food here faster?
Right.
And I tend to think that's where it's going to net out.
Right.
Josh, thank you for what turns out to be one of the most bizarrely inspirational conversations I have had on this show.
It's been my pleasure.
This has been Pablo Torre Finds Out, a Metalark Media production.
And I'll talk to you next time.