117. Nate Silver Says We're Bad at Making Predictions

42m
Data scientist Nate Silver gained attention for his election predictions. But even the best prognosticators get it wrong sometimes. He talks to Steve about making good decisions with data, why he’d rather write a newsletter than an academic paper, and how online poker led him to the world of politics.


Transcript

It's Stock Up September at Whole Foods Market.

Find sales on supplements to power up for busy weeks.

Plus, pack your pantry with pasta, sauce, and more everyday essentials.

Enjoy quick breakfast for less with 365 by Whole Foods Market seasonal coffee and oatmeal.

Grab ready-to-heat meals that are perfect for the office and save on versatile no antibiotics ever chicken breasts.

Stock up now at Whole Foods Market, in-store and online.

What does it mean to live a rich life?

It means brave first leaps, tearful goodbyes,

and everything in between.

With over 100 years' experience navigating the ups and downs of the market and of life, your Edward Jones financial advisor will be there to help you move ahead with confidence.

Because with all you've done to find your rich, we'll do all we can to help you keep enjoying it.

Edward Jones, member SIPC.

My guest today, Nate Silver, is the founder of the data-driven website 538.

He's also the author of the best-selling book, The Signal and the Noise.

Silver's best known for his shockingly accurate election predictions, but that's just the tip of the iceberg.

I think I have strength in dealing with imperfect information and dealing with uncertainty and kind of refining best guesses.

Welcome to People I Mostly Admire with Steve Levitt.

What I love most about Nate Silver is that he has such amazing instincts both for analyzing and writing about data.

Almost every time I read something he's written, I have the same reaction.

Damn, I wish I'd written that.

So let's start with the topic you're most famous for, and that's predicting election outcomes.

In 2008, your first foray into political prediction, you correctly predicted 49 out of 50 states in the Electoral College.

And then you, against all odds, did even better in 2012, getting every single state right.

And those two election cycles led people to believe you were some kind of a messiah or an oracle.

And I have to imagine that's both a blessing and a curse, right?

Very much so.

I remember telling my literary agent that I am being set up to fail here.

Inevitably, there'll be a time when the low probability outcome comes up.

Although in 2016, it wasn't that low, actually.

We had Trump with a 29% chance of winning on election day.

But yeah, it led to this misunderstanding of what I do and what the data science behind election forecasting is.

A lot of the time we wind up being less confident than the conventional wisdom about an outcome.

That's where the numbers point, right?

Polling has been error-prone in the past.

It will be error-prone again.

So you've been miraculous 2008 and 2012, and the 2016 election comes along.

It's Hillary Clinton versus Donald Trump.

And every respectable prognosticator had Clinton favored.

But you actually had Trump with a pretty good chance, 29% chance of winning.

And afterwards, I heard a lot of people saying, oh, Nate Silver and 538.

They really blew it in 2016, which is a completely predictable response, but I think absolutely the wrong takeaway.

And I'm sure you must agree with me that there were much more useful conclusions to draw from 2016.

A real opportunity to start understanding better the nature of predictions.

A lot of people who are into political forecasting just want to hear that their guy is going to win.

So you build up a large audience of progressive Democrats who think, oh, here's this guy that always has good news for Obama and for Democrats.

And then when that's not the case, it causes a lot of cognitive dissonance.

So it kind of felt like in 2016 where our model did have Hillary Clinton favored, but less so than the conventional wisdom had her.

So that was kind of like a thankless position where you're like, yes, this thing that you think is unlikely is actually less unlikely than you think, but still below 50%.

That's a hard message to convey in the heat of an election campaign.

I am a guy who thinks about markets, right?

Like I play poker.

I make sports bets sometimes.

So to me, the consensus odds at prediction markets were that Trump had something like a 15 or an 18% chance.

And if you're 29%, then you would actually bet quite a lot of money on Trump.
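The arithmetic behind that bet can be sketched in a few lines. The 15% and 29% figures come from the conversation above; the payout formula assumes an idealized fair market with no fees or vig, which is a simplification.

```python
# Hypothetical numbers from the conversation: prediction markets price Trump
# at ~15%, while the model says 29%. A fair bet at a 15% implied probability
# pays out 1/0.15 dollars per dollar staked if it hits.
def expected_value(model_prob: float, market_prob: float) -> float:
    """Expected profit per $1 staked when the market's implied
    probability is market_prob but the true probability is model_prob."""
    payout = 1.0 / market_prob        # total return per $1 if the bet wins
    return model_prob * payout - 1.0  # win payout minus the $1 stake

print(expected_value(0.29, 0.15))  # roughly +0.93: nearly doubling your money in expectation
```

At a 29% true probability against 15% market odds, the edge is large enough that a bettor thinking in these terms would, as Silver says, stake quite a lot.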

Now, why was he more likely than people assumed?

It's partly because of a mathematical property of the way the Electoral College works, where people would look at these polls and say, well, Clinton's ahead in Wisconsin and she's ahead in Michigan and she's ahead in Pennsylvania and she'd have to lose all those states to lose, right?

And what are the chances of that?

The issue is that all those states are correlated.

They have basically the same white working-class voter base.

So when Trump does better than expected in Wisconsin, he'll probably also do better in other Midwestern states, Michigan, Ohio, Pennsylvania, and so forth.

And so our model realized that these things are highly correlated, that being up a tiny bit in a lot of states is actually not all that good.

Because if there's a uniform swing in one direction, then all of a sudden you lose all these states by a point or two instead of winning them by a point or two.

And that's basically what happened: Clinton's support was overestimated in the upper Midwest, and that's a critical region in the Electoral College.

And then you get Donald Trump as president.

So the point you just made is that it's not that there are 50 independent shocks, with each state getting some random draw.

It's that there's a set of shocks, a small number, and they trickle across states.

Is that why you think other people's models gave a higher percentage chance to Clinton?

Because essentially they were getting the standard errors wrong.

That's the main reason.

The Huffington Post, for example, had a model that had Clinton with a 99% chance of winning.

There was a model at Princeton that was like 99.some percent.

If you remove the part of your model that says that these states are correlated and not independent, then you'll get a really overconfident answer.
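A toy Monte Carlo makes the point concrete. This is not 538's actual model: the one-point leads, the three-state setup, and the error sizes below are invented for illustration, but they show how a shared (correlated) error term changes the odds of losing every state at once.

```python
import random

def p_sweep_loss(shared_sd: float, state_sd: float, lead: float = 1.0,
                 trials: int = 200_000) -> float:
    """Probability the polling leader loses ALL three states.

    Each state's margin is lead + shared error + state-specific error;
    a nonzero shared_sd makes the three outcomes correlated."""
    losses = 0
    for _ in range(trials):
        national = random.gauss(0, shared_sd)  # error common to every state
        if all(lead + national + random.gauss(0, state_sd) < 0 for _ in range(3)):
            losses += 1
    return losses / trials

random.seed(0)
# Both runs have roughly the same total error per state (about 3 points).
print(p_sweep_loss(shared_sd=0.0, state_sd=3.0))  # independent errors: losing all three is rare
print(p_sweep_loss(shared_sd=2.7, state_sd=1.3))  # mostly shared error: far more likely
```

With independent errors, a three-state sweep against the leader happens only a few percent of the time; make most of the error shared and the same total polling error produces a sweep several times as often, which is the difference between a 99% forecast and one closer to 538's.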

There are some other subtle things too.

Our model had priced in the fact that in 2016, you had a big third-party vote.

You had a lot of undecided voters.

So there were more votes up for grabs than usual.

A lot of people who say they're going to vote third party actually wind up under the pressure of the ballot booth, picking one of the two major parties.

But the main thing was just that you cannot treat this as 50 independent contests.

It's the same two people on the ballot in every state.

And I grew up in Michigan.

People in Michigan are not that different than people from Wisconsin or Ohio, despite rooting for different football teams and so forth.

Yeah, exactly.

You got back in form in 2020, right?

You got 48 out of 50 states.

And I have to say, I was surprised to see reading your new Substack account that you're not sure whether you'll even cover the 2024 election.

And I'll believe that when I see it, because the demand for your forecast is going to be intense.

The issue is that people look at me as some avatar for I don't even know what anymore, right?

But there's a lot of pressure to convey information to people that are not in a mood for rigorous analysis necessarily at all.

You have people who feel very strongly emotionally about this election.

But I think people have trouble grasping the idea that an election is one event drawn from a larger sequence, or a reference class, to put it the nerdy way.

And parties don't want you to believe that elections are probabilistic.

They want you to think that our guy is the righteous guy to inhabit the White House and that you as the voter control this process.

But yeah, it's a little bit of oil and water as far as what the audience wants versus what a probabilistic forecast can really provide.

I don't think I'd be exaggerating if I said that you are the number one celebrity data scientist in America. If we polled a representative group of Americans and asked them to name a data scientist, your name would come up more than any other. And I love that for two reasons.

The first is because I have great admiration for what you do with data.

And the second reason is that when it comes to data science, I think you're essentially self-taught.

You don't have any fancy credentials like a PhD.

You didn't even major in the right subject in college.

You were an economics major in college.

Whereas the kind of people who get hired as data scientists at fancy tech firms, they tend to be statisticians or computer scientists by training.

And I've always argued that the most important determinant of a great data scientist isn't knowing lots of complicated techniques.

It's having common sense and curiosity, a knack for asking good questions, and the ability to tell a good story with data.

Your success, I think, should be an inspiration to every budding data scientist who fits that bill.

So that's my explanation for your success.

What do you think the secret is to your success?

I mean, it still is a little bizarre.

First of all, let me say one thing.

I do think actually the fact that I was an economics graduate at University of Chicago, by the way, is worth pointing out because I think economists are good at framing questions that can be answered rigorously, ideally, with data.

Yeah, I think of economics as essentially applied common sense.

That's why it's a good precursor to being a data scientist.

I also have a lot of hands-on experience in weird ways from playing poker, from building fancy models for fantasy baseball and things like that.

It's weird to be someone who's not terribly quote unquote political, you know, to be very caught up in elections and then people are making inferences about your political preferences based on what your forecast says.

That's been a little bit of a weird journey, I think, being in the right place at the right time too.

I mean, like interest in American elections increased vastly beginning in 2008 with the rise of Barack Obama.

We certainly have had a series of very close and dramatic and interesting elections, right?

Any country that can elect both Obama and Donald Trump back to back is a complicated country.

It's been interesting having a front row seat at this very confusing time in some ways to be an American.

I want to go back to you as a data scientist, though, because if you were going to be a physicist or a chemist, there's no way that you could be a wildly successful physicist or chemist by being a poker player and building some models.

And it's interesting to me, fascinating to me that somebody like you, as far as I can tell, you just were interested in questions and you would gather data.

and you try to make sense of the world and you just did that over and over and got better.

To be fair, I would say I'm a self-taught data scientist too.

But is your identity that of a data scientist? How do you think of yourself, even?

It depends on what I'm doing.

I don't have the kind of engineer-physicist brain, right?

I think I have strength in dealing with imperfect information and dealing with uncertainty and kind of refining best guesses.

I think of the world in provisional and probabilistic terms.

I literally was a professional poker player for a couple of years in my mid-20s.

One disadvantage of academia is that it's a little bit slow-moving, potentially.

It takes a long time to publish papers, whereas if you're publishing an article, formerly at 538, now at my Substack, called Silver Bulletin, you're trying to do 80% as good a job in like 10% or 2% as much time.

And believe me, I think there is value in rigor, but having a lot of reps at seeing a data set quickly and getting the gist of it, with some degree of uncertainty, is more often than not a valuable skill set.

It's also more fun.

That's the thing that sapped the fun for me out of academics.

I love coming up with ideas.

I love finding a pile of data and making sense of it and getting the basic idea.

And that takes maybe 10% of the entire time of doing an academic paper.

And about 50% of the time of doing an academic paper is addressing 74 different possible criticisms, which you know don't matter, but that somebody might raise.

And so in order to get published, you need to essentially rule out or at least discuss or know about each of those.

And that's just not that much fun for me compared to the quick hit version.

Sometimes you'll be wrong, even when you do the really rigorous thing, because you don't see the whole story.

But the trade-off, both I think in terms of getting to answers and just in terms of pure joy of doing the research is much higher in a blog world than in an academic world.

And it's also, I don't mean to criticize academic writing, but it is a little bit of a straitjacket.

There is some art as far as like conveying what is the degree of epistemic certainty I have about this conclusion, right?

Is this kind of conjecture written in dry-erase marker, or is it something I'm going to engrave in limestone because I'm so confident of it?

In academic writing, it almost always sounds like the latter, even when you'd prefer the former.

The problem comes down to the nature of the adversarial relationship between the author and the anonymous referees whose job it is to say that a paper should or shouldn't be published.

And almost never will a referee say, hey, this kind of seems mostly right.

You should publish it.

If it isn't 1,000% right, then they say don't publish it.

And the consequence is that in academics, what I ended up doing was biting off tiny little problems that I knew I could answer and avoiding going after big questions where there might be question marks left over even after I did the best job I could.

And yet we still have a replication crisis.

But yeah, it leads to these kind of very precise answers that sometimes aren't the most accurate answers is the way to put it, right?

You're answering a very particular question with particular conditions or specifications.

There are so many ways that as a researcher, you can make different choices with how you handle a data set.

I mean, I don't think it's bad, but it can put you in a very potentially defensive mentality.

I actually changed the way I wrote papers after a while with exactly this spirit in mind.

Usually what economists do, which is crazy scientifically, is they come up with a theory, they go to the data, they test it, and inevitably the data don't match the theory.

So then what the economist does is say, okay, I need a different theory that matches the data.

And they come up with a different theory.

And then the way they write it up is, here's the theory, and it is validated by the data.

It is so unscientific, it's almost unimaginable, but that is the form most academic papers take.

But I decided to do something very different.

What I began to do later in my career was to say, hey, here's a data set.

I'm just going to describe what's in the data.

These are the correlations in the data.

And correlations aren't what we care about.

We care about causality.

But what I see are correlations.

And I would try to come up with every theory I could think of.

And then I'd say, which of those theories are or are not consistent with the data?

And then what I'd try to do is say, well, is there anything else in the data that I hadn't thought of before that might distinguish these theories?

And then go back and try to add it.

And it's a much more agnostic way and I think a much more scientific way of doing empirical analysis.

But it's hard to get papers published that way because it doesn't have this veneer of the quote scientific method where you go out and you have a hypothesis and you test it and you show that the data are consistent with it.

And we've also evolved to a place politically in the U.S. where phrases like "trust the data" or "trust the science" or "trust the expertise" have become a little bit loaded.

And if you know anything about science, then it is kind of a somewhat adversarial process, right?

And a lot of facts are seen as provisional and are subject to change and are subject to scrutiny, certainly, and skepticism.

And where are we in that balance between like healthy and unhealthy skepticism?

I don't know.

But if you're in a paradigm where admitting doubt is seen as weakening your argument, then that's not a very scientific way of thinking, I don't think.

We'll be right back with more of my conversation with Nate Silver after this short break.

This is a vacation with Chase Sapphire Reserve, the Butler, the Spa.

This is the Edit, a collection of hand-picked luxury hotels and a $500 edit credit.

Chase Sapphire Reserve, the most rewarding card.

Learn more at chase.com/sapphirereserve.

Cards issued by JPMorgan Chase Bank, member FDIC, subject to credit approval.

People I Mostly Admire is sponsored by LinkedIn.

As a small business owner, your business is always on your mind.

So when you're hiring, you need a partner who's just as dedicated as you are.

That hiring partner is LinkedIn Jobs.

When you clock out, LinkedIn clocks in.

They make it easy to post your job for free, share it with your network, and get qualified candidates that you can manage all in one place.

And LinkedIn's new feature can help you write job descriptions and then quickly get your job in front of the right people with deep candidate insights.

You can post your job for free or choose to promote it.

Promoted jobs attract three times more qualified applicants.

At the end of the day, the most important thing to your small business is the quality of candidates.

And with LinkedIn, you can feel confident that you're getting the best.

Post your job for free at linkedin.com/admire.

That's linkedin.com/admire to post your job for free.

Terms and conditions apply.

People I Mostly Admire is sponsored by Mint Mobile.

From new shoes to new supplies, the back-to-school season comes with a lot of expenses.

Your wireless bill shouldn't be one of them.

Ditch overpriced wireless and switch to Mint Mobile, where you can get the coverage and speed you're used to, but for way less money.

For a limited time, Mint Mobile is offering three months of unlimited premium wireless service for $15 a month.

Because this school year, your budget deserves a break.

Get this new customer offer and your three-month unlimited wireless plan for just $15 a month at mintmobile.com slash admire.

That's mintmobile.com slash admire.

Upfront payment of $45 required, equivalent to $15 a month.

Limited time new customer offer for first three months only.

Speeds may slow above 35 gigabytes on unlimited plan.

Taxes and fees extra, see Mint Mobile for details.

Let's take an example of something you've done recently, which I think is fascinating from the perspective of this conversation we're having.

It's in your sub stack, the silver bulletin, and it's about COVID-19.

And I thought the results themselves were really interesting, but what I especially liked is the way you talked about the results.

Could you just lay out the question you were trying to shed light on about COVID, your empirical strategy, and your findings?

It was a Friday afternoon and kind of what inspires me to write particular posts I'm never quite certain of.

But what I thought was a relatively straightforward finding that until the introduction of vaccines, so early 2021 basically, there was no relationship to speak of between the political orientation of a state and how many people were dying from COVID.

So you had, for example, some blue states like New York, New Jersey, Massachusetts that had very high death rates.

You also had some red states, Arizona, the Dakotas and whatnot, that had very high death rates from COVID.

Not much of a correlation.

And then once you can get vaccinated, you see pretty strong correlations.

The top of the list of COVID deaths is almost all red states.

The bottom of the list is almost all blue states.

Not perfect, but having looked at lots of data sets when it comes to American politics, you know when you see the red states and the blue states lined up in a particular way.

And the reason here is not because like COVID targeted Republicans, but because Republicans were quite a bit less likely to get vaccinated.

What I like about what you do is it's often very simple, easy to explain, and plausible.

By comparing the before vaccine time to the after the vaccine time, you're trying to essentially create something like a control group.

Something changed, and that's the introduction of the vaccine.

And according to the data, it very differentially affected states that were heavily Republican versus heavily Democrat.

And these are big magnitudes.

Do you have a sense of how many extra deaths you might be talking about in Republican states because of less vaccination, if the story is true?

So if you just lump all red states and all blue states together, meaning based on how they voted in 2020 and 2016, then the red states are about 35% higher.

Higher in death rate.

Yeah, but the very red states, like West Virginia, it's a larger gap than that potentially.

But yeah, if you just squeeze them into groups, then about 35% higher.
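The aggregate comparison can be sketched like this. The deaths-per-100k numbers below are invented to match the shape of the finding (roughly no gap before vaccines, about a 35% gap after); they are not actual CDC figures.

```python
# Hypothetical deaths per 100,000 residents, by period and by how the
# state voted -- illustrative values only, not real COVID data.
death_rates = {
    ("pre-vaccine",  "red"):  120,
    ("pre-vaccine",  "blue"): 122,
    ("post-vaccine", "red"):  190,
    ("post-vaccine", "blue"): 140,
}

for period in ("pre-vaccine", "post-vaccine"):
    red = death_rates[(period, "red")]
    blue = death_rates[(period, "blue")]
    print(f"{period}: red-state rate is {red / blue - 1:+.0%} vs. blue states")
```

The before/after structure is doing the causal work here: whatever made red and blue states comparable pre-vaccine presumably still held afterward, so a gap that opens only once vaccines arrive points at differential vaccination.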

So I tell you what I thought was interesting about this is the real question isn't about politics.

The real question is about does a vaccine work.

But it's not that easy to figure out in real world data whether the vaccine works because of selection and who gets it and whatnot.

And this is an interesting,

maybe not cut and dry, maybe not completely overwhelmingly convincing, but an interesting case of thinking about data creatively to try to get at causality in a world where it's not that easy to get at causality.

And there are other ways to do it.

And what I also liked about your substack is you did this simple aggregate analysis and then you referred to a fascinating study that had actually gone into registration records to do the same thing.

Could you talk about that study as well?

Because I thought that was really interesting.

I hadn't seen that study before.

Yeah, this is a Yale study.

I think it was a collaboration between the management school and public health school.

And they actually are going to the individual level data.

They're looking up voter registration records in Ohio and Florida and running a search of people who died during that period.

And they found the same thing: up until January or February 2021, neither party's registered voters had higher excess deaths.

And then once vaccines were available, Republicans did.

And the good thing about this is: A, they have individual level records.

B, because they're confining this analysis to individual states, Florida and Ohio, there's less regional luck of the draw in where COVID happens to land or where a particular strain might have more effect.

So they're controlling for a lot of the things that my kind of quick-and-dirty analysis didn't, and they find the same thing.

And that's again, as a researcher, when you start to say, okay, here are two pretty different methods and they have a similar result, one's more involved, one's simple, that starts to be pretty robust more often than not.

The other thing that I found really interesting, and it's a little bit behind the scenes, is the fact that there wasn't a difference between the red and blue states before the vaccine.

And presumably, if the Republicans didn't like the vaccines, they also didn't like a lot of the other policies we were doing to try to fight COVID, like restricting social contact.

The implication is that maybe those other policies weren't working very well, which is interesting because we just don't talk about that as much as we should, thinking about future epidemics and what maybe we should be doing.

What I think it does is put an upper bound on how effective those other measures were in practice.

That a non-finding doesn't mean the effect was zero, only that the effect was uncertain, is a subtle point that people miss sometimes.

Okay, so you did this Substack post and it said things you thought were pretty sensible and well supported by the data.

And so the critics come.

So tell me about that and how you think about criticism in this context.

This seemed like a very kind of moderate and sensible position to me.

Hey, the vaccines made a clear, obvious difference.

And the other stuff, it's not so obvious.

And so maybe it wasn't worth it.

That seems like a very centrist kind of take, but instead, it gives people two different ways to get pissed off at you, right?

Here's what I found entertaining as an outsider watching you do this, but find incredibly frustrating when I'm the one being attacked: there's a real imbalance in the amount of work it takes to produce a thoughtful, data-driven piece that makes a sensible point about the world.

But then to criticize empirical work takes no time at all, right?

That drives me crazy, that mismatch between how hard it is to produce and how easily people can sway readers just by criticizing you without any support.

A big culprit in this is also Twitter, or I guess it's now called X officially.

The fact that you can take a position and write a pithy tweet in a minute or 20 seconds, I think makes this issue worse and leads to a lot more tribal rivalry and kind of dunking on people.

I have kind of quite self-consciously pivoted away from Twitter toward Substack, toward my newsletter there for many reasons.

One is that you can get subscribers, including paying subscribers.

And so at least if you're the subject of some annoying controversy, you can make a little bit of money off it now.

But also to be able to control the tone and say, look, I'm going to take four hours with this subject and not four minutes.

So I want to go back to the book you wrote.

It's called The Signal and the Noise.

And knowing we'd be talking, I went back and I took a look at it for the first time since right after it came out.

And I have to say, it's really an awesome book.

You make a lot of simple, but I think important points.

So to me, the most succinct summary of the book is this.

You wrote, we need to stop and admit it.

We have a prediction problem.

We love to predict things and we aren't very good at it.

Yeah, that's the thesis of the book, basically.

Still go out and buy it.

I think the irony comes up a little bit with the Trump 2016 prediction.

I'd call it a forecast technically.

But like people really demand certainty, right?

They assume that if someone's an expert, they must know all the answers with a high degree of confidence when there are times like in 2016, where the right answer is be less certain.

That's a kind of hard message to sell.

And it's not getting any easier in the kind of days of Twitter and other social media where people have access to a stylized interpretation of facts that flatter their political and other preferences.

But human beings in so many domains have failed, including economics, right?

Economics is notorious for challenges in predicting macroeconomic conditions, problems that we thought were solved, like inflation, obviously weren't in the past couple of years.

So there really aren't very many examples of successful predictions.

Exceptions include weather forecasting.

25 or 30 years ago, weather forecasting was literally a joke, and there was very little predictive power more than a couple of days out.

And now they can precisely say next Tuesday at 3 p.m., 80% chance of rain.

It's quite useful.

So what makes weather forecasters good are a couple of things.

One is they do actually have physical models of the world.

It's not just purely statistical.

That helps a lot.

And also they have a lot of practice where if you make forecasts every day, 24 hours a day of temperature and wind and pressure and all these other variables, then experience really helps.

You get a lot better calibrated if you get a lot of feedback knowing when you're right and when you're wrong.
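That feedback loop can be made concrete with a simple calibration check: bucket past forecasts by stated probability and ask how often the event actually happened in each bucket. The forecast/outcome pairs below are made up for illustration.

```python
from collections import defaultdict

# (stated probability, did it happen?) pairs -- an invented track record.
forecasts = [(0.8, 1), (0.8, 1), (0.8, 0), (0.8, 1), (0.8, 1),
             (0.2, 0), (0.2, 0), (0.2, 1), (0.2, 0), (0.2, 0)]

buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[prob].append(happened)

# A well-calibrated forecaster's "80%" events happen about 80% of the time.
for prob, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%}, happened {observed:.0%} of the time")
```

This is exactly the kind of check daily weather forecasting makes possible: thousands of graded predictions, so miscalibration shows up quickly and can be corrected.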

So you say we're bad at predictions, and then you get into what you think the main reasons are that we fail.

One reason you raise is that we focus on those signals that tell a story about the world as we would like it to be, not how it really is.

This is especially true if you cover politics for a living and cover elections and polling.

I can guarantee you that like in October 2024, you'll have Republicans making these grand claims that Trump is going to win based on the data and Democrats say the same thing about Biden.

And it's kind of funny how people like just don't have an awareness of like how much confirmation bias they have.

It's like they might intellectually understand in some abstract way that like confirmation bias exists.

But partisan political preferences train you to see everything in a blinkered partisan way.

There's no particular reason that your view on marginal tax rate should correlate with your view on abortion, for example, or like transgender rights or something like that.

But parties try to get people to form coalitions by agreeing on a bunch of unrelated stuff.

And it's almost like a recipe for confirmation bias.

I think about the game theory of politics a little bit, right?

It's not a coincidence that most presidential elections are about 50-50.

The parties are very efficient in some ways at forming coalitions, but that means they're taking complicated human affairs and complicated people and voters and swooshing them all down into one dimension.

And so that's a recipe for being yelled at if you have heretical, complicated political views.

You're listening to People I Mostly Admire with Steve Levitt and his conversation with Nate Silver.

After this short break, they'll return to talk about how online poker led Nate to the world of political forecasting.

This is the Chase Sapphire Lounge at Boston Logan.

You got clam chowder.

In New York, Dirty Martini.

Over 1,300 airport lounges and one card that gets you in.

Chase Sapphire Reserve, the most rewarding card.

Learn more at chase.com/sapphirereserve.

Cards issued by JPMorgan Chase Bank, member FDIC, subject to credit approval.

There's Marshawn Lynch.

You and I make decisions every day, but on prize picks, being right can get you paid.

So I'm here to make sure you don't miss any of the action this football season.

With prize picks, it's good to be right.

Download the Prize Picks app today and use code Pandora to get $50 in lineups after you play your first $5 lineup.

That's code Pandora to get $50 in lineups after you play your first $5 lineup.

PrizePicks, it's good to be right.

Must be present in certain states.

Visit PrizePicks.com for restrictions and details.

Honey, do not make plans Saturday, September 13th, okay?

Why, what's happening?

The Walmart wellness event.

Flu shots, health screenings, free samples from those brands you like.

All that at Walmart.

We can just walk right in.

No appointment needed.

Who knew we could cover our health and wellness needs at Walmart?

Check the calendar Saturday, September 13th.

Walmart Wellness Event.

You knew.

I knew.

Check in on your health at the same place you already shop.

Visit Walmart Saturday, September 13th for our semi-annual wellness event.

Flu shots subject to availability and applicable state law.

Age restrictions apply.

Free samples while supplies last.

Disney acquired Nate's website 538 in 2018, but they gutted his team there earlier this year as part of broader cost-cutting efforts.

I'm much more interested in how he grew 538 to be what it was than I am in the Disney fallout, but I guess I need to ask him about both.

So I've been focusing on Nate Silver, the data scientist, but you've also been pretty entrepreneurial.

What was the origin of 538?

Do I remember correctly that you were doing some kind of an anonymous political blog or something like that?

So you give my kind of very brief life history.

I went to the University of Chicago, graduated in 2000, got a job working at...

Let me ask you, how come you didn't take my class?

I think your classes were like not at the right time of day.

I was lazy as a student.

So I had like four-day weekends every weekend.

Not the most productive period of my college career, I will tell you that much.

But yeah, I graduated.

I got a job as a consultant for KPMG, which is a big accounting firm.

Found it pretty boring, but on the side, began doing a couple of things.

One is that I started playing internet poker, which was in a boom period back then, and also started working on models to forecast how Major League Baseball players would do.

I basically had a lot of free time at work and wound up quitting to do a combination of those things.

I was actually making most of the money playing poker, but putting more of the time into working on baseball statistics. Sabermetrics is the more technical term for it.

Did those things for about three years.

In 2006, the U.S. Congress passed a law that essentially banned internet poker.

What it really did is kind of ban payment processing to online poker sites.

So there were workarounds, but a lot of the liquidity in the market dried up.

So having my livelihood destroyed by this act of Congress, I began actually in 2006 becoming more interested in the congressional elections that year.

I wanted to see the people who had passed this legislation voted out of office.

Vindictive, huh?

Jim Leach was this kind of very old school moderate congressman from Iowa who was a lead sponsor of the bill.

And he actually did lose to a random political science professor who was backed by poker players' contributions, actually. Dave Loebsack, I think the name was.

I was living in Chicago at the time, and it was hard not to be compelled.

By the 2008 election, you had all these mega stars. You had Hillary Clinton running. You had Barack Obama running, who had been a law professor at the University of Chicago and kind of a favorite of campus progressives, of course.

But also, John McCain was a remarkable American war hero.

And Sarah Palin was a phenomenon that was a precursor in some ways to some type of Trump-style populism.

And so all that star power, coupled with this increasing interest among the population for Moneyball-style this, for economics-style that, right?

More kind of data-driven analysis. I started writing, originally anonymously, on this site called Daily Kos, which is actually a very liberal-leaning site.

And why anonymous?

I was afraid since I was mostly known as a baseball writer that people would not want to hear my political analysis.

Even though you had a reputation for being a really good data scientist in this other domain, you thought it was better to be an unknown. That's funny.

And also to have people evaluate the work for itself, without my name attached to it.

But sooner or later, I realized, okay, it doesn't make much sense to be like writing for someone else's site.

So I founded 538 and began building these models initially to see whether Obama or Hillary Clinton had a better chance against McCain.

Also, some forecasting of like the primaries, and that site just did way better than I would have thought.

This was still an era where it's the beginning of Twitter.

Things can go very viral very fast.

And all of a sudden you're getting more page views than some mid-sized newspaper.

And so it just wound up, in a mostly good way, taking over my life.

And there was just like tremendous demand for the intersection between interest in politics, interest in data, and just a kind of viral phenomenon unto itself.

So this was a one-man show for how long?

Or did you start hiring people right away, or how did that go?

I always have been, and still am, the guy who builds all the models himself, right?

Other people might write for the site.

There were several paid guest writers and some unpaid guest writers, but like I was very hands-on as far as the models go.

So just looking at my own academic career, as I got resources and started hiring people, the only thing I really found that I could delegate very well was data analysis and modeling.

And that was the part I liked best.

And I made this terrible career move of giving away the part of the process I liked best, but I didn't feel like I could have people write my academic papers.

And I didn't feel like other people could have ideas that I wanted to write about.

It's interesting to me that you're saying you managed to avoid that fate at 538.

You really kept control of the modeling, which I'm impressed by.

Well, I should say there were lots of people that were very helpful as far as collecting and updating data, as far as like building these beautiful graphic interfaces and visualizations.

But the actual data work itself, I still code the election models in Stata, and it is very labor-intensive.

Right now, all the models are in a state where I think they're all pretty complete.

I don't plan to make major revisions.

But yeah, there were periods where, over the span of like a year and a half, I rebuilt our presidential election model, our presidential primaries model, the congressional model, and an NBA model.

And it was just like a lot of very late nights trying to debug some code.

Eventually, you partnered with the New York Times and you did that for three years.

And I think there's this general impression that when you're creating content for the New York Times, it must mean you're getting paid a bunch of money.

But at least for me, anyway, Stephen Dubner and I had a regular column in the New York Times magazine, and then we had our Freakonomics blog affiliated with the New York Times.

My memory is that it wasn't lucrative at all.

And in fact, I think I remember trying to talk you out of partnering with the New York Times.

Is that true, or is that just my imagination?

Journalism is a hard field in which to make money.

So they paid me decently well, but it is nice being on Substack where you write a good post and you get dozens of notifications in your inbox saying people have signed up.

And if it's a paid subscription, there's a pretty substantial lifetime value from a subscriber.

And yeah, it is very weird now, kind of having this very direct incentive to write good stuff, or write stuff that pleases people, which might not be quite the same thing, after years at the New York Times and at ABC News of having no incentive-based compensation at all.

So you jump over to ABC, and then earlier this year, the nightmare scenario unfolded with Disney doing a bunch of cost cutting throughout the organization and they savaged 538.

Yeah.

What's your reaction to that?

To give like the slightly longer history, 538 was acquired by ESPN in 2013, which is part of Disney, and then transferred to ABC News, another part of Disney in 2018.

So one thing I'd say is it wasn't a big surprise.

Look, when ESPN bought 538, this was an era when ESPN thought they were the best business in the world, that we have these guaranteed subscriber fees and we show the NFL and the NBA, which are incredibly robust products, and our business will not be disrupted, right?

We have a huge profit margin.

And so actually, with businesses like 538, they had, like, no business model at all.

They were clearly looked at as loss leaders, even though they, I think, could be very good businesses, right?

I mean, they have very loyal audiences.

They have very, frankly, affluent audiences.

But if you don't start running something like a business to begin with, then it's not in the DNA.

You literally have nobody whose job it is to really sell ads, for example, or find other ways of monetizing the site like subscriptions.

It's nobody's job, and so it doesn't happen.

And as Disney hits more headwinds with people cutting the cord and theme parks and the pandemic, and every part of their business now is under threat in some way, shape, or form, you just become a sitting duck at some point in time.

Let's talk about your new book.

You've been at it for a long time.

I can see talk about it online going back at least to 2021.

This must be some book you're working on.

Yeah, so the subject of the book is gambling and risk, which is an ambitious subject.

It starts out literally in the world of capital G gambling.

So the first two chapters are about poker, and there's a chapter about the history of Las Vegas, the history of casino gambling.

There's a chapter on sports betting.

That's the first half.

And there's chapters on venture capital and the cryptocurrency bubble and collapse.

There's a chapter about economic progress and capitalism.

There's actually a lot of economics in the book, I think, in different ways.

So it's a very ambitious book that I hope will provide interest on every page.

You have built a life around analyzing data.

And what I find so shocking in the modern world is how little training and exposure the typical person gets in a school setting to data-related things.

Have you thought at all about the teaching of data science or data analysis and how we might do it to middle school kids or high school kids?

First of all, I think there should be statistics and probability and kind of logical thinking classes taught from a relatively early age, I would say.

And then for some reason, still, with math education in the U.S., there's sometimes too much of an emphasis on the technical side of things and not as much on, like, problem solving and logical, quote-unquote, rational thinking skills.

One thing I will do sometimes, I'll be asked to judge student research paper competitions.

So they're trying to solve some sports problem or some election modeling problem.

And almost invariably the people use way too many fancy techniques and aren't spending enough time asking basic questions of the data or thinking about confounding variables or figuring out like what a more robust strategy is for answering a question, all the things you were talking about before.

I haven't thought about what the curriculum would be, but a combination of statistics, but really logical thinking, I think, would benefit the students of the United States.

My advice if you want to learn how to analyze data yourself is to find a question you care about, get your hands on some data, and try to figure out the answers.

There is no substitute for that kind of real-world experience.
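Nate's point about asking basic questions of the data and thinking about confounding variables before reaching for fancy techniques can be shown with a tiny simulation. This is a hypothetical, textbook-style example of my own, not one discussed in the episode: temperature drives both ice-cream sales and swimming accidents, so the two look correlated even though neither causes the other, and simply stratifying on the confounder makes the apparent link vanish.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation, computed from scratch with the standard library."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Temperature is the hidden common cause of both outcomes.
temps = [random.uniform(0, 30) for _ in range(2000)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]
accidents = [0.5 * t + random.gauss(0, 3) for t in temps]

# Naive analysis: the two outcomes show a strong raw correlation.
print("raw correlation:", round(corr(ice_cream, accidents), 2))

# Basic question to ask of the data: does the link survive when the
# confounder is held roughly constant? Stratify on a narrow temperature
# band; within it, the correlation largely disappears.
band = [(i, a) for t, i, a in zip(temps, ice_cream, accidents) if 14 <= t <= 16]
print("within-band correlation:",
      round(corr([i for i, _ in band], [a for _, a in band]), 2))
```

No regression machinery is needed here; the simplest robust strategy, holding the confounder fixed and looking again, already answers the question, which is exactly the kind of basic check Nate says student projects tend to skip.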

The second best thing you can do to learn about analyzing data, short of doing it yourself, is to read what Nate Silver has to say, either in his outstanding book, The Signal and the Noise, or in his new Substack, entitled Silver Bulletin.

So now it's the time in the show where we take a listener question and as always we bring on our producer Morgan.

Hi Steve.

A listener named Cletus wrote to the show.

Cletus has been a high school educator for almost 20 years and says that getting kids to appreciate a subject is an important part of the learning process.

In our episode with mathematician Stephen Strogatz, the two of you developed the idea of a math appreciation course for high school students.

And that idea has gotten a lot of traction after that episode.

Cletus likes the idea, but does have a concern.

Students who might really appreciate math might not excel at the subject.

For instance, Cletus has seen some students pursue careers in video game design because they love video games, but when faced with the more challenging and boring realities of design and development, they've dropped out or failed.

So, what do you think about this?

So, there is no guarantee that just because you like something, you can build a career around creating the thing that you're excited about.

If we were able to create a math appreciation course that led a whole bunch of kids to think they wanted to do STEM, to try it and then decide they didn't, I still would call it a huge success.

I think that's such a better outcome than those kids never dipping their toe in the water because they were just afraid of it.

I'm a big believer in options and I'm a big believer in quitting.

So I think there's not that much lost if people try something and it turns out not to be their thing.

So Cletus had another example, which I thought was really spot on.

A lot of people loved your insights into economics after Freakonomics came out, but that doesn't mean they'd actually enjoy the process of wading through data to find these insights and writing academic papers.

Does that ring true for you?

Oh, completely.

After we wrote Freakonomics, I was a little nervous, really, in the same spirit of what Cletus is talking about, because I don't think I'm exaggerating if I say that since the book was written, maybe a thousand young people have come up to me and they've said, oh, I read Freakonomics, and that's the reason I'm doing economics now.

Initially, I would apologize and say, oh, God, I bet you hate real economics, because Freakonomics and real economics don't have that much in common.

And interestingly, not one young person ever said to me, yeah, I'm actually really angry that you tricked me into doing economics.

Some of them said, no, I love economics.

And others said, oh, yeah, God, economics is hard.

I finally had to quit. But not a single person blamed me for opening that door. After a while, I just stopped apologizing for it, and I gave a different response, which was just: I'm so glad that I was able to have that small impact on your life. And then I would say, how do you like real economics? And sometimes they'd say they liked it, sometimes they wouldn't. But it really reinforced for me the idea that it's really not the job of an introduction to a field to tell you what life is like as a professional doing it.

Really, the job is just to say, here are the amazing lessons that brilliant people over the last hundred years or last thousand years have come up with and rejoice in it.

Feel the wonder of what's been discovered.

This conflation between how we teach people at the entry level and a real career in something, they're totally different.

We talked about this with Carolyn Bertozzi and chemistry.

The chemistry you learn in high school, it's chemistry from the 1800s.

It's got nothing to do with the daily life of a modern chemist.

It's just the nature of introducing people to a subject is you give them the greatest hits.

Steve Strogatz's point about math appreciation is this: if a kid decides that they can't do math, that math is not for them, then they'll never even try a path that could possibly involve math.

And that's what we're trying to avoid.

Cletus, thank you so much for writing in.

If you have a question or comment for us, our email is pima at freakonomics.com.

That's p-i-m-a at freakonomics.com.

It's an acronym for the show.

We read every email that's sent and we look forward to reading yours.

In two weeks, we're back with a brand new episode featuring Fei Fei Li, a computer scientist whose pioneering work in computer vision played a critical role in the development of artificial intelligence.

Now, she spends much of her time trying to bring humanity to AI, steering its development in ways that will benefit rather than harm society.

Every tool is a double-edged sword.

Just because I wish this is benevolent doesn't mean it will always remain benevolent.

As always, thanks for listening and we'll see you back soon.

People I mostly admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and the Economics of Everyday Things.

All our shows are produced by Stitcher and Renbud Radio.

This episode was produced by Julie Kanfer with help from Lyric Bowditch, and mixed by Jasmine Klinger.

We had research assistance from Daniel Moritz-Rabson.

Our theme music was composed by Luis Guerra.

We can be reached at pima at freakonomics.com.

That's p-i-m-a at freakonomics.com.

Thanks for listening.

I'd rather kind of gouge my eyeballs out than have those conversations anymore.

The Freakonomics Radio Network, the hidden side of everything.

Stitcher.
