47. Robert Axelrod on Why Being Nice, Forgiving, and Provokable are the Best Strategies for Life

44m
The prisoner’s dilemma is a classic game-theory problem. Robert, a political scientist at the University of Michigan, has spent his career studying it — and the ways humans can cooperate, or betray each other, for their own benefit. He and Steve talk about the best way to play it and how it shows up in real-world situations, from war zones to Steve’s own life.


Transcript


It's been more than 30 years since I first read a book called The Evolution of Cooperation by Robert Axelrod.

The only reason I even read the book was that it was assigned reading for an economics course I was taking, and I was a diligent student, so I always did the assigned reading.

But I was never inspired by the material.

Until that is, I read The Evolution of Cooperation.

It's a book about game theory, specifically something called The Prisoner's Dilemma, and it captured my imagination.

It changed the way I thought about the world.

It made me think for the first time, wow, maybe I should do academic research.

I'm so excited today to be talking to its author, political scientist Robert Axelrod, for the very first time.

Welcome to People I Mostly Admire with Steve Levitt.

The prisoner's dilemma is one of those things that seems really simple when you first hear about it, but there's much, much more to it than most people realize.

I'm not exaggerating when I say that the prisoner's dilemma has become a guiding principle for the way I live my life.

But I know from trying to teach it to my students that it's actually a really hard idea to wrap your head around.

It's very counterintuitive.

I'm hoping that with more than three decades of experience, Robert Axelrod will be way better than me at explaining it.

I'm a little worried though, because if there's one thing I've learned from this podcast, it's that many experts have a shockingly hard time explaining to regular people what they actually do.

So let's see how it goes.

Roughly 40 years ago, you had the idea to run a little tournament for 13 academic game theorists.

Did you ever imagine at the time that this would launch a research agenda that would get over 50,000 academic citations and produce a best-selling book?

No, I just did it for fun at first.

And just to put it into perspective, I've had a pretty successful academic career, and this one little idea of yours has gotten more citations than all of my research papers combined.

And let me just say, I read your book in college, and it was one of the few things I ever read for class that blew my mind.

But to even start talking about what I found so exciting in your book, we first have to give folks a little crash course in game theory.

And specifically, your little tournament focused on something called the prisoner's dilemma, which I'm sure most listeners have heard of, but probably haven't thought very deeply about.

Well, let me use the original example where two criminals are arrested by the police.

So the police separate them and say to each one, if you confess, we'll give you a lighter charge.

And if you don't, then we'll punish both of you.

And the idea is that each one of them has an incentive to defect by turning state's evidence.

But if they both do that, then the police don't need either one of them.

If they cooperated with each other and kept their mouths shut, they would be better off and they would just get a lighter charge.

And so they are better off both cooperating, but each has an incentive to double cross the other.

Okay, Robert, can I tell you honestly, that sounded a lot like the explanation I give.

I think that you and I, buried in this area, have a really hard time getting far enough away to explain it.

Hey listeners, Morgan here, the show's producer.

Steve's right.

I think he and Robert aren't explaining the prisoner's dilemma very well.

That's because it's really hard.

So as a non-academic, let me try explaining it.

The prisoner's dilemma is a hypothetical scenario in game theory.

So pretend Steve and I are both in police custody.

We've robbed a bank, but the evidence the police have against us is weak.

The police put us in separate rooms and they come and talk to me.

They explain there are four possible outcomes to our situation and my punishment will depend on what I choose to do and what Steve separately chooses to do.

Two key pieces of information you need to know.

I don't get to talk to Steve and I only have two courses of action, rat him out or stay silent.

So scenario one, I rat him out and he says nothing.

In this case, I'm defecting from my partnership with him, and I'll get to walk free.

He'll go to jail for 20 years.

Scenario two, neither of us rats out the other.

We are cooperating with each other and due to the weak evidence, we're only sentenced to one year of jail time.

In the third scenario, we both rat each other out.

We both defect from our partnership and both go to jail for 10 years.

Less than the 20-year maximum, as a reward for agreeing to testify against each other.

And in the last scenario, I say nothing and he rats me out.

I go to jail for 20 years and he walks free.

The police leave and I'm left to think through my options.

So what's the best course of action for me?

Each player is better off defecting, double-crossing the other one, acting on their own interest only.

But if both of them do that, they do worse than they could have accomplished if they had cooperated.

So from a game theory perspective, defection is what's called a dominant strategy, meaning it's the best thing to do no matter what the other guy does.

If the other guy cooperates, then you can exploit their cooperation by defecting.

And if the other side defects, then you'd be a sucker to cooperate.

And so if you play the game just once, both players say to themselves, no matter what the other guy does, I'm better off defecting.

And that leads to a non-optimal outcome for both of them.

That's why it's called a dilemma.

Exactly.

So the best thing we can do collectively is both cooperate, but we can't do that because our private incentives get in the way.

And that leaves us stuck at the only other outcome, which is we both defect and we both get a pretty bad payoff, but not as bad as if we were made the sucker by the other player.

So that's the setup of the game.

You're basically caught in a trap if you play it once.
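Morgan's four scenarios boil down to a tiny payoff table. Here's a minimal Python sketch of why defection dominates in the one-shot game; the encoding and the function name are illustrative, not from the episode:

```python
# Morgan's jail-time numbers (years in prison, so lower is better).
# Key is (my_move, partner_move); "C" = stay silent, "D" = rat the other out.
YEARS = {
    ("D", "C"): 0,   # I rat, partner stays silent: I walk free
    ("C", "C"): 1,   # both stay silent: one year each
    ("D", "D"): 10,  # both rat: ten years each
    ("C", "D"): 20,  # I stay silent, partner rats: twenty years for me
}

def best_response(partner_move):
    """My move that minimizes my own jail time, given the partner's move."""
    return min("CD", key=lambda me: YEARS[(me, partner_move)])

# Defection dominates: it is the best reply whether the partner
# cooperates (0 years beats 1) or defects (10 years beats 20).
print(best_response("C"), best_response("D"))  # D D
```

Note the trap: both playing the dominant move lands on 10 years each, even though mutual cooperation would cost only 1 year each.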

But the interesting thing is that everything changes if you play the game over and over.

So can you explain that?

The original analysis that I read when I was 17 in high school said, if you know when the game is going to end, you defect right from the beginning.

If you don't know when the game is going to end, or there's some indefinite future, then all kinds of interesting possibilities arise.

For example, you might try to cooperate and see if the other guy does.

And you might try to cooperate twice and then respond to the other guy's defection by defecting five times.

There are all kinds of possibilities of using the history of the game when you're in the middle to decide what to do in order to try to figure out how you can maximize your score.

What you really want to do is get the other guy to cooperate.

So you want to elicit cooperation.

So the key thing about playing this game over and over is that you use the future rounds as a way to punish someone if they defect on you now.

So if you only play once, you have no method of punishment.

But if you play it over and over, you can keep someone honest by having a strategy where I'll cooperate with you as long as you cooperate.

But if you screw me over, then I'm going to punish you for that.

I'm going to stop cooperating.

And so still with everybody completely selfish, completely self-interested, you can find what game theorists call an equilibrium in which because of the threat of future punishment, we're able to get the really good payoff that comes with cooperation.

I call that the shadow of the future, the idea that the future can affect what you're doing now.
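The shadow of the future can be made precise with a standard back-of-the-envelope calculation. A sketch, assuming Axelrod-style payoffs of 5 for exploiting, 3 for mutual cooperation, and 1 for mutual defection (numbers not stated at this point in the episode), and a probability delta that the game continues after each round:

```python
# Against a partner who cooperates until crossed and then defects forever:
#   cooperating forever is worth      R / (1 - delta)
#   defecting once, then punished:    T + delta * P / (1 - delta)
# Cooperation pays when delta >= (T - R) / (T - P).
T, R, P = 5, 3, 1  # temptation, reward, punishment payoffs (assumed)

def cooperation_sustainable(delta):
    """True when the shadow of the future makes defection unprofitable."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

print((T - R) / (T - P))              # 0.5, the threshold
print(cooperation_sustainable(0.4))   # False: future too short
print(cooperation_sustainable(0.6))   # True: shadow long enough
```

The qualitative point survives any reasonable payoff choice: the more likely another round is, the more the threat of future punishment can hold selfish players to cooperation.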

One of the fascinating things about the repeated prisoner's dilemma game is that there's no one obvious best strategy for how you should play it.

It really depends on your expectations about how your opponent will play and about their expectations about how you will play.

So, theory doesn't give us an answer.

And so, you went out and you decided to gather data.

Yeah.

I thought I should ask people who are familiar with the game and familiar with game theory how they would play the game and how they would play the game precisely enough that you can write a computer program to implement their advice.

That reminded me of computer tournaments for chess,

where each player is trying to devise a computer program strategy to play as well as possible.

And so I reached out to about a dozen academics, mostly, who had actually worked with the prisoner's dilemma and published papers using it, and said, how would you play?

Assuming that you're going to have this indefinite future and the other player is pretty smart too, but you're both selfish, what would you do?

So your tournament had 13 participants, I think.

Did you have to reach out to many more game theorists than that to get them to play or was everyone eager to play your game?

Virtually everybody was eager to give it a try because they all thought they knew best.

I was going to say.

And I promised a trophy for the one that scored highest.

So you reached out to me and you said, what you have to provide to me is a computer program, an algorithm.

Basically, you build a little bot, and your bot is going to play the prisoner's dilemma against all of these other game players.

You play this for like hundreds of iterations of the individual game.

And the winner is going to be the one whose bot at the end of a round-robin tournament playing against all the other entries has scored the most total points.
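The mechanics of that round-robin scoring fit in a few lines of Python. The payoff numbers follow Axelrod's published setup (3 for mutual cooperation, 5 for exploiting a cooperator, 1 for mutual defection, 0 for the sucker); the three toy bots and the 200-round match length are illustrative stand-ins, nothing like the real entries:

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):      return "C" if not theirs else theirs[-1]
def always_defect(mine, theirs):    return "D"
def always_cooperate(mine, theirs): return "C"

def play_match(strat_a, strat_b, rounds=200):
    """Play one iterated match; each bot sees both move histories."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tournament(strategies, rounds=200):
    """Round robin: every entry meets every other; total points decide."""
    totals = {s.__name__: 0 for s in strategies}
    for i, a in enumerate(strategies):
        for b in strategies[i + 1:]:
            sa, sb = play_match(a, b, rounds)
            totals[a.__name__] += sa
            totals[b.__name__] += sb
    return totals

print(tournament([tit_for_tat, always_defect, always_cooperate]))
```

One caveat: in a field this tiny the ranking doesn't mirror the real tournament. Here always_defect feasts on always_cooperate and tops the toy table; the interesting results need a richer population of entries, which is exactly what the tournament provided.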

Right.

Just one thing.

It's important to avoid words like opponent and trying to defeat the other guy because that evokes zero-sum thinking.

And this is not a zero-sum game.

We all tend to fall into zero-sum thinking whenever there's any kind of rivalry or anything that looks like competition because zero-sum thinking is the easiest way to do it.

And it's wrong and self-defeating in many contexts.

In fact, almost all contexts, except, say, sports and all-out war.

I stand corrected.

I used bad language and probably I'm going to use bad language again.

And you correct me every time I do that.

So one of the things I find interesting about the setup of your tournament is it's not that these game players are there in the room playing against each other, adjusting their strategies in real time.

They have to commit to something ahead of time.

And it feels a little bit like a genetic code.

It's like nature sets off a species with a genetic code, and then it competes in different environments against different players over time.

I want to make sure I don't say competitors or opponents or anything like that.

I think of the prisoner's dilemma in game theory as being basically economic concepts, but really an economist would never approach this problem the way you have.

They would never make it algorithmic and set up strategies to compete.

They would always think of bringing individual humans into a lab and playing the game against each other.

And I think one of the reasons that your results have had such a broad appeal is the prisoner's dilemma is really one of those rare areas of social science that's intrinsically interdisciplinary.

It spans economics, evolutionary biology, political science, math even.

So I really find that interesting that the perspective you brought to it, and you're not an economist, is totally different than the perspective that an economist would have brought to it.

The economists would have also possibly, and they have, studied what the equilibrium possibilities are.

In other words, if two players are doing given strategies, under what conditions would they have no incentive to change their strategy?

And the problem with this approach in this case is there are a lot of different strategies that would be in equilibrium.

And so there's a problem of how do you distinguish those.

And the tournament provides one way to analyze what works well in a variety of settings.

So before you get into the findings, the insights, how about we start with the simplest possible strategies?

One strategy would be to always cooperate.

And it's pretty easy to see that this strategy could fare disastrously against many opponents. Oops, not opponents.

Okay, my mistake.

It's going to take a lot of chiding and discipline to get me off of that language.

But explain why this is going to be a terrible strategy.

The other player tries to defect and then finds that you cooperate anyway, and they repeat that until they're really convinced that you always cooperate no matter what.

Well, then they might as well always defect no matter what.

And so you always get the lowest possible payoff once this gets going.

So the other really simple strategy is to never cooperate.

Every round, you do what gives you the highest payoff, which is to not cooperate.

So you basically play the repeated prisoner's dilemma just like you would play a one-shot game.

Did any of the experts submit that strategy?

No, I think they appreciated that that was just going to get them into a situation where the other player eventually will learn that they're never cooperating, so why should I cooperate?

And that'll lead to a situation where both sides are always defecting, and that's not a very good payoff.

So what was the simplest strategy that anyone submitted?

Well, the simplest one is called tit for tat.

It cooperates on the first move and then does whatever the other guy did on the previous move.

Okay, so it's two lines of code.

Okay, so it's two lines of code. In lay person's words, what is tit-for-tat?

It's reciprocity.

It's saying, I'm willing to cooperate if you are, and if you're not, I'm going to defect.

And so I'm just going to echo what you do.

And maybe that'll get you to realize that you're better off cooperating with me because then I'll cooperate the next time.

And then we could both do pretty well.
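Those "two lines of code" really are about two lines. A Python sketch, with move histories as lists of "C" and "D" (the representation is mine, not from the episode):

```python
def tit_for_tat(my_history, their_history):
    """Cooperate on the first move; after that, echo the other player's last move."""
    return "C" if not their_history else their_history[-1]

# Reciprocity in action: punish a defection once, then forgive immediately.
print(tit_for_tat([], []))                  # C  (first move: cooperate)
print(tit_for_tat(["C"], ["D"]))            # D  (they defected: retaliate)
print(tit_for_tat(["C", "D"], ["D", "C"]))  # C  (they came back: forgive)
```

Notice the strategy never consults its own history at all; one move of memory about the other player is its entire cognitive apparatus.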

Okay, but a seeming weakness of tit-for-tat is that it's very reactive.

It never probes the other player to see whether he or she is a pushover by instigating a non-cooperative action.

And it's also super forgiving.

But the forgiveness is quite limited.

If you defect once, I'll forgive, but I won't keep forgiving no matter what you do.

Exactly.

So I forgive you as long as you're nice to me.

But it is a very understanding strategy, right?

Because someone I'm playing against can defect 50 times in a row.

I will defect also.

But as soon as that person says, okay, now it's time to cooperate, tit for tat says, okay, great, I'll cooperate too, which is quite different than human nature.

Most humans, if they've been defected against 50 times and then someone's nice to them once, are not going to be so forgiving as tit for tat.

That's my intuition.

You may be right, but it still pays to do that to see if you can't get out of this rut that you're in.

And you said that it was a very understanding strategy.

I would say the opposite.

It has very limited cognitive ability.

It could remember one move and react.

That's all.

The calculations are trivial.

So there are strategies similar to tit-for-tat that are much less forgiving.

And it has this great name; economists call it the Grim Reaper strategy, this massive retaliation strategy.

Can you explain why this might seem like it could be pretty good in the context of the repeated prisoner's dilemma?

One of the submissions was maximal punishment, so that if you defect it even once, I'll never cooperate again.

And that would seem to give you a really strong incentive to cooperate.

However, the trouble is in this context, you can't communicate that.

And so if the other player does any exploring and maybe defects once, it's all over.

And then you both get the lower score for always defecting.

So while this massive retaliation seems like a good idea because it gives the biggest incentive for the other guy to cooperate, in this context where you can't talk and you can't publicly commit to it, it's a very ineffective, dangerous strategy.
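That danger is easy to demonstrate with two illustrative bots (neither is an actual tournament entry): a Grim Reaper that punishes forever, and a prober that defects exactly once to see what it can get away with. For contrast, tit-for-tat recovers from the same probe:

```python
def grim(mine, theirs):
    """Cooperate until the other side defects even once, then defect forever."""
    return "D" if "D" in theirs else "C"

def prober(mine, theirs):
    """Hypothetical explorer: defects once on its third move, else cooperates."""
    return "D" if len(mine) == 2 else "C"

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]

def moves(strat_a, strat_b, rounds):
    """Return each side's move sequence as a string of C's and D's."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return "".join(hist_a), "".join(hist_b)

print(moves(grim, prober, 6))         # ('CCCDDD', 'CCDCCC'): one probe, cooperation gone forever
print(moves(tit_for_tat, prober, 6))  # ('CCCDCC', 'CCDCCC'): one round of punishment, then recovery
```

Since neither bot can announce its strategy in advance, the grim threat deters nothing; it just converts a single exploratory defection into permanent mutual loss.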

Okay, so the strategies we've talked about so far, the tit for tat or this Grim Reaper strategy, both of them have the characteristic that they never defect first.

And I think you gave a name to that.

You call those strategies nice strategies.

Yeah, I couldn't find another word in English that says don't be the first to cause trouble or something like that.

So I just called it nice.

So then there's another set of strategies that are not nice.

So can you describe an example of a not nice strategy?

Well, a strategy might start off with a defection, and then if the other side on the next move cooperated anyway, they might defect again and wait until the other side defected before they decide to change their mind.

And so this would be sort of exploratory and see what they can get away with.

Yeah, so there's a whole bunch of strategies that capture the notion that I'm willing to cooperate if you show me that you're tough.

But until you show me you're tough, I'm going to exploit you as much as I can.

Yeah, let me say, when I set this up with the chess playing analogy in mind, I thought that the best strategy would be pretty complicated because you would have to take all these considerations into account.

So, for example, several players did variations on: try to learn what the other guy's doing, guess what their strategy is, and then do an analysis to figure out what's the best strategy to use from now on if your beliefs are correct.

And even better, if it keeps updating its beliefs based on what the other guy does.

So it's always modifying what its beliefs are based on the new experience.

And that's a pretty complicated thing to do.

But I imagined that complicated things might work well because they could take a lot of aspects into account.

So we've talked about a handful of different strategies, but before we reveal what strategy actually won the tournament, I want to pause and give listeners a chance to think for themselves.

What kind of strategy do you think is going to work?

Do you think it's going to be a simple strategy or a complicated one?

Is it going to be a tough strategy or one that's nice?

Or maybe none of the strategies we even talked about at all, something totally different.

What I do know is that before I knew the results, I had a very strong opinion about which strategy would win, and it turned out to be completely wrong.

Okay, so Robert, tell us who won the tournament.

I calculated every strategy, playing every other strategy.

And the one that got the highest score was actually the simplest one submitted, the tit-for-tat strategy.

Tit-for-tat took home the trophy. The two-line program took home the trophy!

Who submitted that?

Anatol Rapoport. He was a professor of peace research, actually. He submitted it, but he warned me in a letter that he really wouldn't recommend it in public use because there are so many other complicating factors. But nevertheless, in this context, it does really well.

Of the ones that did well and the ones that did poorly, what characteristics led to success or failure?

It turns out the most important characteristic is to be nice. That is to say, never be the first to defect.

Just keep cooperating as long as the other guy does.

And that means that you don't start trouble.

Another characteristic is it pays to be forgiving.

It pays to not keep defecting for a long time after the other guy did.

And of course, tit-for-tat is maximally forgiving, but only in the short run.

One more characteristic is provocability.

You should be provocable.

In other words, You can't afford to be a sucker all the time.

You have to actually get mad or defect when the other guy defects, because that teaches them a lesson.

And if you're not provocable, then you'll be a sucker much too much.

The effective strategies are nice, forgiving, and provocable.

Which seem like they're at odds, but they're not, because niceness is about, am I ever a jerk to you without you asking me to be the jerk?

And being provocable means, look, if you're a jerk to me, I'm going to come after you.

And those, of course, are two different characteristics of a strategy.

So when you saw the results and you saw that tit-for-tat won, were you in shock, mildly surprised, or not surprised at all?

Well, I was pretty surprised.

I expected, as in chess, that it would take real sophistication to do well in this game.

So I was really surprised when the simplest of all did best.

And then I wondered, is this a fluke?

And so I thought that it'd be good to get a lot more entries, to see what a lot of other strategies might be and what would happen in this much bigger context.

And so what I did then is I advertised in computer hobby magazines, and it was just a one-page explanation of the prisoner's dilemma and how to send in an entry.

With that, I got 62 entries, including some kids and some more professors.

And they came from a lot of different disciplines, and that was delightful.

And it's also true, the first time you played the tournament, nobody really knew what the results would be.

And for the second tournament, I told everybody in the advertisement soliciting entries that tit for tat did best.

And so you ran the second tournament now with 60-something players.

How did the strategies look different?

Anatol Rapoport submitted the same one, the tit for tat.

Do you think he was confident that Tit for Tat would have a good showing the second time around?

I certainly wasn't confident.

I doubt if he would be, because we knew that some of these other entrants had a lot of time to think and work on this, and others were professionals with a lot of experience with this sort of game theory analysis.

I suspect that he thought, as I did, that something else might do better.

Especially because people knew that tit-for-tat was the winner.

And so they were designing strategies that were gunning for it.

I have to beat tit-for-tat.

And so I put bells and whistles in my strategy that will be especially effective in fighting tit-for-tat.

But they also knew there would be a variety of others, and tit-for-tat would be just one of the players that they would meet.

And so many of them tried to exploit other strategies, trying to find the weaknesses in other possible entries.

So they were out guessing each other, too.

This is classic game theory.

One set of people say, I have to beat tit-for-tat, because that's what won.

A second set of more sophisticated players say, well, I know a bunch of people are going to be out trying to beat tit-for-tat.

So what I need to do is build a strategy that will exploit those strategies that are trying to beat tit-for-tat.

But again, the word exploit.

Exactly.

Yeah.

So here I am. I fall into my exploit trap.

But yeah, it's really hard to avoid.

It's really very hard to avoid.

I cannot get out of my head this context of a lifetime of viewing interactions as being competitive and exploitive.

And so I make the mistake over and over of not talking about this with the right language.

It's just one more good point that comes out of the research that you've done.

So tell us the results of the second tournament.

Were they radically different than in the first tournament?

I nearly fell off my chair when I saw it because tit-for-tat won again.

And wow, this is one of the high points of my research career when I added up the scores and I thought, I'm on to something here.

This is not just a fluke.

This is really worth looking into and finding out how that happened.

You're listening to People I Mostly Admire with Steve Levitt and his conversation with Robert Axelrod.

After this break, they'll return to talk about real-world applications of the prisoner's dilemma.


Morgan, what do we have on tap today?

Hey, Steve.

So the UK has recently begun trials where they deliberately infect people with COVID in order to study the disease.

And we had a couple of listeners, Brian B. and Terrell W., write in to tell us about this, since you've been a big proponent of human challenge trials, which is what these studies are called.

You've actually talked about it on the show a couple times.

You and Dr. Moncef Slaoui, the former head of Operation Warp Speed, had a disagreement about their use for COVID vaccine development, while Dr. Bapu Jena, who is an economist and also a host on the Freakonomics Radio Network, actually agreed with you about their potential.

So how do you feel now that they're happening in the UK?

Well, I think it's great.

Do you know if they're using an incentive to get people to sign up?

So they're paying these young volunteers about $8,000 a person, which is not a lot, but is so much less than they could.

What I find so frustrating about it is, look, there have been 4.5 million deaths from COVID and 200 million infections.

And yet the medical ethicists are up in arms just because the infections are being done intentionally.

When the trade-off is that if we had learned from the beginning about how the disease spread or maybe about immunity or getting the vaccines out sooner, it could have saved 10,000 lives, 100,000 lives, a million lives.

So going along the COVID theme, a couple of our listeners, Emily and Sean, wrote in to ask if you knew anything about the effectiveness of COVID vaccine lotteries.

You talked about vaccine lotteries in our episode with Denby Samoyo since several states tried lotteries as incentives to vaccinate their populations over the summer.

So, sadly, Morgan, you know how big an advocate I've been for these lotteries, but the data suggests they haven't worked very well.

Now, the one exception was the first one, the Ohio lottery.

Because it was first, it actually got an enormous amount of free publicity from the media.

And it worked really well.

The estimates I did, just looking at the data myself, suggested that maybe 60,000 extra people got vaccinated.

That's about $50 per person on the margin for each extra vaccination.

And that is such a bargain.

My estimates are, again, very back of the envelope, but maybe every extra vaccination has an externality, a benefit to society of about $10,000.

So the Ohio lottery was a big success.

All the states that followed, zero evidence that they had any impact at all on vaccinations.

Do you have any other ideas for incentivizing people to get the COVID vaccine?

I'll tell you, listeners wrote in, a couple of them, with what is really an obvious, but I think a really great idea.

They just said, why don't insurers refuse to pay the hospital costs of patients who get COVID if they're not vaccinated?

And honestly, it's super simple and maybe it's completely politically unviable.

Maybe it's even illegal.

I'm not sure.

But if you are willing to make some people mad, that is the kind of program that would dramatically impact vaccination rates in the U.S.

Emily, Sean, Brian, and Terrell, thanks so much for writing in.

If you have a question for us, our email is pima at freakonomics.com.

That's P-I-M-A at freakonomics.com.

It's an acronym of our show.

Steve and I both read every email that's sent, so we look forward to reading yours.

Thanks.

I hadn't expected that it would take so long to talk about the tournaments themselves, which is frustrating because the part that's actually most interesting to me is the application of prisoner's dilemma logic in the real world.

So I'm excited to finally get to that now.

And if we have time left over, I also want to ask Robert about the intriguing work he's been doing in two very different areas, cancer and cybersecurity.

I suspect you've come across some pretty interesting real-world applications of the prisoner's dilemma and tit-for-tat.

Can you tell us about some of those?

I came across a book review in a sociology journal of something called the live-and-let-live system in trench warfare in World War I.

And the idea was that the artillery would shoot between the other side's first and second trench lines.

In other words, they would deliberately say, I'm not going to cause any damage.

And you could see I can be accurate about it, and hope that the other side would catch on and do the same thing.

And so they were basically cooperating with each other, which is very much, of course, tit for tat, because then if the other side defected, then they could defect too.

What was different about trench warfare from most warfare that's more mobile is that the same small units would be facing each other for long periods of time.

And so it demonstrated that even in the context of brutal war, they could develop this live-and-let-live system.

That's really fascinating because you think of war as being maybe the one case where there's no room for cooperation.

And I thought the trench warfare case would be very helpful in explaining and illustrating what the prisoner's dilemma was and what the results might mean.

And that's true, but even more important, people found it very compelling.

In other words, the whole thing became more believable when I had this example.

So people loved the results.

And Richard Dawkins, who's the author of The Selfish Gene, wrote one of the most over-the-top prefaces I've ever seen for a later edition of the book.

That must have felt really good.

Well, it says it should replace the Gideon Bible.

Yeah, exactly.

So he definitely did not withhold praise.

But I got to say, I was surprised by Richard Dawkins' love affair with the research because I think that your findings fundamentally challenge the notion that selfish genes will thrive.

But I have to explain why, because it's not completely obvious why I'd say that.

So, the way that you generate cooperation in the prisoner's dilemma is by playing the game over and over, so that future punishments rein in the strong desire of selfish players to act selfishly.

But isn't it true that you also can generate cooperation if players aren't totally selfish, but rather are altruistic towards the other players?

They get a little bit of joy when the other players do well.

That's a very different, but equally plausible, mechanism for generating that kind of cooperation in your game.

No, I don't think it's equally plausible in a biological context or really almost any.

Because if you're really just altruistic and you get a kick out of helping others without regard to how you're doing, then you're going to be taken advantage of all the time.

Okay, but I'm not saying without regard to what I'm doing.

I'm saying I'm exactly like the selfish player.

It's just that instead of putting zero weight on the other guy, I put a little bit of weight on the other guy.

And that just makes it easier to support cooperation.

Because when I think about my own life, I think that's a really good description of myself.

I'm not completely and totally selfish.

When I interact with strangers, I, for a variety of complicated reasons around identity and wanting to feel like I'm a good person or whatever, act as if I care a little bit about them.

I'll be nice to them, even if I don't think I'm going to play the game with them over and over.

I agree that people are like that to some degree.

And what I find most interesting about it is that they are more altruistic, more cooperative, toward people who are similar to them.

A version of this is ethnocentrism.

We tend to help out and be trusting and accepting of people who are in our ethnic group and not outsiders.

Another basis of cooperation that's been well studied by biologists is kinship.

And that's what the selfish gene is referring to, which is that if I'm closely related to somebody, then our genes have a shared interest in helping each other survive and thrive.

Exactly.

So I think Richard Dawkins saw your work through the lens of that idea of cooperation based on genetic similarity.

But I still have this instinctive feeling that for human behavior, there's some altruism lurking around that's not based on kinship.

Looking at it from a societal perspective, we definitely would like to socialize people into being nice if the world is characterized by a bunch of prisoners' dilemmas being played all the time.

Because if you can get people to be nice, the overall societal payoff and the individual payoffs will actually be higher.

Do you agree with that?

Or do you think, oh, I'm like missing the point of what you're doing?

No, I think it's a supplementary point.

It's not contradictory to the value of strict reciprocity.

And I share with you the feeling that lots of people, especially good people, get some kick out of helping others.

One thing that I know I don't do, given the results of your tournament and the success of tit for tat, is that I am not nearly so forgiving as tit for tat.

When somebody does something wrong to me once, I dwell on that and I don't forgive them for a while.

If somebody does something wrong to me over and over and over and then suddenly changes their behavior, boy, it takes a lot of work to get me back on board.

So that's probably where I could do better in life.

Do you try to stress forgiveness in your own life?

Yes, because if you point out what you regard as defection from the other side, I think it helps them identify what might lead to trouble in the future between you.

And then I do try to be forgiving.

One thing is to try to appreciate the other side's perspective about why they did it.

And maybe it wasn't because they're trying to take advantage of you.

Maybe they had some other interest or goal in mind entirely.

And this is one of the problems with cross-cultural interactions: we often have trouble understanding what would be regarded as an insult by the other side, and they have trouble understanding what we take to be an insult.

One of the other findings of the tournament is that you could do really well without ever doing better than the player you're playing with.

In fact, tit-for-tat cannot possibly do better than the other player, because it never defects until the other side does, and never defects more than the other side does.

And so this is really counterintuitive.

You can win a tournament without ever doing better than the player you're playing with.

And that's another illustration of how, in zero-sum thinking, that just doesn't make sense at all, but it makes sense in this context, because what works well is to elicit cooperation, not necessarily to hurt the other guy.

That's a great point.

One of the things I really like about your research, both the design of it and the way you analyze it, is that even after I think I understand what's going on, you offer little gems like you just did there: that, hey, tit-for-tat never outperforms the other player in any individual contest.

It's just that over a whole bunch of contests, tit-for-tat and the other players playing against tit-for-tat all do well.

And I love that.

That's the kind of thing that, in a little tiny way, changes my thinking about the world, which I think is the best thing you can say about a research project: that it changes the way you think about the world.

It's obvious after you say it that you could do really well without outdoing the other guy, but it's not obvious before you say it.

And in fact, it even seems nuts.
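For listeners who want to see the arithmetic, the point Robert just made checks out in a few lines of Python. The payoff values below are the standard ones from his tournaments (3 each for mutual cooperation, 5 and 0 when one side defects, 1 each for mutual defection); the strategy functions are illustrative sketches, not actual tournament entries:

```python
# Standard prisoner's dilemma payoffs from Axelrod's tournaments:
# R=3 (mutual cooperation), T=5 (temptation), S=0 (sucker), P=1 (mutual defection).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_match(strat_a, strat_b, rounds=200):
    """Play two strategies head to head; return their total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then echo the opponent's previous move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

# Tit-for-tat never outscores whoever it is facing...
print(play_match(tit_for_tat, always_defect))     # (199, 204)
# ...but it banks a high total with every cooperative partner.
print(play_match(tit_for_tat, always_cooperate))  # (600, 600)
```

Against the unconditional defector, tit-for-tat falls behind by exactly one temptation payoff and never catches up; its tournament wins come from doing well with everyone, not from beating anyone.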

So we've talked a lot about the evolution of cooperation.

You've also looked at cancer.

Tell me about your research on cancer.

Well, a colleague of mine showed me a computer simulation of a growing tumor, and I was fascinated by it.

And the visuals allowed you to rotate the tumor and see it from all sides, and it showed which genes were active and so on.

And I said, well, I do agent-based modeling, which is where agents, in this case the cells, interact with each other according to specific rules, and you run a simulation.

And she pointed me to a famous article called The Hallmarks of Cancer, which basically said that what cancer does is it overcomes the defenses of the host.

It overcomes, for example, the ability to control growth.

So the host has its mechanisms to control the growth, so no cells get totally out of hand.

What a tumor does is it has a high rate of mutation, and a cell line eventually finds ways of overcoming each of the host's defenses.

And what this led me to think was that it doesn't have to be a single cell line.

And an analogy I came up with years later, I think makes the point, which is if two thieves are trying to rob a house and one knows how to turn the alarm off and the other one knows how to pick a lock, they don't have to both know how to overcome all the defenses.

As long as they travel together, and collectively they can overcome the defenses, that's sufficient.

So that's what I thought was going on, perhaps, that there was cooperation here, that cells near each other were doing different things to overcome the host.

And my brother turns out to be an oncologist, and he helped me look into this.

And he eventually found that it was new, that people hadn't said that before.

And so I worked with him, David Axelrod, and with the oncologist, Ken Pienta, to develop this idea that you can have cooperation within a tumor.
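The two-thieves analogy is easy to sketch as code, if that helps make it concrete. Everything below, the defense names and the agents' skills, is invented for illustration; it is not Robert's actual tumor model:

```python
# Toy version of the two-thieves analogy: no single agent covers every
# defense, but a group whose combined skills cover them all gets through.
DEFENSES = {"alarm", "lock"}

def group_succeeds(agents):
    """True if the agents' combined capabilities cover every defense."""
    combined = set().union(*agents) if agents else set()
    return DEFENSES <= combined

alarm_expert = {"alarm"}  # knows how to turn the alarm off
lock_picker = {"lock"}    # knows how to pick the lock

print(group_succeeds([alarm_expert]))               # False: neither alone...
print(group_succeeds([lock_picker]))                # False
print(group_succeeds([alarm_expert, lock_picker]))  # True: ...but together, yes
```

As long as the agents "travel together," it is the union of their capabilities that matters, which is the sense in which distinct cell lines can cooperate.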

We got two interesting reviews when we submitted this speculation: one said that what we were proposing was impossible, and the other reviewer said that what we were proposing, everybody knows already.

We certainly know those can't both be true.

So have people taken that seriously at all?

When we explained ourselves better, we were able to get it published.

And yes, in fact, people have found that you could have two cell lines that each don't do very well, but you put them together and they could do very well.

And so you're basically getting this lab demonstration that you can have this cooperation among tumor cells.

And this led me to do a lot more work with Ken Pienta over the years on cancer.

You've done some cyber stuff, too.

What's caught your attention within the cyber world?

I've long been interested in computer science and international security.

As cyber weapons became feasible and powerful, I realized that they could be very destabilizing.

They're very different from, say, nuclear weapons, which are stable, because if a nuclear weapon is used, you know it's been used, and you usually know who did it.

And the survivor, the victim, can always strike back.

Major countries all have secure second strikes, so there's no incentive to go first.

But with cyber, it could be quite different.

You might be able to destroy the other side's command and control system, in which case they can't strike back.

And so you might have this reciprocal fear of surprise attack.

And then I heard about a roundtable involving about six, seven different countries, including the United States, Russia, and China, on the theme of military aspects of cyber stability, which is just what I was concerned about.

And I've been part of that for about five years.

It has been somewhat helpful in getting us to understand each other's language.

What do they mean when they say they've been attacked?

What counts as an attack?

Some understanding of what their sensitivities are.

For example, if the United States supported the secessionist movement in China, we might regard that as just promoting free speech.

They might regard it as a threat to the regime, even though that would be almost paranoid.

But this particular roundtable included some government employees and professors at the national defense universities, which are in charge of training the people who are going to become the highest-ranking military officers.

And they're also typically involved in development of doctrine.

And that's important in cyber.

It's interesting.

You're actually sitting down at the table with simultaneous translation and talking with Chinese academics and policymakers about cyber attacks.

I'm surprised that kind of dialogue actually exists.

Well, it does.

And it's in part because there's a mutual recognition that there could be misunderstandings, misperceptions, and that's especially dangerous in the cyber world where you might feel that you've been attacked, but you haven't been.

It was just a power outage.

And therefore, there is a desire to avoid unnecessary conflict.

So, Robert, one of my past guests was a guy named Yul Kwon, who was a winner on the TV show Survivor.

And when I asked him what his strategic approach was, he immediately went to tit-for-tat and referenced your work.

Did you know that you have been responsible for a victory on Survivor?

No, but I'll give you a nice example from a political science convention.

A colleague came up to me and said, your book really helped with my divorce.

And I said, well, I hope it saved your marriage.

And she said, no, no, I didn't want to save my marriage.

It helped with the settlement.

I realized that I've been a sucker all this time.

And I didn't have to be.

And so your book was an inspiration.

I got a much better settlement than I otherwise would.

That's hilarious.

People must come up to you all the time.

Are there other examples where people have come up to you and said how your book changed their life?

One example was a soldier from Iraq who said that he realized that you shouldn't be the first to defect.

And often they would be approaching a village and they didn't know if the village was hostile or not.

And the villagers didn't know how hostile the soldiers might be.

And so, what he actually did is he said, I want my soldiers to put their rifle behind their neck as we walk toward this village.

They could see then that we're not intending to start any trouble.

Yeah.

And maybe they won't.

And if it doesn't work, well, we obviously have our rifles at hand.

But I thought that was really quite a striking application.

I'm not sure whether I should be embarrassed that the moral code that guides my daily life is essentially tit-for-tat.

I've read a lot of great philosophers and my fair share of self-help books, but the truth is, tit for tat is the one that speaks to me.

My first life principle is to be nice.

My second life principle is to be provocable.

When someone tries to take advantage of me, I'm willing to fight.

My third life principle is to be forgiving.

I believe in second and third chances, although I still have some work to do to reach the unlimited forgiveness embodied by Tit for Tat.

All in all, these principles seem like a pretty good framework to build a life.

Not bad for a strategy that only requires two lines of code.
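For the curious, those two lines look something like this in Python, a sketch of the rule itself rather than the original tournament submission:

```python
def tit_for_tat(opponent_history):
    # Cooperate on the first move; after that, copy the opponent's last move.
    return opponent_history[-1] if opponent_history else 'C'
```

That really is the whole strategy: nice by default, provokable immediately, and forgiving as soon as the other side returns to cooperation.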

Hey, one last thing.

I'm part of an organization called datascience4everyone.org, and we're trying to crowdsource some great ideas about how to build data science into the math and science curriculum at the K through 12 level.

So we're running a little contest.

If you're a teacher or just someone with great ideas, we want to hear them.

Our website is datascience4everyone.org, where the four is the number four: datascience4everyone.org.

And you can see the contest there and spread the word.

We're trying to gather as many great ideas as possible.

Thanks again.

People I Mostly Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and Freakonomics MD.

This show is produced by Stitcher and Renbud Radio.

Morgan Levy is our producer, and Jasmine Klinger is our engineer.

Our staff also includes Alison Craiglow, Greg Rippin, Joel Meyer, Tricia Bobeda, Emma Tyrrell, Lyric Bowditch, Jacob Clemente, and Stephen Dubner.

Theme music composed by Luis Guerra.

To listen ad-free, subscribe to Stitcher Premium.

We can be reached at pima at freakonomics.com.

That's P-I-M-A at freakonomics.com.

Thanks for listening.

Wait a minute.

I thought he asked me to withhold his name because it was so dumb.

The Freakonomics Radio Network, the hidden side of everything.

Stitcher.

I choose to work where everyone can see how busy I am.

I choose pedal palette.

I choose the freshest of beats.

I choose to never burrito and drive.

Whatever you choose, choose transit and do your part to spare the air.

Carl's Jr. is the only place to get the classic Western bacon cheeseburger.

Those onion rings, all that bacon, that tangy barbecue?

Well, have you tangoed with spicy western bacon?

Can you ride out the jalapeno heat?

Take a pepper jack punch.

For a limited time, it's high time for a spicy western reintroduction.

Wrangle the best deals on the app.

Only at Carl's Jr.

Available for a limited time.

Exclusive app offers for registered My Rewards members only.

Department of Rejected Dreams, if you had a dream rejected, IKEA can make it possible.

So I always dreamed of having a man cave, but the wife doesn't like it.

What if I called it a woman cave?

Okay, so let's not do that, but add some relaxing lighting and a comfy IKEA Hofberg Ottoman, and now it's a cozy retreat.

Nice, a cozy retreat.

Man cozy retreat.

Sir, okay.

Find your big dreams, small dreams, and cozy retreat dreams in store or online at ikea.us.

Dream the possibilities.