52. Max Tegmark on Why Superhuman Artificial Intelligence Won’t be Our Slave (Part 2)

He’s an M.I.T. cosmologist, physicist, and machine-learning expert, and once upon a time, almost an economist. Max and Steve continue their conversation about the existential threats facing humanity, and what Max is doing to mitigate our risk. The co-founder of the Future of Life Institute thinks that artificial intelligence can be the greatest thing to ever happen to humanity — if we don’t screw it up.


Transcript


In last week's episode, with the remarkable Max Tegmark, we covered topics ranging from the origin of the universe to the disturbing reality of slaughterbots, AI-enabled drones built to kill.

Today, we continue our conversation, discussing how artificial intelligence is already affecting our lives in ways we aren't even aware of, and what Max is doing to ensure that AI becomes a force for good rather than evil as a co-founder of the Future of Life Institute, an organization that works to prevent global technology-driven catastrophes.

If we get it right with AI, it will be the best thing that ever happened, because we're no longer going to be limited by our own relative stupidity and inability to figure stuff out.

Welcome to People I Mostly Admire with Steve Levitt.

Max grew up in Stockholm before he moved to the US to get his PhD at Berkeley.

He was a tenured professor at the University of Pennsylvania before joining MIT's physics department.

Just one quick note before we dive back into the conversation.

Today's episode stands alone.

There's no need to have listened to part one of the conversation, but there's also no harm.

If you're the kind of person who likes to do things in order, go back and listen to part one first.

Listeners have been incredibly enthusiastic about it.

One of the scenarios that's really intriguing is to think about what happens if and when AI advances to the level where it has capabilities much greater than humans have.

Are you worried about that as a threat?

Yes.

Or not so much.

You're worried about that too, okay?

I'm both worried and very excited, to tell you the truth.

Before we go into the future with superhuman AI, obviously another very relevant thing for right now is what artificial intelligence is doing to our democracy, because people are hating each other more and more.

We're getting increasingly polarized into our little filter bubbles.

And this is often blamed on opportunistic politicians or on social media.

But if you look at the actual cause behind this, it's obviously artificial intelligence.

We have these very powerful machine learning algorithms that analyze users and figure out how to keep them hooked for as long as possible staring into their rectangles.

And these algorithms, even though they were only told to maximize profit in terms of ads, they quickly figured out that the best way to engage people is to piss them off.

It's less important if things are true, more important if people click on them.

It's had a very dramatic effect, I would say, on our democracies in recent years.

People worry too much, maybe, still, about the bots coming to kill them, like in silly Hollywood movies, and they should worry more about the bots coming to hack them, because that's already happening.

And I'm very interested in how we can solve that problem and restore our democracy to functioning better.

So you're talking about our incentives.

Yes.

And the incentive has always been to deliver news to people that people would like.

And so yellow journalism back 100 and something years ago was an example where journalists were giving people what they wanted.

And I think what you're saying is technology, AI, is just really good at figuring out how to exploit people's weaknesses, to take advantage of the fact that when I get on my phone, I'll keep on clicking on articles about the Kardashians, but I won't keep on clicking on articles about how important it is for the European Union to solve Brexit in the right way, for instance.

And you believe that is, in some sense, an existential threat by undermining democracy and the smooth functioning of society.

Yes, because I love the idea of democracy, and I love the idea of the free market incentivizing people to efficiently accomplish things that we want accomplished.

But democracy works really well if people actually know what's going on.

If people have a very skewed view of what's happening, then welcome to today's world.

So one thing I know about you is that you don't just talk about things, you do something about it.

So what are you doing to solve this problem?

I confess I made a New Year's resolution to my wife some years ago that I'm not allowed to whine about things if I don't actually do something about them.

It's put up or shut up.

So when the pandemic hit, I thought, okay, I'm going to spend all this extra time that I have, now that all my travels and conferences were canceled, to actually build something using machine learning, not to analyze and manipulate the consumers of news, but to instead analyze the providers of news.

So I made some bots that just download millions and millions of articles from the internet, and then have the machine learning read all these articles, and then provide free tools for users who want to take a different approach to their news consumption.

I think about it a lot in the same way as my food consumption.

If you read Kahneman, with System 1 and System 2, right, it's very clear that you want to use your System 2, your deliberative reasoning, to decide your diet before you go to the supermarket, rather than just impulse-buying random things that come in front of you because you always shop hungry or something like that.

And if we can make our news diet more like this, that's very empowering.

That we take control over what news we consume by asking, what do I actually want to learn more about?

Rather than just impulse clicking at the moment on whatever the algorithms have put in front of me.

So what improvethenews.org does, that's the little free news aggregator we made, is you go in there and then you have these sliders.

It's okay, I want to see now what my conservative uncle is reading.

You could put the political slider to the right.

Oh, now I want to see what my university classmate here on the left is thinking.

And you put the slider over to the left.

And it makes it very easy for you to get all these different perspectives of what's actually out there.

And this is one of many projects that we're giving away for free with the idea that machine learning has zero marginal cost because it's just code, right?

So if we develop it in a university setting or a non-profit setting, and it's just a website that has no ads on it, anyone can use it.

And I'm hoping that we can, in this fashion, make it a lot easier for people to come away with a more nuanced understanding of what's happening.

Because today it's too hard.

You have to do too much work to go out yourself and try to find all the different takes on the same story.

So I downloaded your Improve the News app and played around.

And what I found really fascinating about it is: look, I understand right and left.

It's not hard for me to know which media outlets are left and which are right.

But what was interesting is you have all these other dimensions like pro-establishment and anti-establishment and thorough versus breezy coverage of topics.

And I actually had a lot of fun just maxing out on the different dimensions and getting a chance to see what news I'm shown in a world in which I say I'm an anti-establishment right-wing person versus a pro-establishment left-wing person.

If I really want to do a good job of tailoring my news so I only get really crazy news, your app does a great job of doing that for me.

Yeah, thank you.

The reason that improvethenews.org works this way is because I'm a scientist.

Scientists hate when people tell them, don't read this person's theory because it's wrong.

This is exactly what Galileo fought so hard against, this kind of censorship, where other people are like, oh, your feeble mind cannot handle being exposed to Breitbart on the right or CounterPunch on the left or whatever.

It's so insulting.

It's so popular for big companies to always blame the consumer and say consumers are stupid.

They want to have their prejudices confirmed, never want to hear anything they disagree with.

So we're just going to show them that.

Imagine if Galileo had put out a tweet saying, hey, Earth is orbiting the sun, actually, not the other way around.

The Pope's fact-checking committee would totally have said, no, this is wrong.

It violates our community guidelines.

That's why they put him under house arrest, in fact.

That has not proven to be very useful in science.

David Rand, who's a professor here at MIT, has vindicated this scientific approach to truth-finding.

What he found was that actually people are quite interested in being shown other perspectives if it's done in a respectful way.

So explain the establishment filter on the Improve the News app, because I think it's something that most people don't even pay attention to in their news consumption.

This idea that some news sources are, as you call it, pro, meaning part of the establishment, or critical, meaning anti-establishment.

I love data and what you can do with it.

Since we had downloaded millions of news articles for this project, this wonderful student here at MIT, Samantha D'Alonzo, and I decided to see if machine learning could detect bias in a completely data-driven way.

We told the machine learning to just try to predict which newspaper had written each article from just looking at the words, okay?

And it was amazingly successful.

It discovered that there are a few thousand words, in fact, that are like dead giveaways, words that are very emotionally loaded.

For example, if you have an article about abortion and it talks a lot about fetuses, it's more likely to be from the left.

If it talks a lot about unborn babies, it's more likely to be from the right.

If you find an article about Black Lives Matter and it talks a lot about protests, it's more likely to be from the left.

If it talks a lot about riots, it's more likely to be from the right.

But the beauty of it is, I didn't make this up based on any kind of human intuition.

The machine learning just discovered these are the words you should pay attention to.

And then, using those words, it took all the hundred newspapers we had and classified them into this two-dimensional space of bias.

And we looked at this and we're like, whoa, the x-axis looks exactly like the traditional left-right axis, because there is Fox on the right, CNN on the left.

And then the other axis it discovered, which was equally explanatory, the up-down axis, turned out to be this establishment business.
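For the curious, here is a minimal sketch in Python of the kind of pipeline Max describes: predict the outlet from the words alone, read off the most loaded giveaway words, then project each outlet's average word usage into a two-dimensional space. The articles and outlet names are invented for illustration, and this is a toy stand-in, not the actual Improve the News code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented corpus standing in for the millions of downloaded articles.
articles = [
    "protesters defended access to abortion and research on fetuses",
    "advocates for unborn babies condemned the riots downtown",
    "officials said the defense budget and nuclear deterrent remain stable",
    "a critic denounced the sheer size of the nuclear weapons arsenal",
    "protests over policing spread peacefully through the city",
    "commentators warned that riots threatened law and order",
]
outlets = ["OutletA", "OutletB", "OutletC", "OutletD", "OutletA", "OutletB"]

# Step 1: learn to predict which outlet wrote each article from the words alone.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(articles)
clf = LogisticRegression(max_iter=1000).fit(X, outlets)

# Step 2: the most heavily weighted words per outlet are the "dead giveaways".
words = np.array(vectorizer.get_feature_names_out())
for i, outlet in enumerate(clf.classes_):
    top = np.argsort(clf.coef_[i])[-3:]
    print(outlet, list(words[top]))

# Step 3: place each outlet at its average word-usage vector and project the
# outlets into two dimensions; in Max's real data, the two axes came out as
# left-right and establishment vs. anti-establishment.
labels = np.array(outlets)
centroids = np.vstack(
    [np.asarray(X[labels == o].mean(axis=0)) for o in clf.classes_]
)
coords = PCA(n_components=2).fit_transform(centroids)
print(dict(zip(clf.classes_, coords.round(2).tolist())))
```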

So, what you're saying is there is a right-wing and a left-wing dimension, which we're all very attuned to.

But much more subtle is that this establishment versus anti-establishment really is a function of how prominent and established the outlet is versus the upstart outlets.

That Fox and CNN are very similar on this dimension.

It's the one that you don't notice so much, because the establishment is basically all the big newspapers, which are very commercially driven.

If you look at, for example, articles about military stuff, if the article talks about nuclear weapons or nuclear war, the machine learning would say this is definitely not from the New York Times or from Fox News.

It's from some very small newspaper that is criticizing the fact that we have so many nuclear weapons.

Most people, myself included, I think, spent many years just not even being aware that it's there, because, of course, it's hard to miss the left-right controversy, but this one, it's easy to forget that it's even there.

One of the things that I've criticized both academics and the media about is I think if we separated out facts and interpretation, we could make a lot of headway.

It's not that often that people disagree on the facts.

I think much more often people disagree on interpretation.

So like I could imagine an amazing thing your app could do would be to say, hey, here are the facts that everybody agrees on.

And to put those facts right front and center and then say, and here's how people who are on the left interpret those facts.

Here's how people on the right.

Here's how people who are anti-establishment do.

I think that would be an amazing gift to society if you had the capability of doing that.

Oh, great minds think alike.

This has been one of the most common pieces of user feedback.

Man, we're actually building this.

That would be amazing, because I honestly think that most of the confusion and anger comes because when people argue, they're constantly confounding facts and narratives.

People tend to be quite civil if they agree on the facts.

So I really think that would be a very powerful tool in increasing the dialogue.

Far more important than just about anything else that we could do.

I love what you're saying here.

Why is it that people can argue passionately about things at a science conference, like whether there are parallel universes or not, and then have beer together afterwards?

Whereas in politics, that does not happen at all.

Right, exactly.

It's exactly because in a science conference, you do separate facts from opinions.

In fact, this is a tradition that even goes back to the Middle Ages, when they used to have religious debates, where you started the debate by articulating the narrative of your opponent in a way that they would agree with.

And only when both parties could articulate the other point of view in a way that the other one found was respectful and reasonable, then you got into the meat of the discussion.

You're listening to People I Mostly Admire with Steve Levitt and his conversation with physicist Max Tegmark.

After this short break, they'll return to talk about the future of artificial intelligence.


Hey, Steve.

Hey, Morgan.

So since you're a big fan of experimentation and collecting data, we get a lot of requests from listeners who want to try an experiment in their own lives but don't really know where to begin.

So a listener named Albie L wrote in.

He's a professional surfer and he says that a change in the sport of big wave surfing has been the popularization of inflatable vests.

These are vests that have CO2 cartridges in them and when a cord is pulled the vest inflates and brings the surfer to the water's surface.

These vests have been a game changer for the sport.

They prevent a lot of drownings and provide a lot of additional safety for surfers, which Albie acknowledges is really good for the sport.

But he does feel like there's been a trade-off.

He thinks that surfers, himself included, used to make smarter decisions before they were wearing the vests.

They would fall a lot less and surf safer.

Now, this is just a hunch of his, but it is true that the vests have emboldened a lot of inexperienced people to try big wave surfing, which is clearly a very dangerous sport.

So Albie wants to figure out if the vests are having a larger positive or negative effect on the sport, and wants your advice on how he could go about collecting data.

Do you have an answer for him?

Let me just start by saying I've been on a tirade about teaching data skills in school.

And if Albie had been taught the kind of data skills that we should be teaching people, he'd know exactly what to do next.

It is a failure of our education system, which leaves Albie completely unable to think of how to do this.

I should also say it's not just Albie.

We get this question a lot about how to collect data.

So we can use this as a model for other people too.

Absolutely.

It's what we should be doing in schools, but absent that, let me step in and try to help a little bit.

Okay, so first let's take on Albie's question.

It actually has a name in economics.

It's called the Peltzman effect, after Sam Peltzman, an old Chicago economist.

And the idea is that you introduce a device that makes an activity safer, and the direct effect of that device is indeed to make things safer, but the indirect effect is to induce a behavioral response whereby people start taking more risks because they know that the device will help keep them safe.

Now, whether in total the net effect is to make things safer or more dangerous is actually indeterminate.

You need to look at the data to find that.

Although I will say empirically, I don't know of any very good cases where you can actually see a safety device making things more dangerous on net, although many people sometimes claim that.

Okay, so how would you go about doing it?

What Albie's after is causality: he wants to know if there's a causal effect of introducing the CO2 cartridges into surfing. And the key to any kind of causal analysis is to find two sets of people who you think would otherwise have had the same kinds of outcomes, except that one of them is exposed to the new cartridges and the other isn't.

Now, I don't know exactly the right answer because it helps to know the institutional details.

But if I were to start, I would just start with a before and after.

So if there's a particular time when these became available, I would look and see whether the injuries go up or down.

I'm assuming that Albie maybe has some data source where he can see injuries.

Maybe you go to competitions, a particular competition one year, the year before these cartridges came in, and then the next year after, and then compare the number of people who had to withdraw because of injury in the pre-year versus the post-year.

Then the only other question you have to ask yourself is, has anything else changed from before and after?

Is it the case that surfboards have changed?

Is it the case that now the prize money is much bigger so people are willing to take more risk because a big win is worth more?

When I approach problems, that's essentially what I do, and that's really, in essence, all you can do. If you don't have a randomized experiment, which Albie doesn't have and probably can't have, you try to do the best you can to manufacture something like a natural experiment.

You look for cases where, for no particular reason except luck, one group of people had the cartridges and another didn't.

And in this case, the before and after is probably his best bet.
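For listeners who want to see the arithmetic, here is a minimal sketch in Python of the kind of before-and-after comparison Steve describes, using a simple two-proportion z-test. The entrant and injury counts are entirely made up for illustration; they are not real surfing data.

```python
import math

# Hypothetical numbers, purely for illustration: entrants and injury
# withdrawals at the same competition the year before and the year
# after the inflatable vests became available.
before_n, before_injured = 120, 18  # pre-vest year
after_n, after_injured = 130, 27    # post-vest year

p_before = before_injured / before_n
p_after = after_injured / after_n

# Two-proportion z-test: is the change in injury rate bigger than
# what chance alone would plausibly produce?
p_pool = (before_injured + after_injured) / (before_n + after_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / before_n + 1 / after_n))
z = (p_after - p_before) / se
print(f"injury rate before: {p_before:.1%}, after: {p_after:.1%}, z = {z:.2f}")

# |z| > 1.96 would be significant at the 5% level. Even then, you still
# have to rule out everything else that changed between the two years
# (boards, prize money, who shows up) before calling the effect causal.
```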

So it sounds like you're really saying his best bet is to look for some comparison, and to try to find as little change as possible in that comparison other than wearing the vest or not wearing the vest.

Yeah, and that is essentially the crux of what anybody wants to do in a world in which they can't run their own randomized experiment.

So I should also say that you talk about natural experiments quite extensively in our episode with Dr. Bapu Jena, who is the host of the show Freakonomics, M.D.

And that would be a good resource for listeners who are trying to collect and analyze data in their own lives.

Thank you so much, Albie, for writing in.

We hope that provides some clarity for you.

If you have a question for us, you can reach us at pima at freakonomics.com.

That's P-I-M-A at freakonomics.com.

It's an acronym for our show.

Steve and I read every email that gets sent, so we look forward to reading yours.

Thanks so much.

The whole reason I invited Max to be a guest on this show is that I had read the incredibly interesting things he had written about the long-term implications of AI for society.

One and a half podcast episodes later, we haven't even gotten to the topic yet.

Well, this is my last chance, and I promise you, I'm not letting him out of here until we've covered that topic.

This has been an amazing discussion of the potential benefits and perils of AI in the short run.

I just want to get your longer-term perspective.

So, there is likely to come a time when AI goes beyond the capabilities of humans.

Obviously, it could be for good, it could be for destruction. But what's your guess about what a future holds in which human intelligence is, in many ways, subservient to that of AI?

What I think is pretty clear is that artificial intelligence will become the most powerful technology ever because intelligence is all about information processing, and there's no law of nature that says that that can't be done better than in our warm, wet biological brains.

And that means that AI will eventually become either the best thing or the worst thing ever to happen to humanity.

So the really interesting question for me actually isn't to put odds on which way it's going to go, but to ask: what can we do now to influence it?

How do we influence it?

Yeah.

So what are you doing to influence the long-term trajectory of whether AI is the destruction of humankind or the greatest benefit we've ever had?

I co-founded this nonprofit called the Future of Life Institute.

And together with a bunch of wonderful scientists, tech people and others, we are trying to educate on these issues and above all, engage the people who are actually building these technologies to think about the social implications of what they do.

I mentioned how biologists have done that better than people have really in any other scientific field and they deserve our gratitude for it.

And it's really inspiring to see how the same thing is happening now in artificial intelligence where there's a lot of talk about AI ethics, AI safety, and so on.

The key thing to remember is that this is not a depressing topic like nuclear weapons, where either we screw up in a big way or nothing happens.

This is actually something where we could, on one hand, screw up spectacularly, or it could be this incredibly inspiring future because everything I love about civilization is a product of intelligence, human intelligence.

So obviously, if we can amplify that with artificial intelligence to figure out how to cure diseases, to lift everybody out of poverty and help life flourish, not just for the next election cycle, but for billions of years, that is such an incredibly exciting opportunity that we have.

And it comes back to this idea of thinking of humanity as a child.

We're very early still in what can become an incredibly long and rich life in the cosmos.

If we get it right with AI, it will be the best thing that ever happened because we're no longer going to be limited by our own relative stupidity and inability to figure stuff out.

We'll just be limited by the laws of physics, which is what AI is going to be up against.

So you just described a future in which AI is an incredibly smart, effective tool, but doesn't mind being a slave to humans.

Are you at all concerned about the possibility that if you create something that is far more talented than humans, it's not going to like being our slave?

Well, of course.

That's concern number one.

If you ask why is it that we humans have more power on the planet than tigers, it's not because we have bigger biceps.

It's because we're smarter.

So obviously, intelligence gives power.

There are two approaches to coexisting with more intelligent beings.

One is the slave approach, where we try to lock our future AI in some sort of fictitious box, enslave it, and force it to do our bidding.

I don't particularly like that approach, both because I think it's ethically very sketchy, just as slavery has been in the past, but also because it's very likely to fail.

If a bunch of five-year-olds tried to lock the world's smartest scientists in a box and force them to invent new technologies, they would probably break out too, right?

There is a much better way, which is the way you coexisted with more intelligent beings when you were one year old: your mommy and daddy.

And why did that work out?

Because their goals were aligned with your goals.

They didn't take care of you because you forced them to, but because they wanted to.

This is a technical challenge for nerds like myself.

How do we make AI actually understand human values, adopt them, and retain them as it gets ever smarter, so that AI helps us rather than harms us?

And this is actually something that anyone listening to this who's interested in technology and computer science can go work on.

We've just launched a big grants competition to encourage grad students and postdocs, for example, to work on this kind of existential AI safety.

And it might take 30 years to find those technical solutions.

So we should start working on it now, not the night before some people on too much Red Bull switch on a superintelligence.

So you strike me as someone who's a rule breaker, a free spirit on all sorts of dimensions.

In your book, you talk about how you would post your preprints at 12:01 a.m.

I think that's such a good story about incentives and about creativity.

Could you tell that story?

I discovered that this preprint server, arXiv.org, that everybody gets their physics news from, had this system where if you were the very first person to submit a paper after their daily deadline, you would always be number one on the list of new papers for that day.

So I would set my alarm and make sure I was first.

And then later on, some people did some research and found that the papers that were first on that list got way more attention than others.

So now they've actually very recently reversed it, so that you have to be last to be first.

But don't tell anyone, because then my new trick won't work.

I think that's your economics training coming through.

That sounds very much like the thinking of an economist: looking at the incentives that are laid out by the server, which says if you log in to the server at a certain time, it will give you more cites, and figuring out how to do that.

I bet most of your colleagues don't think that way, which gives you an advantage.

Well, more generally, I am a very meta kind of person. Whatever I'm doing, I love to take yet one more step back and see, hey, is there some way of even changing the process by which I do things, or a better way of selecting what to work on, to have more positive impact?

A lot of what's so good about you is you just don't follow the rules, and that's led you to amazing places. Do you think we should encourage more of that? Is there too much conformity being demanded by society?

I think I would like a little bit more non-conformity than we have right now.

I think science's successes have shown the value of non-conformity.

Science is hard.

And the reason we need this non-conformity and diversity of scientific thought is exactly because you can't predict in advance which ideas are going to work out and which ones are going to flop.

And if you let a lot of people chase their ideas for what they think is likely to be true, we're more likely to actually find the truth.

But can I end on an optimistic note, please?

It's so easy to get overcome by gloom when reading the news and thinking about all the things that could go wrong.

If you come back to this metaphor of humanity as a young child, it's not enough to just tell them to be careful and not fall off cliffs and tell them about all the risks.

You also have to encourage them to dream big.

You really need the existential hope and the optimism.

Humanity itself needs to dream big.

And I would encourage everyone listening to this, spend some time next time you're having drinks with your friends, asking them about a really exciting high-tech future that they would love to live in.

And try to flesh it out in a lot of detail.

What would that world be like?

What are the amazing things we can do with advanced artificial intelligence and advanced synthetic biology and so on and so forth?

Because the more we can articulate this positive vision, the more likely we are to get to live in that future.

Like Max, I've always thought big, whether it's overhauling the way we teach math, saving the Amazon rainforest, or making the PGA Tour as a golfer.

And I've pretty much, well actually always, failed to reach my goals.

But the thing is, I have a lot more fun chasing big goals, and I usually accomplish something along the way, even if it is a lot less than I hoped for.

By the time I realize I've failed, there's always some other crazy, impossible, even more tantalizing goal to go after.

There's no shame in failing, and don't let anyone convince you otherwise.

If you've enjoyed this conversation, check out Max's best-selling book.

It's entitled Life 3.0: Being Human in the Age of Artificial Intelligence.

Also, check out Freakonomics Radio episode number 477, entitled "Why Is U.S. Media So Negative?"

It pairs well with our discussion of Max's Improve the News app.

Just one last thing.

Scientists often lack information about public opinion that would help them navigate the ethical questions posed by new technologies.

So my team at the Center for RISC built a site to gauge public opinion on the ethics of tomorrow's tech.

Visit techethics.vote to make your voice heard.

That's tech like technology, ethics.vote.

Thanks for listening and we'll see you next week.

People I Mostly Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and Freakonomics, M.D.

This show is produced by Stitcher and Renbud Radio.

Morgan Levey is our producer and Jasmin Klinger is our engineer.

Our staff also includes Alison Craiglow, Greg Rippin, Emma Tyrrell, Lyric Bowditch, Jacob Clemente, and Stephen Dubner.

Our theme music was composed by Luis Guerra.

To listen ad-free, subscribe to Stitcher Premium.

We can be reached at Pima at freakonomics.com.

That's P-I-M-A at freakonomics.com.

Thanks for listening.

I want to make sure I don't get run over by a bus.

I want to make sure I don't get murdered.

A terrible strategy for career planning, right?

The Freakonomics Radio Network, the hidden side of everything.

Stitcher.