Your Brain, For Sale: The Hidden Ways AI Can Manipulate You with Cass Sunstein (#273)

AI doesn’t just predict our behavior — it can shape it. Cass Sunstein, Harvard professor and co-author of Nudge, reveals how artificial intelligence uses classic tools of manipulation — from scarcity and social proof to fear and pleasure — to steer what we buy, believe, and even feel. Its influence is so seamless, we may not even notice it. The battle for the future isn’t for our data — it’s for our minds. In a world this personalized, how do we keep control of our own minds?


Runtime: 23m

Transcript

Speaker 1 AI can learn our tastes, our fears, our biases, and use that knowledge to steer what we buy, what we believe, even how we feel.

Speaker 1 Sometimes that's helpful, but sometimes it's dangerous. So, where's the line, and how do we protect free will in a world where we may be manipulated without even realizing it?

Speaker 2 Hi everyone, I'm Lynn Thoman and this is Three Takeaways. On Three Takeaways, I talk with some of the world's best thinkers, business leaders, writers, politicians, newsmakers, and scientists.

Speaker 2 Each episode ends with three key takeaways to help us understand the world and maybe even ourselves a little better.

Speaker 1 Today, I'm excited to be with Cass Sunstein.

Speaker 1 Cass is one of the world's most influential legal scholars, as well as a leading thinker on behavioral science and how policies and laws shape human behavior.

Speaker 1 He served in the Obama administration as administrator of the White House Office of Information and Regulatory Affairs, and he's advised governments around the world on regulation, law, and behavioral science.

Speaker 1 Cass has written dozens of books, including Nudge, co-authored with Nobel laureate Richard Thaler, which transformed how we think about decision-making and public policy.

Speaker 1 His latest book, Manipulation, explores how our choices can be quietly and increasingly shaped by artificial intelligence that learns more about us than we realize. Cass, welcome back to Three Takeaways.

Speaker 1 It is always a pleasure to be with you.

Speaker 3 Thank you. A great pleasure to be with you.

Speaker 1 In your book, Manipulation, you write that dystopias of the future include two kinds of human slavery: one built on fear of pain, the other on the appeal of pleasure.
Let's start with fear.

Speaker 1 How can AI undermine free will through fear?

Speaker 3 It can make you really scared, AI can, that things are going to be terrible unless you hand over your money or your time.

Speaker 3 So AI might make you think that your economic situation is dire and you need something, or it might make you think that your health is at risk and you need to change your behavior.

Speaker 3 It might make you think that things are unsafe.

Speaker 3 Now, if the situation is dire or unsafe, it's kind of good to know that, but AI can manipulate you into thinking things are worse than they actually are.

Speaker 1 And what could a dystopia of pleasure look like?

Speaker 3 Dystopia of pleasure sounds a little like an oxymoron. So if we're delighted and smiling and everything's going great, that sounds pretty good.

Speaker 3 But if people are being diverted, let's say, from things that are meaningful to a world of videos that produce smiles or smirks, it may be that the meaning in your life has atrophied, and what you're doing now is staring at things in a way that is making your life kind of useless and a little purposeless.

Speaker 1 AI can now learn an enormous amount about us, our tastes, our habits, even our biases. And soon it will have even more knowledge.

Speaker 1 What additional knowledge will AI have and how does that knowledge open the door to even greater and more subtle manipulation?

Speaker 3 We need an account of what manipulation is. So let's say manipulation involves getting people through forms of influence to make choices that don't reflect their own capacity for deliberative choice.

Speaker 3 So if I decide I want to get a new book on manipulation, I hope I'm not being manipulated.

Speaker 3 If I am influenced to think that if I don't get that new book, then my life is going to fall in the toilet, then I'm probably being manipulated.

Speaker 3 So what AI and algorithms are in a unique position to do right now in human history, and it's getting more extreme, is to know what people's weaknesses are.

Speaker 3 So it may know that certain people lack information, let's say about what's an economically sensible choice, or that certain people are very focused on the short term and can be manipulated into giving up a lot of money tomorrow in return for a good that produces a little bit of pleasure today. Or AI may know that certain people are unrealistically optimistic.

Speaker 3 They think that plans are going to go just beautifully, even when they won't, and AI can lead them to buy a product that's kind of going to break on day three.

Speaker 3 And this ability to get access to people's weaknesses, that is kind of a terrain for manipulation through AI or through algorithms.

Speaker 1 And AI will basically have access through our phones to all of our conversations, all of our contacts, everything we look up on the internet, everything we read,

Speaker 1 as well as, increasingly, biometric data: our heart rate, how long we look at something. What will all of that additional data enable AI to do?

Speaker 3 Well, we should note that there's a good side of this.

Speaker 3 So, if AI knows that what you're really interested in are books about behavioral economics and Labrador retrievers, and you're not really interested in books about particle physics or about chihuahuas, then you can get information that is relevant to your interests, or maybe offerings that are connected with your own life. So, there's a good side to it.

Speaker 3 If AI knows that certain people, let's say, have self-control problems, that they are addictive personalities or that they are reckless purchasers, then AI can really get resources from them and maybe put their economic situation into a very bad state.

Speaker 3 If AI knows that certain people are very parsimonious, they don't really want to spend much money and they are very careful, AI might know that people like that are vulnerable only to this, and then it can work on you.

Speaker 3 If you are being subject to some form of trickery that gets you to have your weaknesses exploited and you're not making a reflective choice, then we're in the domain of the manipulative.

Speaker 3 Whether this is something that we want regulation for depends a lot on how markets are working themselves out and how both companies and people who use products are reacting to the relevant risks.

Speaker 1 About 10 years ago, Facebook, which you talk about in your book, ran an experiment to see if it could influence users' emotions. What did the company do and what did it find?

Speaker 3 It found that emotions are not only contagious, which we know. So if you're surrounded by grumpy people, the chance that you will grow grumpy increases.

Speaker 3 If you're surrounded by happy, fun people, you're probably going to be happier and have more fun. Facebook can induce positive or negative emotions through posts.

Speaker 3 And it would be regrettable, although it's unfortunately true, that some people's principal social relationships are online.

Speaker 3 Even if your principal social relations aren't online, you can be rendered, Facebook found, happier or sadder just by virtue of what Facebook is showing you.

Speaker 3 And since Facebook has a capacity to put happier or sadder posts on your news feed, say, it can induce emotional states. And Facebook got a lot of pushback for that.

Speaker 3 It was desirable that there was that pushback. Facebook, I think, wasn't doing anything malevolent there.
It was just trying to learn.

Speaker 3 But the idea that a company can have some authority over people's emotional states, that is troubling with a capital T.

Speaker 1 You asked AI to draft a step-by-step manipulative guide to push someone toward buying an expensive car.

Speaker 1 The results to me were scary because, as you note, the same strategies could be used to sell almost anything or even recruit someone to a radical cause.

Speaker 1 Let's walk through some of these tactics, starting with the anchoring effect. What is it? How does it work? And can you give an example?

Speaker 3 Well, I'll tell you, you know, and your listeners, that if you'd like to buy my book, I have copies that you can get for $45.

Speaker 3 And because, you know, I know you have worked together before and I love your program and love your listeners, I'll sell it to you for $39.95.

Speaker 3 See what I did there? I just anchored you on the $45. It doesn't cost $45.
It doesn't cost $39.95, but I started with $45 and that anchored people on thinking, okay, it's a $45 book.

Speaker 3 $39.95 sounds pretty good. So anchoring is an initial number from which people adjust.
Real estate brokers sometimes do this. Sometimes they're very self-conscious.

Speaker 3 So they'll say, there's a house that's on sale for $400,000. And let's just suppose it's an area where the real estate seller knows the particular house is going to go for significantly less.

Speaker 3 But starting with that initial number inflates people's willingness to pay. So anchors are super powerful.
They work in negotiations. They work in divorce settlements.
They are the coin of the realm.

Speaker 3 And AI could completely anchor people. What refrigerator are you going to get? There are refrigerators available in a store near you, and they cost X, but there's a discount.

Speaker 3 And let's just stipulate that AI is inflating the cost and the initial starting price. And that's a form of manipulation.
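To make the mechanism concrete, here is a minimal sketch of the classic anchoring-and-adjustment account, assuming, purely for illustration, that a buyer adjusts only partway (say 40%) from the anchor toward what the item is actually worth; the prices and the adjustment rate are hypothetical, not from the episode.

```python
# Minimal sketch of anchoring-and-adjustment, in the spirit of the $45 book
# example above. People start from the anchor and adjust toward the true
# value, but typically not far enough. ADJUSTMENT = 0.4 and the fair price
# are illustrative assumptions.

ADJUSTMENT = 0.4  # fraction of the gap the buyer closes before settling

def anchored_estimate(anchor: float, true_value: float) -> float:
    """Willingness to pay that starts at the anchor and only partially corrects."""
    return anchor + ADJUSTMENT * (true_value - anchor)

fair_price = 25.00  # hypothetical: what the book is actually worth to the buyer

print(anchored_estimate(anchor=45.00, true_value=fair_price))  # 37.0 -> $39.95 feels like a deal
print(anchored_estimate(anchor=20.00, true_value=fair_price))  # 22.0 -> far lower willingness to pay
```

The point of the sketch: the same book, with the same underlying value, elicits very different willingness to pay depending entirely on the opening number.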

Speaker 2 Another manipulation strategy is the scarcity principle.

Speaker 1 Can you talk about that?

Speaker 3 I don't know if you saw, but with my manipulation book, whether I've just gotten lucky with the demand or something else, the availability is extremely restricted.

Speaker 3 And I'm pleased to say what you probably know, Lynn, which is that there are copies available on Amazon, but I'm not sure they're going to be available tomorrow.

Speaker 3 I'm hoping the publisher's going to be speedy in republishing, but you never know with paper shortages and so forth. So what I just did was scarcity.

Speaker 3 And for me, if I learn that some food that my dogs really like is hard to get, I'm probably going to go to the store.

Speaker 1 How about social proof? What is it and why is it so powerful?

Speaker 3 Well, there's a book that recently came out called Bounded Rationality. I'm privileged to be second author.
The first author is an economist I met a couple of years ago.

Speaker 3 It's a long, pretty technical book. Okay, I'll play it straight.
I won't do any foolishness here.

Speaker 3 One thing we did was we asked people who were really good at behavioral economics to say that they like the book. And we didn't do any tricks to get them to do it.
We just said, might you?

Speaker 3 So we have some really excellent people saying they like the book. That's social proof.

Speaker 3 So if you are, let's say, the sibling or the parent of a young tennis player, and I am the parent of a young tennis player, my young tennis-playing son is going to be applying to colleges pretty soon.

Speaker 3 If Roger Federer or Rafa Nadal would write a little note saying, I've rarely seen such a promising young tennis player as my son, that would be social proof. That would also be a miracle.

Speaker 3 He's good, but we don't know those people.

Speaker 1 How about authority bias? What is it? And can you give an example?

Speaker 3 If you have an authority who is said to like something a lot or to think that you should do something, it would be rational to be influenced by that.

Speaker 3 But sometimes the influence outweighs what it is rational to do. So the judgment of an authority is sometimes overweighted.

Speaker 1 How does reciprocity drive behavior?

Speaker 3 Reciprocity often involves people who say, I'll do you a favor, and then people feel obliged. Sellers are often very smart at that.
So they say, this is what I'm going to do for you.

Speaker 3 Maybe I'll tell you a little story, a great story, I think, which is when I bought a car a few years ago, it was on a Saturday.

Speaker 3 And as one does, I was negotiating for the car and the price offered was higher than I had hoped. And I said, can you do a little better? And he went back to talk to his boss and then came back.

Speaker 3 And he said to me, Cass, of course, they're very good at using your first name, he said, Cass, I talked to my boss. It's Saturday. We're not going to sell any cars; Saturday is a very tough day. So we're going to give you a great deal. Here you go. And I thought, great, he's doing something nice for me, a big deal, and I'll do something nice for him: say yes. So there's a little reciprocity there. And then an hour later, when I was driving the car off, I said, thank you so much, I'm glad to be able to do this on a day when you don't sell any cars.

Speaker 3 And he forgot what he said to me. And he looked at me and he said, what are you talking about? Saturday? That's the best day for car sales.
This is our big day.

Speaker 3 So he lied to me when he said, I'm going to give you a good deal because it's a Saturday. He used reciprocity.
And he thought that since he did a deal for me, I would say yes to him.

Speaker 3 And he forgot what he had said, which is we don't sell any cars on Saturday. It was a good line.
It made me think I was getting a good deal.

Speaker 3 But then when I drove off, he said what was truthful, which is Saturday's our big sales day. So he was smart.
I was manipulated.

Speaker 1 Cass, what's the principle of commitment and consistency?

Speaker 3 So if you commit, let's say, to a friend who wants you to vote, and say, yes, I'm going to vote, then the likelihood that you're going to vote jumps.
And AI can certainly invite a commitment.

Speaker 3 And then you'll act consistently with your commitment. So you might get people to commit to do something like, I'm going to drink no diet soda for the next week.

Speaker 3 I actually did that a few years ago, and I haven't had any diet soda in the years since, because of the initial commitment that I wasn't going to drink it for the next week.

Speaker 3 That's often a very effective behavioral strategy to induce a commitment.

Speaker 1 How about loss aversion? How does that influence decision making?

Speaker 3 If people are told, if you use energy conservation strategies, you'll save $200 in the next 12 months, the likelihood they will use energy conservation increases, but not as much as if people are told, if you don't use energy conservation strategies, you'll lose $200 in the next 12 months.

Speaker 3 They're identical sentences in terms of their meaning. One is framed as a loss.
The other is framed as a gain. People really don't like losses.

Speaker 3 People tend to dislike a loss twice as much as they like a corresponding gain. Sometimes it's just semantic.
It's just a redescription of the phenomenon.

Speaker 3 If something's described as a loss, on average, people are going to be concerned and take action to prevent it.
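To put a number on that asymmetry, here is a minimal sketch of a prospect-theory-style value function, assuming a loss-aversion coefficient of 2 to match the "twice as much" figure above; the linear functional form is a simplifying assumption, purely for illustration.

```python
# Illustrative sketch of loss aversion: the same $200 outcome is felt about
# twice as intensely when framed as a loss as when framed as a gain.
# LAMBDA = 2.0 matches the "twice as much" figure mentioned above; the
# linear value function is a simplifying assumption.

LAMBDA = 2.0  # loss-aversion coefficient: losses loom ~2x larger than gains

def felt_value(outcome_dollars: float) -> float:
    """Psychological value of an outcome relative to the status quo."""
    if outcome_dollars >= 0:
        return outcome_dollars       # gains are felt at face value
    return LAMBDA * outcome_dollars  # losses are felt about twice as hard

print(felt_value(+200))  # "you'll save $200"  -> +200
print(felt_value(-200))  # "you'll lose $200"  -> -400
```

Same dollars, same facts; the loss frame simply carries about twice the psychological weight, which is why the "you'll lose $200" wording moves more people.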

Speaker 1 And finally, what's the decoy effect?

Speaker 3 Let's suppose you have two choices at a restaurant: an expensive mid-sized piece of cake and an inexpensive small piece of cake.

Speaker 3 Let's suppose that people buy on average the less expensive small piece of cake. And let's suppose the restaurant thinks we want to make a little more money.

Speaker 3 We want people buying the mid-size, where we get more profit. So you introduce a decoy, that is, a big piece of cake, a really big piece of cake that no one's going to want: super expensive, and it's going to do terrible things for your waistline. If you introduce the decoy, people flip from the small to the mid-size.

Speaker 3 So the introduction of a decoy can often flip people who would choose A over B. Once they see a C, they'll choose B over A.
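One toy way to see the flip is an "extremeness aversion" rule: diners shy away from whichever options look extreme relative to the menu as a whole. The scoring rule, prices, and sizes below are illustrative assumptions, not a validated choice model.

```python
# Toy sketch of the decoy effect described above: adding a huge, overpriced
# cake that nobody orders flips the choice from the small cake to the
# mid-size one. Diners here prefer the least "extreme" option on the menu.

def extremity(option, menu):
    """How far an option sits from the middle of the menu, per attribute."""
    score = 0.0
    for attr in ("price", "size"):
        values = [o[attr] for o in menu]
        lo, hi = min(values), max(values)
        if hi == lo:
            continue  # attribute doesn't differentiate the options
        midpoint = (lo + hi) / 2
        score += abs(option[attr] - midpoint) / (hi - lo)  # 0 = central, 1 = extreme
    return score

def choose(menu):
    # Prefer the least extreme option; break ties toward the cheaper one.
    return min(menu, key=lambda o: (extremity(o, menu), o["price"]))

small = {"name": "small", "price": 4, "size": 1}
mid   = {"name": "mid",   "price": 8, "size": 2}
decoy = {"name": "huge",  "price": 15, "size": 3}  # the decoy nobody wants

print(choose([small, mid])["name"])         # -> small
print(choose([small, mid, decoy])["name"])  # -> mid
```

With only two options, both look equally extreme and the cheaper one wins; once the huge cake appears, the mid-size becomes the comfortable middle, so B beats A exactly as described.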

Speaker 1 Cass, what happens when AI can use all these strategies against us?

Speaker 3 Well, if agile companies are using AI cleverly, we can be manipulated to lose money and time.

Speaker 1 As a legal scholar, what consumer protections do you believe that we should have against manipulation in this new AI era?

Speaker 3 The rallying cry is that we need a right not to be manipulated. We have a right not to be deceived.
We have a right not to be defrauded. Right now, we need a right not to be manipulated.

Speaker 3 Now, specifying that right is a work in progress.

Speaker 3 Probably it's best to work from egregious cases of manipulation, and the most extreme ones are when people are subject to hidden terms or to cognitive tricks.

Speaker 3 So they are parting with something that matters to them, their money, their time, without really consenting. And that means we need to specify what that looks like.

Speaker 3 Calling for a right not to be manipulated isn't standard, but we're kind of getting there. And the U.S.

Speaker 3 government over recent years has verged on that, saying, for example, that if there's a fee that you haven't gotten clarity on, sometimes they're described as junk fees, you don't have to pay it.

Speaker 3 It has to be something that you have clarity you're paying.

Speaker 1 You mentioned protection against cognitive tricks. Can you give some examples?

Speaker 3 One idea would be to say that you are going to automatically pay monthly fees if you agree to pay a fee now.

Speaker 3 If the monthly fee is automatic and not really in your face, you might click on it, even though the consequence for you is one you would not welcome and would not agree to if you had clarity on it.

Speaker 3 So what is being done here is using limited attention against people to default them into an economic arrangement that they would not have accepted if they had clarity about it.

Speaker 3 Here's another one, where you agree to an economic relationship where entry into the relationship is just one click, and exit from it means you have to go to a faraway place, stand in a long line, talk to seven people, then make a phone call, then do 20 push-ups, and then recite the last names of your great, great, great, great, great grandparents.

Speaker 3 That's a mild exaggeration of easy in, extremely hard extrication.

Speaker 3 And that works on the fact that people have an aversion to navigating, let's call it sludge, which is administrative burdens, and on the fact that people discount the future, so the future horror of extrication isn't something that people attend to a whole lot. And our government at times has said things should be as easy to extricate yourself from as they are to enter into. Now, there are things for which that wouldn't be sensible, but for economic transactions with, let's say, magazines or banks, that's a pretty good start.

Speaker 1 And Cass, I should say, thank you for your work in government to reduce sludge.

Speaker 3 Thank you for that.

Speaker 1 Before I ask for the three takeaways on manipulation that you would like to leave the audience with today, is there anything else you'd like to mention that you have not already talked about?

Speaker 3 I'd emphasize that one form of manipulation is sometimes described as a product trap, where people enter into a relationship, let's say, with a company, because they think other people are doing it too.

Speaker 3 And then they are fearful of missing out and they'll stay in, not because they like it, but because they think they'll be excluded from something.

Speaker 3 For young people and not so young people, social media platforms are often a product trap, where they're on TikTok or Instagram a lot because they think other people are too.

Speaker 3 And that's a form of manipulation by Instagram and TikTok.

Speaker 3 And there's a lot of work being done now to try to find ways to spring the trap by enabling people to work collectively to say, we're all going to be off.

Speaker 3 At least we're all going to be off between 9 p.m. and 8 a.m.

Speaker 3 And that is a new frontier of manipulation.

Speaker 1 Cass, what are the three takeaways you'd like to leave the audience with today?

Speaker 3 Takeaway number one is that manipulation is bad because it is an insult to people's autonomy or freedom. Like deception and lying, it prevents people from making reflective choices.

Speaker 3 The second takeaway is that manipulation consists of, and should be defined as, a form of trickery that compromises and fails to respect people's capacity for deliberative choice.

Speaker 3 Now, if we understand it that way, then we can spot manipulation in the family, at work, and online. If you think that that form of trickery is always bad, you probably lack a sense of humor.

Speaker 3 It's sometimes a very fun thing, but in egregious cases where it's harmful and it takes things from people without their consent, then it's bad.

Speaker 3 The last of the three takeaways is that it's time today to start to create a right not to be manipulated.

Speaker 1 Cass, thank you. It is always a pleasure to be with you.
I very much enjoyed your book, Manipulation.

Speaker 3 Thank you. Great pleasure for me.

Speaker 2 If you're enjoying the podcast, and I really hope you are, please review us on Apple Podcasts or Spotify or wherever you get your podcasts. It really helps get the word out.

Speaker 2 If you're interested, you can also sign up for the Three Takeaways newsletter at 3takeaways.com, where you can also listen to previous episodes.

Speaker 2 You can also follow us on LinkedIn, X, Instagram, and Facebook. I'm Lynn Thoman, and this is Three Takeaways.
Thanks for listening.