1135: Sandra Matz | How Algorithms Read and Reveal the Real You

April 01, 2025 1h 27m Episode 1135

Companies harvest 6GB of your data hourly. Psychologist Sandra Matz explains how they predict everything from depression to politics—and how to fight back.

Full show notes and resources can be found here: jordanharbinger.com/1135

What We Discuss with Sandra Matz:

  • Companies collect ~6GB of data per hour on individuals through social media, credit cards, smartphones, and location tracking, enabling predictions about personality, politics, and mental health.
  • Facebook identified depressed teenagers in 2015 and sold this information to advertisers rather than providing support, prioritizing profit over well-being.
  • Algorithms need just 300 likes to know someone better than their spouse, while facial recognition can determine sexual orientation with 81% accuracy from facial features alone.
  • "Anonymized" data isn't truly anonymous — three credit card transactions can uniquely identify a person, revealing unintentional information beyond our curated online personas.
  • Data co-ops offer a practical solution for regaining control. MS patients in Europe and Uber drivers in the US have formed co-ops to collectively manage their data, allowing them to benefit from data aggregation while maintaining ownership and directing outcomes toward their shared interests rather than corporate profit.
  • And much more...

And if you're still game to support us, please leave a review here — even one sentence helps! Consider including your Twitter handle so we can thank you personally!

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.


Full Transcript

Spring is here, and so are the deals at DeeDee's Discounts.

From trendy outfits to home makeovers, DeeDee's has all the deals you need.

I'm talking everything from sandals and sundresses to spring throw pillows and scented candles.

You love a good deal? Get in your bag and get to DeeDee's Discounts.

This episode is sponsored in part by Vital Proteins.

I want to tell you about Vital Proteins Collagen Peptides.

That's a mouthful. It's a daily supplement that supports your hair, skin, nails, bones, and joints all in one simple step each day.

Collagen makes up about a third of the protein in our bodies, but as we hit our 30s, it unfortunately starts to decline. That's when you might notice things aren't working quite like they used to.
Your joints, your skin, maybe even your hair. Vital Proteins steps in to help keep those areas supported so you can stay active and keep doing what you love.
Vital Proteins is the number one collagen peptide brand in the US, so they know what they're doing. It's super easy to take.
Just add a serving to your coffee, smoothie, even water. It doesn't even taste like anything, so it blends right in.
The key is consistency. Making it a daily habit is how you see the benefits.
Get 20% off by going to vitalproteins.com and enter promo code Jordan at checkout. These statements have not been evaluated by the Food and Drug Administration.
This product is not intended to diagnose, treat, cure, or prevent any disease.

If your 2020 or newer car or truck, bought or leased from a California dealer, has been in for repairs under warranty, listen up.
Don't let the dealership give you the runaround. With Lemon Law help, you won't be f***ed with.
Lemon Law help specializes in Lemon Law and has recovered millions for car owners just like you. With a reputation for big wins, they fight for your rights.
Best of all, you'll pay zero out of pocket. Call 877-294-1717 today for a free evaluation or visit LemonLawHelp.com.
Paid spokesperson. Every case is different.
Results vary. Courtesy of Roger Kiernos, Knight Law Group, LLP.
Coming up next on the Jordan Harbinger Show. Facebook in 2015 was actually accused of predicting whether teenagers on their platform were struggling with anxiety, depression, low self-esteem, and then they were selling them out to advertisers.
So this is like someone at their most vulnerable state. Not only are they suffering from anxiety, they're also teenagers.
They're still figuring out their identity. So the moment that you tap into this vulnerability, the damage that you can do, I mean, it's very obvious.
Welcome to the show. I'm Jordan Harbinger.
On the Jordan Harbinger Show, we decode the stories, secrets, and skills of the world's most fascinating people and turn their wisdom into practical advice that you can use to impact your own life and those around you. Our mission is to help you become a better informed, more critical thinker through long-form conversations with a variety of amazing folks, from spies to CEOs, athletes, authors, thinkers, and performers, even the occasional cold case homicide investigator, hostage negotiator, gold smuggler, or Russian spy. And if you're new to the show or you want to tell your friends about the show, I suggest our episode starter packs. These are collections of our favorite episodes on topics like persuasion and negotiation, psychology and geopolitics, disinformation, China, North Korea, crime and cults, and more. That'll help new listeners get a taste of everything we do here on the show.
Just visit jordanharbinger.com slash start or search for us in your Spotify app to get started. Today, my friend Sandra Matz shows us how companies steal our data, use our data to target us, not only to sell us things, but how they can essentially read our moods, almost read our minds, how AI and computers get to know us on such an intimate level, what they can do to predict things like depression, which unfortunately they monetize instead of helping us solve, and how we leave millions of digital footprints each day.
This is a bit of a deep episode on the data we leave and how it is used. A lot of fascinating details in here.
Algorithms can tell if someone is gay 81% of the time just using their face. That's pretty interesting.
They might have better gaydar than humans. How computers actually test their assumptions about us and what having a low phone battery means about you.
And yes, we already know you're one of those. Now, here we go with Sandra Matz.
I think everybody knows that companies collect a lot of data on us, but I guess I didn't personally realize how granular they could get with psychological targeting. Can you explain what that is, first of all? Yeah, so like psychological targeting is in a way taking all of the digital traces that you leave.
So that ranges from what you post on social media to you swiping your credit card to your smartphone capturing all of these very intimate things, like where you go based on GPS, you making and taking calls, and then translating those footprints into meaningful psychological characteristics. So anywhere from your personality, your values, your political ideology, sexual orientation.
So really painting a picture of the person behind the data. You know, it just occurred to me, I wonder if they collect data in the same way when I pay with my phone.
And the answer is now I'm just telling Apple and the credit card company everything that I buy. And I know that they know this because it'll pop up like, your Amex has been charged $47 for eating at this restaurant.
So of course they're logging that, right? They eventually use that against me. Exactly, it must go through a bank.
Yeah, it goes through a bank, but it's also, I'm telling you, Apple, which is what business do they have about what I'm buying? And the answer is they're in the business of data just like everybody else, probably. Yeah.
And it's funny because Apple is one of these cases where they shut down the third party tracking, but they still collect all of the data. So at the end of the day, they benefit because now they're holding the monopoly on the data that they capture.
Yeah. I noticed whenever you install something new, it's like this app wants to track you.
It's ask app not to track. It doesn't say we won't track you.
It says we're not going to let them track you. Don't worry, though.
We're still tracking you, obviously. We're still tracking everything that you're doing.
It's like location data. Somebody told me the other day, oh, I turned off location data so they don't know where I am at.
And I'm like, they know they're just not sharing it with you and your friends and your mom. It's not like the FBI can't find you.
Come on. Yeah.
And also it depends on what you turn off, right? You might be able to turn off the GPS. You still need to be connected to a cell tower.
Otherwise your phone doesn't work. The fact that you turn off GPS doesn't mean that you're not trackable.
So I was doing training for some journalists a couple of months ago in another country. And one of the things that they did was they put all of our phones in like a safe box.
And I said, oh, I'll just turn it on airplane mode. And they were like, oh, sweet summer child.
That's not going to do anything to stop intelligence services or whatever from finding or turning on the microphone to hear what we're doing. And I thought, oh, that makes sense.
Especially if they want to get the location data, they can do that. If the phone is in a lead box, then it just sort of vanishes.
And I guess they had us put it in the box in one place and then we moved to another place. So they just couldn't track us that way.
But that was the only method they had. And you had to put your watch in there, everything.
Oftentimes, people take it even a step further, right?

Well, I'm not using social media. Nobody can really track me across the internet.
It's so short-sighted because obviously you use your credit card, your smartphone, and there's CCTV on pretty much every corner. So people will find you.
It's hard to escape. I do want to address those counter-arguments in a little bit.
But first, I want to scare the crap out of everybody a little bit more. The way that computers and AI get to know us at an intimate level is really hard to describe to people who have lived with some level of privacy their whole life.
Growing up in the 80s in a medium-sized town, some people were all up in my business, but they didn't really know that much about me. I could hide stuff from parents pretty easily, but you had the ability to do that.
You, on the other hand, grew up in a really small town and everybody knew everything about you. This analogy was actually really good to illustrate the idea here.
It is a tiny, tiny town. So 500 people.
My parents keep reminding me that it's grown to a thousand now. Oh, big time.
It's 100% bigger. It doesn't make any difference because it still meant that everybody knew everything about me, right? Who I was dating, what I was doing on the weekend, which music I was into.
And what village neighbors do best is then make inferences about who you are. They saw me running to the bus every morning.
They probably figured out that I wasn't the most organized. And then it doesn't stop there.
Village neighbors are not just there to poke around in your life and your psychology. They then try to meddle with your life.
They are not really trying to figure out who you're dating. They want to influence who you're dating.
And sometimes that's really helpful because they know you and you get this feeling of there's someone who truly understands me. And when they have my best interest at heart, they're going to give the best advice that I can possibly get.
But also oftentimes it felt a lot more manipulative behind my back without me necessarily having control or appreciating the support that I was getting in any way. Oh, man, that would be really irritating because, you know, it's like your parents theoretically have your best interest in mind.
But the woman who lives five doors down, who babysat you twice when you were little, they don't really know you. They think they know you because they've made all these assumptions about you, but they could be wildly off base.
And also, oh, I think she should date that boy. Why? Because they both have dark hair.
What the hell does that have to do with anything? You're not exactly using your genius matchmaking skills. It's like the boy she's dating now, I don't like him because one day he dropped something on my lawn by accident.
Okay. 20 years ago, he dropped something on your lawn by accident.
Dog peed in your yard. So he's a bad kid now.
It's so idiosyncratic. It's actually what I find fascinating about the shift to the online world is we're doing it a lot more systematically.
So your neighbors, they had their own biases. They had their own perspective on the world and they were filtering all of the data that came in through their own lens and their own incentives.
Algorithms don't have the same incentives. They essentially do whatever you tell them to do.
They optimize for the goal that you set for them. So the way that I've been thinking about it is essentially that we live in this digital village, where algorithms now replace our neighbors with essentially a digital neighbor who takes all of the data traces and makes the same predictions.
And for me, the important part, and this is coming back to something you said, is it used to be the case that what happened in the village stayed in the village. So maybe it travels to the next town.
But if I wanted to escape, I just moved to Berlin or I moved to a bigger place, New York, and that's it. But it's no longer true for the digital space.
Once your data is out there, everybody has access to it. It's funny because people will ask me something like, wow, your life is really not that private because you have a podcast and you have this online brand.
But the difference is I have been for the last 18 years-ish thinking about what goes online because I realize, oh, if I post this, it's part of my brand. If I post that, it's part of my brand.
Not that everything I do is branding, but it's just I'm consciously aware that like I can't start talking about, I don't know, buying a gun without people being like, okay, so now you're this political, this or maybe you're this guy or maybe you're that guy or maybe having a midlife crisis or whatever, right? You're doing something there. Whereas normal people, they post on Facebook and they go, I'm just telling my friends that I got a new rifle for hunting.
But what you're doing is telling the entire world for all of eternity that you are now a certain type of person or with 50% probability or maybe this person. You forget you posted that last Monday, but the algorithm never forgets that you own a Browning with a scope that you can use to shoot 100 yards.
They will never forget that for the rest of your life. The thing is that even if you don't explicitly put this out there, right? So as psychologists, we think of data in two categories.
One is these explicit identity claims, which is like posting on social media. That's you telling the world, here's the person who I am.
Here's how I want to see myself. And here's how I want other people to see me.
Now, that's a very intentional signal that you're sending. But then there's also all of the other traces.
You don't have to post about you buying a gun. I could, just by tracking your GPS records, figure out that you probably went to a shop where most people buy guns.
And if you do this repeatedly, or you go to a shooting range, then my assumption is with like a high percentage accuracy that you probably own a gun. And for me, that's in a way the very intrusive part that we oftentimes forget is that it's not just this explicit signaling.
It's like all of this behavioral residue that we create without really intentionally thinking about it. That's interesting.
I hadn't even thought about the fact that you don't have to post it. It just comes out of the data exhaust or whatever they call it.
Yeah, I think it's data exhaust. Where it's like location data.
Oh, we don't need that. And it's actually now we can tell where this guy goes to lunch every day.
Oh, that's useful. We can advertise similar restaurants in the area.
Like that all becomes useful as soon as they figure out what to do with the mess of data that they have, which is what AI is getting better at doing and these kinds of things. One thing I thought was super interesting in the book was that these clues or data can predict depression.
And lots of people are depressed. I don't think that's a big surprise.
There's like a million suicides a year. Is that global? I assume that's global because that's enormous.
I think it's global. Yeah.
It's an insane amount. That's a huge number of people, no matter which way you slice it.
280 million people, give or take, are suffering from depression. I don't know how they figured that out or if it's an underestimate.
But so if we can predict depression, I have to assume that these companies are doing everything in their power to help users who are at risk as soon as they get word. Of course.
What else would they be doing? Not selling me a leather jacket and telling me it's going to make me feel better. It's not a hypothetical.
So Facebook in 2015 was actually accused of predicting whether teenagers on their platform were struggling with anxiety, depression, low self-esteem, and then they were selling them out to advertisers. So this was a slide that was actually circulated.
And you can imagine, right? So this is like someone at their most vulnerable state. Not only are they suffering from anxiety, they're also teenagers.
They're still figuring out the identity. So the moment that you tap into this vulnerability, the damage that you can do, I mean, it's very obvious.
Then there's also potentially beneficial use cases of that kind of tracking. Even if you're a complete psychopath, a profiteer, I get it. Advertising to people is profitable. But it seems like if you find out that teenagers are depressed, one of the best ways to get every parent on your side as a tech company would be like, hey, imagine this headline.
Facebook saves 10,000 teenage lives per year with depression tracking, notifying teachers or caring adults, parents, doctors, healthcare people, authorities, whatever it is, every year by tracking them online. Parents would be like, here's your new phone so that you can use Instagram, because this is the only insight we really have into your life and they're keeping you safe, as opposed to the current narrative, which is the complete opposite.
A hundred percent. And I was recently talking to a mother whose son attempted to commit suicide, and it's traumatizing. And for me, we actually have this opportunity there to catch it early. Because as you said, typically how it works is you enter a full-on depression, which, first of all, even for an adult, once you're in it, is really difficult to get out of, because that's the point when you're inward-turning. You're not necessarily seeking out help, and it's really hard to work your way out.
What you would ideally do, and this is where the tracking actually comes in handy, is you catch it early. And what you can do with your phone, for example, is just looking at your smartphone sensing data.
Maybe you're not leaving the house as much anymore as you used to. Maybe there's much less physical activity.
You're not taking as many calls anymore. So there's this deviation from your typical baseline.
And again, it might be nothing. Maybe you're just on vacation and you're having a great time.
But it's like I can send you this early warning signal that says maybe two people you nominate. Maybe if I know that I have a history, which is oftentimes the case, of suffering from depression, and I see this coming in the future, I could say I'm going to nominate my spouse.
And when this happens, I want you to notify me and I want you to notify my spouse. It's not a diagnosis, right? It doesn't replace a clinician coming in and going through all the questions, but it's at least one way of saying, just look into it.
Maybe it's nothing, but to be on the safe side, why don't you try and get some support? And I think that's a total game changer. It seems like this would be so easy to implement because obviously they can trigger advertising if you're depressed.
They could easily trigger an email, a phone call, a notification in your spouse's use of that app or whatever. It could literally just call the police or there could be some sort of central way to handle this.
It surprises me that they haven't done it. Now, maybe that's naive because they're saying, oh, you'd say that's not profitable.
Why would they do that? It seems like it would long term be such a massive PR win and create almost an incentive for parents to get these products in the hands of their children that it would be ROI positive. I don't know.
What do I know? I'm not sure if Facebook were to offer this tomorrow. I still am not sure if I would want that.
So I would rather have a dedicated entity that's not Facebook. Facebook has like all of these market incentives it's committed to.
And you don't know what the leadership looks like tomorrow. There's this saying in the book, data is permanent and leadership isn't.
So even if you had a CEO who kind of today thinks we're going to use it to help people, who knows the data is going to be out there and they could use it in very different ways tomorrow. So I'd much rather have a dedicated entity that doesn't even have to collect my data.
There's now ways in which you can track locally on the phone and you just send your intelligence that says if these patterns show up, you alert locally on the phone and I never have to even collect the data initially. Now, that's not Facebook's business model.
Facebook's business model is grab as much data as you can and then you see how you can commercialize. So even if Facebook offered that, I personally would not trust them.
I don't think I trust them, but this is because of what they do with the data that they're already getting. If they years ago had said, hey, we can use this to target advertising.
Hey, and by the way, this looks like you might be starting down the path of having an eating disorder. We're going to notify somebody who you told us to notify or your school authorities.
Then maybe we would have a totally different opinion of Facebook instead of remember early. It was like, wow, I can keep in touch with all my friends from school.
This is incredible. I know what my aunt is up to.
I only talk to her like once a year. Now I see her photos every week.
This is such a glorious product. I don't see what could possibly go wrong with this.
And then it was like two years, three years later, it was how the hell do they know so much about me and why are they trying to influence what I purchase or the elections that we have? It was such a rapid downfall. Also didn't turn out to be connecting the world.
So there's that. Yeah, exactly.
How many digital footprints do we leave each day? Can you take us through a typical morning? Because a lot of people, again, they're going to say, I don't post updates on Facebook and I don't let the phone track my location or I don't even take my mobile phone with me when I go to the gym. Whatever it is, they're going to have some sort of reason that this doesn't apply to them.
And the amount of data we create is insane. It's just mind blowing.
So to start with, like, the average person generates about six gigabytes of data every hour. That's just already the sheer volume.
And then when you break it down, it just really taps into all of these different parts of your daily routines in life. So if you wake up in the morning, probably what most people do is they grab their phone, which means that now just you unlocking the screen means that someone knows you probably woke up.
The phone was stationary. Maybe it was dark, so no ambient light.
You didn't open it. The moment that you unlock it, someone knows that you're up.
Then you're checking websites, you're sending messages, so you kind of know exactly who's connected to whom, what you're interested in. My morning routine is essentially just going to the deli, getting a coffee, which means that I swipe my credit card.
Again, someone knows that I've been out buying something in a specific location. And if you have a Fitbit or some kind of tracking device that counts your physical activity, also sees when you deviate from your routine.

So if you have a typical routine and sometimes you don't do the same thing, people might have a sense that something is up. Even if you don't have a Fitbit, take your phone with you on the walk or on the way to work.
There's cameras and with facial recognition, someone again knows what you do, where you go and so on. So there's all of these traces.
Your car now has like sensors in it that track anything from your speed. Maybe you're going over the speed limit.
Maybe you're not a great driver. You're going from A to B.
So this idea that it's just social media that is really tracking us and coming back to Facebook, there's just so many data traces. It's really impressive when you think about it that way.
To give people who aren't tech nerds an idea of how much data this is, this is a MacBook Pro storage every week or maybe two. So if you bought a laptop, it'd be full at the end of the week or halfway through the week, depending on how much storage you get in that thing.
And companies are storing this because there's so many people. So are they really storing a terabyte of information on every person, 52 terabytes of data on me personally every year? Because where is all of that data? It's a great question.
Some of it is you don't even need to store everything. But if you think about GPS records, oftentimes what you want is you want to extract the insights and you don't necessarily need to store the longitude latitude.
What you want is, yeah, I kind of get the places that you visit. Maybe I can map it against Google and see what happened in these places when you were there.
So a lot of the companies that extract insights that can then be used to tap into your psychology don't require the storage of the raw data. But then there's also other companies who have these massive servers.
So still, I think even with that amount of data, like storage is so cheap that it pays off at the end of the day. And you might be deleting it at some point, but just the longer you can keep it, the more of these behavioral trajectories you can actually generate and create about people.
Yeah, I suppose you don't really need every shred of data, right? If you have, say, someone's path that they walk every day to go get breakfast and then they go to the gym and then they come home, you can just say, it's basically this times a thousand. They don't have to get every time you cross the street.
And you can even store, well, there was a deviation, right? So even knowing that. So if you know, here's the typical, and now there's something that seems off, now you can trigger more data collection.
So there's also ways in which you can say, we see that it's a repeat pattern, and we're going to just break the data collection at the point that we see that it deviates. So yeah, I would say this times seven, but on the eighth day, she went across the street real quick and came back.
But then it was this again, 10 times in a row. I mean, that's not that much data, right? It's not every step, every longitude and latitude repeated over and over again.
And you're right, storage is super cheap. If you've got some sort of data farm or whatever in Iceland that's buried underneath the snow for cooling purposes, that just gets cheaper by the day.
So it doesn't really matter that much. It's just a crazy amount of data, given that it's one terabyte per week times hundreds of millions of people using these things.
The amount is just bananas. I think by now there's this estimate that there's more data points than there are stars in the universe.
To me, it's just insane. It's quite impressive.
How did they count? That's already intriguing. Right.
Yeah, I suppose it's all just math. It's way above my pay grade.
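For what it's worth, the arithmetic behind the numbers in this exchange is quick to check. The six-gigabytes-per-hour figure is the one quoted in the episode; the weekly and yearly totals just follow from it:

```python
# Back-of-envelope check of the data volumes quoted in the conversation.
GB_PER_HOUR = 6                      # figure quoted in the episode
gb_per_week = GB_PER_HOUR * 24 * 7   # 1,008 GB -- roughly one terabyte a week
tb_per_year = gb_per_week * 52 / 1000
print(f"{gb_per_week} GB per week, ~{tb_per_year:.0f} TB per year per person")
```

Which is where the "a MacBook Pro's storage every week or two" and "52 terabytes a year" comparisons come from.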
Facebook knows us better than our closest friends and family. You wrote this in the book, and that is not terribly surprising, a little scary because I'd like to think my family knows me pretty well.
But Facebook does know me better. They'll show me some clothes where I'm like, oh, I have to buy that, even though I know objectively that I'm going to be disappointed as soon as it arrives and I find out that it was made by small children in Bangladesh or whatever.
Yeah. That sounds like me.
My husband keeps making fun of me for that. Total impulse buyer.
Yeah. But yeah, if you think about it, it's Facebook, but also Google.
You type questions into Google that you don't feel comfortable asking your closest friends or sometimes even spouse. It's not so surprising to me that all of the digital traces that we create can paint this picture of who we are in a more accurate way than the people around us.
I was doing a show yesterday with my producer. We were doing some work and I was like, isn't there a different word for pedophiles who are attracted to people in different stages of puberty? And he's like, one, how do you know that? And I was like, oh, it's a bit from a comedian.
Let me just look it up. And he goes, please tell me you didn't just Google different types of pedophiles.
And I was like, oh, yeah, shoot. I did.
Yeah, I still remember when I was doing my PhD with this one guy who was doing research on porn websites. And I remember his seminar talk where he went to open a website, and what pulled up was all porn websites.
So, yeah, you got to be careful on what you type into that search bar. And now generative AI, right? People ask these large language models the most obscure, absurd questions that are super intimate.
There's stuff that I've asked ChatGPT where I feel the need to tell it. A friend of mine has asked the following question.
Asking for a friend of a friend. Exactly.
Because in 20 years when it's like, here's what you searched this day in 2025. I'm like, I do not want it to pop up.
I just don't want that to show up anywhere. And if it does, I want it to be like, you asked on behalf of a friend.
And I'm like, see, it wasn't me. Sure.
Sure, pal. ChatGPT, it's amazing because people don't even ask questions.
It's just statements. I think there's like some research.
People just say random stuff to ChatGPT because they want to get it out of their system, right? They just need to tell it to someone, and they don't want to tell it to the people who they think might be judging them after. We have an AI chatbot on our website where people can use it to search for things that are inside episodes.
And we get an occasional report of what people have searched for just so we can make it more useful. And it's not my team that's behind it.
It's an AI company. And they'll say, hey, FYI, this was a pretty funny month for searches.
Here's our top five favorites. And some of the stuff that people are searching is they're trying to get the AI version of me to tell them how to commit a crime because they think maybe Jordan knows how to get away with this.
It's interesting because obviously I'm not liable for that because it's not really me telling them how to do something. But it's a little scary that somebody can get inside my brain something I would never tell them.
And my AI version is, sure, I'll tell you exactly how I would hide a dead body. And it's like, why are you letting my AI brain tell it this? To me, it's fascinating.
Because if you don't feel comfortable asking another human being, that's one person who has to keep a secret. But you're asking a server.
You're asking essentially open AI. Now your question sits there for all eternity on a server.
It might be passed around. And I think that's something that people don't realize.
Somehow this intimacy of the screen feels like it's just not a person on the other side. If anything, it's probably more intimate and more dangerous to ask your question there.
Yeah, it's not a person on the other side. It's a theoretically infinite number of people who can look at this at any time, with no context and zero ability to defend yourself in the moment because you've been dead for 20 years.
Yeah, if you're lucky. How do companies like Meta and other social media companies, how do they get to know our preferences? Is it just me telling them that I like something when somebody posts something? Because I don't always like the things that I like.
Does that make sense? Makes a lot of sense. You're just probably going to be nice to people or cynical.
Great vacation. I couldn't care less, but you're going to feel bad if I don't click like on this and we're friends.
I'm supporting you. But no, you look like an ass.
Now you told them on the podcast. This is 87 selfies of your vacation.
These are completely uninteresting. You're clearly a narcissist.
Here's my like. No, but you're absolutely right.
And it's coming back to this distinction between identity claims, right? So you liking something, you posting something about your vacation, you're following a certain page that you want other people to see that you follow. And those are all these explicit identity claims.
But then there's all of this other stuff that they capture, all the way from how much time did you scroll through a specific ad that they're showing you, a specific piece of content, to here's like some of the more subtle nuances in the way that you use language. So coming back to this topic of depression, for example, it's not just you talking about symptoms and feeling down and maybe having these physical symptoms.
Even like the use of first-person pronouns is a sign of depression. And that's not something that you put out there intentionally, right? It's like, it's hidden in some of the cues that you generate either by you posting or by you just browsing the website.
Now, Facebook goes a step further because they also, first of all, buy third-party data. So they also buy extra data to know you even better.
And they even have data on people who are not using Facebook to contrast and see how they could potentially bring them in. So Facebook really goes far beyond you liking or not liking the vacations of your friends.
What was that about first-person pronouns indicating depression? What does that mean? Because I'm not sure I understood what you just said. Yeah, it's actually one of my favorite examples in that space.
It's essentially the use of first-person pronouns, which is I, me, myself. What we know is that it's empirically related to depression, to emotional distress.
And I remember when I first heard about this, I was like, I don't understand why this makes sense. I would have assumed it's narcissism, as you mentioned, right? If you talk about yourself and your vacation and what you've been up to, it's probably self-focused and maybe you're a narcissist.
But what we know is that it's a signal that you're currently very focused on why am I feeling so bad? How am I going to get better? Am I ever going to get better? And because we have this inner monologue with ourselves and we can't constantly control it, that just creeps into the language. So people who are suffering from any type of emotional distress, they're just much more focused on the self and that leaks into the language.
And again, in your posts about vacation and everything that's going on, you don't explicitly intentionally use first person pronouns more when you're not feeling great. It's just something that leaks to the other side and leaks into your language.
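As a toy illustration of the kind of cue being described here, a first-person pronoun rate can be computed in a few lines of Python. The pronoun list and the example posts are invented for the sketch; this shows the signal, not a diagnostic tool.

```python
# Toy illustration of the linguistic cue discussed above: the rate of
# first-person singular pronouns ("I, me, myself") in a piece of text.
# The pronoun list and example posts are made up for this sketch.
import re

FIRST_PERSON = {"i", "me", "my", "myself", "mine"}

def first_person_rate(text: str) -> float:
    """Fraction of words that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON for w in words) / len(words)

# Hypothetical posts: one outward-focused, one self-focused.
outward = "What a great day at the beach with friends and family."
inward = "I keep asking myself why I feel like this and whether I will ever get better."
```

On these hypothetical posts, the self-focused one scores far higher, which is the pattern the research associates with emotional distress.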
That's interesting. So in theory, that happens when we're talking to people in real life also, or is this mostly online communication? Yeah, no, it's also talking in real life.
It's a pretty substantial effect. I think 40 times more than when you're not feeling depressed or emotionally distressed.
40 times more. It's 40%.
It's not 40 times. That's interesting.
That's unmistakable. Yeah.
It's like pretty substantial. Wow.
So in theory, even a smart TV or my phone, which is listening, even if I don't want it to be, or my Amazon Alexa thing, that could tell me if I'm depressed just by hearing what I'm talking about in the house or overhearing a phone conversation. Yeah, it's just like this passive listening into not just what you're saying, but how you're saying it.
You mentioned in the book that within 300 likes of me liking things on photos or whatever, the platform knows me better than my spouse. 65 likes, it knows me better than my friends.
That doesn't seem like that much. It's so little, right? So I remember when my colleagues published a study, I think the average and this is 10 years ago now, the average number of likes was 230.
So back in the day, the computer was already better than everybody except for the spouse. And you can very easily project into the here and now where you have a lot more data, you have a lot more sophisticated models.
So by now, the computer is probably better than the spouse. And again, it sounds so intimate.
But then if you think about the fact that a computer has access to the entirety of your digital life and some of the aspects that you're potentially trying to hide from other people you don't necessarily intend to signal, it's not as surprising. Yeah, that's not super surprising.
I guess if you think about it this way, and I'm sure this isn't a one-to-one kind of comparison, but if I had to remember 300 things that my spouse likes at the same time, I don't know if I could do that. That's a lot of different things.
How do they measure that? Is it like the newlywed game where you get asked questions and the computer just gets it right more than the spouse? So in this case, it's actually you complete a questionnaire. So you tell us, here's how I think of myself when it comes to personality.
And it's all kind of asking you about behavior. So how often do you enjoy socializing? To what extent are you making a mess of your environment? And then the spouse completes the same questionnaire.
So on your behalf. So I think Jordan would answer "strongly agree" to the question "I make a mess of things."
Not sure.
Hypothetically. Wow.
Yeah, that's interesting. And the computer gets it right more than the spouse.
Again, though, I think trying to remember 300 different things about your spouse at one time, it's a lot. It's a superhuman feat of memory alone, let alone knowing someone that well.
And you also have a certain bias. There's like certain ways in which you want to see your spouse.
So once you have a certain way of seeing them, the way that you integrate new information is just almost aligned with the perception that you have anyway. So it's much harder for humans to update just because it's in a way functional to stick with the impressions that we have.
Now it's time for us to hopefully monetize you. We'll be right back.
This episode is sponsored in part by the Moonshot podcast. If you've ever wondered how big world changing ideas go from crazy concept to real life innovation, you've got to check out the Moonshot podcast.
This 10 episode limited series takes you inside Alphabet's Moonshot factory, AKA X, where they dream up and build things like self-driving cars, AI that codes itself, wildfire prediction tech, even laser powered internet. I'm not sure how that works, but I want to find out.
Get an inside look at the people, the challenges, and the wild process of turning impossible ideas into reality. You're basically sitting down with the minds behind these projects to answer big questions like how we can make clean energy work and if we can get tech to actually help save this planet.
What I like about this show is it's not just about success, it's also about failure, risk, and relentless optimism. So if you're into bold ideas, breakthrough tech, and the untold stories behind innovation, the Moonshot podcast is your kind of show.
Premiered March 10th with new episodes dropping every week. Find it wherever you get your podcasts.
This episode is sponsored in part by Airbnb. As Anthony Bourdain once said, travel isn't always pretty, it's not always comfortable, but that's okay.
The journey changes you, it should change you. Seeing the world opens your mind, introduces you to new perspectives and gives you experiences that no classroom or office ever could.
And with remote work being more common, there's never been a better time to take advantage of that freedom. Immerse yourself in new cultures.
Make lifelong memories. Keep a home base to return to.
Airbnb can make that a reality. Hosting on Airbnb is an easy way to earn extra income without taking on a second job.
And now, with Airbnb's co-host network, it's even simpler. Got a spare room, guest house, or a place that sits empty while you're away?
Instead of letting it just collect dust, let it help fund your next adventure. A co-host, they can take care of everything.
They list your space, they manage reservations, they communicate with guests, they keep it all in tip-top shape, and that unused space could cover your flights to Asia, book you a cool hotel in Spain, even help with everyday expenses, giving you the freedom to explore more. Whether you're saving for your dream getaway or you're just looking for financial flexibility, Airbnb makes it easy to turn your space into extra income.
Find a co-host at airbnb.com slash host. If you're wondering how I managed to book all these amazing thinkers, authors, creators every week, it's because of my network, the circle of people I know, like, and trust.
And I'm teaching you how to build a network for yourself for free, whether for personal or professional reasons, whether you're retired or just entering the job market. I have a course over at 6minutenetworking.com.
The course requires no payment. I don't want your money.
I don't even need to know who you are. You can be totally anonymous.
It's all super easy, down to earth, non-cringy. It's about your relationship building skills.
In just a few minutes a day, you can binge this course, practice a few things from it. It will change the way that you relate to others.
And that's the whole idea. And many of the guests on the show subscribe and contribute to the course.
So come on and join us. You'll be in smart company where you belong.
You can find the course again for free at sixminutenetworking.com. All right.
Now back to Sandra Matz. How do computers test their assumptions about what they know about us? I know I'm anthropomorphizing computers a little bit, but whatever.
How do they test and see if they're right? Because they have to do that somehow, right? That's actually how they learn, right? So machine learning is called that way because they learn by trial and error. So the way that we train a model, for example, to predict your personality from, say, Facebook likes, is we give it a lot of data where people completed a questionnaire, giving us answers of here's how I think about myself in terms of personality.
And then they have access to all of the likes and they just play the trial and error game. So maybe if you like the fan page of Lady Gaga, maybe that makes you more extroverted.
Did I get it right or wrong? Got it. Okay, I'm going to update my belief of what Lady Gaga actually means.
Same for the fan page of CNN. Maybe that makes you more conscientious and organized and reliable.
So essentially you just throw a lot of data at them. In the beginning, they're just randomly guessing.
And over time, they become a lot better because you give them feedback. You tell them, yep, that was a good guess.
No, this was a terrible guess. I see.
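The guess, feedback, and update loop described here can be sketched in a few lines of Python. Everything below, the pages, the likes, and the questionnaire scores, is invented for illustration; real models use millions of data points and far richer features.

```python
# Minimal sketch of the trial-and-error loop described above: start with a
# 50/50 guess about what each page "means," predict a trait from a person's
# likes, compare against their questionnaire answer, and nudge the weights.
# The pages, likes, and extraversion scores are invented for illustration.

PAGES = ["lady_gaga", "cnn", "fantasy_novels"]

# (set of liked pages, self-reported extraversion on a 0..1 scale)
TRAINING_DATA = [
    ({"lady_gaga"}, 0.9),
    ({"lady_gaga", "cnn"}, 0.8),
    ({"fantasy_novels"}, 0.2),
    ({"cnn", "fantasy_novels"}, 0.4),
]

def predict(weights, likes):
    """Guess extraversion as the average weight of the liked pages."""
    return sum(weights[p] for p in likes) / len(likes)

def train(epochs=200, lr=0.1):
    weights = {p: 0.5 for p in PAGES}  # random-guess baseline
    for _ in range(epochs):
        for likes, reported in TRAINING_DATA:
            error = reported - predict(weights, likes)  # the feedback
            for p in likes:  # update the belief about each liked page
                weights[p] += lr * error
    return weights

weights = train()
```

After training, the weight for the hypothetical Lady Gaga page ends up well above the fantasy-novels page, mirroring the "update my belief of what Lady Gaga actually means" step.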
So it's just tons of trial and error. You have a really good analogy in the book about chick sexing.
Don't worry, it's still safe for work, folks. This is not the chick sexing that I tried in vain to accomplish in my 20s.
This is the kind of chick sexing that happens on a farm. Yeah.
Tell us about this, because this is a very good metaphor. I love it as an example, just to explain machine learning.
So there's a profession essentially called chick sexers. That's their name, which is amazing, right? I imagine you going to a conference and they ask for your title and you just say, I'm a chick sexer.
I mean, that's a life goal on your bucket list. But anyway, the point is that in hatcheries, you very quickly want to determine whether a chick is male or female.
Because for all the vegetarians out there, you're onto something. The males, they get shredded pretty much right away because they don't produce eggs.
So they mostly keep the females. And it's really difficult to tell whether a chick is like male or female because they're generally so tiny, right? It's like a tiny baby chick.
So what chick sexers do is essentially learn over time by having someone supervise their actions. They just pick up a chick.
They look at the little vent and say, oh, I think this one is a female. And then the supervisor says, correct.
They put it in the one basket and they move on to the next. So they go through this trial and error game many, many times.
And it's not that it comes with an instruction manual. So it's not that they sit down for a two-week training course to see how to distinguish males and females.
They just start with a 50% baseline where they might get it right or they might get it wrong. And then over time, they develop this intuition and they see these patterns that sometimes they might not even be able to explain.
And I think of like machine learning the same way. Instead of looking at whether a chick is male or female, you might try to predict personality.
You might try to predict gender, sexual orientation, political ideology. And the input is essentially people's digital footprints.
And then over time, they just learn how to distinguish. Now, the interesting part is actually that this is how it used to be.
So we used to train these models specifically by supervising them. Large language models were never trained explicitly to make some of these predictions.
They were just trained to predict the next word in a sentence and use the entire internet to do that. And they can still make similar predictions.
So if I now give ChatGPT access to, let's say, your social media posts or your credit card spending, and I ask, what do you think is the personality profile of the person who generated the data? It's almost as accurate as the supervised models that we trained specifically for that task. So that's a totally different game because now anyone can use it.
You don't even need that data. You don't need the training process.
That's really interesting. And for someone like me with a thousand or two thousand hours of audio content out there.
Yeah. That's just a bonus for these companies.
They'll eventually be, I mean, I'm sure they already are ingesting all of that. There's a company right now that's making an AI clone of me, whatever that means.
And they're using all of the data from the show, but they're only going to use about a thousand hours, but apparently that's more than enough. I'm curious to see it because it will supposedly talk like me, have the same reactions as me.
It's basically as close as you can get to some sort of printout of my brain. And it's funny because it seems like such a waste to do it for me because the other people who have this are like Nobel prize winners.
I think I'm just low hanging fruit because I have so much data out there. Interesting.
And I was going to get there because I think to me that's the next level. So far we're talking about we can make these predictions about your personality.
But personality is helpful if you meet a stranger on the street and you know nothing about them. Knowing whether they're extroverted or introverted helps you understand how they might be thinking and how they might behave.
But the people that you're really close with, like your spouse, for example, you don't think of them necessarily of she's the extroverted, open-minded. You kind of know them a lot more intimately.
And I think that's what we're getting to with these digital doppelgangers. And then you can imagine, once I have a second Jordan, I can ask, well, how do I best persuade you? Would you buy this? Do you like this? Maybe not.
So how do I get you to buy into this vision that I have or buy this product that I want to sell you? So I think it's just becoming more and more intimate. I was talking to this New York Times columnist, and she wrote this article about how she outsourced her decisions for a week to an AI digital-twin version of herself.
And she's like, it was pretty good. It got it right like maybe 90% of the time.
But it just turned me into this basic bitch where it was like always the same. And that I can totally see.
What it's trained to do is predict, yeah, what is the most likely thing that Jordan is going to say next? But what makes you unique is that you don't always say the most likely thing. You still have this unique element where, depending on who's on the other side, you come up with something new.
So I think that's the world that I'm worried about. I don't want to be boring as a digital doppelganger.
I found it really interesting that the algorithms can tell gay men by their faces 81% of the time. I feel like a lot of humans are good at detecting this too.
Isn't it called gaydar? Isn't that the whole idea? So this is research by one of my colleagues. And I remember that he sat me down and he was like, well, the computer can tell with an accuracy of like 80%.
I was like, I can do it too. And then I tried and I was barely better than chance.
So I think we oftentimes think that we know, and maybe I'm just particularly bad, but at least in their research, they show that it's not just me. Most people are actually much worse than they think.
That actually makes perfect sense. I think the reason people think they're good at it is because when the really obvious examples come through, they're like, oh, I got that one.
And it's like guy with the midriff shirt and walking a poodle in a sailor uniform. Okay.
Yeah. That guy's gay.
Good job, Columbo. But with the quote-unquote normal-looking person, when you find out they're gay, you're like, oh, you just didn't know that about Tom, for example. It doesn't strike you that you would have gotten that wrong.
You just weren't thinking about it at all. So yeah, it makes sense that the algorithm would be better, but 81% of the time is really incredible.
Yeah. And the scary part is also that I think we rely a lot on grooming signals, right? So the poodle, or the hairstyle, the shirt that you mentioned, this is all stuff that you can change.
I think the core of the research, and I should say that this is quite controversial, so there are a lot of people questioning it, is the claim that you can tell based on facial features alone, both for sexual orientation but also personality. That would really take the creepiness to the next level, because you can leave your phone at home.
You can't leave your face at home. So for me, this is really one of these, if this is true, and there are potential reasons for it to be true, right? So we know, for example, that hormones matter. Take testosterone.
Testosterone influences what your face looks like, because it makes you more masculine, but it also influences how aggressive you are. And so the fact that there are hormones, for example, that shape both is not completely delusional, I would say.
And for me, that's really extremely creepy. It is, although we're tiptoeing on the line of something that's going to get me canceled, but whatever.
There are plenty of aggressive, manly dudes who are also gay. So it's not just like, oh, well, look at this guy's cheekbones.
That's the whole like humans can do this too. It's OK.
The guy who looks like something out of a fashion show runway, like, yeah, maybe that guy has a higher chance in your mind of being gay. You don't really know.
But like the boxer whose photo is in there and has a big old beard. I don't know.
I probably wouldn't be my first guess. And I probably wouldn't ask because I don't want to get punched in the face.
Actually, beard. I think beard was actually one of the higher-likelihood signals, which I didn't pick up on.
I wouldn't see that coming. Now especially, the trend is, oh, you've got to have a bushy special-forces beard and a trucker hat. And it's like, this is the pinnacle of manliness, along with your tattoo sleeve.
And now it's like, well, according to our algorithm, there's a higher likelihood that you actually like men.
For me, the interesting part of this entire prediction space is that there are certain signals that are somewhat universal, and they're pretty stable over time. One of my favorite examples is that if your phone is always running out of battery, that signals you're less organized.
And that's probably going to be true for the next 20 years. For sure.
So, okay, I love that because when I see somebody whose battery is at 30% or 40%, especially if it's before lunch, I'm just thinking, I don't know if I can work with you. Clearly, your life is a mess.
How did you wake up with a phone that's not charged? What is wrong with you? That's what my husband tells me every single day. I think we do battery sharing more often than not, because he clearly is much more organized than me.
But yeah, so that's a signal that's not going to be different tomorrow. But then there are all these other signals that shift with culture.
Maybe something used to be niche. Game of Thrones used to be like this.
It's fantasy, and maybe at first it was just these very nerdy, open-minded people. Then suddenly everybody was watching it.
So that's an interesting part for prediction.
That's true. When people told me it was a show about dragons set in ancient times, I was like, don't even finish your sentence.
I'm never watching this. And then enough people were like, you have to watch it.
You have to watch it. You have to watch it.
And I started watching it and I was like, oh, this is really good. And then I remember telling other people, I was like, do you watch Game of Thrones? And they were like, I just can't with the dragons and the stuff.
And I was like, no, no, no, I get it. I know exactly what you mean, but I'm telling you that it's really compelling.
So fascinating how the window shifts. The battery thing makes me feel a lot better though, because I was kind of like, am I psycho level of unreasonable because I judge people based on their battery status? At least it's not just me.
Maybe. Yes.
Jeez. One great example of people prioritizing, I guess you'd say, online clout over real life is the amount of people that die taking selfie photos.
I can't remember the exact numbers, but it's insane. It's like more than from shark attacks and other stuff.
So again, it comes back to this question of what are we doing this for? And I think there's this fundamental need of humans to just talk about themselves. This is like why we see so many people posting on social media all the time.
You mentioned that it's so annoying if your friend posts like a hundred pictures that you don't want to see, but there's something that's inherently rewarding. There's research that I think is fascinating that shows that talking about yourself activates the same areas of the brain as having sex or taking drugs.
So it just essentially gives you this dopamine boost. If that's the case, it's not so surprising that a lot of people are like, I'm just going to use this time now, those five minutes that I have and post something on social media.
So people are willing to give up money in experiments to just be able to talk about themselves. Good Lord.
That really sheds a lot of light on why I started a podcast. I had no outlet for anything anywhere else.
It's just funny. It makes me think, oh, okay, maybe if I'd done more drugs, this show wouldn't exist.
Certainly if I'd been able to have more sex, this show wouldn't exist. I'd still be a lawyer, so thank my bad luck for that.
It's very interesting to me that this phenomenon where, say you're out with friends and you order some food, I don't know if it's a conscious rule, but I basically give 10 seconds for them to take photos before I start eating. Because if I start eating right away, I ruin it.
But I'm also not going to wait five minutes while they get the right angle and the right lighting and they rearrange the food on the table. That's like the end of my tolerance for this.
And I won't travel with people who take more than a couple of selfies at each place. I get you want one or two.
You went to a castle. It is impressive.
It's really cool. But if you're trying different poses and different angles, I'm just leaving you behind.
You can take an Uber. The food thing I've never understood.
You're never going to go back to these pictures. You post them on social media, you're never going to go back and say, oh, I wish I could find this picture of the pizza that I had on 72nd Street.
And we even know the moment that you take pictures, you actually reduce the likelihood that you are going to remember that moment because you're now outsourcing your memory to your phone. It's like, okay, this is on my phone.
I took a picture, so I don't need to remember it. So there is even something that's taken away by us taking all these pictures all the time.
That's like how you get worse at math if you only use a calculator and you never try to add, subtract, or divide on your own. Yeah.
Oh, that's interesting. I didn't realize that you would remember something less because you took a photo.
It's almost counterintuitive. I get the logic behind it.
Your brain says, I don't need to remember this. I have a photo.
But you would think that focusing on it for an extra few seconds, trying to frame it in your phone camera, looking at it longer, that would make you remember it more. But actually, it's the opposite.
I think that's the old school. I think you're still coming from the generation where you had like 24 pictures and you're like, OK, that's something that's worth photographing.
Whereas now it's a click, click, click, click, click. This no longer has something that is worthwhile.
It used to be like a dollar or two by the time you got the film, took it to Walgreens to have it developed and waited however long you need, you know, a week or whatever, three days to have it developed. It ended up being, I don't know, a buck or two.
It was expensive. You're right.
Now I've got a Sony over here that I got to film my kids and film events and stuff like that. And it holds, I think when I put the memory card in and it showed at the top, it was like, you can film 16 hours of 4K video or 9,999 photos.
And I was like, oh, it just stops counting because there's probably 30,000 photos available on here. Back in the day, you're like, man, this is like maybe one of these moments where I should take a second one just in case.
Now you have 10 by default. The new cameras that are out there for sports photography, I think when you hold the button, first of all, it sounds like some kind of machine gun from Terminator, but it'll take, I want to say, 100 photos in a second or something like that at maximum speed, which is great if you're trying to catch a jumper at the peak of their jump at the Olympics and you want to get the perfect moment of them going over a bar or something like that.
That's when it makes sense. But when you're taking a picture of your kid's deuce in the potty at home, it's like, before they're smashing their head into the wall.
That's what you capture. Yes.
I want the wave in the skin of the forehead when it smashes into the drywall. Yeah.
Just as a memory. The good old days.
Exactly. Oh, my gosh.
Tell me about Facebook status updates and word clouds. I miss Facebook updates.
I mean, maybe they still exist, but it used to be, and I know I sound old AF when I say this, AOL Instant Messenger.
Do you remember that? Did you use that? Yeah, I do. ICQ and yeah.
Right. You had your away message and you're like, okay, I got to think of something creative and fun.
On off days, you'd just pick a quote from an author you like or something, but on days when you thought of something really funny, you'd put that in there, and everyone was checking everyone else's away message all the time to see if there was a new one. And if you could do that day in and day out and make it fun, people were like, this guy's smart. And Facebook, the original Facebook status updates where you just typed in the box what you were doing, that was almost like a status game for good writers in college.
And it was also informative. I think like right now we're just posting, first of all, anything and like pictures of food.
And so it's lost its appeal. And it's not just Facebook updates, right? You can think of Facebook status updates the same as you posting on Twitter, or even, in a way, Instagram.
Pictures that we take tell the same story, right? You can write about going on a vacation, or you can post a picture of yourself on vacation. There's a lot that we can learn.
So we already talked about emotional distress, depression, all of the personality traits, and some of them are really obvious. Oftentimes when people talk about machine learning, AI, it's just like magic in a black box and we don't really know what it's doing.
If you talk a lot about going out and parties and weekends, you're probably more extroverted than the person who talks about sitting at home, reading, gardening, and interested in fantasy novels. So those are the obvious ones.
Sometimes there's the ones that are a little bit less obvious and maybe more interesting for psychologists. For example, income.
This is like one of the topics that I study. Can we predict someone's income, someone's socioeconomic standing based on what they post? And again, you see the obvious ones, like the rich people post about luxury vacations and brands.
Yeah, that makes sense. But you also see that people who have lower socioeconomic status, so lower levels of income, they are, first of all, much more focused on the present.
And they're also much more focused on the self. And it's not that they're, again, like these narcissists that just only can focus on the here and now.
It's just freaking damn hard to think about the future in anything other than how do you make your ends meet if you don't have that much money. So there's these subtle cues that we can parse out when we look at what they talk about that are actually interesting beyond just prediction.
How the rich and poor talk online is actually quite fascinating. The idea that people who have lower socioeconomic status, or people who are really having trouble making ends meet... can we just say poor? If you can't make ends meet, you're not doing so well.
I mean, I actually feel that if we use labels, labels matter. And I know why people don't like them, but most of the time I think you don't like them because they make you feel uncomfortable.
No, you should feel uncomfortable, because there are people who are poor and it's just a freaking hard life to live. It's tough.
And I never thought about that because, of course, if I saw somebody who only talked about themselves and things they were doing that day, it would seem to me that they were not thinking long term because of some character defect, or they're not smart enough, or something like that.
But now, of course, it makes total sense that if you can't think far enough in advance because you're just trying to literally feed your kids, or you don't have gas to get to work and you're that poor, it's not necessarily a character defect or you having screwed up your life.
What about rich people? Is it really that obvious that they just talk about luxury brands and vacations? Or are there some more subtle cues that out people as high socioeconomic status?
Because I can't name one single time where I've been like, just getting back from my business-class flight to Turkey and staying at a five-star hotel. Here's my dinner.
I just don't do that. Yeah.
Some of them are more subtle, right? It's oftentimes the opposite: if poorer people talk about the present, you might be more future focused.
It's always a contrast, the way these models work. And you don't have to brag about the five-star hotel on your next vacation; even the fact that you talk about going to the Seychelles, or some other exotic place, is a signal.
The fact that you can afford to fly outside of the country, which many people never do in a lifetime, that alone is an indication that you're doing pretty well. That is a good point.
I hadn't even thought about that necessarily. There's still a stat that something like less than half of Americans have a passport or something like that.
Yeah, you don't have to say, I'm going to a five-star hotel in the Seychelles. You just have to say, oh, immigration is so slow in, I don't know, India.
Okay, well, you went to India, even though you're complaining about something. And probably if you're complaining about that, it gives you an extra boost in socioeconomic status.
Good point. Yeah.
I suppose if you're just excited that it's the first time you've ever left the country, you're not complaining about immigration. You're like, I can't wait to eat after I get out of this six-hour line.
You're just excited to be there. I thought it was quite fascinating about how these algorithms can tell if you're extroverted or introverted.
You mentioned based on likes, if you like fantasy novels or if you like going to music festivals, that makes sense. There was a theory in the book or a hypothesis in the book that attractive people become more extroverted and outgoing because of the attention they receive as kids.
That makes a lot of sense. Or is that just pure speculation? That's a real research finding.
And it comes back to what we talked about earlier with face signaling potentially parts of your identity on a psychological level. So we talked about testosterone kind of being related to aggression.
This idea that like your environment responds to you in a certain way, right? If you're kind of this beautiful kid, perfectly symmetric face, blue eyes and constantly smiling, people around you are probably going to be a lot more kind of appreciative and they're going to talk to you and they're going to approach you a lot more often. And the fact that those kids then grow up to be somewhat more social and extroverted and craving the social affirmation and social stimulation is not super surprising.
So it's like one of these ways in which actually who we are interacts with our environment and that in turn, again, influences who we are. This makes so much sense.
And it seems like it might be something that you could encourage in kids, regardless of how attractive they are, just by interacting with them a lot, putting them in environments where they are interacting with other people, adults and kids. You're right.
There's still that spontaneous element. My daughter, she's three.
She loves to sing and dance and she'll be like, turn your chair around. The show's going to start.
And I'm just like, where did you learn this crazy extroverted behavior? But then she's in music class and then the teacher's paying attention to her and the other kids are paying attention to her. So it is a reinforcing cycle.
My son, who looks exactly like me, so he's very cute. He has it in sort of an almost like a negative way where he's like, whenever I do bad things, people pay attention to me at school or otherwise.
And I'm just like, oh no, this is not the reinforcement you were hoping for.
Yeah, this is not what we want. We want him to be reinforced.
He's good. But he is not shy at all.
It's crazy. He talks to the cops when they're here.
He just has no fear at all. The interesting part is also, I think the way that we oftentimes think of personality is like it's the static, like you are either extroverted or you're introverted.
But it's actually a lot more dynamic than I think even personality psychologists assumed a couple of years ago. So it's not just that you can develop over the lifespan.
So most of us become nicer, less neurotic. So there's like these trends that we see when people get older, but we also kind of very much fluctuate across situations.
So like your son, depending on what the feedback is, might be kind of more reserved or more extroverted. So I think there's also something that when we interact with our kids, and I just had a kid, so he's like one year old, I just constantly think about how do I expose him to these different situations where sometimes I tell him like, look, it's totally okay to be quiet and sit in the corner and kind of just think for yourself for a second.
But then also, I want him to have these other situations where he can be a lot more outgoing. So I almost think of it as like this repertoire where you have a certain tendency, right? There's a pretty substantial genetic component to personality, but then there's also you being able to adjust to different contexts.
And I think that's something that we can teach kids and even tell them, look, if you behave differently across situations, that doesn't make you hypocritical. That can still be like this authentic version of yourself.
It just means that you're adjusting to whoever is on the other side or what the context requires. You mentioned, though, facial recognition can take faces from photos or videos taken by other people, or just the CCTV that's present in whatever stadium you're in, or on a street corner if you live in China. So it doesn't really matter if you don't use social media. You're still a part of this surveillance capitalism system, or whatever we want to call it.
Yeah, absolutely. And for me, the really intrusive part is that it's not just the ability to make inferences about who you are, right? You mentioned China.
The reason the Chinese social scoring system is creepy is that it also influences what you can do and what you cannot do. So it doesn't stop at trying to understand who you are; it's also going to influence the path that your life can take, maybe the choices that you're making.
So in China, if the government predicts based on your data that you might have a higher likelihood of voicing dissent or protesting, you're not allowed into Beijing. I teach a class on the ethics of data, and that's what's happening in China.
And think about how, here, companies decide whether you get a loan or not, whether you get credit or not, what your insurance premium is. It's very similar: we try to understand how you might behave, and then we shift the offerings that we have.
We might try to sell you something that you don't need. So I think this notion that it's not just about privacy, it's really about the second step of people then interfering with your ability to make your own choices.
For me, that part is almost creepier. It is creepy.
And there's not much we can do about it. Because if an AI is making the decision to give someone a loan or not, and, by the way, you know damn well it's going to be like: this person's battery was at 25% at lunchtime when they applied.
We're not giving them a loan. They're totally irresponsible.
It's not going to say that's the reason. It's going to say, oh, based on 20,000 factors that weighed a little bit to the left on whatever line, you're just short of making it. We're not going to be able to weigh all of those factors.
The fact that you applied and didn't finish the application all at once, and your battery status at the time, and your location kept changing, and the fact that your jobs have changed so much: there are going to be thousands of those. It's not going to be like, we didn't give you a loan because you're brown.
That's going to be an obviously not-okay thing. But when it's 10, 20, 30,000 different little factors, and people don't interrogate the AI as to why, they just blindly accept it because it's accurate 99% of the time. That's where we start to run into these problems, I would imagine.
And they might all be related to some of the protected categories, right? If we know that some of the behaviors we show are related to having lower socioeconomic status, or to your ethnicity, or to your sexual orientation, then you don't need to capture that category, because it's somewhere embedded in the traces that you leave.
To some extent, I think on the global level, when we try to understand what these models are doing and whether they're potentially discriminating against people, I still think there's something we can actually do to probe. Oftentimes people say, well, we don't know what the models are doing, because it's these complicated neural nets and we just can't open the black box. But you can still look at the output. If you're thinking about whether to give people a loan or not, and you just see that none of the women are getting any loans, and none of the women are getting hired into technical roles, maybe that's something the model is picking up on.
So even if you don't fully understand what it's doing, you can always look at the predictions and see: is there anything among the categories or the social demographics that we want to protect that seems to be off in terms of how often we give the thumbs up, that the person gets the loan or gets the job? You untrustworthy, good-for-nothing deadbeat.
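The output-auditing idea Matz describes can be sketched in a few lines. This is a toy illustration, not anything from the book: the decisions, group labels, and the 0.8 threshold (a common rule of thumb sometimes called the "80% rule") are all assumptions made up for the example.

```python
# Toy sketch: audit a black-box model's *outputs* for group disparities.
# All data and the 0.8 threshold below are invented for illustration.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged, protected):
    """Ratio of the protected group's approval rate to the privileged group's."""
    return rates[protected] / rates[privileged]

# Hypothetical model decisions we only observe from the outside:
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]
rates = approval_rates(decisions)
ratio = disparate_impact(rates, privileged="men", protected="women")
print(rates)        # per-group approval rates
print(ratio < 0.8)  # flagged for review under the assumed 80% rule
```

The point of the sketch is that none of this requires opening the model: only its predictions, plus the group labels you want to protect, are needed.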
We'll be right back. This episode is sponsored in part by Tonal.
Getting in shape is not easy. And the older I get, the more I realize how important, of course, it is.
You hit a certain age. Suddenly, it's not about looking jacked.
It's about, hey, how do I get out of this chair without grunting like my dad? I still grunt, though. I think it's part of, I think you have to.
I think you are mandated to do that after age 40. Anyway, that's why Tonal is great.
It's basically like having a full gym and a personal trainer mounted to your wall. It uses digital weights, so it adjusts in real time as you lift, meaning it knows when you're struggling and it backs off just enough or pushes you when it knows you got more in the tank, which frankly is smarter than most of us when we're training solo.
It's a great idea. Gym memberships and personal trainer fees, they add up fast.
With Tonal, you're making an investment once, you're getting way more out of it, plus no commute. You finish a workout, you're already home, which if you've got kids, you've got a busy schedule is huge.
You could put this thing in your office, which would be awesome. Another thing I love is Tonal doesn't just throw random workouts at you.
There are structured programs designed by legit coaches. It keeps track of your progress, adjusts your weights automatically, and they make it addictive in the best way it's gamified.
You can actually see yourself getting stronger, which keeps you coming back. And right now, Tonal is offering our listeners $200 off your Tonal purchase with promo code Jordan.
That's tonal.com and use promo code Jordan for $200 off your purchase. That's T-O-N-A-L.com, promo code Jordan for $200 off.
This episode is also sponsored by Progressive. You choose to hit play on this podcast today, smart choice.
Progressive loves to help people make smart choices. That's why they offer a tool called AutoQuote Explorer that allows you to compare your Progressive car insurance quote with rates from other companies.
So you save time on the research and you can enjoy savings when you choose the best rate for you. Give it a try after this episode, of course, at Progressive.com.
Progressive Casualty Insurance Company and Affiliates. Not available in all states or situations.
Prices vary based on how you buy. This episode is also sponsored in part by BetterHelp.
You know what is wild? We will spend hundreds of dollars on a new phone and supplements to optimize ourselves. But when it comes to our mental health, suddenly it's like, I'm good.
I'm just going to power through the existential dread. Here's the thing though.
Therapy, it's not just for when everything's falling apart. It's like maintenance.
It's like changing the oil in the car. You don't wait until the engine explodes.
And therapy has helped me through all the curve balls thrown my way. Personally, traditional therapy, it can be expensive.
I get the hesitation. That's why BetterHelp is actually pretty brilliant.
It's online. It's flexible.
You pay a flat weekly fee. So no surprise bills that make you need more therapy.
They've got over 30,000 therapists, like a small army of people ready to help you get your head on straight. And if you don't vibe with your therapist, there's no awkward breakup text.
You just switch. Easy.
No driving across town. No weird waiting room chairs.
You just click a button. Boom, you're in.
Bottom line, your mental health is worth it. Visit betterhelp.com slash Jordan to get 10% off your first month.
That's better-H-E-L-P.com/Jordan.
If you like this episode of the show, I invite you to do what other smart and considerate listeners do, which is take a moment and support our amazing sponsors who make this show possible. All of the deals, discount codes, and ways to support the show are searchable and clickable over at jordanharbinger.com/deals. You can always surface codes using the AI chatbot on the website as well. And if you can't remember the code, or you're not sure if there is a code, go ahead and email us, jordan@jordanharbinger.com. We are more than happy to surface that code for you.
It is that important that you support those who support the show. Now for the rest of my conversation with Sandra Matz.
I also found it shocking how easy it is to identify somebody personally based on, what was it, three credit card transactions? It seems like if it's that easy, you could also extrapolate a lot of info about people from those transactions once you identify them. So if you find me based on three transactions, some software I bought, a haircut, and where I ate lunch, now you've got a zillion other transactions you can identify me with that are like, here's every bit of clothing he bought.
Forget Facebook status, where I spend my money is at least as identifying and intimate as that data. Yeah, totally.
And it identifies you on different levels, right? So the example that you gave, the three data points, that's coming from this notion that even supposedly anonymized data isn't safe. Say I got all of the credit card spending from everybody in Manhattan, and we say it's anonymized because we're not using any names. We're not using date of birth.
We're not using an address. But your spending signature is so unique, almost like a fingerprint, that if I know three things about you, I can easily identify you in there.
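The "spending signature as fingerprint" idea can be shown with a toy example. Everything here, the pseudonyms, merchants, and transactions, is invented for illustration; the real research behind the claim used far larger datasets.

```python
# Toy sketch of re-identification in an "anonymized" ledger: no names,
# just pseudonyms, yet a few known (merchant, day) points act as a
# fingerprint. All data below is made up.

# anonymized ledger: pseudonym -> set of (merchant, day) transactions
ledger = {
    "user_001": {("cafe", 1), ("gym", 2), ("bookstore", 3), ("cafe", 4)},
    "user_002": {("cafe", 1), ("gym", 2), ("pharmacy", 3)},
    "user_003": {("cafe", 1), ("cinema", 2), ("bookstore", 3)},
}

def matches(known_points):
    """Return the pseudonyms consistent with the transactions we know about."""
    return [uid for uid, txns in ledger.items() if known_points <= txns]

# Knowing one transaction leaves several candidates...
print(matches({("cafe", 1)}))  # all three users match
# ...but three transactions pin it down to a single pseudonym.
print(matches({("cafe", 1), ("gym", 2), ("bookstore", 3)}))
```

Once a single pseudonym is isolated this way, every other transaction under it is de-anonymized too, which is exactly the "now you've got a zillion other transactions" point above.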
And then you're absolutely right. If you think of identity at the next level, it's not just that I know, well, it's Jordan. Now I can also make inferences about, maybe you're the impulsive person, because you're constantly paying late fees, and maybe you're not the most organized one. It's, again, something that might or might not show up in my own spending record.
So it's also one of these things where people often say, well, your online selves are so curated, right? And if you wanted to look like a more organized and reliable person online, you could do that, because you control it.
Yeah, that's true for some of the traces, but my phone is still running out of battery and I'm still paying these late fees. And if I wanted to be someone completely different across all of my digital traces, I would probably actually become that person at some point, because I'd be changing my lifestyle entirely.
Those people are just looking at the photo where it's, look, I just woke up and I'm in full makeup and I'm in my yoga gear sponsored by Alo. That's what people mean by curated, but you can't fake the rest of it.
The fact that your battery's low, your late fees are half your credit card transactions, you have a massive balance that keeps running from month to month because you can't pay it off. That stuff you can't really hide.
You can put on a brave face, a shellacking, a veneer over what it is, but you can't hide from the company. They know you're full of crap.
And I have to admit, I pat myself on the back a little bit when you said in the book, the person who buys gym equipment and then donates to charity is an example of somebody who has their shit together. I looked at my credit card statement and I was like, what have I done? Okay, I spent 500 bucks on gym equipment.
Oh, there's my Amnesty International donation. And I was like, I'm a good person, according to the data.
Science doesn't lie. As a personality psychologist would say, there are no good or bad traits.
There are just some that are more socially desirable, right? Take it to the extreme: if you're super extremely organized, you're turning into my husband, who is super sweet and also borderline OCD.
We just moved and there's a gazillion boxes in the apartment. It's just like everything is completely disorganized.
And all I want is to be able to walk from the bathroom to the bedroom. And I opened the drawer of the cutlery and it's perfectly meticulously organized.
I'm sure he spent two hours sorting the cutlery while there were still, like, a hundred thousand boxes in the apartment. Whereas I might go through the boxes a little bit more quickly, maybe a little bit less thoroughly, but a bit more efficiently.
So no inherently good or bad traits. That's really funny.
Like, we don't have underwear, but all of our cereals are alphabetized in the cabinet. Oh, my God.
Yeah. That's what you're doing.
Yeah, yeah. That's really funny.
Moran is a really interesting, your husband is a really interesting character. By the way, he was on episode 265 of this show.
And you had your first impression of him, which was that accurate? Because you went out on a date or something. How much of your predictions of him initially turned out to be right later on? Yeah, very accurate.
So I met him. We were actually both giving a talk at the conference for digital happiness, but he showed up late.
I was about to go on stage and the organizer comes and says, hey, the person was supposed to speak after you. He's not here yet.
We've no idea where he is. We can't reach him.
Could you just take the entire hour? I'm like, fine. And then midway through, he shows up and they usher me off the stage.
Fast forward, it doesn't take that long for me to realize that he's smart and hot. So we go out after the session and we actually end up in his place.
And he kind of has these huge bookshelves and they're perfectly sorted by here's the topic. Here's the height of the books, all perfectly aligned.
Cutlery is perfectly sorted. I remember trying to put down my glass on the table, and he freaked out and put a coaster under it.
So: intellectually curious, somewhat borderline OCD, and late. And that's still spot on today.
That sounds about right. I mean, he's French and he's Jewish.
So you're lucky he didn't miss your wedding, actually. Yes.
That's really funny. And also, you ended up at his place after the talk.
Location data says you're a little bit easy there, Sandra. And that's actually, it's funny because that was a lot of the inspiration for the work on digital footprints from the physical space.
So there's all of this work on if you snoop around the bedroom or the office of a stranger and you just pick up on all of these cues. And some of them, the same way that we post on social media, are curated.
You have a poster out there and certain books on the shelf that you want other people to see. But then a lot of them are also like very subtle.
What is in your bin? Are your glasses sorted in the way? Do they have watermarks? So I think a lot of the work that we've been doing in the digital space was actually inspired by the physical space and the way that we make these inferences about strangers all the time as humans. I remember vaguely in the 90s when I started dating, there was a cliche that a woman would come into your house and look in your medicine cabinet to see what sort of drugs you had in there.
That actually was the original snooping around thing before digital stuff existed, because you would open that cabinet and you'd be like, oh, OK, here's the real stuff that they're not going to tell me for months. What were they hoping to find? That's a good point, because this is sort of before all the personality pills and Adderall and everything.
So I don't know. Are you looking to see if they're diabetic? Back then, what would have mattered? I don't know.
Yeah, but it was a thing for sure. Hemorrhoid cream.
Hemorrhoid cream. Who doesn't have a couple of Tucks laying around? I remember I was interviewing a very important, very distinguished man.
He's like, can you come to my hotel and do it? So we did. We set up in his hotel and I was like, I need to use the restroom real quick.
And I remember going into the restroom and he had all this hemorrhoid stuff laid out on the counter. And I was like, good to know that the CEO of this giant, massive multinational company has serious hemorrhoids.
Poor guy. Humbling experience.
Yeah. Because you can't be like, hold on, I'll be right back.
Nope. He's just going to find out about my hemorrhoids.
Well, I'm sure he played it cool. He did.
He played it very cool. But I guess at that point, when you're a billionaire, it's like, yeah, I got hemorrhoids.
What are you going to do about it, podcaster? Whatever. You can leave if you have a problem with that.
What is the limit of psychological targeting? Can I change someone's mind entirely, or do I just influence people who are straddling the fence? Yeah, it's a great question, because I think if you look to the media, it's like totally black and white, right? It's like either this warfare tool and it's like changing your mind and it's changing your core identity. That's probably not the case.
So I always ask whether it's something that you could do in an offline world. If you think about your hardcore, diehard Republican uncle, and by having long conversations with him you can't convince him to take on a certain view of the world, you're probably also not able to do this online with algorithms, even though you can target him repeatedly and maybe send him down a rabbit hole. Changing someone's core identity takes a lot more than a couple of ads, even repeated ones.
But the thing is that it usually doesn't even need that, right? Oftentimes our choices are small ones we're not even aware of: what cereal do you choose, what do you decide to wear today, what news do you read, and where does that take you in terms of how you think about the world? So when I think about influencing behavior, it's these small changes, the same way that we do it in the offline world. Coming back to kids: humans are born to do this.
Kids know exactly how to talk to their mom to get the candy, as opposed to their dad. And it's not that by doing so they change who the other side is.
It just makes it more likely that they behave in a certain way. And for me, it's like taking what we've been doing for centuries in an offline world, and we're just applying it at scale.
And in a way that's no longer bi-directional. It used to be the case that I do this to you, you do this to me.
Right now, this is mostly happening from big companies to influencing your behavior. Yeah, that's interesting, because it does seem like the sort of most basic, mediocre use of all this psychological targeting is selling me shirts.
It just seems like, can't you do more with this? I remember Cambridge Analytica, right? It was like, oh, they totally influenced the election. Did they, or was it not much?
So, did you swing an election? Did you convince a diehard Hillary supporter to suddenly stay at home and not vote? Probably not. But could you maybe have influenced some of the people who were not sure if they wanted to go out and vote? And maybe you caught them at a moment when they were really scared about immigration, and you changed them from a Democrat to a Republican? Probably.
I think the point about Cambridge Analytica is that it wasn't necessarily new. Political campaigns have been using data for a long, long time, and Obama was celebrated for it: well, there's someone who's trying to understand their constituents, trying to see what they're interested in, what they care about, how to talk to them.
What made Cambridge Analytica distinct from the previous attempts, at least in the public mind, was that people could suddenly make sense of it. Even though the data was all there before, we don't think about ourselves in these separate data points.
I don't think of myself as here's my browsing history and here's my social media and here's my credit card spending. I think of myself as this holistic person that's impulsive and maybe a little bit neurotic and curious.
And I think once you told the public that there's a company that can predict whether you are emotionally volatile or whether you might be introverted, outgoing, I think that's what resonated with people. So do I think that they won the election by doing this magical brainwashing? Probably not.
Could something like psychological targeting swing an election when they're on the margins? Probably yes. Do we need psychology for that? Again, not entirely sure because you can make very similar predictions with kind of skipping that step.
Can we use this technology to decrease political polarization? These companies, they know what we're all like. They can file us into echo chambers.
Can we reverse that process? Yeah, it's something that I've been super intrigued by. And who knows if it's going to plan out.
But I always think of it as a technology, right? The technology at the core is trying to say, can I understand where you're coming from? Here's your point of view. Here's your view on the world.
Here's your values. Here's your personality.
And now you could imagine using that to explain to you how the other side sees the world. I can get a Democrat to understand why a Republican might be more opposed to immigration or more opposed to abortion, not in the way a Republican would try to convince you, because they're coming from their own perspective, but in the way that Democrats think about the world. That's oftentimes a much more promising way of convincing the other side, or at least making you a bit more receptive to its arguments. And this is proven by research, by the way.
It's essentially this idea of, can I tap into your own moral compass to make you think about the world in a slightly different way? That would be an interesting experiment. Again, though, it has to be profitable, or these companies won't actually want to do it.
Back to the privacy idea. What about people that think they don't need privacy because they have nothing to hide? I hear that argument all the time.
There's so many people that say, look, I have nothing to hide. I don't care if they're collecting data on me.
Yeah. And now you have to stop me because it's one of these topics that I could talk about forever because I hear this question all the time.
So again, in the classroom, when I talk about here's what we can do with your data, there's always at least one person who says that. And in a way, I can even partially relate to this, right? Because it feels like, well, I tried everything.
It just feels like an uphill battle that I can't win. So I might as well give up.
But it's a very privileged position to be in, first of all, right? The fact that you don't have to worry about your data being out there just means that you're currently in a really good spot. If I can predict your sexual orientation or your mental health from all the traces that you leave, in many parts of the world that's not just preventing you from going to Beijing.
That could still mean the death penalty in a lot of countries. So it just means that you're currently in a good spot.
And what I think is even more true is that you don't know what it's going to look like tomorrow. If you don't have to worry about your data right now, that might change entirely in the US.
I think the Roe versus Wade Supreme Court decision made that painfully real for many, many women. Suddenly, overnight, you had to worry about your Google searches, because maybe you were looking for pregnancy-related or abortion-related advice.
Maybe you were traveling across state lines with your phone, so I can see you went to another state. Maybe, based on your GPS records, here's the exact location.
Maybe you went to a certain clinic. Maybe you came back, and you were suddenly no longer looking for certain things on Google and Amazon.
So I think this notion that data is permanent and leadership isn't should make all of us kind of worried. And maybe that's the government changing, but it could also be just the leadership of companies going from one day to the next.
That's a good point. Being gay is illegal in more than half the world.
I think it's a little bit less, but it's still many, many more countries than you would imagine. So it's illegal, or at least could be illegal, in a large number of places. Religious affiliation, too.
As a Jewish person: we don't like lists. We don't like tracking.
It's one of the most compelling examples of why data can become extremely dangerous. What we know from Nazi Germany in the Second World War is that religious affiliation in parts of Europe was part of the census, which made it extremely easy for the Nazis to come in and say, well, we're just going to go to City Hall, quickly check the record, and see, here's person A, B, and C. They live in this place.
Now let's go and find them. Now, fast forward to today, and we know that the scale of those atrocities varied vastly based on whether the data was available.
And today it doesn't need to be part of the census, because you can just passively predict it from all of the traces that you generate and all of the data that you create. So for me, this notion that we just don't know what tomorrow is going to look like is a good reminder that you probably should care about your privacy.
Now, I actually do think that most people do, right? If you show them the offline equivalents and you say, okay, look, your smartphone tracking your whereabouts 24/7 is like a person walking behind you, observing your every move. That's a stalker who goes to jail.
A company like Google reading your messages, that's the mailman opening your mail. Again, a person who goes to jail.
So when you give them these comparisons to the offline world, I think most people actually wake up to like, oh, maybe I do care about my privacy and maybe I just haven't figured out how to protect it better. Speaking of stalkers, surely it's happened that somebody has used online data to find and hurt someone.
I'd be shocked if that hasn't happened multiple times already. Yeah.
And it's actually led to legislation. It's like a very interesting example.
There's the case of a judge in New Jersey whose son was tragically murdered by someone who had appeared before her in court. He found her data online, got it for, I think, a couple of dollars from a data broker, found her home, had this entire dossier on her and her family, and murdered her son because she wasn't there.
And that led to legislation that now protects judges from having their data sold by data brokers. And to me, it really raises this question.
If we think that judges should be protected based on their data, why not protect everybody else? Right. I think there are many other people who would worry about someone getting their hands on their data and then tracking them down.
No kidding. Yeah.
Hey, we got to protect judges. OK.
What about the zillion other people who don't necessarily have the political power to get their information taken offline? I remember that case that was particularly disgusting. And I get that judges are in a more vulnerable place than a lot of other folks.
But what about all the other people that are in a similar place? You've got prosecutors. Sure.
Okay. Part of the legal system.
What about police officers? Okay. What about teachers or disciplinarians in the school environment? Or just like, maybe I don't want a stalker either.
How's that? You know? Anyone, right? You're a surgeon, you make a mistake. There's always the worry that someone at some point has beef with you and is trying to track you down.
So I think anything that we apply to a part of the population where we worry about data should apply to everybody. How do we do this? The book goes into detail, so we don't need to get into the weeds too much.
But one of the ideas you had was preventing companies from getting too many data points. Why is that a good idea? I think of it as a puzzle, right? So if we think about what I can learn about a person based on their data: we talked about social media, which is this curated one, and then your smartphone sensing, which gives us a different angle.
And you can imagine that once you put all of these pieces together, you get a much more accurate reading of who that person is. So if you're a company that can fill every single letter of the alphabet with a subsidiary, you can imagine that it holds pretty much this entire picture of who you are.
So one example, and I'm certainly not the first one to suggest it, right? Scott Galloway and Tim Wu have been saying this for years, is that if we could break up the tech monopolies, that would at least be a way of keeping them from capturing this entire picture of who you are. I think that's probably a hard sell.
I think there's easier ones where there's now technologies that allow you to provide the same convenience and service and personalization, but without having to collect the data in the first place. And for me, that's something that you can implement from today to tomorrow.
I know in the book, you talk about taxing data brokers. I liked your idea, and this is very European and I cannot see how it would happen here, but God bless you.
Data co-ops where essentially we all own our data. How would this work? Because that's like crazy talk to us Americans that we would own our own data.
It's both owning the data and then collectively managing it. So the idea of data co-ops is saying there's people who have a shared interest in using their data.
My favorite one in Europe is one that looks at patients suffering from MS. It's one of these diseases that is so poorly understood.
It's determined by genetics, your medical history, your lifestyle. So you need quite a lot of data from patients to understand what might be driving symptoms and how to get better.
And oftentimes what happens in the medical space is you send it to pharma companies. And in the best case, it takes years for them to develop a drug.
And then you're paying thousands, if not millions, of dollars for that. What the data co-op does is different: it's essentially owned by its members.
So it's people who suffer from MS coming together under this kind of legal entity of a data co-op. So it's member owned and it's legally obligated to act in the best interest of their patients.
And what it can do is it can essentially say, we better understand based on research how the disease works, but we can also now communicate directly with your doctors in almost like an Amazon recommendation style and say, we've seen patients with similar symptoms and a similar trajectory respond really positively to these kinds of treatments. Why don't you try this as well? And then the doctor can give feedback and make the system even better.
But you'd be surprised. So this is one example.
But in the U.S., there's a data co-op for Uber drivers. They're essentially pooling data to see, how do we optimize the routes? How do we make sure we're not getting overly tired and exhausted? So I think there are many instances where you can see this playing out in the U.S.
as well. It's becoming more popular.
It's really encouraging because it seems like something that would be almost impossible, right? Oh, you're giving the data to Facebook, so we're not going to share it. And I don't love the idea of always giving stuff to the government, but it almost seems like you'd need federal regulation of how our data gets used in order for it to not get misused.
But it really is disappointing to me that these companies are not hitting the low-hanging fruit of finding out who has PTSD after coming back from war or after some traumatic event, finding out who's depressed, finding out who's going down the path of getting an eating disorder. Because I've heard that young women especially will search for something, and then the algorithm feeds them more of it, and then they search for more of it.
And so it's clearly really obvious and not that hard to predict who is getting an eating disorder in real time. And then they just don't do anything about it.
And it's not even that they don't do anything. So it's oftentimes reinforcing.
Algorithms just become more and more extreme in the way that they make recommendations. Are you hopeful for the future of how this looks? Because I hate ending on a sour note of like, and now your kids are all going to be depressed and have eating disorders and no one's going to care except for us on this podcast.
I think you have to. So I constantly oscillate between being totally depressed and kind of thinking about it in a more optimistic way.
And I've been called naive so many times by people who say like, we're all doomed. Why are we talking about these positive use cases? And my take on this is that we need this positive counter narrative.
And I actually think about it in the context of kids, right? You can tell your kid, if your kid misbehaves and throws food on the floor and takes down all of the books, which is not hypothetical, I'm going through this as we speak. I get it, I feel it.
So you can tell them like, don't do this because it's bad and because it's like nothing that you should be doing, but you're not going to be really successful. What's much more successful is to show them something that they can do instead.
But if you tell them, instead of throwing food on the floor, why don't you do this? The chances that you're going to change that behavior is much, much more likely. And I think about that the same way in the context of technology.
Yeah, we can say, here's all of the challenges and do we need more regulation? Probably. Should we get rid of some of the abuses? Absolutely.
But I think if we think about the overall trajectory of technology, if we don't have these positive, here's what you should be doing instead, I think we're never going to get there. So maybe we're not going to end up with this utopian future that I sometimes have in mind, but I do think we need these positive visions to even get us started.
Sandra, thank you so much. Really interesting episode of the show.
This was very enjoyable and I liked it. Thank you so much.
You're about to hear a preview with James Patterson and what would make the bestselling author walk away at the top of his game. It's rare that I don't write.
What I discovered was that I loved doing it.

And then I started writing stories, and I just loved it.

I didn't know whether it was any good, but I loved doing it.

And I would just write, write, write, write, write.

So when the first book came out, The Thomas Berryman Number, someone gave Little, Brown a blurb, and he said, "I'm quite sure that James Patterson wrote a million words before he started this book."

It was a great compliment. And then I decided I'd try a novel.
I'm really happy with the way that turned out. One of the things I always like to do at the end of a chapter is make sure they must turn that next page.
That's a strength. The weaknesses sometimes don't go as deep as I should.
Here's the secret. Hit them in the face with a cream pie, and while you have their attention, say something smart.
That's it. No cream pie? They didn't even notice it, so forget about it.
You're just talking to yourself. And if you don't say something smart once you get their attention, it's irrelevant.
You surprise people, which I think is important for my kind of book. We need heroes, and one of the things about the military, and it's very true in this book, in American Heroes, but also Walk in My Combat Boots, the military is about we, not me.
And one of the things I think we need to get back to a bit more is we. And it's hard to come by now, duty, honor, sacrifice.
It just has to be more we rather than just me. To hear more as James Patterson reveals the moment that changed his life and the unconventional process that's helped him sell over 400 million books, check out episode 1100 of The Jordan Harbinger Show.
All things Sandra Matz will be in the show notes at jordanharbinger.com. Advertisers, deals, discounts, ways to support the show, all at jordanharbinger.com slash deals.
Please consider supporting those who make the show possible. Also, our newsletter, Wee Bit Wiser: it will make you smarter.
It'll make you more practical. Something should sink in.
It's only a two-minute read. It's almost every Wednesday.
We talk about psychology, relationships, decision-making. If you haven't signed up yet, I invite you to come check it out.
It is a great companion to the show. JordanHarbinger.com slash news is where you can find it.
Don't forget about 6 Minute Networking as well. That's over at 6MinuteNetworking.com.
I'm at JordanHarbinger on Twitter and Instagram. You can also connect with me on LinkedIn.
This show is created in association with Podcast One. My team is Jen Harbinger, Jace Sanderson, Robert Fogerty, Ian Baird, and Gabriel Mizrahi.
Remember, we rise by lifting others. The fee for the show is you share it with friends when you find something useful or interesting.
The greatest compliment you can give us is to share the show with those you care about. So if you know somebody who's interested in data, data privacy, or how AI and the internet are monetizing what they know about us, definitely share this episode with them.
In the meantime, I hope you apply what you hear on the show so you can live what you learn,
and we'll see you next time.