On with Kara Swisher

Signal’s Meredith Whittaker on Surveillance Capitalism, Text Privacy and AI

October 17, 2024 1h 6m
What do cybersecurity experts, journalists in foreign conflicts, indicted New York City Mayor Eric Adams and Drake have in common? They all use the Signal messaging app. Signal's protocol has been the gold standard in end-to-end encryption, used by WhatsApp, Google and more, for more than a decade. But it's been under fire from both authoritarian governments and well-meaning democracies who see the privacy locks as a threat. Since 2022, former Google rabble-rouser and AI Now Institute co-founder Meredith Whittaker has been president of the Signal Foundation, the nonprofit that runs the app. Kara talks with Meredith about her fight to protect text privacy, the consolidation of power and money in AI and how nonprofits can survive in a world built on the surveillance economy. Questions? Comments? Email us at on@voxmedia.com or find Kara on Threads/Instagram @karaswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Listen and Follow Along

Full Transcript

Support for On with Kara Swisher comes from Saks Fifth Avenue. Saks.com is personalized, and that can be a huge help when you need something real nice, real fast.
So if there's a Toteme jacket you like, now Saks.com can show you the best Toteme jackets, as well as similar styles from brands you might not have even thought to check out. Saks.com can even let you know when the Gucci loafers you've been eyeing are back in stock, or when new work blazers from The Row arrive.
Who doesn't like easy personalized shopping that saves you time? Head to Saks.com. At UC San Diego, research isn't just about asking big questions.
It saves lives and fuels innovation, like predicting storms from space, teaching T-cells to attack cancer, and eliminating cybersecurity

threats with AI. As one of America's leading research universities, they are putting big

ideas to work in new and novel ways. At UC San Diego, research moves the world forward.

Learn more at ucsd.edu slash research.

Thumbtack presents the ins and outs of caring for your home. Out.
Uncertainty. Self-doubt.
Stressing about not knowing where to start. In.
Plans and guides that make it easy to get home projects done. Out.
Word art. Sorry, live laugh lovers.
In. Knowing what to do, when to do it,

and who to hire. Start caring for your home. Download Thumbtack today. Hi, everyone. From New York Magazine and the Vox Media Podcast Network, this is On with Kara Swisher, and I'm Kara Swisher.
My guest today is Meredith Whittaker, president of the Signal Foundation, the nonprofit that runs the Signal messaging app, one that I use all the time because it's pretty much the only one I trust. Signal has been around for a decade and only has 70 to 100 million users a month, which is peanuts compared to WhatsApp, just under 3 billion.
But Signal is not a lightweight in the tech world. Its Signal Protocol is considered the gold standard in end-to-end encryption.
In fact, it's the tech that WhatsApp and Facebook Messenger and Google use. The difference is that Signal users actually keep all their metadata locked up too, which is why it's become the messaging app of choice for people who are really concerned about privacy.
Cybersecurity experts, NGO workers, indicted New York City Mayor Eric Adams, Drake, and me too, as I said. Meredith Whittaker has been leading the Signal Foundation since 2022, and she's kind of a perfect person to do the job.
More than one reporter has called her Silicon Valley's gadfly, which is also my title, by the way. After starting at Google in 2006, Whittaker quickly moved up the ladder, founding Google Open Research and M-Lab.
In 2017, while she was still at Google, she also co-founded the AI Now Institute. This was very early at NYU with Kate Crawford to research the social implications of AI.
So basically, Whitaker was a rising star. But then in 2018, after she helped organize walkouts to protest sexual misconduct at the company, citizen surveillance and Google's military contracts, the company told her to cool it and she left.
Whittaker has been a no-holds-barred advocate for data privacy in a world increasingly run by what people call the surveillance economy. I'm excited to talk to her again about that, the increasing consolidation of power and money in tech, especially in AI, where she sees this privacy fight going, and how a nonprofit or even for-profit startups can survive.
She's still a firebrand, and our expert question comes from another of those people, Renee DiResta. So this should be good.
One more thing,

I've been nominated for the best host in the Signal Listener's Choice Awards, which has nothing to do

with the Signal messaging app. We're not trying to butter up judges here.
But if you agree that I am

the best host, and you know, you really should, you can follow the link in the show notes. Thank you so much.
All right, let's get to it. Meredith, welcome.
Thanks for being on On. So happy to be here, Kara.
And nice to see you again. I know.
It's been a while since we did an interview. It was 2019.
You were still at Google building up the AI Now Institute on the side. Now you're at the Signal Foundation, a nonprofit.
Talk a little bit about that shift and what happened there and why you decided to do this. Yeah, it feels like centuries in tech and in my life.
I mean, for me, this is all part of a single project, really. Like, how do we build technology that is actually beneficial, actually rights-preserving? How do we stop the bad stuff, start the good stuff? And, of course, technology is vast, and these companies are huge, and there are many angles to take.
So, you know, I was also at the FTC trying to help it there. AI Now trying to produce better research.
And now Signal to me is just the most gratifying dream job in a sense, because we are actually building and shipping high availability tech that is rejecting the pathology and the toxic business model that I've been pushing at, prodding, fighting for almost 20 years in my career. Talk about why you left Google.
It had gotten, I don't think hostile is the right word, but not what you wanted, as I recall. Yeah.
I mean, it was a combination of hostility from some in management who didn't like honest conversation about ethical business decisions, let's say. Oh, that.
Yeah, that old thing. Not evil, but adjacent.
Not evil, just in polite company. Don't mention the evil.
Yeah, so I had raised a lot of alarms. I'd participated in organizing against some really troubling business decisions, some of the surveillance and defense contracting that were just being made based on bottom line, not based on kind of ethics and duty of care.
And, you know, I kept that up for a number of years, but at some point I felt that I had hit the end of the road. The pressure from Google to stop, some of the retaliation that I and a lot of my colleagues were facing, meant that we were just spending more and more time strategizing how to, you know, keep our toehold, rather than, you know, building, say, the AI Now Institute, which has gone on to do incredible things, or thinking about positive projects in the world. And my adrenals needed a rest.
I needed a change of pace. And I'd also been at Google for over 13 years at that point.
When you say there was pressure, explain that for people who don't understand within these companies.

Google's always been a place where things are debated since the beginning, or it was, even if two people really did run the place or control the place. Yeah, look, I mean, I joined in 2006, which was a wild and free time at Google.
And they really did nurture a culture of open conversation, communication. You know, there was the sort of Usenet message board vibe on our internal mailing list.
You just go on and on debating the nuances of any given point. So that was sort of the nervous system of Google when I jumped in there.
And of course there was a huge amount of money. So there was a lot of room to play around, to fail, to learn things, to start new initiatives.
Now, it doesn't mean

that decisions were made by consensus, right? But it means that was the environment that was nurtured and that attracted a lot of people. Yeah, on every topic.
It wasn't just very serious ones. I remember a huge argument over kombucha there at one point.
Yeah, yeah, yeah. Yelling at the founders about the shitty kombuchas.
There was a famous thread on goji berry pie that went on for like 3,000 posts, right?

So, you know, I really, I learned my poster skills pretty early. But, you know, that muscle still remained.
And a lot of people were there because they believed the rhetoric, right? Like, don't be evil is a bit trite. And it's certainly, you know.
Far down the line, there's a lot to the left of it. You could do a lot of bad things to the left of it, right? Yeah, and evil to whom.

I mean, come on. But nonetheless, it was, you know, in a socially awkward discipline of kind of nerds who do computer science, pointing to don't be evil was often invoked, just to say like, yo, I'm uncomfortable, right? So there was this reflex in the company.
And as they moved, you know, let's say they moved closer and closer to the red lines, they were able to swear off in the beginning, because they were so far away, because the money was coming in, and we'll solve the problem of what happens when we have to choose between a billion dollars and hanging on to our ethics, right? That, you know, that seemed like a fantasy. And of course, they started hitting up against these red lines in 2009, you know, kind of requests from the Chinese government.
You know, they held firm there. And then we went into the mid-2010s, and they're signing up to be defense contractors, you know, building AI drone targeting and surveillance software.
So you had started the AI Now Institute on the side. Explain for people what that is.
And then we're going to get to Signal because it's how you got here is an interesting journey, I think. Yeah.
No, my path is wild and winding. So I had founded a research group at Google, and that was a research group the nucleus of which was Measurement Lab, this large-scale internet performance measurement platform, with a consortium of folks in Google and outside of Google at the Open Technology Institute.
So then I, you know, I hear about AI, it's machine learning back then around like 2012, 2013, 2014. And I'm like, Oh, what is this? This seems cool.
It's like a statistical technique that does

some stuff. Oh, wait, you're taking really flaky data, like, no, way more flaky than mine.
And way higher up the stack. So it's making, you know, like up the stack to making decisions about human beings.
Yeah. And the garbage in garbage out.
Garbage in garbage out. And then like garbage into a black box that you then treat as a godhead, right? Like that's the issue, right? Like it's not, you know, you're calling this intelligence, but actually you've just sort of masked its provenance, masked its flaws, and are using it to imply that these, you know, massive corporate infrastructures are somehow sentient and alive.
And what I'm not saying is that patterns in data that is responsibly collected aren't useful for decision making. Obviously, they are, right? You know, the issue is that there is a toxic business model at the heart of this, that those patterns aren't always responsible, and that we forget that data isn't a synonym for facts

at our peril. Yeah, I just interviewed Yuval Harari and he made a salient point, a very simple

one, that there's a lot of information out there but not a lot of facts. And that's hard to discern.

And of course, this higher intelligence isn't going to know the difference because you put it

in there, right? Because it's only going to know what it knows. So you got worried, you leave.
Explain how you got to Signal and what your thought was on why it was important. Well, I've actually been a big fan of Signal, involved in the world of Signal since, you know, around the beginning.
When you work at the network layer, at the kind of low layer, you're privileged to begin to learn pretty quickly that everything is kind of broken, right? There are security flaws, there are privacy issues, like duct tape and sticky tape and like a handful of core libraries maintained by open source contributors who live on a boat and won't answer emails. Like you're like, oh, this is the internet.
Wow. And I think like I began to be animated by questions of privacy and security pretty early because of that exposure and because it was the most interesting place, frankly, like it was where the fresh air of politics, you know, met the theory of the internet.
And, and so I, I had been a fan, I'd known Moxie for a number of years. And I was on the board. That's Moxie.
Moxie Marlinspike is the founder of Signal, co-author of the Signal Protocol, and really carried Signal as a project on his back, putting huge amounts of time and energy into it to do what is almost impossible, which is create this virtuous, nonprofit, open, high availability communications tech that is not participating in surveillance, that is not participating in targeting or algorithmic tuning or, you know, content curation or any of the other things that we've seen go real, real south with the others. Right.
Now, let's talk about Signal Messenger, which is the core product. For a while, you know, a lot of the big concerns around messaging apps were green versus blue bubble barrier or stupid things like that.
But there's more important things. What does it do differently and what doesn't it do so people can understand the difference between all the different messaging apps? Signal's big difference is that we are truly private.
We collect as close to no data as possible, and we develop open source so that our claims, our code, our privacy guarantees don't have to be taken on trust. You can actually verify them.
And because of the reputation we built up in the community, because the Signal Protocol was a massive leap forward in applied cryptography and actually was the thing that enabled large-scale private

messaging on mobile devices, we get a lot of scrutiny. And that promise of many eyes making better, more secure code has really delivered for us.
Right. People know what you're doing.
And so it's like a messaging app like any other in terms you could message back and forth. But what does it do differently and what doesn't it do? Yeah, so it protects your privacy, let's say up and down the stack.
We use our own gold standard cryptography that actually others license from us. WhatsApp licenses it, Google licenses it.
It is the gold standard. Yes, this is end-to-end encryption.
End-to-end encryption. And we created the kind of the gold standard there.
And it protects what you say. So you and I are texting Kara, Signal doesn't know what we're saying on Signal.
Like you can put a gun to my head, I can't turn that over. But we go well beyond that too.
Because of course, metadata, this fancy little word for data about you and your friends is also incredibly revealing. So we don't collect data about your contacts, your groups, your profile photo, when you text someone, who's texting whom.
So all of that required research and actually design, like building new things, solving problems, because the ecosystem we're in has been built with the assumption everyone wants to collect all the data all the time and keep it to do whatever. So we actually have to go in and be like, well, we can't use that common library for development.
Because if we use that, it would collect data. Let's give a concrete example.
When we added GIF search to Signal, because everyone likes a reaction GIF, right? Or at least boomers do. And we couldn't just use the Giphy library.
That would have taken a couple hours. We would have tested it in.
The engineers go home and go to sleep. No, we had to rewrite things from scratch.
This was actually a significant architectural change. It took a number of months.
And when we implemented it, it meant that we weren't giving any data to Giphy. They have no idea, whatever.
So when they're acquired by Meta, we don't have to worry. You don't have to worry, right, exactly.
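To make that Giphy example concrete for readers, here is a minimal sketch of the general idea behind that design choice: the app talks only to a proxy, and the GIF provider sees a bare search term coming from the proxy's address, never the user's IP, cookies, or identity. This is an illustration under assumptions, not Signal's actual code; the upstream URL is a hypothetical placeholder, and Signal's real design goes further by also keeping the search term hidden from its own servers.

```python
# Minimal sketch (not Signal's implementation): a privacy proxy for GIF search.
# Clients talk only to this proxy, so the upstream GIF service sees the proxy's
# address and a bare search term, never the user's IP, cookies, or identity.
# UPSTREAM_SEARCH_URL is a hypothetical placeholder endpoint.

import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM_SEARCH_URL = "https://gif-service.example/v1/search"  # placeholder, not a real API


class GifProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pull only the search term out of the client's request.
        query = urllib.parse.urlparse(self.path).query
        term = urllib.parse.parse_qs(query).get("q", [""])[0]

        # Forward just that term upstream; deliberately send no client headers,
        # cookies, or identifiers.
        upstream = f"{UPSTREAM_SEARCH_URL}?q={urllib.parse.quote(term)}"
        request = urllib.request.Request(upstream, headers={"User-Agent": "gif-proxy"})
        with urllib.request.urlopen(request) as response:
            body = response.read()

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Keep no logs of who searched for what: not collecting is the point.
        pass


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), GifProxyHandler).serve_forever()
```

Once requests are routed this way, the third party is never in a position to learn who the users are, which is why an acquisition or policy change at that third party stops mattering.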
So this is end-to-end encryption. Who's using it now, and where are you seeing growth right now? Yeah, I mean, our user growth has been steady, and I think, again, this just, you know, the bloom is off the big tech rose, right? People do not want to be surveilled.
There is a giant demand for privacy. And so, you know, Signal is global core infrastructure.
We're used by journalists everywhere, human rights workers. We are the core infrastructure in Ukraine for safe communications, for sensitive information, you know, government, military.
We are, you know, core communications in governments across the world, right? Just for, you know, we don't want a data breach to expose sensitive information. I think, you know, every time there is a, what we call a big tech screw up or a massive data breach, we see spikes in signal growth.
We also see spikes when there's political volatility. So you see, when there was the uprising in Iran around women's rights, we saw a massive spike in use, and then we saw the government or the ISP try to block it, and then we stood up proxies to try to get people access. Anyway, so it's really, you know, it's when people suddenly recognize the power that personal sensitive information can give those who might want to oppress or harm or, you know, otherwise hurt their lives. Or just sell you things.
Exactly. Or sell you things and, you know, and then like decide what news you get, decide if you get an ad for a good rate on a home loan or a bad rate, right?

These things that are subtle but also really meaningful.

WhatsApp is peddling privacy in the form of encryption as a selling point but still collects metadata.

Talk about this business model for a huge slice of the tech sector at this point, data collection, surveillance capitalism, profits, et cetera? I mean, I think of this as really the original sin, right? Like the Clinton administration

knew there were privacy concerns with the internet business model. They had reports from the NTIA,

they had advocacy from human and civil rights organizations. This was a debate that played out over the 90s as the rules of the road for the internet were being established.
And this is why I get itchy when people are like, they could never have known. And I'm like, literally, there were reports before any of this were done, laying out exactly how this would go down.
And it went down that way and slightly worse. This wasn't a matter of guileless

innocence leading to innovation that got out of control. This was a business model choice where,

you know, the Clinton administration said absolutely no restrictions on commercial

surveillance. And they also endorsed advertising as the business model of this internet.
And like, of course, what is advertising if not know your customer? We got to get more and more data, right? So it's an incentive. It's like a flywheel incentive for surveillance. We want as much data about our customers as possible so we can target them with ads.
And what does that incentivize? That incentivizes huge clusters of compute and data storage so that you can keep this data that incentivizes things like MapReduce that is sort of the precursor to a lot of the AI models now. That incentivizes, you know, social media that calibrates for virality and sort of like upset and cortisol and like, you know, it's like amygdala activation, basically.
I always say engagement equals engagement. Yeah, there you go.
Exactly. And why does it equal engagement? Not because like we like engagement, but because that means you see more ads.
You click on more ads. You contribute more data.
The cycle continues. And this business model is super profitable.
So that's the norm. So let's talk about finances.
You said there isn't a business model for privacy on the internet. Now, Signal is not just opposed to surveillance capitalists.
As we said, it's a nonprofit funded by donations, which include a big chunk from WhatsApp co-founder Brian Acton, who is also a co-founder and board member at Signal.
You don't take investments, you don't have advertising, the app is free, but you still need money to pay your engineers and keep your servers running. Talk about how you do that.
Yeah. Well, our costs are about $50 million a year.
And every time I say that, I get a couple tech founders, a couple tech execs come up to me and say like, congratulations on keeping it lean. Right? So we're, you know, we're doing really well, but what we're doing is big, and requires resources because tech is capital intensive.
So right now we are funded by donations. That's our nonprofit status.
And again, as we just sort of touched on, that nonprofit status is not a nice to have. It's not like, oh, we like, you know, charitable giving.
No, it's a prophylactic against the pressures of a business model that are opposed to our mission, which is private rights preserving communication. so we are looking at different models right now for how we grow this.
How do we sustain signal? And how do we make sure that signal isn't just a lonely pine tree growing in a desert, right? We need an ecosystem around us. We can't be just the, you know, the sole example of one that got away from that business model.
And I think, you know, things like how do we set up endowments that can sustain signal long-term? How do we think about, you know, tandem structures or hybrid structures where things that would otherwise be polluted and toxified by exposure to a toxic business model are kept cordoned off? You know, there's some vision in there that we could inject, but the flat fact is that's the cost. Yeah, and there's nothing you want from your users except use, right? It's sort of a free, it's like a free park or something like that.
So protecting privacy is also a moving target. There are new systems on the horizon, quantum computing comes to mind, which require a complete overhaul of the encryption systems you depend on.

You're already preparing for Q-Day, as the Wall Street Journal recently called it. Very dramatic over at the Wall Street Journal.
But explain what Q-Day is and what you've been doing to deal with that. Some people have a vague knowledge of quantum computing, but it can unencrypt everything very quickly, basically.
Yeah, it's very, very powerful computing that, you know, basically can factor large primes, which is what we depend on in cryptography very quickly, right? And so this would break the premise of, you know, kind of unbreakable math is the guarantor of, you know, current crypto systems, and a future in which we have sufficiently powerful quantum computing, which I guess is what Q Day is, although I would have thought it was like a QAnon thing. Yeah, Q is a letter we have to stop using.
Yeah, I'm like, oh, cool. X and Q.
Yeah, X and Q. I know, we're reducing our literacy as we speak.
But, you know, quantum computing is developing, and there's no clear answer to when we will have a quantum computer that can actually do that. But it's catastrophic enough that we can't rest on hope or postpone it.
So Signal was the first private messenger, the first messenger to implement post-quantum resistant encryption for our Signal Protocol. And the resistance we added protects against the kind of attack we can be worried about now, which is called harvest now, decrypt later.
And that just means you collect all the encrypted data. It's garbled bits.
It means nothing, but you save it and you save it and you save it. And at a time when these sufficiently powerful quantum computers exist, you then apply them to decrypt it.
The Harvest Now thing is really interesting for people who don't understand. It's like stealing all the safes and putting them in a room, and then someday you'll be able to figure out how to open them, essentially.
That's a perfect— Yeah. Yeah.
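To make the "harvest now, decrypt later" protection concrete: Signal's published approach, PQXDH, mixes the classical key agreement with a post-quantum key encapsulation, so someone recording traffic today would have to break both parts to read it later. The sketch below only illustrates that hybrid idea; the post-quantum secret is a labeled random-bytes stand-in (common Python crypto libraries don't ship a post-quantum KEM), and none of this is Signal's actual implementation.

```python
# Minimal sketch of hybrid (classical + post-quantum) key agreement.
# The session key is derived from BOTH secrets, so recording ciphertext now
# and breaking the classical part later is not enough on its own.
# Illustration only; not Signal's PQXDH code.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical part: an X25519 Diffie-Hellman exchange between two parties.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()
classical_secret = alice_private.exchange(bob_private.public_key())

# Post-quantum part: in a real protocol this comes from a post-quantum KEM
# such as ML-KEM (Kyber); random bytes stand in here because mainstream
# Python libraries don't provide one.
post_quantum_secret_stand_in = os.urandom(32)

# Combine both secrets so compromising either one alone is insufficient.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-session-key-sketch",
).derive(classical_secret + post_quantum_secret_stand_in)

print(session_key.hex())
```

In other words, the safes-in-a-room analogy above still holds, but each safe now has two different locks, and a future quantum computer only picks one of them.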
So one of the things, obviously, is reputation. So in 2022, federal investigators said they had gotten access to Signal messages that helped them charge leaders of the Oath Keepers in the January 6th plot.
It wasn't clear how they got those, and, I'm sorry to say this because I think he's the biggest information sieve on the planet, Elon Musk questioned the app on X, something about known vulnerabilities not being addressed.
Any idea what he meant? I mean, I'm not going to ignore everything that imbecile says, but what kind of impact do reports impose? I know, I know, large sigh. Yeah.
I mean, I don't know what he was talking about. I think, you know, getting to the Oath Keeper point, look, the way that the cops usually get data is someone snitches, someone has it on their phone, they get access to the phone.
You know, it's Occam's razor, and it's not that complicated. We're dealing with people, we're dealing with, you know, conspiracies, which never really work out that well.
But it is a good hook if you want to scare people about security claims, particularly because, you know, right, like 99.999% of people can't themselves validate these claims, which makes this kind of weaponized information environment really dangerous and, you know, really perturbing to us, which is why we're so careful about this. When Elon tweeted that, I don't, you know, there, what I can say for sure, and this is what I posted on Twitter, we have no credible reports of a known vulnerability in Signal.
We have a security mailing list that we monitor assiduously, where we haven't heard anything, there are no open critical vulnerabilities that we have heard of. So, you know, it's kind of, you put us in a position of proving a negative.
And so, you know, so it was this off the cuff tweet, it caused a lot of confusion, I was sort of dealing with that for a number of days, you know, not because it was serious, but because it seriously freaked people out. We had human rights groups.
We had people calling us just saying, like, look, this is life or death for us, right? If Signal is broken, you know, we're going to lose people. We need to know for sure.
And what I can say is we have no evidence that this is true. I will say since then, Elon has tweeted screenshots of messages on his phone that are signal screenshots.

So, you know, you can put that together.

Of course he uses. Yeah, everyone uses signal.

He's got a lot of secrets to keep, I think.

Well, I mean, anyone who has anything they're dealing with that is confidential or has any stakes generally ends up using signal.

Yeah.

We'll be back in a minute. Hi, this is Debbie, your Blinds.com design consultant.
Oh, wow. A real person.

Yep. I am here to help you with everything from selecting the perfect window treatments to...
Well, I've got a complicated project.

Oh, not a problem. I can even schedule a professional measure and install.

We can also send you samples fast and free.

Hmm. I just might have to do more.

Oh, okay.

So the first room we're looking at is for guests.

Shop Blinds.com now and save up to 45% site-wide. Blinds.com.
Rules and restrictions may apply. Last week was our first playoff game, and my plaque psoriasis was so itchy under all my gear.
Sometimes just thinking about scratching could take me out of the moment. And then my doctor told me I could get clearer skin with a pill called Otezla.
Otezla, apremilast, is a prescription medicine used to treat adult patients with plaque psoriasis for whom phototherapy or systemic therapy is appropriate. Otezla can help you get clearer skin after just four months.
Okay, ready for the next game. Talking to my doctor about a pill was a total game changer.
Don't use Otezla if you're allergic to it. Get medical help right away if you have trouble breathing or swallowing, swelling of the face, lips, tongue, throat, or arms.
Severe diarrhea,

nausea, or vomiting, depression, suicidal thoughts, or weight loss can happen. Tell your doctor if any

of these occur and if you have a history of depression or suicidal thoughts.

Live in the moment. Ask your doctor about Otezla.
Call 1-844-4OTEZLA or visit otezla.com for prescribing info, info about cost, and more. Your snacking routine can get a little dull.
Time for an Oikos remix or light and fit remix. Like a crunchy storm of sea salt, praline pretzels, dark chocolate, and butter toffee.
Showering down into a smooth, creamy yogurt. Enjoy six remix varieties, three Epic Complete Protein Oikos remix options, or three craveable light and fit remix options.
See remixyogurt.com. It's fair to say you faced a lot of headwinds in this battle to maintain tough standards on encryption, and everybody does remember Apple's battle with James Comey, of all people, if people don't remember it was James Comey.
But last year, the UK passed the UK Online Safety Act. The EU has been debating a child sexual abuse regulation known as the chat control bill.
Basically, they're all touted as efforts to protect users, especially children online, which seems like a good thing, right? But you and other security actors have been pushing back. Talk about these two bills, what they do and would do to your model.
Yeah, I'll just kind of characterize them in one brush because, like, ultimately they are aiming for the same thing. TLDR, in the name of protecting children from, you know, abuse, harmful content, they would mandate or would give a body or agency the power to mandate scanning everyone's private messages and comparing them against some database of prohibited speech or content.
And this isn't possible while preserving end-to-end encryption. Like, that's the mathematical truth.
A backdoor, you implement a backdoor, you have broken the mathematical guarantee of encryption. And we have only to point to the fact that the Wall Street Journal just reported that, you know, apparently the Chinese government, no surprise to anyone, has been hacking interception points, so backdoors in US systems, right? So this is not a game.
This is not a hypothetical. This isn't the technical community raising large hyperbolic flags.
No, this is the reality. And any backdoor in a network compromises the whole network.
So you backdoor the, you mandate scanning of signal in the UK. Of course, communications cross borders and jurisdictions all the time.
And then that means Amnesty International, housed in the UK, when they're talking to human rights defenders in Uganda, where being gay is punishable by death, working to get people's information, to get asylum cases going, to get people out and to safety, that conversation is then compromised. So I just, in fact, spoke to Hillary Clinton, and she was talking about how they use Signal and WhatsApp to help women get out of Afghanistan after the U.S.
military withdrawal, and they needed that secrecy to protect them. And you noticed you see a surge in downloads when conflicts arise, but these back doors, everybody gets in then.

Everyone's able to get in.

Yeah, a back door is not something you can control.

Once there's a door, anyone can walk through it, right?

So this is the magical thinking that we talked a lot about when we were pushing back on this bill, right?

You want a golden key, I think, as James Comey said with the Apple showdown.

You want a magical wand. You want a secret portal that only you have the spell to open.
Well, that doesn't exist. That's a fairy tale.
What does exist is a critical vulnerability in the only core systems we have to guarantee confidentiality and cybersecurity of communications. And if you undermine those, if you open that door for everyone, it means that the technology we have for that type of security no longer exists, no longer matters, right? So it's serious.
How much does the recent arrest of Telegram founder and CEO Pavel Durov impact the debate? Clearly, Telegram's known as a cesspool. You know, my son was like, it's for sex and drugs, mom, just so you know.
It's often named in one breath with Signal because of the way the company talks about privacy and encryption in the larger sense. But for those who don't remember, Durov was arrested in connection with distributing child sex abuse material and drugs, money laundering, working with organized crime.
He's accused of failing to allow authorized law enforcement interception. Basically, he didn't give investigators access on the app.
You're not a social media company, but talk about the difference and how that has affected you all. Because you're adjacent to him, of course.
Yeah. I mean, I think the discourse is exactly the right way to frame this.
The impact of the arrests, the talk and the kind of information, hyperbole, the way this sort of became a martyr story, and the lack of really concrete information, which never helps, meant that there was a lot of questions, right? And I remember being like, wait, what happened? What are the charges or sorting through the French legal system? But ultimately, you know, it doesn't affect us, right? We're not a social media app. You can't go viral on Signal.
You can't broadcast to millions of people. It doesn't have sort of encounter features.
It's a very different thing. And for people who understand, these are groups that they create on WhatsApp or on Pavloduro's platform, Telegram.
And millions and millions of people. And then they have a kind of like what's happening near me.
So you can feature that with geolocation. So there's all sorts of things happening there that mean that the legal and regulatory thresholds and duties they face are wildly different from Signal.
They're a social media company. They broadcast things to millions of people.
They are constitutively not private and not very secure. Signal has designed ourselves so we are an interpersonal communications app.
We intentionally do not add channels or features where you can go viral. We intentionally steer clear of those kind of duties because you cannot do those duties.
You can't meet those obligations while being robust in our privacy and security guarantees. That's just full stop.
Right. But there is that idea, the concern that total encryption doesn't help the good guys, it aids bad actors.
I think that's the bigger worry about CSAM. This is child sexual abuse material.
Every week we have a question from an outside expert. Let's listen to this one, and I'd love you to answer it.
Hi, Meredith. This is Renee DiResta, author of Invisible Rulers and previously the technical research manager at the Stanford Internet Observatory.
My question for you is, as AI makes it easier to generate synthetic, non-consensual, intimate imagery and CSAM, how specifically should platforms and governments

respond to the production and dissemination of this harmful content? Is it possible to implement

effective measures against these abuses without infringing on privacy and free expression?

So what are your thoughts on this? One of the things that's important is there are

significant and justifiable concerns in this area, right? In certain areas, drugs,

child sexual abuse, etc. Then how do you protect against it? Yeah, I mean, absolutely.
This is a very serious area, right? And that's one of the reasons it has been so, let's say effective in floating a lot of these anti-encryption proposals because it takes the air out of the room, frankly. Like a lot of people have experience with this, sadly, and it is extremely difficult and extremely emotionally engaging.
So, you know, I think we need to take the issue itself seriously first, right? How do we protect children full stop? And then begin to look at what are the slate of options that we have? Where is the problem coming from? You know, are we funding social services? Are we ensuring that there are infrastructures in place where when a child reports that something bad is happening, that, you know, a priest or a teacher or their uncle are involved in something horrifying, how do we take care of that child and protect them? Why is Prince Andrew walking around in a country that is fixated on encryption as the culprit here, right? And this is not saying that platforms don't have responsibility here, but it is saying that I think when you look at the facts here, when you look at the number of people in different countries' law enforcements who are actually dedicated to reviewing this material and tracing it down, I can't say those numbers publicly because they were given to me privately, but we're talking about tens of people. In one case, we're talking about two people total.
So a lot of times we're talking about, the issue is a haystack full of needles and not enough people to categorize the needles. We're talking about resources there.
We don't have basic trust and safety tooling available to startups. So there are many places to invest in actually tackling this, both online, right? You know, go after payment processors.
A lot of what's happening is sort of sextortion, and that's a node there. You know, there are reporting infrastructures for public platforms and social media.
There are all sorts of research on this. Attack the business model, right? All of those are options on the table.
To me, what leads to my distrust of a lot of the remedy space is that with all of that being obvious, with a lack of investment in social services, with the culture we have where children are often not believed here, still encryption is manufactured as the cause of this problem, when there's very, very little evidence that private communication has any causal role in this issue. Right.
But of course, I think Durov flaunted not helping. Yeah, but he also doesn't run a private communications app, right? Like, none of that was private.
It was just a flex of like, yeah, we're not going to help, right? So that's a social media platform just saying no. And I think there was a, you know, how do they say it? A fuck around and find out moment.
Yeah, for him. Right? But that's very different from private communications and encryption.
And it's weird how there's a sort of a transitive property by which encryption becomes the, you know, the problem to be solved in every case, even when the evidence doesn't support that. Well, it's sort of a brute.
It's like throwing a hammer at a piano to make music or something like that. So in the U.S., we're seeing a lot of states passing bills requiring age verification.
One of the solutions is age verification and restricting social media apps and access for minors. Florida, South Dakota, Oklahoma, to name a few.
There's an argument that a lot of these bills are being used as a smokescreen. You've called it surveillance wine in accountability bottles.
Talk a little bit about these ideas of restricting young people and then what you meant by surveillance wine in accountability bottles. Yeah.
Well, I mean, look, I don't think restricting young people ever works as a young person who always figured it out faster than my parents. This is a paradigm where I have very low hopes and I have even lower hopes when I see the folks at the tip of the spear of this movement, which are frankly often large biometric and age verification companies like Yoti who are selling the remedy, right? So if we pass a bill that requires age verification to get into websites, none of these platforms are going to do that themselves.
They're going to contract with a vendor or a third party who will bolt on age verification software and run that for them, because that's a liability shield and you don't want to build what you can lease or borrow. And then we get into a situation where age verification is a mass surveillance regime that is similar to tracking people's content and habits online, right? You can't know that someone's a child without knowing who is also an adult, to be clear about that.
And so we begin to legislate a tracking and monitoring system that, one, won't really work based on all the evidence to date, and two, is attacking the problem at the level of restriction, not at the level of platform business models, right? And this is where we get into accountability wine, or surveillance wine in accountability bottles, which is really, like, you and I lived through this. We recognize that there is something really wrong with the big tech business model, right? The accountability is needed.
And we saw in the mid 2010s that there was a real call for this. And what came out of that were some good ideas.
And then like this, I think some bad ideas wrapped in accountability, right? So instead of, you know, going after the surveillance supported, you know, advertising social media business model, cutting off the streams of data, perhaps, you know, implementing a very strong federal privacy law in the US that would undermine that model, take a bunch of money off the table, but clean up a lot of the global harms, we're looking at bolting more surveillance and monitoring onto the side of it. So it's giving the government and NGOs and whoever else a piece of that monitoring instead of reducing the monitoring itself.
And so I think it's, you know, how do we tune these regulations and how do we, you know, how do we find the political boldness to actually go up against these business models and those who are profiting from them instead of sort of, you know, try to make our name as someone who did go up against them, but actually propose remedies that don't go up against them. And I guess that's the age-old question of, you know, how do we find real, bold political leaders? Are you worried about the impact of the outcome of these laws? Say, here in the United States, we have this election, a potential autocrat who would love surveillance, although I don't think you'd understand it at this point.
I am. I am.
I, you know, I would say I am, I'm a bit of a political exile and that I'm concerned with centralized power wherever it exists, whether that's in large tech corporations or in governments. I don't think handing more tools to governments and then imagining we live in a counterfactual world in which those governments will always be run by us and benevolent adults is correct.
And I think a lot of people sort of, you know,

who've been pushing for accountability often live in that world. And I also am, you know, frankly, I think a lot about what the collateral consequences could be of a very bad law in the US that affects the big tech companies that control the world's information and infrastructural resources, right? You have three companies based in the US with 70, 7-0% of the global cloud market.
You have five information platforms, social media platforms, four of which, the biggest four, are jurisdiction in the US, which at this moment control most of the world's information environment. So that's a lot of power to be homed in one jurisdiction, particularly given the kind of volatility we're seeing and the way that just people in general, as we move through generation after generation who are kind of native to tech and kind of understand these things are beginning to recognize just how much power and control is housed in these companies.
I think that recognition is seeping into the bedrock of popular consciousness.

And, you know, I want to reduce this toxic business model.

I want to create an ecosystem that is way less concentrated before someone with malicious intent gets their hands on that throttle.

We'll be back in a minute. Campfire season's back, and that means s'mores.

But when you're at home treating yourself, take them over ice with Dunkin's S'mores Cold Brew Concentrate.

And suddenly you're always treating yourself.

The home with Dunkin' is where you want to be.

Click or tap the banner to shop now.

I'm Claire Parker. I'm Ashley Hamilton.
And this is Celebrity Memoir Book Club. And we're thinking like monks this week.
If you've ever thought Kevin O'Leary, Jeff Bezos, the founder of Headspace, those are men that are very, very monk-like. Oh boy, does Jay Shetty have the book for you.
He's written a book that tells you how to use your monk

mind to become more

like a billionaire monk. Pulling

from three highly disputed years

at an ashram, he's telling you stories

of like when he was in 8th grade and got a bad

grade on a test, and how that was scary,

and how now he knows Will Smith.

And if you want to reach your higher self, the billionaire

version of you, think like a monk. Or listen to this week's episode of Celebrity Memoir Book Club.
Out now. Last time we spoke, AI was all we talked about.
Things have changed dramatically, but you were warning back then about the surveillance economy and power consolidation. Cassandra, I would say, I think you inspired me a lot to start really talking about it and pointing it out over and over and over again.
So how are you feeling about AI now and how it's related to this new AI economy, this idea of surveillance capitalism? Because these systems are going to get, you're saying we should stop it now. Is it even possible, given the consolidation of power in tech? Yeah.
Well, I am, you know, look, I am a consummate optimist. I wouldn't be able to get up and do this if I didn't believe that change was always possible.
I think we are, we're in a frothy, hypey moment. And I do see the AI market wobbling.
I see the definition of what AI is sort of wafty right now. And I see a real struggle by the incumbents to try to keep that market going and maintain their advantage.
And so I can explain a little bit why I see that, right? And maybe we'll just start with what AI is, this deep learning paradigm, which is, you know, all the transformer models, chat GPT, all of this is still deep learning, right? We haven't moved into some sort of escape velocity for a new form. It's actually pretty old.
The algorithms are from the late 1980s. Now there's been sort of moves that improve them, but nonetheless, it's pretty old.
What is new is the massive amounts of data available. And this is data that's collected, created, that people are enticed to deposit by these large surveillance platforms.
And the massive amounts of computational infrastructure, which was basically created in order to sort of, you know, support this business model. And then in the early 2010s, they were like, oh, you know what, these old algorithms, machine learning algorithms, but we're going to call them AI because it's flashier, do new and interesting things, improve their performance when we match them with our massive amounts of data, when we match them with the computational power we have.
Yeah. So right now, for people that don't know, most AI technology is held or financed by one of the big names, Microsoft, Google, Amazon, Apple, or Meta, X less so.
Most of the AI chips, the GPUs are controlled by NVIDIA, which you've called a chip monopoly. So what you're essentially saying is they've assembled all the parts, right? That's really what's happened.
They've got the data they didn't have before. They've got the compute power, and it's all in the hands of people that can afford it.
There's also the idea they've been pushing for a while that bigger is better.

You know, they're always like, we need to be, I've heard it from Mark, I've heard it from all of them is we need to be this big in order to fight China. That's usually the boogeyman they use,

which is a boogeyman, let's be clear. So we spoke with Mustafa Suleiman a few weeks ago,

who said that even $1.3 billion from Microsoft wasn't enough to make Inflection AI successful. So he took the whole ship to his funders. We're seeing valuations in AI that are insane, $157 billion for a startup, OpenAI, and the money coming from just a few sources.
You said the market is wobbly, but it doesn't feel wobbly. It feels like it's the consolidation of power again.
So I'd love you to talk about what that means. You know, I think what is wobbly here is that there isn't a clear market fit for this, right? Yeah.
We have this round-trip investment. So it's, you know, the big companies, you know, I think it's 70% of Series A in AI startups were coming from the big infrastructure providers, and it was often in the form of credits to use their infrastructure.

So we're talking about something really muddy there, but it's not an organic startup ecosystem, right? And the path to market. So if you want to monetize this, it still goes through these companies.
You're either selling access to a cloud API by Azure, Google Cloud, whatever, or you are monetizing it by integrating it into your massive surveillance platform, you know, a la meta, and, you know, kind of using it to sell ads. Let me rewrite this email for you.
Exactly. Which, you know, no, thank you.
The email was one word and it was fine. That's what I always say.
And so I think it's, you know, I think there still hasn't, if we're, you know, we're talking about billions, hundreds of, you know, trillions of dollars. We're talking about capital to the moon.
We're talking about the capital no one else can reach. And then we have like a bot that messes up our email or Target spending a huge amount of money for their company to develop this chat bot to help employees that was immediately roasted by everyone because it was so wrong.
It was so bad. It was so annoying.
Or, you know, at Upwork Research just published a survey that said 77% of the people from, you know, executives through rank-and-file employees who they interviewed said AI made their work messier, not better, right? So, like, when the rubber meets the road on the actual business model, we're still struggling to figure out like, what does this do that's worth hundreds of billions of dollars? Let me push back on that. You could say, I heard that at the early internet, what do I need this for? You couldn't have imagined an Uber when apps happened.
You just couldn't have, like nobody could. And eventually they're making money now, like a ton of money, but still a lot of these companies you wouldn't have imagined then.
So I think we're sort of in the stupid assistant phase, but it's not going to stay there necessarily. Maybe you think differently.
I don't. I think it will improve and become better and show what it's used for.
I think we're going to see a vast culling of the market because we simply don't need many, many big, big bloated models. That are the same, LLMs.
That are the same and that, you know, are very resource intensive. I think we also need to be super careful about how we're measuring better.
And this gets into benchmarking and evaluation. I just published a paper with a couple of co-authors looking at this bigger is better paradigm. And we actually, you know, you see that smaller models, more purpose-built, with better curated domain-specific data, often perform better in real life.
Radiologists or something like that or certain cancer cells. Sure, like a lot of health applications.
So I think it's, you know, again, I'm not saying throw the baby out with the bathwater, that data isn't, you know, isn't useful for anything. But I think this particular type of, like, massively bloated model, you know, it's not going to stop hallucinating.
We are bolting kind of relational databases onto the side of these, you know, probabilistic systems, trying to kind of stabilize them so that they're not as embarrassingly wrong on main like they are in search and Google right now. But nonetheless, that's not a solution to the core problem, that they don't have information anchored in facts.
So you're saying not useful, like the same candy bar with different wrappers, the same shitty candy bar with different wrappers. What do you imagine would be useful? These smaller, as you noted in your paper, these smaller databases where you, these LLMs that are really specifically useful, for example.
I mean, that's a bit of a tricky question because it's hard to answer when the claims being made around AI's utility by sort of the marketing are that it's useful for everything, right?

I think we need to really like break it down.

Like what would be useful in education, right?

And this is where I'll point to that some of the AI now work looking at some of the industrial policy.

It's like, is AI even the thing that's useful there?

Or are the climate costs, are the opportunity costs, are there, you know, do we need school

lunches or a chatbot, right?

And I want the freedom to answer that question before I have to sort of take AI as a given

and be like, you know, how to make it more useful.

Because there are places where data analysis is super, super important and useful, right?

But this is not a general everything tool.

This is a product being produced by big tech, which is sort of making more use of the derivatives of this already toxic business model, creating more data that is often faulty or, you know, harmful, but nonetheless powerful affects our lives. And that is being sold as kind of a skeleton key for everything when it isn't actually proven as useful.

Just more for surveillance economy.

As you said, I spoke with historian and philosopher Yuval Harari recently, as I said.

That's his nightmare scenario.

He's calling for more regulation in AI and everything to slow it down.

Now, you were a senior advisor on AI to the FTC.

What needs to happen from your perspective?

Well, the market could slow down or the business model could slow down. I think things like the CrowdStrike outage, which is when Microsoft effectively cut corners on quality assurance, on testing, on monitoring for a very, very critical update that affected core infrastructure like healthcare systems, traffic lights, banks.
And cost, cost money. Yeah, it cost money for them to do this right.
So they didn't do it right. And global infrastructure was offline for days.
So the evidence that this sort of concentrated business model is bad is it's no longer deniable. So I do think that combined with the danger of this sort of concentration in one jurisdiction, the concerns about sovereignty that you're seeing across the globe, will, I believe, impel real investment and real focus on building healthier alternatives, you know, like Signal as a sort of model for the thing we need.
I also think the climate costs are just undeniable, right? Well, you know, they're buying nuclear power for that. Three Mile Island is reopening.
I mean, did no one Google Three Mile Island or did they hallucinate? Well, to be fair, fossil fuels are worse than nuclear, just by stats. Fossil fuels are terrible.
But these companies will claim that they are carbon neutral, but you scratch the surface there. No, they're not.
It's nonsense. And you see that, you know, they're buying, you know, kind of carbon neutrality certificates and using weird accounting.
Right, right. Yeah.
But let me finish up by asking, like, because the biggest company here, obviously, is getting the most money and everything else is OpenAI, which is sort of the quarterback right now of this thing. Yeah, Microsoft.
Microsoft, as we call them. Yes, yes, that's right.
Well, they don't like that. But you've written about how the term OpenAI was always misleading.
Obviously, they're ditching their nonprofit status. I had talked about this several times to many people who were sort of, they were really, they really did believe in the mission, I have to say.
And I kept saying, there's too much money now. I'm sorry.
I don't know who you are, but you're naive at best to

think that this amount of money is going to keep this a nonprofit. You obviously probably took a salary hit when you joined Signal.
It's a real nonprofit. Talk about, is it possible to unwind that mindset? You just said we all need organizations, many more like Signal, But this is an enormous amount of money.

How do you disassemble the mindset there? And how do you get it into a more mindset like yours? Because I just don't see any of this amount of money going any other direction. Quickly, one aside, Signal pays very well.
So if you're looking for a job, check out our jobs. We do try to retain and attract the best talent.
And I think in the OpenAI case, it was never about just the models, right? You need massive compute. It can cost a billion dollars for a training run.
You need massive amounts of data. And so you're going to have to figure out either convince a big company to give that to you, convince someone with billions and billions of dollars to burn it on that.
And once they burned it on that, what do you do with a model? Like, I'm going to let that hang in the air. What do you do with a model? You can't use it at scale again without a path through to market that goes through these companies.

So this is when I talk about AI being a product of big tech, like that's very, very literal,

right? They have the data, they have the compute, they have the access to market, either, you know,

Meta is applying it within Facebook, you know, for services, this email prompt thing, or OpenAI is advertising it as a, like, dongle, you know, for Azure customers who sign up to that.
Or you have something like Mistral, which is a national champion in France, building open source large models. But how do they actually get to market? How do they make good to their investors and, you know, their business plans? They license it to Microsoft, who then licenses out through their cloud.
So when I talk about 70% of the market in cloud being controlled by three companies, we also have to fold that into the AI conversation and recognize that AI has not introduced any new software libraries or new hardware. This is all stuff we've had in the past that we know about that exists.
This is not novel. What is novel is this massive amount of data and the way that it's being used to sort of train these statistical models that can then be applied in one way or another.
So what happens then from your perspective? What happens? I mean, obviously, it was never going to be a nonprofit after the money raising happened. And they need the money, FYI.
They can't not have the money to grow. And they've got everybody on their tail too, at the same time.
What is their fate? What happens to a company like that? I mean, it seems like there's a lot of like interpersonal things happening. It seems like, you know, it seems like court drama as well.
So predicting. You were around at Google.
I was there even before you. It was.
It was. Come on.
It's true. Touché.
Touché, Kara. Amazon.
Come on. Yeah.
The hot mess of Twitter. Come on.
Oh, God. Yeah.
So hot mess aside. Yeah.
I would say they just kind of slowly become an arm of Microsoft, you know, the same way, you know, maybe there's an Alphabet-ization of Microsoft, the same way Google kind of spun out, you know, different entities as an antitrust prophylactic. You know, but I don't, you know, again, there was never a model to be a nonprofit long term, given their vision, in my view.
And I think what you've seen is just a series of whittling away at that until it no longer exists. So, you know, Microsoft, Amazon, Google, those are the three big clouds.
You know, that's the gravitational pull in which model creators are going to ultimately get sucked into. You know, NVIDIA maybe.
Is there a model for a non-profit in AI? I think so. I mean, I think so.
But again, we got to take this term AI back a little bit. It does not simply mean massive, massive, massive law of scale, you know, bigger is better AI.
There are many forms of AI that are smaller, more nimble, like more actually useful. And I think there is a, you know, there could be a model for AI research that is asking questions that are less useful to the big players, to this bigger is better paradigm, and perhaps more useful for smaller use, for use in things like, are we training models for things that aren't profitable, like environmental monitoring or civic something or another? I think there is a model there.
I think, again, though, what I'm seeing is a misunderstanding of that fact and a misunderstanding of just how capital-intensive this bigger is better AI is, that's having governments around the world who are anxious about sovereignty concerns and, like, want their own, basically, throwing money at AI without understanding that that's going back

into the same companies,

that's not going to ensure sovereignty.

So it's like, oh, great,

you have a $500 million European AI fund.

Well, let me break it to you slowly.

That's half a training run.

That's right, that's right.

So like, what are you doing?

You can't afford it, yeah.

You can't just regulate them.

Just regulate them the same way with privacy

and everything else.

So last question. We spoke five years ago, light years ago. It seems like a million years ago.
A trillion years. A trillion years ago.
Looking down the road, what do you think the next five years will be the most important things for your work, for signal, tech in general, if you had to prognosticate? Yeah. Well, what I am working on, what I'm kind of obsessed with right now, in addition to just, you know, building and sustaining Signal, which I love, is how do we find new models to sustain better tech, right? Like, once we've cleared the weeds of this toxic model, once we've prepared the ground, how do we grow things that are actually beneficial? How do we create a teaming ecosystem? How do we encourage open tech and democratic governance, which I think is a thing we don't talk about enough, frankly, but how do we have a part in deciding what tech is built, who it serves, how we assess it? Some of the scrutiny that Signal receives from its loving and sometimes belligerent community of security researchers and hackers is part of our strength, right? How do we expand that to people? How do we shift from a monolithic five platforms control our news to a much more heterogeneous ecosystem that's a little warmer, a little fuzzier, a little RSS feed ready, so to speak? I think those are problems that aren't new, but that I think there is a real new appetite to actually tackle because it's getting too obvious.
When you have Andreessen Horowitz and Y Combinator coming out and saying like, we're the champion of little tech, we know that the death knell has rung for big tech. And what we need to do is then like define what comes after.
Yeah, absolutely. All right, Meredith, thank you so much.
I love talking to you. I should talk to you more often.
I love talking to you, Kara. Every five years.
And I really appreciate it because I think people don't understand it. Please, everyone use Signal.
I use it all the time. Thank you.
It's free. It's not stealing my stuff.
And it's really, it's another moment where I'm like, where is the Signal AI thing? Well, I'm team Kara on that. Okay.
All right. On with Kara Swisher is produced by Christian Castro-Russell, Kateri Yoakum, Jolie Myers, and Megan Burney.
Special thanks to Sheena Ozaki, Kate Gallagher, and Kaylin Lynch. Our engineers are Rick Kwan and Fernando Arruda, and our theme music is by Trackademics.
If you're already following the show, you're not drinking the surveillance wine. By the way, it tastes terrible.
If not, back into the accountability bottle for you. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow.
Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us.