Signal’s Meredith Whittaker on Surveillance Capitalism, Text Privacy and AI
Questions? Comments? Email us at on@voxmedia.com or find Kara on Threads/Instagram @karaswisher
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Speaker 1 Support for On with Kara Swisher comes from Saks Fifth Avenue. Saks Fifth Avenue makes it easy to holiday your way, whether it's finding the right gift or the right outfit.
Speaker 1 Saks is where you can find everything from a lovely silk scarf from Saint Laurent for your mother or a chic leather jacket from Prada to complete your cold weather wardrobe.
Speaker 1 And if you don't know where to start, Saks.com is customized to your personal style so you can save time shopping and spend more time just enjoying the holidays.
Speaker 1 Make shopping fun and easy this season and get gifts and inspiration to suit your holiday style at Saks Fifth Avenue.
Speaker 3 Support for this show comes from 1Password. If you're an IT or security pro, managing devices, identities, and applications can feel overwhelming and risky.
Speaker 3 Trelica by 1Password helps conquer SaaS sprawl and shadow IT by discovering every app your team uses, managed or not. Take the first step to better security for your team.
Speaker 3 Learn more at 1password.com slash podcast offer. That's 1password.com slash podcast offer. All lowercase.
Speaker 1 Hi, everyone, from New York Magazine and the Vox Media Podcast Network. This is on with Kara Swisher, and I'm Kara Swisher.
Speaker 1 My guest today is Meredith Whittaker, president of the Signal Foundation, the nonprofit that runs the Signal messaging app, one that I use all the time because it's pretty much the only one I trust.
Speaker 1 Signal has been around for a decade and only has 70 to 100 million users a month, which is peanuts compared to WhatsApp, just under 3 billion. But Signal is not a lightweight in the tech world.
Speaker 1 Its Signal protocol is considered the gold standard in end-to-end encryption. In fact, it's the tech that WhatsApp and Facebook Messenger and Google use.
Speaker 1 The difference is that Signal users actually keep all their metadata locked up too, which is why it's become the messaging app of choice for people who are really concerned about privacy.
Speaker 1 Cybersecurity experts, NGO workers, indicted New York City Mayor Eric Adams, Drake, and me too, as I said.
Speaker 1 Meredith Whittaker has been leading the Signal Foundation since 2022, and she's kind of a perfect person to do the job.
Speaker 1 More than one reporter has called her Silicon Valley's gadfly, which is also my title, by the way.
Speaker 1 After starting at Google in 2006, Whittaker quickly moved up the ladder, founding Google's Open Research group and M-Lab. In 2017, while she was still at Google, she also co-founded the AI Now Institute at NYU with Kate Crawford, very early, to research the social implications of AI.
Speaker 1 So basically, Whittaker was a rising star, but then in 2018, after she helped organize walkouts to protest sexual misconduct at the company, citizen surveillance, and Google's military contracts, the company told her to cool it and she left.
Speaker 1 Whittaker has been a no-holds-barred advocate for data privacy in a world increasingly run by what people call the surveillance economy.
Speaker 1 I'm excited to talk to her again about that, the increasing consolidation of power and money in tech, especially in AI, where she sees this privacy fight going, and how a non-profit or even for-profit startups can survive.
Speaker 1 She's still a firebrand, and our expert question comes from another of those people, Renée DiResta. So, this should be good.
Speaker 1 One more thing: I've been nominated for the best host in the Signal Listeners Choice Awards, which has nothing to do with the Signal messaging app. We're not trying to butter up judges here.
Speaker 1 But if you agree that I am the best host, and you know you really should, you can follow the link in the show notes. Thank you so much. All right, let's get to it.
Speaker 1 Meredith, welcome. Thanks for being on.
Speaker 2 So happy to be here, Kara, and nice to see you again.
Speaker 1 I know. It's been a while since we did an interview. It was 2019. You were still at Google, building up the AI Now Institute on the side.
Now you're at the Signal Foundation, a nonprofit.
Speaker 1 Talk a little bit about that shift and what happened there and why you decided to do this.
Speaker 2 Yeah, it feels like centuries in tech and in my life.
Speaker 2 I mean, for me, this is all part of a single project, really. Like, how do we build technology that is actually beneficial, actually rights-preserving?
Speaker 2 How do we stop the bad stuff, start the good stuff? And of course, technology is vast and these companies are huge and there are many angles to take.
Speaker 2 So, you know, I was also at the FTC trying to help there, at AI Now trying to produce better research. And now, Signal to me is just the...
Speaker 2 It's the most gratifying dream job in a sense, because we are actually building and shipping high availability tech that is rejecting the pathology and the toxic business model that I've been pushing at, prodding, fighting for almost 20 years in my career.
Speaker 1 Talk about why you left Google. It had gotten, I don't think hostile is the right word, but not what you wanted, as I recall.
Speaker 2 Yeah, I mean, it was a combination of hostility from some in management who didn't like honest conversation about ethical business decisions, let's say.
Speaker 1 Oh, that. So, yeah, that old thing.
Speaker 1 Not evil, but you know,
Speaker 2 not evil, just you know, like in polite company, don't mention the evil.
Speaker 2 Yeah, so, you know, I had raised a lot of alarms. I'd participated in organizing against some really troubling business decisions, some of the, you know, kind of surveillance and defense contracting that was being done based on the bottom line, not based on ethics and a duty of care. And, you know, I kept that up for a number of years, but at some point I felt that I had hit the end of the road.
Speaker 2 The pressure from Google, and the retaliation that I and a lot of my colleagues were facing, meant that we were just spending more and more time strategizing how to keep our toehold, rather than building, say, the AI Now Institute, which has gone on to do incredible things, rather than thinking about positive projects in the world. And my adrenals needed a rest. I needed a change of pace.
And I'd also been at Google for over 13 years at that point.
Speaker 1 When you say they didn't want, there was pressure, explain that for people who don't understand within these companies.
Speaker 1 Google's always been a place where things are debated since the beginning, or it was, even if two people really did run the place or controlled the place.
Speaker 2 Yeah, look. I mean, I joined in 2006, which was a wild and free time at Google.
And they really did nurture a culture of open conversation, communication.
Speaker 2 You know, there was the sort of Usenet message board vibe on our internal mailing list. You just go on and on debating the nuances of any given point.
Speaker 2 So that was sort of the nervous system of Google when I jumped in there. And of course, there was a huge amount of money.
Speaker 2 So there was a lot of room to play around, to fail, to learn things, to start new initiatives.
Speaker 2 Now, it doesn't mean the decisions were made by consensus, right? But it means that was the environment that was nurtured and that attracted a lot of people. Yeah, on every topic.
Speaker 1 It wasn't just very serious ones.
Speaker 2 Oh, everyone.
Speaker 1 I remember a huge argument over kombucha there at one point.
Speaker 2 Yeah, yeah, yeah.
Speaker 1 Yelling at the founders about the shitty kombucha.
Speaker 2 There was a famous thread on goji berry pie that went on for like 3,000 posts, right? So, you know, I learned my poster skills pretty early.
Speaker 2 But you know, that muscle still remained.
Speaker 2 And a lot of people were there because they believed the rhetoric, right? Like, don't be evil is a bit trite, and it's certainly also far down the line, because there's a lot to the left of it; you can do a lot of bad things to the left of evil. And evil to whom? I mean, come on. But nonetheless, in a socially awkward discipline of kind of nerds who do computer science, pointing to don't be evil was often invoked just to say, like, yo, I'm uncomfortable, right?
Speaker 2 So there was this reflex in the company.
Speaker 2 And as they moved, you know, let's say they moved closer and closer to the red lines they were able to swear off in the beginning, because they were so far away, because the money was coming in.
Speaker 2 The problem of what happens when we have to choose between a billion dollars and hanging on to our ethics, right? That, you know, that seemed like a fantasy.
Speaker 2 And of course, they started hitting up against these red lines. In 2009, kind of requests from the Chinese government,
Speaker 2 you know, they held firm there. And then we went into the mid-2010s, and they're signing up to be defense contractors, you know, building AI drone targeting and surveillance software.
Speaker 1 So you had started the AI Now Institute on the side. Explain for people what that is. And then we're going to get to Signal, because how you got here is an interesting journey, I think.
Speaker 2 Yeah, no, my path is wild and winding.
Speaker 2 So
Speaker 2 I had founded a research group at Google, and that was a research group the nucleus of which was Measurement Lab, this large-scale internet performance measurement platform, with a consortium of folks in Google and outside of Google at the Open Technology Institute.
Speaker 2 So then I, you know, I hear about AI. It's, it's machine learning back then around like 2012, 2013, 2014.
Speaker 2 And I'm like, oh, what is this? This seems cool. It's like a statistical technique that does some stuff. Oh, wait, you're taking really flaky data, like, no, way more flaky than mine, and way higher up the stack.
Speaker 2 So it's going, you know, up the stack to making decisions about human beings.
Speaker 1 Yeah, and the garbage in, garbage out idea.
Speaker 2 Garbage in, garbage out, and then, like, garbage into a black box that you then treat as a godhead, right? Like, that's the issue, right? You're calling this intelligence, but actually you've just sort of masked its provenance, masked its flaws, and are using it to imply that these, you know, massive corporate infrastructures are somehow sentient and alive.
Speaker 2 And what I'm not saying is that patterns in data that is responsibly collected aren't useful for decision-making. Obviously, they are, right?
Speaker 2 You know, the issue is that there is a toxic business model at the heart of this, that those patterns aren't always responsible, and that we forget that data isn't a synonym for facts at our peril.
Speaker 1 Yeah, I just interviewed Yuval Harari, and he made a salient point, a very simple one, that there's a lot of information out there, but not a lot of facts.
Speaker 1 And that's hard to discern.
Speaker 1 And of course, this higher intelligence isn't going to know the difference because you put it in there, right? Because it's only going to know what it knows. So you got worried, you leave.
Speaker 1 Explain how you got to Signal and what your thought was on why it was important.
Speaker 2 Well, I've actually been a big fan of Signal, involved in the world of Signal, since around the beginning.
Speaker 2 When you work at the network layer, at the kind of low layer, you're privileged to begin to learn pretty quickly that everything is kind of broken, right?
Speaker 2 There are security flaws, there are privacy issues, like duct tape and sticky tape and like a handful of core libraries maintained by open source contributors who live on a boat and won't answer emails.
Speaker 2 Like you're like, oh, this is the internet. Wow.
Speaker 2 And I think, like, I began to be animated by questions of privacy and security pretty early because of that exposure, and because it was the most interesting place, frankly.
Speaker 2 Like it was where the fresh air of politics, you know, met the theory of the internet. And
Speaker 2 so I had been a fan. I'd known Moxie for a number of years.
Speaker 2 Moxie Marlinspike is the founder of Signal, co-author of the Signal Protocol, and really carried Signal as a project on his back, putting huge amounts of time and energy into it to do what is almost impossible, which is create this virtuous, non-profit, open, high-availability communications tech that is not participating in surveillance, that is not participating in targeting or algorithmic tuning or
Speaker 2 content curation or any of the other things that we've seen go real, real south with the others.
Speaker 1 Right. Now, let's talk about Signal Messenger, which is the core product.
Speaker 1 For a while, a lot of the big concerns around messaging apps were the green-versus-blue-bubble barrier or stupid things like that. But there are more important things. What does it do differently and what doesn't it do, so people can understand the difference between all the different messaging apps?
Speaker 2 Signal's big difference is that we are truly private.
Speaker 2 We collect as close to no data as possible, and we develop open source so that our claims, our code, our privacy guarantees don't have to be taken on trust. You can actually verify them.
Speaker 2 And because of the reputation we built up in the community, because the Signal protocol was a massive leap forward in applied cryptography and actually was the thing that enabled large-scale private messaging on mobile devices, we get a lot of scrutiny and that promise of many eyes making better, more secure code has really delivered for us.
Speaker 2 Right.
Speaker 1 People know what you're doing. And so it's like a messaging app like any other, in terms that you could message back and forth. But what does it do differently and what doesn't it do?
Speaker 2 Yeah. So it protects your privacy, let's say, up and down the stack. We use our own gold-standard cryptography that actually others license from us. WhatsApp licenses it, Google licenses it.
Speaker 2 It is the gold standard.
Speaker 1 Yes, this is end-to-end encryption.
Speaker 2 End-to-end encryption. And we created the kind of the gold standard there.
Speaker 2 And it protects what you say. So you and I are texting, Kara. Signal doesn't know what we're saying on Signal. Like, you can put a gun to my head. I can't turn that over.
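[To make that concrete for readers, here is a minimal sketch of the end-to-end idea being described. It uses the general-purpose Python cryptography package, not Signal's actual protocol, which layers ratcheting, authentication, and metadata protections on top; the toy "server" dictionary is our illustrative assumption. The point is only that the relay in the middle ever holds ciphertext, never the message or the keys.]

```python
# Illustrative sketch only (not Signal's implementation): end-to-end encryption
# via X25519 key agreement plus AES-GCM, using the "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

# Each party generates a keypair; only public keys are ever shared.
alice_priv, bob_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()

def derive_key(my_priv, their_pub):
    # Both sides derive the same symmetric key from the Diffie-Hellman secret.
    shared = my_priv.exchange(their_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo e2ee").derive(shared)

alice_key = derive_key(alice_priv, bob_priv.public_key())
bob_key = derive_key(bob_priv, alice_priv.public_key())

# Alice encrypts; the relay ("server") stores only a nonce and ciphertext.
nonce = os.urandom(12)
ciphertext = AESGCM(alice_key).encrypt(nonce, b"meet at 6", None)
server = {"nonce": nonce, "blob": ciphertext}   # all the relay ever sees

# Bob decrypts on his own device with a key the server never had.
print(AESGCM(bob_key).decrypt(server["nonce"], server["blob"], None))
```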
Speaker 2 But we go well beyond that, too, because of course, metadata, this fancy little word for data about you and your friends, is also incredibly revealing.
Speaker 2 So we don't collect data about your contacts, your groups, your profile photo, when you text someone, who's texting whom.
Speaker 2 So all of that required research and actually design, like building new things, solving problems, because the ecosystem we're in has been built with the assumption everyone wants to collect all the data all the time and keep it to do whatever.
Speaker 2 So we actually have to go in and be like, well, we can't use that common library for development. Because if we use that, it would collect data.
Speaker 2 Let's give a concrete example: when we added GIF search to Signal, because everyone likes a reaction GIF, right? Or at least boomers do.
Speaker 2 And we couldn't just use the Giphy library. That would have taken a couple of hours. We would have tested it in; the engineers go home and go to sleep.
Speaker 2 No, we had to rewrite things from scratch. This was actually a significant architectural change. It took a number of months.
Speaker 2 And when we implemented it, it meant that we weren't giving any data to Giphy. They have no idea whatsoever. So when they're acquired by Meta, we don't have to worry. You don't have to worry. Right.
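[A rough sketch of the proxying idea described above, under assumptions of ours rather than Signal's actual GIF-search architecture: the client sends only its search term to a relay, and the relay forwards just that term upstream, so the GIF provider never sees the user's IP address, cookies, or other identifying headers. The upstream URL below is a hypothetical placeholder.]

```python
# Minimal sketch (our illustration, not Signal's code) of a privacy relay for
# GIF search: the provider only ever sees the relay, never the user.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs, quote
from urllib.request import Request, urlopen

UPSTREAM = "https://example-gif-api.invalid/search?q="  # hypothetical upstream

class GifRelay(BaseHTTPRequestHandler):
    def do_GET(self):
        # Extract only the search term from the client's request.
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        # Forward the bare term: no cookies, no client IP, no extra headers.
        upstream_req = Request(UPSTREAM + quote(query),
                               headers={"User-Agent": "relay"})
        with urlopen(upstream_req) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Run the relay locally; clients would query it instead of the provider.
    HTTPServer(("127.0.0.1", 8080), GifRelay).serve_forever()
```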
Speaker 1 Exactly. So this is end-to-end encryption. Who's using it now, and where are you seeing growth right now?
Speaker 2 Yeah. I mean, our user growth has been steady. And I think, again, this just, you know, the bloom is off the big tech rose, right? People do not want to be surveilled. There is a giant demand for privacy. And so, you know, Signal is global core infrastructure.
Speaker 2 We're used by journalists everywhere, human rights workers. We are the core infrastructure in Ukraine for safe communications, for sensitive information,
Speaker 2 government, military. We are core communications in governments across the world, right? Just for, you know, we don't want a data breach to expose sensitive information.
Speaker 2 I think every time there is what we call a big tech screw-up or a massive data breach, we see spikes in Signal growth. We also see spikes when there's geopolitical volatility. So you see,
Speaker 2 when there was the uprising in Iran around women's rights, we saw a massive spike in use and then we saw the government or the ISP try to block it and then we stood up proxies to try to get people access anyway.
Speaker 2 So it's really, you know, it's when people suddenly recognize
Speaker 2 the power that personal sensitive information can give those who might want to oppress or harm or, you know, otherwise hurt their lives.
Speaker 1 Sell you things.
Speaker 2 Exactly. Or sell you things and then decide what news you get, decide if you get an ad for a good rate on a home loan or a bad rate, right? Like these things that are subtle, but also really meaningful.
Speaker 1 WhatsApp is peddling privacy in the form of encryption as a selling point, but still collects metadata.
Speaker 1 Talk about this business model for a huge slice of the tech sector at this point, data collection, surveillance, capitalism, profits, et cetera.
Speaker 2 I mean, I think of this as really the original sin, right?
Speaker 2 Like, the Clinton administration knew there were privacy concerns with the internet business model. They had reports from the NTIA. They had advocacy from human and civil rights organizations.
Speaker 2 This was a debate that played out over the 90s as the rules of the road for the internet were being
Speaker 2 established. And this is why I just like I get itchy when people are like, they could never have known.
Speaker 2 And I'm like, literally, there were reports before any of this was done laying out exactly how this would go down. And it went down that way, and slightly worse.
Speaker 2 This wasn't a matter of guileless innocence leading to innovation that got out of control.
Speaker 2 This was a business model choice where the Clinton administration said absolutely no restrictions on commercial surveillance. And they also endorsed advertising as the business model of this internet. And like, of course, what is advertising if not know your customer? We've got to get more and more data, right? So it's an incentive.
Speaker 2 It's like a flywheel incentive for surveillance. We want as much data about our customers as possible so we can target them with ads. And what does that incentivize?
Speaker 2 That incentivizes huge clusters of compute and data storage so that you can keep this data. That incentivizes things like MapReduce, which is sort of the precursor to a lot of the AI models now.
Speaker 2 That incentivizes, you know,
Speaker 2 social media that calibrates for virality and sort of like upset and cortisol and like, you know, it's like amygdala activation, basically.
Speaker 1 I always say enragement equals engagement.
Speaker 2 So, yeah, there you go. Exactly. And why does it equal engagement? Not because, like, we like engagement, but because that means you see more ads, you click on more ads, you contribute more data, you know, the cycle continues. And this business model is super profitable, so that's the norm.
Speaker 1 So let's talk about finance. As you said, there isn't a business model for privacy on the internet. Now, Signal is not just opposed to surveillance capitalists, as we said. It's a nonprofit funded by donations, including a big chunk from WhatsApp founder Brian Acton, who is also a co-founder and board member at Signal.
Speaker 1 You don't take investments, you don't have advertising, the app is free, but you still need money to pay your engineers and keep your servers running. Talk about how you do that.
Speaker 2 Yeah. Well,
Speaker 2 our costs are about $50 million a year. And every time I say that, I get a couple tech founders, a couple tech execs come up to me and say, like, congratulations on keeping it lean, right?
Speaker 2 So we're, you know, we're doing really well, but what we're doing is big and requires resources because tech is capital intensive. So right now we are funded by donations.
Speaker 2 That's our nonprofit status. And again, as we just sort of touched on, that nonprofit status is not a nice-to-have. It's not like, oh, we like charitable giving.
Speaker 2 No, it's a prophylactic against the pressures of a business model that are opposed to our mission, which is private rights preserving communication.
Speaker 2 So we are looking at different models right now for how we grow this. How do we sustain Signal? And how do we make sure that Signal isn't just a lonely pine tree growing in a desert, right?
Speaker 2 We need an ecosystem around us. We can't be just
Speaker 2 the sole example of one that got away from that business model. And I think things like how do we set up endowments that can sustain Signal long term?
Speaker 2 How do we think about tandem structures or hybrid structures where things that would otherwise be polluted and toxified by exposure to a toxic business model are kept cordoned off.
Speaker 2 There's some vision in there that we could inject, but the flat fact is that's the cost.
Speaker 1 Yeah, and there's nothing you want from your users except use, right? It's sort of a free, it's like a free park or something like that.
Speaker 1 So protecting privacy also isn't a static thing. It's a moving target.
Speaker 1 There are new systems on the horizon, quantum computing comes to mind, which require a complete overhaul of encryption systems, which you depend on.
Speaker 1 You're already preparing for Q-Day, as the Wall Street Journal recently called it. Very dramatic over at the Wall Street Journal.
Speaker 1 But explain what Q-Day is and what you've been doing to deal with that. Some people have a vague knowledge of quantum computing, but it can decrypt everything very quickly, basically.
Speaker 2 Yeah, it's very, very powerful computing that basically can factor, very quickly, the large numbers, products of primes, that we depend on in cryptography, right?
Speaker 2 And so this would break the premise of kind of unbreakable math as the guarantor of current crypto systems. And a future in which we have sufficiently powerful quantum computing is, I guess, what Q-Day is, although I would have thought it was like a QAnon thing.
Speaker 1 Yeah. Q is a letter we have to stop using.
Speaker 2 Yeah, I'm like, oh, cool words. X and Q.
Speaker 2 X and Q. I know, we're reducing our literacy as we speak.
Speaker 2 But quantum computing is developing, and there's no clear answer to when we will have a quantum computer that can actually do that. But it's catastrophic enough that we can't rest on hope or postpone it.
Speaker 2 So Signal was the first private messenger, the first messenger, to implement post-quantum-resistant encryption for our Signal protocol.
Speaker 2 And the resistance we added protects against the kind of attack we can be worried about now, which is called harvest now, decrypt later. And that just means you collect all the encrypted data. It's garbled bits. It means nothing, but you save it and you save it and you save it. And at a time when these sufficiently powerful quantum computers exist, you then apply them to decrypt it.
Speaker 1 The harvest now thing is really interesting. For people who don't understand, it's like stealing all the safes and putting them in a room.
Speaker 1 And then someday you'll be able to figure out how to open them, essentially.
Speaker 2 That's a perfect analogy.
Speaker 1 Yeah. Yeah. So one of the things, obviously, is reputation. So in 2022, federal investigators said they had gotten access to Signal messages that helped them charge leaders of the Oath Keepers in the January 6th plot.
Speaker 1 It wasn't clear how they got those. And I'm sorry to say this, because I think he's the biggest information sieve on the planet, but Elon Musk questioned the app on X, something about known vulnerabilities that are not being addressed. Any idea what he meant? I mean, I'm not going to ignore everything that imbecile says, but what kind of impact do reports like that pose? I know, I know, large sigh.
Speaker 2 Yeah. I mean,
Speaker 2 I don't know what he was talking about.
Speaker 2 I think, you know, getting to the Oath Keepers point, look, the way that the cops usually get data is someone snitches, someone has it on their phone, they get access to the phone.
Speaker 2 You know, it's Occam's razor and it's not that complicated. We're dealing with people. We're dealing with, you know, conspiracies, which never really work out that well.
Speaker 2 But it is a good hook if you want to scare people about security claims, particularly because, you know, 99.999% of people can't themselves validate these claims, which makes this kind of weaponized information environment really dangerous and really perturbing to us, which is why we're so careful about this.
Speaker 2 When Elon tweeted that, I don't, you know,
Speaker 2 what I can say for sure, and this is what I posted on Twitter, we have no credible reports of a known vulnerability in Signal.
Speaker 2 We have a security mailing list that we monitor assiduously where we haven't heard anything. There are no open critical vulnerabilities that we have heard of.
Speaker 2 So, you know, it's kind of, you put us in a position of proving a negative. And so, you know, so it was this off-the-cuff tweet. It caused a lot of confusion.
Speaker 2 I was sort of dealing with that for a number of days, you know, not because it was serious, but because it seriously freaked people out.
Speaker 2 We had human rights groups, we had people calling us, just saying, Look, this is life or death for us, right? If signal is broken, you know, we're going to lose people. We need to know for sure.
Speaker 2 And what I can say is, we have no evidence that this is true.
Speaker 2 I will say, since then, Elon has tweeted screenshots of messages on his phone that are signal screenshots. So, you know, you can put that together.
Speaker 2 Of course. Everyone uses Signal.
Speaker 1 He's got a lot of secrets to keep, I think.
Speaker 2 Well, I mean, anyone who has anything they're dealing with that is confidential or has any stakes generally ends up using Signal. Yep.
Speaker 1 We'll be back in a minute.
Speaker 4 In business, they say you can have better, cheaper, or faster, but you only get to pick two. What if you could have all three at the same time?
Speaker 4 That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud, Oracle Cloud Infrastructure.
Speaker 4 OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment and spend less than you would with other clouds.
Speaker 4 How is it faster? OCI's block storage gives you more operations per second. Cheaper? OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking. Better?
Speaker 4 In test after test, OCI customers report lower latency and higher bandwidth versus other clouds. This is the cloud built for AI and all your biggest workloads.
Speaker 4 Right now, with zero commitment, try OCI for free. Head to oracle.com slash Vox. That's oracle.com slash Vox.
Speaker 5 The world is buzzing with AI tools, but instead of making things easier, they've made things overwhelming. There's a better way.
Speaker 5 Enter Superhuman, the AI productivity suite that gives you superpowers so you can outsmart the work chaos. With Grammarly, Mail, and Coda working together, you get proactive help across your workflow.
Speaker 5 Experience AI that meets you right where you are. Unleash your superhuman potential today. Learn more at superhuman.com slash podcast. That's superhuman.com slash podcast.
Speaker 6 Avoiding your unfinished home projects because you're not sure where to start? Thumbtack knows homes, so you don't have to.
Speaker 6 Don't know the difference between matte paint finish and satin, or what that clunking sound from your dryer is? With Thumbtack, you don't have to be a home pro. You just have to hire one.
Speaker 6 You can hire top-rated pros, see price estimates, and read reviews all on the app. Download today.
Speaker 1 It's fair to say you faced a lot of headwinds in this battle to maintain tough standards on encryption, and everybody does remember Apple's battle with James Comey, of all people.
Speaker 1 If people don't remember, it was James Comey. But last year, the UK passed the UK Online Safety Act. The EU has been debating a child sexual abuse regulation known as the chat control bill.
Speaker 1 Basically, they're all touted as efforts to protect users, especially children online, which seems like a good thing, right? But you and other security experts have been pushing back.
Speaker 1 Talk about these two bills, what they do and would do to your model.
Speaker 2 Yeah, I'll just kind of characterize them with one brush because, like, ultimately, they are aiming for the same thing. TLDR, in the name of protecting children from
Speaker 2 abuse, harmful content, they
Speaker 2 would mandate or would give a body or agency the power to mandate scanning everyone's private messages and comparing them against some database of prohibited speech or content.
Speaker 2 And this isn't possible while preserving end-to-end encryption. Like, that's the mathematical truth. A backdoor, you implement a backdoor, you have broken the mathematical guarantee of encryption.
Speaker 2 And we have only to point to the fact that the Wall Street Journal just reported that apparently the Chinese government, no surprise to anyone, has been hacking interception points, so backdoors, in U.S. systems, right? So this is not a game, this is not a hypothetical, this isn't the technical community raising large hyperbolic flags. No, this is the reality.
Speaker 2 And any backdoor in a network compromises the whole network. So you backdoor the, you know, you mandate scanning of signal in the UK.
Speaker 2 Of course, communications cross borders and jurisdictions all the time.
Speaker 2 And then that means Amnesty International, housed in the UK, when they're talking to human rights defenders in Uganda, where being gay is punishable by death, working to get people's information, to get asylum cases going, to get people out and to safety.
Speaker 2 That conversation is then compromised.
Speaker 1 So I just, in fact, spoke to Hillary Clinton, and she was talking about how they use Signal and WhatsApp to help women get out of Afghanistan after the U.S.
Speaker 1 military withdrawal, and they needed that secrecy to protect them. And as you noted, you see a surge in downloads when conflicts arise. But these backdoors, everybody gets in then.
Speaker 1 Everyone's able to get in.
Speaker 2 Yeah, a back door is not
Speaker 2 something you can control. Once there's a door, anyone can walk through it, right? So this is the magical thinking that we talked a lot about when we were pushing back on this bill, right?
Speaker 2 You want a golden key, I think, as James Comey said during the Apple showdown. You want a magical wand. You want a secret portal that only you have the spell to open. Well, that doesn't exist.
Speaker 2 That's a fairy tale. What does exist is a critical vulnerability in the only core systems we have to guarantee confidentiality and cybersecurity of communications.
Speaker 2 And if you undermine those, if you open that door for everyone, it means that the technology we have for that type of security no longer exists, no longer matters, right? So it's serious.
Speaker 1 How much does the recent arrest of Telegram founder and CEO Pavel Durov impact the debate? Clearly, Telegram's known as a cesspool.
Speaker 1 You know, my son was like, it's for sex and drugs, mom, just so you know. It's often named in one breath with Signal because the big companies talk about privacy and encryption in the larger sense.
Speaker 1 But for those who don't remember, Durov was arrested in connection with distributing child sexual abuse material and drugs, money laundering, and working with organized crime, and he is accused of failing to allow authorized law enforcement interception.
Speaker 1 Basically, he didn't give investigators access on the app. You're not a social media company, but talk about the difference and how that has affected you all, because you're adjacent to him, of course.
Speaker 2 Yeah. I mean, I think the discourse is exactly the right way to frame this.
Speaker 2 The impact of the arrests, the talk and the kind of information hyperbole, the way this sort of became a martyr story, and the lack of really concrete information, which never helps, meant that there was a lot of questions, right?
Speaker 2 And I remember being like, wait, what happened? What are the charges? Sort of sorting through the French legal system.
Speaker 2 But ultimately, you know, it doesn't affect us, right? We're not a social media app. You can't go viral on Signal. You can't broadcast to millions of people.
Speaker 2 It doesn't have sort of encounter features. It's a very different thing.
Speaker 1 And for people who don't understand, these are groups that they create on WhatsApp or on Pavel Durov's platform, Telegram.
Speaker 2 And millions and millions of people. And then they have a kind of, like, what's happening near me feature, you know, with geolocation.
Speaker 2 So there's all sorts of things happening there that mean that the legal and regulatory thresholds and duties they face are wildly different from Signal. They're a social media company.
Speaker 2 They broadcast things to millions of people. They are constitutively not private and not very secure. We've designed Signal so that we are an interpersonal communications app.
Speaker 2 We intentionally do not add channels or features where you can go viral. We intentionally steer clear of those kinds of duties, because you cannot meet those obligations while being robust in our privacy and security guarantees. That's just full stop.
Speaker 1 Right. But there is that idea, the concern that total encryption doesn't help the good guys, that it aids and abets bad actors. I think that's the bigger worry about CSAM. This is child sexual abuse material.
Speaker 1 Every week, we get a question from an outside expert. Let's listen to this one, and I'd love you to answer it.
Speaker 7 Hi, Meredith. This is Renée DiResta, author of Invisible Rulers and previously the technical research manager at the Stanford Internet Observatory.
Speaker 7 My question for you is, as AI makes it easier to generate synthetic, non-consensual, intimate imagery and CSAM, how specifically should platforms and governments respond to the production and dissemination of this harmful content?
Speaker 7 Is it possible to implement effective measures against these abuses without infringing on privacy and free expression?
Speaker 1 So, what are your thoughts on this? One of the things that's important is there are significant and justifiable concerns in this area, right? And certain areas, drugs, child sexual abuse, et cetera.
Speaker 1 Then, how do you protect against it?
Speaker 2 Yeah, I mean, absolutely. This is a very serious area, right? And that's one of the reasons it has been so,
Speaker 2 let's say, effective in floating a lot of these anti-encryption proposals because it takes the air out of the room, frankly.
Speaker 2 Like a lot of people have experience with this, sadly, and it is extremely difficult and extremely emotionally engaging. So, you know, I think we need to take the issue itself seriously first, right?
Speaker 2 How do we protect children full stop?
Speaker 2 And then begin to look at what are the slate of options that we have. Where is the problem coming from?
Speaker 2 You know, are we funding social services?
Speaker 2 Are we ensuring that there are infrastructures in place where when a child reports that something bad is happening, that a priest or a teacher or their uncle are involved in something horrifying, how do we take care of that child and protect them?
Speaker 2 Why is Prince Andrew walking around in a country that is fixated on encryption as the culprit here, right?
Speaker 2 And this doesn't, this is not saying that platforms don't have responsibility here, but it is saying that I think when you look at the facts here, when you look at the number of people in different countries, law enforcements who are actually dedicated to reviewing this material and tracing it down, I can't say those numbers publicly because they were given to me privately, but we're talking about tens of people.
Speaker 2 In one case, we're talking about two people total.
Speaker 2 So a lot of times we're talking about, you know, the issue is a haystack full of needles and not enough people to categorize the needles. We're talking about resources there.
Speaker 2 We don't have basic trust and safety tooling available to startups. So there are many places to invest in actually tackling this, both online, right? You know, go after payment processors.
Speaker 2 A lot of what's happening is sort of sextortion and that's a node there.
Speaker 2 You know, there are reporting infrastructures for public platforms and social media. There's all sorts of research on this. Attack the business model, right?
Speaker 2 All of those are options on the table. To me, what leads to my distrust of a lot of the remedy space is that, with all of that being obvious, with a lack of investment in social services, with the culture we have where children are often not believed, still, encryption is manufactured as the cause of this problem, when there's very, very little evidence that private communication has any causal role in this issue.
Speaker 1 Right. But of course, I think Durov flaunted not helping.
Speaker 2 Yeah, but he also doesn't run a private communications app, right? Like, none of that was private. It was just a flex of, like, yeah, we're not going to help. Right.
Speaker 2 So that's a social media platform just saying no. And I think there was a, you know, how do they say it? A fuck-around-and-find-out moment. Yeah. For him. Right.
Speaker 2 But that's very different from private communications and encryption.
Speaker 2 And it's weird how there's a sort of transitive property by which encryption becomes the, you know, the problem to be solved in every case, even when the evidence doesn't show it to be the problem.
Speaker 1 Well, it's sort of brute force. It's like throwing a hammer at a piano to make music or something.
Speaker 1 So in the U.S., we're seeing a lot of states are passing bills requiring age verification. One of the solutions is age verifications and restricting social media apps and access for minors.
Speaker 1 Florida, South Dakota, Oklahoma, to name a few. There's an argument that a lot of these bills are being used as a smokescreen. You've called it surveillance wine in accountability bottles. Talk a little bit about these ideas of restricting young people, and then what you meant by surveillance wine in accountability bottles.
Speaker 2 Yeah. Well, I mean, look, I don't think restricting young people ever works.
Speaker 2 As a young person who always figured it out faster than my parents, this is a paradigm where I have very low hopes and I have even lower hopes when I see
Speaker 2 the folks at the tip of the spear of this movement, which are frankly often large biometric and age verification companies like Yoti, who are selling the remedy, right?
Speaker 2 So if we pass a bill that requires age verification to get into websites, none of these platforms are going to do that themselves.
Speaker 2 They're going to contract with a vendor or a third party who will bolt on age verification software and run that for them because, you know, that's a liability shield and
Speaker 2 you don't want to build what you can lease or borrow.
Speaker 2 And then we get into
Speaker 2 a situation where age verification is a mass surveillance regime that is similar to tracking people's content and habits online.
Speaker 2 You can't know that someone's a child without also knowing who's an adult, to be clear about that. And so we begin to
Speaker 2 legislate a tracking and monitoring system that one, won't really work
Speaker 2 based on all the evidence to date. And two, is
Speaker 2 attacking the problem at the level of restriction, not at the level of platform business models, right?
Speaker 2 And the, you know, this is where we get into accountability wine or surveillance wine and accountability bottles, which is really like you and I lived through this.
Speaker 2 We recognize that there is something really wrong with the big tech business model, right? That accountability is needed. And, you know, we saw in the mid-2010s that there was a real call for this.
Speaker 2 And what came out of that were some good ideas. And then, like this, I think some bad ideas wrapped in accountability, right? So instead of
Speaker 2 going after the surveillance-supported
Speaker 2 advertising social media business model, cutting off the streams of data, perhaps implementing a very strong federal privacy law in the U.S.
Speaker 2 that would undermine that model, take a bunch of money off the table, but clean up a lot of the global harms, we're looking at bolting more surveillance and monitoring onto the side of it.
Speaker 2 So it's giving the government and NGOs and whoever else a piece of that monitoring instead of reducing the monitoring itself.
Speaker 2 And so I think it's, you know, how do we tune these regulations and how do we, you know, how do we find the political boldness to actually go up against these business models and those who are profiting from them instead of sort of try to make our name as someone who did go up against them, but actually propose remedies that don't go up against them.
Speaker 2 And I guess that's the age-old question of, you know, how do we find real bold political leaders?
Speaker 1 Are you worried about the impact of the outcome of these laws?
Speaker 1 Say here in the United States, we have this election, a potential autocrat who would love surveillance, although I don't think he'd understand it at this point.
Speaker 2 I am. I am.
Speaker 2 I would say I am. I'm a bit of a political exile in that I'm concerned with centralized power wherever it exists, whether that's in large tech corporations or in governments.
Speaker 2 I don't think handing more tools to governments and then imagining we live in a counterfactual world in which those governments will always be run by us and benevolent adults is correct.
Speaker 2 And I think a lot of the people who have been pushing for accountability often live in that world.
Speaker 2 And I also, frankly, think a lot about what the collateral consequences could be of a very bad law in the U.S. that affects the big tech companies that control the world's information and infrastructural resources, right? You have three companies based in the US with 70% of the global cloud market.
Speaker 2 You have five information platforms, social media platforms, four of which, the biggest four, are in US jurisdiction, and which at this moment control most of the world's information environment.
Speaker 2 So that's a lot of power to be homed in one jurisdiction, particularly given the kind of volatility we're seeing and the way that just people in general, as we move through generation after generation who are kind of native to tech and kind of understand these things, are beginning to recognize just how much power and control is housed in these companies.
Speaker 2 I think that recognition is seeping into the bedrock of popular consciousness. And, you know, I want to reduce this toxic business model.
Speaker 2 I want to create an ecosystem that is way less concentrated before someone with malicious
Speaker 2 intent gets their hands on that throttle.
Speaker 1 We'll be back in a minute.
Speaker 1 Last time we spoke, AI was all we talked about. Things have changed dramatically, but you were warning back then about the surveillance economy and power consolidation.
Speaker 1 Cassandra, I would say, I think you inspired me a lot to start really talking about it and pointing it out over and over and over again.
Speaker 1 So how are you feeling about AI now and how it's related to this new AI economy, this idea of surveillance capitalism?
Speaker 1 Because these systems are going to get... you're saying we should stop it now. Is it even possible, given the consolidation of power in tech?
Speaker 2 Yeah, well, you know, look, I am a consummate optimist. I wouldn't be able to get up and do this if I didn't believe that change was always possible. I think we're in a frothy, hypey moment, and I do see the AI market wobbling. I see the definition of what AI is as sort of wafty right now.
Speaker 2 And I see a real struggle by the incumbents to try to keep that market going and maintain their advantage. And so I can explain a little bit why I see that, right?
Speaker 2 And maybe we'll just start with what AI is. This deep learning paradigm, which is, you know, all the transformer models, ChatGPT, all of this is still deep learning, right?
Speaker 2 We haven't moved into some sort of escape velocity for a new form. It's actually pretty old. The algorithms are from the late 1980s.
Speaker 2 Now there's been sort of moves that improve them, but nonetheless, it's pretty old.
Speaker 2 What is new is the massive amounts of data available, and this is data that's collected, created, that people are enticed to deposit by these large surveillance platforms, and the massive amounts of computational infrastructure, which was basically created in order to sort of
Speaker 2 support this business model. And then in the early 2010s, they were like, oh, you know what?
Speaker 2 These old algorithms, machine learning algorithms, but we're going to call them AI because it's flashier, do new and interesting things, you know, improve their performance when we match them with our massive amounts of data, when we match them with the computational power we have.
Speaker 1 Yeah. So right now, for people that don't know, most of AI technology is either held or financed by one of the big names: Microsoft, Google, Amazon, Apple, or Meta; X less so.
Speaker 1 Most of the AI chips, the GPUs, are controlled by NVIDIA, which you've called a chip monopoly. So what you're essentially saying is they've assembled all the parts, right? That's really what's happened.
Speaker 1 They've got the data they didn't have before, they've got the compute power, and it's all in the hands of people that can afford it.
Speaker 1 Um, there's also the idea they've been pushing for a while that bigger is better.
Speaker 1 You know, they're always like, we need to be, I've heard it from Mark, I've heard it from all of them, is we need to be this big in order to fight China.
Speaker 1 That's usually the boogeyman they use, which is a boogeyman, let's be clear.
Speaker 1 So, we spoke with Mustafa Suleyman a few weeks ago, who said that even $1.3 billion from Microsoft wasn't enough to make Inflection AI successful. So he took the whole shop to his funders.
Speaker 1 We're seeing valuations in AI that are insane, $157 billion for a startup, OpenAI, and the money coming from just a few sources. You said the market is wobbly, but it doesn't feel wobbly.
Speaker 1 You know, it feels like it's the consolidation of power again. So I'd love you to talk about what that means.
Speaker 2 You know, I think what is wobbly here is that there isn't a clear market fit for this, right?
Speaker 2 We have this round-trip investment. So it's, you know, the big companies, you know, I think it's 70% of series A in AI startups were coming from the big infrastructure providers.
Speaker 2 And it was often in the form of credits to use their infrastructure. So we're, you know, we're talking about something really muddy there, but it's not an organic startup ecosystem. Right.
Speaker 2 And the path to market. So if you want to monetize this, it still goes through these companies.
Speaker 2 You're either selling access to a cloud API by Azure, Google Cloud, whatever, or you are monetizing it by integrating it into your massive surveillance platform, you know, a la meta and kind of using it to sell ads.
Speaker 1 Let me rewrite this email for you.
Speaker 2 Exactly, which, you know, no thank you. The email was one word and it was fine.
Speaker 2 That's what I always say.
Speaker 2 And so I think it's, you know, I think there still hasn't, if we're, you know, we're talking about billions, hundreds of, you know, trillions of dollars. We're talking about capital to the moon.
Speaker 2 We're talking about the capital no one else can reach. And then we have, like, a bot that messes up our... or Target spending a huge amount of money for their company to develop this chatbot to help employees that was immediately roasted by everyone because it was so wrong. It was so bad.
Speaker 2 It was so annoying.
Speaker 2 Or, you know, Upwork Research just published a survey that said 77% of the people they interviewed, from executives through rank-and-file employees, said AI made their work messier, not better.
Speaker 2 Right. So like when the rubber meets the road on the actual business model, we're still struggling to figure out like, but what does this do that's worth hundreds of billions of dollars?
Speaker 1 Let me push back on that. You could say, I heard that about the early internet. What do I need this for? You couldn't have imagined an Uber when apps happened. You just couldn't have. Like, nobody could.
Speaker 1 And eventually they made, they're making money now, like you know, a ton of money, but still a lot of these companies you wouldn't have imagined then. So I think we're sort of in the
Speaker 1 stupid assistant phase, but it's not going to stay there necessarily. You know,
Speaker 1 Maybe you think differently. I don't. I think it will improve and become better and show what it's used for.
Speaker 2 I think we're going to see a vast culling of the market because we simply don't need many, many big, big bloated models.
Speaker 1 That are the same, LLMs.
Speaker 2 That are the same and that are very resource-intensive. I think we also need to be super careful about how we're measuring better. And this gets into benchmarking and evaluation.
Speaker 2 I just published a paper with a couple of co-authors looking at this bigger is better paradigm. And we actually, you know,
Speaker 2 you see that smaller models, more purpose-built, like with better-curated, domain-specific data, often perform better in real life.
Speaker 1 Radiologists or something like that, or certain cancer cells. Sure.
Speaker 2 A lot of health applications. So I think, you know, again, I'm not saying throw the baby out with the bathwater, that data isn't useful for anything. But I think this particular type of, like, massively bloated model, it's not going to stop hallucinating.
Speaker 2 We are bolting kind of relational databases onto the side of these, you know, probabilistic systems, trying to kind of stabilize them so that they're not as embarrassingly wrong on, you know, on main like they are in search and Google right now.
Speaker 2 But nonetheless, that's not, you know, that's not a solution to the core problem, which is that they don't have information anchored in fact.
Speaker 1 So not useful. You're saying not useful, like the same shitty candy bar with different wrappers. What do you imagine would be useful? These smaller, as you noted in your paper, these smaller models, these LLMs that are really specifically useful, for example.
Speaker 2 I mean, that's a, that's a bit of a tricky question because it's hard to answer when the claims being made around AI's utility by sort of the marketing are that it's useful for everything, right?
Speaker 2 I think we need to really like break it down, like, what would be useful in education? Right. Right.
Speaker 2 And this is where I'll point to that some of the AI now work, looking at some of the industrial policy. It's like, is AI even the thing that's useful there?
Speaker 2 Or are the climate costs, are the opportunity costs, are there, you know, do we need school lunches or a chat bot? Right.
Speaker 2 And I want the freedom to answer that question before I have to sort of take AI as a given and be like, you know, how to make it more useful.
Speaker 2 Because there are places where data analysis is super, super important and useful, right? But this is not a general everything tool.
Speaker 2 This is a product being produced by big tech, which is sort of making more use of the derivatives of this already toxic business model, creating more data that is often faulty or, you know, harmful, but nonetheless powerful, affects our lives, and that is being sold as kind of a skeleton key for everything when it isn't actually proven as useful to us.
Speaker 1 Just more for the surveillance economy, as you said. I spoke with historian and philosopher Yuval Harari recently, as I said, and that's his nightmare scenario.
Speaker 1 He's calling for more regulation in AI and everything to slow it down. Now, you were a senior advisor on AI to the FTC. What needs to happen, from your perspective?
Speaker 2 Well,
Speaker 2 the market could slow down or the business model could slow down. I think things like the CrowdStrike outage, which is when
Speaker 2 Microsoft effectively cut corners on quality assurance, on testing, on monitoring for a very, very critical update that affected core infrastructure like healthcare systems, traffic lights, banks.
Speaker 1 And cost. Cost money.
Speaker 2 Yeah, it cost money for them to do this right. So they didn't do it right. And global infrastructure was offline for days.
Speaker 2 So the evidence that this sort of concentrated business model is bad is it's no longer deniable.
Speaker 2 So I do think that combined with the danger of this sort of concentration in one jurisdiction, the concerns about sovereignty that you're seeing across the globe, will, I believe, impel real investment and real focus on building healthier alternatives, you know, like Signal as a sort of model for the thing we need.
Speaker 2 I also think the climate costs are just undeniable.
Speaker 2 Right.
Speaker 1 Well, you know, they're building nuclear facilities for that. Three Mile Island is reopening.
Speaker 2 I mean, did no one Google Three Mile Island or did they hallucinate?
Speaker 1 To be fair, fossil fuels are worse than nuclear, just by fossil fuels.
Speaker 2 I mean, fossil fuels are terrible, but the you know, these companies will claim that they are carbon neutral, but you scratch the surface there.
Speaker 1 It's nonsense.
Speaker 2 And you see that, you know, they're buying, you know, kind of carbon neutrality certificates and using weird accounting. Right, right, yeah.
Speaker 1 But let me finish up by asking, because the biggest company here, obviously, the one getting the most money and everything else, is OpenAI, which is sort of the quarterback right now of this thing.
Speaker 1 Microsoft.
Speaker 2 Microsoft.
Speaker 1
Yes, yes, that's right. Well, they don't like that.
Speaker 1 But you've written about how the term OpenAI was always misleading. Obviously, they're ditching their nonprofit status.
Speaker 1 I had talked about this several times with many people who really did believe in the mission, I have to say. And I kept saying, there's too much money now. I'm sorry, I don't know who you are, but you're naive at best to think that this amount of money is going to keep this a nonprofit. You obviously probably took a salary hit when you joined Signal; it's a real nonprofit. Talk about...
Speaker 1 Is it possible to unwind that mindset? You just said we all need many more organizations like Signal, but this is an enormous amount of money. How do you disassemble the mindset there?
Speaker 1 And how do you get it into a mindset more like yours? Because I just don't see it, with this amount of money, going any other direction.
Speaker 2
Quickly, one aside, Signal pays very well. Okay.
So, you know, if you're looking for a job, check out our jobs page. We do try to retain and attract the best talent.
Speaker 2 And I think in the OpenAI case, like, it was never about just the models, right?
Speaker 2
Like, you need massive compute. You know, it can cost a billion dollars for a training run.
You need massive amounts of data.
Speaker 2 And so you're going to have to figure out how to either convince a big company to give that to you, or convince someone with billions and billions of dollars to burn it on that.
Speaker 2 And once they've burned it on that, what do you do with a model? Like, I'm going to let that hang in the air. What do you do with a model?
Speaker 2 You can't use it at scale again without a path through to market that goes through these companies.
Speaker 2 So this is when I talk about AI being a product of big tech, like, that's very, very literal, right? They have the data, they have the compute, they have the access to market.
Speaker 2 Either, you know, Meta is applying it within Facebook
Speaker 2 for services, this email prompt thing, or OpenAI is advertising it as, like, a dongle, you know, for Azure customers who sign up to that, or you have something like Mistral, which is a national champion in France, building open source large models.
Speaker 2 But how do they actually get to market? How do they make good to their investors and
Speaker 2 their business plans? They license it to Microsoft, who then licenses it out through their cloud.
Speaker 2 So this is when I talk about 70% of the cloud market being controlled by three companies; we also have to fold that into the AI conversation and recognize that, you know, AI has not introduced any new software libraries or new hardware.
Speaker 2 This is all stuff we've had in the past that we know about that exists. This is not novel.
Speaker 2 What is novel is this massive amount of data and the way that it's being used to sort of train these statistical models that can then be applied in one way or another.
Speaker 1 So, what happens then from your perspective? What happens? I mean, obviously, it was never going to be a nonprofit after the money-raising happened. And they need the money, FYI.
Speaker 1
They can't not have the money to grow. And they've got everybody on their tail, too, at the same time.
What is their fate? What happens to a company like that?
Speaker 2
I mean, it seems like there's a lot of like interpersonal things happening. Like it, you know, that's every company.
Court drama as well. So, predicting...
You were around it at Google.
Speaker 1 I was there even before you were.
Speaker 2 It was like, touché, touché, Kara. Amazon. Come on. Yeah.
Speaker 1 The hot mess of Twitter.
Speaker 2
Come on. Oh, God.
Yeah. So hot mess aside,
Speaker 2 I would say they just kind of slowly become an arm of Microsoft, you know. Maybe there's an Alphabet-ization of Microsoft, the same way Google kind of spun out different entities as an antitrust prophylactic.
Speaker 2 But I don't, again, there was never a model to be a non-profit long-term, given their vision, in my view.
Speaker 2 And I think what you've seen is just a steady whittling away at that until it no longer exists. So Microsoft, Amazon, Google, those are the three big clouds.
Speaker 2 That's the gravitational pull that model creators are ultimately going to get sucked into. You know, NVIDIA, maybe.
Speaker 1 Is there a model for a non-profit in AI?
Speaker 2
I think so. I mean, I think so.
But again, we've got to take this term AI back a little bit. It does not simply mean massive, massive, massive scale, you know, bigger-is-better AI.
Speaker 2 There are many forms of AI that are smaller, more nimble, like more actually useful.
Speaker 2 And I think there is a, you know, there could be a model for AI research that is asking questions that are less useful to the big players, to this bigger is better paradigm, and
Speaker 2 perhaps more useful for smaller use, for use in things like, are we training models for things that aren't profitable, like environmental monitoring or civic something or another?
Speaker 2 I think there is a model there. I think again, though, what I'm seeing is a misunderstanding of that fact and a misunderstanding of just how capital intensive this bigger is better AI is.
Speaker 2 That has governments around the world, who are anxious about sovereignty concerns and want their own, basically throwing money at AI without understanding that that's going back into the same companies.
Speaker 2
That's not going to ensure sovereignty. So it's, you know, it's like, oh, great, you have a $500 million European AI fund.
Well, let me break it to you slowly. That's half a training run.
Speaker 1 That's right.
Speaker 2
That's right. Right.
So like, what are you doing? You can't afford it.
Speaker 1
Yeah. You can't just regulate them.
"Just regulate them." Same way with privacy and everything else.
So last question. We spoke five years ago, light years ago.
It seems like a million years ago.
Speaker 1 A trillion years ago. Trillion years ago.
Speaker 1 Looking down the road, what do you think the next five years will be the most important things for your work, for Signal, tech in general, if you had to prognosticate?
Speaker 2 Yeah, well, what I am working on, what I'm kind of obsessed with right now, in addition to just, you know, building and sustaining Signal, which I love, is how do we find new models to sustain better tech, right?
Speaker 2 Like, once we've cleared the weeds of this toxic model, once we've prepared the ground, how do we grow things that are actually beneficial? How do we create a teeming ecosystem?
Speaker 2 How do we encourage open tech and like democratic governance, which I think is a thing we don't talk about enough, frankly, but like how do we have a part in deciding what tech is built, who it serves, how we assess it?
Speaker 2 You know, some of the scrutiny that Signal receives from its
Speaker 2 loving and sometimes belligerent community of security researchers and hackers is part of our strength, right? How do we expand that to people?
Speaker 2 How do we shift from a monolith, where five platforms control our news, to a much more heterogeneous ecosystem that's a little warmer, a little fuzzier, a little RSS-feed-ready, so to speak?
Speaker 2 I think those are problems that aren't new,
Speaker 2 but that I think there is a real new appetite to actually tackle because it's getting too obvious.
Speaker 2 When you have Andreessen Horowitz and Y Combinator coming out and saying, like, we're the champion of little tech, we know that the death knell has rung for big tech.
Speaker 2 And what we need to do is then, like, define what comes after.
Speaker 1
Yeah, absolutely. All right, Meredith, thank you so much.
I love talking to you. I should talk to you more often.
Speaker 2 Not every five years.
Speaker 1 And I really appreciate it because I think people don't
Speaker 1 understand it. Please, everyone, use Signal.
Speaker 1 I use it all the time.
Speaker 1
I thank you. It's free.
It's not stealing my stuff. And it's really, it's another moment where I'm like,
Speaker 1 where is the Signal AI thing?
Speaker 2 Amen. Well, I'm Team Kara on that.
Speaker 1 Okay, all right.
Speaker 1 On with Kara Swisher is produced by Christian Castro Roussell, Kateri Yoakum, Jolie Myers, and Megan Burney. Special thanks to Sheena Azaki, Kate Gallagher, and Kaylin Lynch.
Speaker 1
Our engineers are Rick Kwan and Fernando Aruda, and our theme music is by Trackademics. If you're already following the show, you're not drinking the surveillance wine.
By the way, it tastes terrible.
Speaker 1 If not, back into the accountability bottle for you. Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.
Speaker 1 Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Monday with more.