
Social Media’s Original Gatekeepers On Moderation’s Rise And Fall
Full Transcript
Elon's losing his fucking mind online.
It's like, like really today. Hi everyone from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher and I'm Kara Swisher. My guests today are amazing.
Del Harvey, Dave Willner, and Nicole Wong, three of the original content policy and trust and safety people on the internet. Del was the 25th employee at Twitter.
She started in 2008 and eventually became head of trust and safety before leaving in 2021. Dave worked at Facebook from 2008 to 2013, eventually becoming head of content policy, and he wrote the internal content rules that became Facebook's first published community standards.
Nicole is a First Amendment lawyer who worked as VP and Deputy General Counsel at Google, Twitter's legal director for products, and Deputy Chief Technology Officer during the Obama administration. These three were absolutely key in designing safety and content policies at social media under very difficult circumstances,
but it's a hugely influential, mostly invisible job that affects pretty much everyone who uses the internet and a lot of people who don't. But their efforts to make the internet safer and give it some guardrails are being unwound by people like President Trump, Elon Musk, and Mark Zuckerberg.
So this is a perfect time to go back and look at the history of trust and safety and content moderation. I'm very excited to talk to these three particular people because despite the idiocy of Elon Musk and Mark Zuckerberg, there are thoughtful people thinking through these incredibly difficult issues, not making them partisan, not reducing them and making them seem silly.
They're not yelling censorship. They're not yelling about the First Amendment,
which they don't know anything about.
These are hard issues, and they treat them like hard
and complex issues like adults do.
Others, total toddlers having tantrums.
That's the way I say it.
Anyway, our expert question comes from Nina Jankowicz, a disinformation researcher and the CEO of the American Sunlight Project.
She herself has had a lot of experience with disinformation, including being attacked unnecessarily and unfairly. So stick around.
Support for On with Kara Swisher comes from Saks Fifth Avenue. Saks.com is personalized, and that can be a huge help when you need something real nice, real fast.
So if there's a totem jacket you like, now Saks.com can show you the best totem jackets,
as well as similar styles from brands you might not have even thought to check out.
Saks.com can even let you know when the Gucci loafers you've been eyeing are back in stock,
or when new work blazers from the row arrive.
Who doesn't like easy, personalized shopping that saves you time? Head to Saks.com. Craft is where function meets style.
It's where precision meets performance. It's where doing it yourself meets showing the world what you're capable of.
The all-new Acura ADX is a compact SUV crafted to take you where you need to go without any compromises. With available Google built-in, all-wheel drive, and a 15-speaker Bang & Olufsen premium sound system, the all-new ADX is crafted to be as alive to the world's possibilities as you are.
The all-new ADX, crafted to match your energy. Acura, precision crafted performance.
Learn more at acura.com.
Remember to ask for Botox Cosmetic by name. To see for yourself and learn more, visit BotoxCosmetic.com or call 877-351-0300. That's BotoxCosmetic.com.
It is on. Dave, Del, and Nicole, welcome and thanks for being on On.
You three are some of my favorite people to talk about this topic. I've talked to all of you over the years about it.
You helped pioneer trust and safety on social media and created a field that hadn't existed before. So I'm excited to have you together for this panel.
Thank you for coming. Thank you.
Thanks for having us. I can't actually remember if we've actually all been on a panel together.
I know. Have you? No, I don't think so.
I don't think so. Well, here we go.
See? History is made. So let's start with a quick rundown where things stand today, and then we'll go back to the beginning and figure out how we got here.
Mark Zuckerberg recently announced that Meta is getting rid of fact checkers, replacing them with community notes. I have nothing against community notes, but they always seem to be shifting around in all their answers.
They also loosened rules around certain kinds of hate speech, including hate speech aimed at LGBTQ+ people and immigrants. They're quietly getting rid of their process for identifying disinformation.
I'd love to get everyone's reaction to this move, starting with Dave, since you ran content policy at Facebook until 2013. Yeah.
So there's a few different things. I share your appreciation for community notes as an approach.
And I think in a lot of ways the sort of fact-checking part of this got front-loaded in how it was all reported. And I honestly think that's a bit of a distraction from a bunch of the other parts that you touched on, which are far more important.
Three that seem particularly notable to me, one, it came out that they are also turning off the sort of ranking algorithms around misinformation or potential misinformation, which is going to really change how information flows through the system. Explain what that is.
So they've historically tried to detect whether content might be misinformation and added that into the mix of sort of how content shows up in people's feeds. You can think of it as changing the velocity with which certain kinds of information spreads through the network.
They're turning off the dampening on that. That feels to me like a much bigger deal in terms of the amount of content it affects and the amount of views that it affects than whether or not fact checks are appended to a relatively small number of stories because of the scalability of the process.
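(An aside to make that concrete: here is a minimal, purely hypothetical Python sketch of the kind of ranking "dampening" being described, where a misinformation classifier's score down-weights an item's feed score. The names and numbers are illustrative, not Meta's actual system.)

```python
# Hypothetical sketch of misinformation "dampening" in feed ranking.
# DAMPING_STRENGTH and the scoring model are invented for illustration.

DAMPING_STRENGTH = 0.8  # how hard suspected misinfo is down-weighted

def feed_score(base_score: float, misinfo_probability: float,
               damping_enabled: bool = True) -> float:
    """Down-weight an item's ranking score in proportion to a classifier's
    confidence that it is misinformation; disabling the damping restores
    the item's full "velocity" through the network."""
    if not damping_enabled:
        return base_score
    return base_score * (1.0 - DAMPING_STRENGTH * misinfo_probability)

# A post the classifier flags with 90% confidence:
print(feed_score(100.0, 0.9))                         # ~28, spreads slowly
print(feed_score(100.0, 0.9, damping_enabled=False))  # 100, full speed
```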
So, like, just on the truth question, that feels like the much more significant change, if a little bit harder to understand from the outside. Even on top of that, though, the changes they made around hate speech, and there's two coupled ones, I think are pretty significant and quite worrisome.
So, one, they're moving away from proactive attempts to detect whether or not things might be hate speech just across the board. They seem to be turning down those classification systems.
They're justifying that by saying it's going to lead to fewer false positives. That is true, right? If you stop looking, you will make fewer over-removals.
But it also means that particularly in more private spaces where folks are aligned around particular sets of values that are maybe not so awesome, there's not going to be really any reporting happening out of those spaces because it's a group of people who already agree with that content talking to themselves. You can make arguments, what's the harm there? It's a group of people talking to themselves, but groups of people on Facebook talking to themselves sometimes storm the Capitol.
So there are real harms that emerge not just from the speech. They also made a number of changes to the actual hate speech policies themselves and very surgically, frankly, carved out the ability to attack women, gay people, trans people, and immigrants in ways that you are explicitly not allowed to attack other protected categories, and in ways that allow more speech than the company has really ever allowed at any point where it had a formalized set of rules.
Right, like Christians. You can't attack Christians.
Nicole? Yeah, there's so much to dig into on this change. At a high level, it struck me.
The fact-checking thing, I think, is somewhat of a red herring because it's such a small part of the ecosystem that it's looking at. So the thing you take from it is who is the audience they're talking to? If you read Joel Kaplan's blog post, he adopts the language of censorship around content moderation.
That I have to assume is deliberate. Front-loading all of this, we're not going to fact-check you.
We're going to let you say the same types of things you get to say on TV and on the floor of Congress. That has an audience.
And so to me, a lot of this is about who they've decided to try to appease, right? That is a huge amount of it. The other thing that struck me is the higher-level changes that they are making, which I think are more destructive than the removal of the fact-checking, which is basically the refusal to throttle certain types of content and the targetedness of that content that they're going to let loose.
Last time I was with you, Kara, we talked about, like, what are the pillars of design? We talked about architecture. Right.
What are the pillars of design for social media? Personalization, engagement, and speed. They are releasing their hold on that personalization.
They explicitly say, we're going to allow more personalized political content so you can be safe in your own bubble. We're going to speed it up.
We're not going to stop the false statements and other vitriolic statements that we have in the past, and we're going to let you go on it. They are picking on all three of those pillars for what we know is rocket fuel for the worst type of content.
And you cannot believe that that's not deliberate. Right.
It's a design decision is what you're saying. Speaking of that, let's talk about other platforms.
I often say that X has turned into a Nazi porn bar, and today it really is. I have to say Elon's really trotting out all the Nazi references today, more of them.
He's doubling down on his behavior. I think he's trolling us with these fascist salutes, and now he's been tweeting Nazi-related puns over on X.
There's obviously not anything happening there in terms of trust and safety. So what's the prevailing thinking on content moderation today? Let's start with you, Del, since you were the original Twitter person here, and then Nicole and Dave.
I mean, I think that in a lot of ways, it depends on the platform, right? Like, it's worth a little bit of a "not all platforms" caveat here. Right.
Because yes, you have a couple of really big ones that are doing some really odd things. And also you have a lot that aren't doing the same sort of extremist behavior.
I think that there is still very much a value being ascribed to trust and safety. I think we are seeing in some part a shift toward recognizing that trust and safety is more than just content moderation.
And I think that's one of the most important learnings that hopefully people can take away from everything that's happening in the space, which is trust and safety starts when you're beginning to conceptualize what your product is. You can design it safely.
And I think that what we're seeing right now with the guardrails sort of being pulled away from misinformation ranking, all this content that we know that is extremely explosive, drives extreme amounts of engagement. Like that removal of guardrails on a couple of companies is making a huge, like fiery scene over here.
And then there's a whole bunch more companies that are trekking on and, I think, trying to stay out of the crossfire. Such as? If you could, really quickly.
Well, I think you know the two big companies that are currently spiraling.
And then you've got all the others, right?
You've got Reddit.
You've got, you know, Pinterest.
You've got all the different federated sites,
all these different communities that are still soldiering on. Right.
Nicole? So, Del said something which I think is so important, which is content moderation comes at the end in a bunch of ways, right? Where you have people who are sort of outside what you want your product to be. So, the first choices you're making about content are really about what is the site you're on.
And so when you think about what are the other platforms that are not getting into the kind of hot water we see, it's not just about their scale, although a lot of it is about their scale. It's about their focus as a site.
So LinkedIn is a professional site and you generally see them having fewer controversies because if you're not talking about professional information, you're not on the right place, right? That's not your audience. Pinterest to me is kind of the same thing, right? Like you're here to get inspired about whatever it is that inspires you, housing, goods, shoes, fashion, whatever it is.
If you're not talking about that, then you should be somewhere
else. I think that a bunch of what we see for the platforms that are having trouble with this
are the platforms that deliberately went out there and said, I want to be everything to everyone.
And it's really tough to sort of manage that playground. And so that to me is a bunch about
the content moderation. I think what we've seen, certainly since, you keep dragging me out here like the historical artifact that I am, but like, we're going to get to the early days in a second.
Right. But, like, the tools that they have, the professionalism of the teams that are doing the work, are so much better than they were a decade or two ago.
That I think there's a lot that's happening with content moderation. What I think is less clear is how does a platform position itself and design itself to be resistant to some of the worst parts of the content ecosystem.
I think you're absolutely right that the everything store tends to be a problem. There's porn and Twinkies.
You know what I mean? Exactly. You have a lot going on.
Exactly. And if you had your aisles clearly separated and you could be aware that you were walking into the Twinkie aisle or into the porn aisle and you were of age to access those things, that might be one situation.
But that's not the situation. That's correct.
The Twinkies are mixed in with the porn. I got it.
They are, in an unfortunate way. Which sometimes it is.
Sometimes it is. But again, you need those signs.
Yeah, Dave. I am maybe a bit of a fatalist, but in a sort of positive way, in that I think a lot of the need and origin for trust and safety arises from people using these services and getting into conflict and demanding resolutions to the conflicts they have.
And that's not going to stop being a thing because people are not going to stop getting into silly fights on the Internet or much more serious ones, depending. And you are sort of driven, frankly, by customer demand, famously advertiser demand, but even just basic user demand to create some amount of order, even if it's just on questions of spam and CSAM and whatever.
And so, to me, trust and safety, it waxes, it wanes in the face of all of these things, but the overall trajectory has been towards more professionalization, more people doing this work. It's not even clear to me from Facebook's announcements if there are force reduction implications.
Some of the stuff they said, I mean, to be clear, I'm not a big fan of these changes, but they did talk about adding more folks to deal with appeals, adding more folks to do cross checks, which, great, cool, and don't seem employment negative. So, there's a little bit of a, like, the meta-narrative of, oh, crisis and moderation, trust and safety is going away.
I think on some level is maybe what they want. Which is, they want the Trump administration to think they're really getting rid of all these people.
Absolutely. All right.
So, let's go back to the aughts, because you guys are early days people, and so am I. We all go way back to the 90s when AOL had volunteer community leaders that moderated chat rooms and message boards.
We'll save that for another episode.
But Nicole, you were a VP and deputy counsel at Google when they bought YouTube in 2006.
Saddam Hussein was executed that year, and then two related videos were published on the platform,
one of his hanging, another of his corpse, and you had to decide what to do with them.
This is not something you ever thought you'd have to do. And I remember being in meetings at YouTube and different places about this, like, what do we do now? We thought it was all cats, and it's not, kind of thing.
So walk us through your deliberative process and talk about the principles you used at the beginning as a First Amendment lawyer working for a private company. Wow, you're taking me way back.
Way back, yeah. I want people to understand that this was, from the get-go, a problem.
Yeah, yeah. No, it absolutely was.
And luckily, we had a little bit more time, right? A, we had the grace of being sort of the cool new kids. And B, because it was so new, there was a little bit of buffer to make hard decisions.
So, what I recall from that was at the time of Saddam Hussein's execution, remember he had been captured, pulled out of a hole, and then executed by hanging. And there were two videos, one of which was of the execution.
The other was of his corpse.
And the question was, do these violate our policies around graphic violent content? Or is it news? Yeah. Well, was it news or was it historically important? And so my recollection is that we had exceptions for content that might be violative, as in violent, but had some historical significance, and there were others, like, you know, educational significance, artistic significance, that sort of thing.
And as I recall, the call that I made was the actual execution was a historical moment and would have implications for the study of the period later on. But the video of his body seemed to be gratuitous.
And so once you know he's been executed in a certain manner by certain people in a certain context, what does the body do in terms of historical significance? And so we took down the one of the corpse. We kept the one of the execution.
I was so much less professional than either Del's or Dave's organizations at the time. That was Nicole's, like, here's my thought.
And here's how we're going to make it stand. But that was the decision at the time.
The decision at the time. Really difficult.
That's something you'd never thought. Totally difficult, presumably.
I wouldn't know what to do. Twitter initially took a very public ideological stance in favor of free speech.
It did pay off, though, with a lot of press. And the press dubbed the Arab Spring "the Twitter revolution" in 2011.
In the middle of the Arab Spring, the general manager of Twitter in the UK famously called it "the free speech wing of the free speech party." It should be noted that at the time, free speech was generally considered to be more of a left-wing ideology, in a weird way.
The platform has obviously undergone multiple transformations, and we'll get to that later. But how has your philosophy about trust and safety changed over that time? And talk about what you were thinking then.
I mean, the very first thing that we started with in terms of policies, because there really was nothing when I showed up, the first thing I was assigned was, can you come up with a policy for spam? Because every now and then people are encountering spam, and we don't ever think it'll be a big problem because you can choose who you follow, but, you know, we think we should have a policy around it. And I was like, oh, it'll be a problem. Yes, I will make you a policy. And then after that, it was copyright and trademark, and making sure that we had a relationship with the National Center for Missing and Exploited Children, and all of the sorts of, like, you get your initial ducks in a row: these are the core functions that you need to make sure you have in place to give people a tolerable user experience.
Like all of those are things where people have expressed needs and strong sentiments in those areas. So you start with those.
And then we started expanding from there. A huge challenge for years was that we only had two options.
We could leave it alone, or we could suspend the whole account, which is terrible. Right.
You couldn't take it down. You couldn't just.
Right. You couldn't take down just the content.
It had to be the whole account. Once we added on the ability to just do it to a single piece of content, that was such an exciting day for us.
And the advances since then, like, there are so many possible things you can do now in trust and safety that just weren't even things we could imagine 10 years ago, I would say even. Dave, in 2009, five years after Facebook was founded, the same year the company first published its community standards, a controversy erupted over Holocaust denial groups on the platform.
At the time, you defended Facebook's position to allow these groups and said it was a moral stance that drew on the same principles of free speech found in the Constitution. Years later, in an interview with me, Mark said Holocaust deniers don't mean to lie, and he eventually reversed course.
He had a little different take than you had, from what I could glean.
I don't know what he was saying. I'll be honest with you.
I thought it was muddy and ridiculous and ill-informed. But what's your stance today, and how has your thinking evolved? And talk a little bit about that, because you can see making an argument for, it's like the Hyde Park example, let them sit on the corner and yell.
Yeah. So, the initial stance on Holocaust denial, when we took it, was downstream of an intuition that we frankly weren't capable of figuring out how to reliably police what was true.
I think in some ways that has borne out. Like, I don't know that the attempts at that have gone super well.
That is true. So I think the sort of intuition that gave rise to the stance was right, but there are multiple ways of getting at what is problematic about that speech that don't rest solely on the fact that it is false and sort of commit you to being the everything fact-checking machine, which we were just like— You'll make a mistake, yeah.
—deeply aware. Well, I mean, we were like—there were like 250 people, and most of us had just graduated from college.
Mm-hmm. And we were smart enough to know that we just, like, couldn't.
Mm-hmm. So we had to adopt a we can't or we won't because we couldn't.
Mm-hmm. Very good point.
I will say, over the course of my time there, and particularly since I've left, my thinking on this has been influenced a lot by a woman named Susan Benesch, who runs something called the Dangerous Speech Project, which studies the rhetorical precursors to genocides and intercommunity violence. She has done a really good job of providing really clear, explicit criteria for the kinds of rhetorical moves that precede that violence, in a way that, to me, was usable and specific, such that you could turn it into an actual set of standards you could hope to scale. It's, I guess, been a little implicit in some of what I've said, but my obsession from very early on, and in some ways still, is this question of, okay, we've got a stance, but can we actually do the stance? Because if we can't do it, it is in some sense misleading to take it, right? I would also chime in and say that I am in strong agreement that if you can't enforce something, you shouldn't have it as a policy.
There has also been, for any number of years, any number of attempts to solve product problems by saying, we're going to write a policy to fix that.
Which is, quite frankly, I'm impressed that you managed to get them to not say, well, we're going to do it anyway. Yeah.
Yeah. Yeah, no, that's fair.
No, that's totally fair. And there was, I mean, I think Zuck's public sort of stance on founding the company because of the Iraq war seems a little bit revisionist to me.
I wasn't there, but that wasn't what I heard. But it is true that it was founded in the shadow of the Iraq War.
And to your point about sort of freedom of expression being a liberal value, there was definitely a sort of punk rock American idiot vibe around being asked to take things down. But also, like, I don't know, I was a child and we didn't know what we were doing.
And I have learned several things over the course of the last 20 years. Which Zuckerberg would never admit.
He would never admit he didn't know what he was doing. That is something he would never come out of his mouth.
But, just so you know, the company was founded on rating girls' hotness in college. But okay, fine.
Iraq War. I'll go with Iraq War.
Masculine energy. Masculine energy.
I think that's what they call that. That's right.
We're back to the same place. Everyone's like, are you surprised? I was like, no, this is what he did.
He's a deeply insecure young man who became a deeply insecure 40-year-old. But when you talk about the moral stance, it was the idea that we should be able to tolerate negative speech,
which is a long-held American thing. Yes.
The Nazis in Skokie, you know. But it turns into something where people game the system and allow what you were talking about, which is the precursors to actual violence, where speech is the precursor.
Yes. Yeah.
Yes, and that's absolutely right. And some of that was literally a question of the academic work happening, or us becoming aware, some combination of it happening and us becoming aware of it, to have a framework where we could really figure it out. Okay, this is all going to sound kind of obvious now, because it's one of those ideas that when you hear it, it's obviously correct.
But there are these sort of rhetorical moves you can make that dehumanize people and serve to authorize violence against them, not by directly inciting violence or calling for violence or threatening violence, but by implying that they are less than human, that they are other, that they are filthy, that they are a threat, that they are liars about their own persecution, that serve to make violent action okay. I think we're seeing some of that now. And part of the reason, to circle back to your first question, part of the reason I found the recent changes so disturbing is they are designed to carve out things like claims that people are mentally ill, which is, like, down-the-middle dehumanizing speech that obviously fits into this category. Or using the word "it" for trans people.
Yeah. We'll be back in a minute.
It's been reported that one in four people experience sensory sensitivities, making everyday experiences like a trip to the dentist especially difficult. In fact, 26% of sensory-sensitive individuals avoid dental visits entirely.
In Sensory Overload, a new documentary produced as part of Sensodyne's Sensory Inclusion Initiative, we follow individuals navigating a world not built for them, where bright lights, loud sounds, and unexpected touches can turn routine moments into overwhelming challenges. Burnett Grant, for example, has spent their life masking discomfort in workplaces that don't accommodate neurodivergence.
I've only had two full-time jobs where I felt safe, they share. This is why they're advocating for change.
Through deeply personal stories like Burnett's, Sensory Overload highlights the urgent need for spaces, dental offices, and beyond that embrace sensory inclusion. Because true inclusion requires action with environments where everyone feels safe.
Watch Sensory Overload now.
Tell your doctor about medical history, muscle or nerve conditions including ALS or Lou Gehrig's disease, myasthenia gravis, or Lambert-Eaton syndrome, and medications, including botulinum toxins, as these may increase the risk of serious side effects.
For full safety information, visit BotoxCosmetic.com or call 877-351-0300. See for yourself at BotoxCosmetic.com.
At UC San Diego, research isn't just about asking big questions. It's about saving lives and fueling innovation.
Like pioneering AI technology for more precise cancer treatments, earlier Alzheimer's detection, and predicting storms from space. As one of America's leading research universities, they are putting big ideas to work in new and novel ways.
At UC San Diego, research moves the world forward. Visit ucsd.edu slash research.
So let's go on to our favorite time, the Gamergate controversy. It was a harassment campaign.
I know, this has just been one panoply of horror.
Aimed at women in the video game industry, it included doxing and death threats. It happened in 2014. I recall it extremely well. And in some ways, it was the birth of a loose movement of very angry and very online young men that morphed into the alt-right.
Del, Twitter got a lot of negative press from Gamergate, because Twitter, along with Reddit and 4chan, had relatively little content moderation and there was harassment on the site. Walk us through the controversy and how the aftermath led to changes in how you approach trust and safety.
You sort of went from being that very free-speech-heavy company to eventually focusing on brand safety, which is important to product, as you just noted. I would say that what you saw was reflective of, in many ways, the company's investment in trust and safety and whether or not there were the tools and actual functionalities to do certain jobs. Because the same way that certain policies may have existed at Facebook because there was no way to operationalize them, similar ones certainly existed at Twitter, in terms of, it sure would be nice if we could do X, but there's no feasible way to do that.
And so we can't. And if we try to, we're going to set ourselves and people up for failure on it.
And I think that what you've seen is trust and safety, and this goes back to what Nicole mentioned earlier: with content moderation, you're kind of late in the process, right? Once you're at content moderation, someone has generally experienced a harm and you're trying to mitigate that harm.
Right. Whereas if we look at things like proactive monitoring or designing your product not to have some of these vectors for abuse or even educational inputs for people about what to expect for, hey, it's likely a scam if this.
All of those things come before content moderation and have a much higher likelihood of impact and ripple effect. And it's, I think what you have seen is a slowly growing awareness of the earlier we can intervene, the earlier we can build in these components.
There's this slow growth of like, oh, we should do more of that. And I think that's the biggest shift since sort of the beginning of Gamergate is more of a panoply of options for actioning, along with more cognizance around needing to figure out the point of origin.
Anticipating consequences. So let's talk about that.
In 2017, Myanmar's army committed genocide against the Muslim Rohingya. They raped and killed thousands and drove 700,000 ethnic Rohingya into neighboring Bangladesh.
In the run-up to the genocide, and I was right in the middle of this, Facebook had amplified dehumanizing hate speech against the Rohingya. They did it despite the fact that Facebook had been implicated in anti-Muslim violence in Myanmar a few years before.
This is when it really started to get noticed. How much responsibility would you assign to something like Facebook, and what should have been done differently? I'm going to start with you, Nicole.
And Dave, if you want to jump in, you can too. But when this happens, the direct responsibility was pretty clear.
But you could say, I'm a phone, and you can't blame a phone for people organizing, for example. Yeah.
I mean, I want to go back a little bit to Gamergate, but I'll connect it up to the Rohingya part. Because what I recall about Gamergate, which I think changed the trajectory of some of the content policies that we ended up doing, is that academics like Danielle Citron connected the dots between harassment and the suppression of speech.
Harassment not just being, you're being mean to me, which has always existed on the internet, but that it is an incursion on someone else's rights, and particularly the rights of those who are least able to bear it, who have the weakest voices. That to me was where Gamergate was like, oh, we actually should not just allow all the speech, because that speech is suppressing speech.
That connects for me into how we handle things like minority voices, like the Rohingya, who may not even be on the service, right, but are being harassed. And so their rights in some ways are being taken away.
And Dave will be able to speak to this better about how Facebook decided to handle it. I think that a bunch of it has to do with the design of your ability to detect and your policies about when you intervene.
And those are hard. Those are always hard.
So because I wasn't at Facebook or WhatsApp, I don't actually know specifically the kind of conversations they were having about how to balance out when you see the harm, whether you have the right detections for it, and what is the correct position of the company to intervene in what may start as small private conversations. There was clearly a moment where it became broadcast, right? Where it became about propaganda and pushing people into a certain direction that was very, very toxic and harmful and had terrible consequences on the ground. And then the question is, what is a company sitting in the U.S.? What is their obligation to investigate, to get involved, to send in help, right?
So Dave, you left Facebook in 2013, and then it moved to the 2016 election, where Facebook was pilloried for Cambridge Analytica, spreading fake news, the Russian propaganda, creating news bubbles and media silos. Talk a little bit about the difficulty of being the world's social media police, I guess, is kind of what I'm thinking about.
And then, of course, it got blamed for the election of Donald Trump. That's where it led to.
What do you imagine is where you need to be then? Yeah, so, I mean, I think there's a lot of question in that question and in everything we've said so far. I think, I do think that platforms have a responsibility to intervene that arises out of the fact that they have the ability to intervene.
And this is where the phone analogy falls down for me, right? Like, the technology that we're monitoring, yes, it is a communications technology. And also, it does not work identically to all prior communications technologies.
And those design choices change in an almost, like, existential way. It's like, well, too bad you're here now, figure out the meaning of your life.
Like, too bad you have this product now. It creates responsibilities through its design.
And you sort of don't get to accept, in my view, the upside of those design choices from a sort of growth and monetization possibility point of view without inheriting some of the downside, right? Like, I think they're linked personally. I don't have, like, a formula or that could become a regulation about how that responsibility should work, but that is where I have netted out on this entire thing.
I do, though, think that, returning to the point earlier about general-purpose communities, that leaves you in a very difficult position for sites that are aspiring to be an undifferentiated place for everybody in the whole world to talk to each other. We don't all agree as humans, and we don't agree to the level of real violence, right, all over the world. And so the notion is, it is possibly the case that the building of that kind of space is a little bit of a, at best, you've accepted the ring of power and now have to, like, go find a volcano to throw it into while the Ringwraiths try to kill you.
Like, that might actually be the best you can end up with in that kind of a design choice.
Whereas if you are a Reddit or a Discord, everything has a context attached to it, which narrows the problem to something that feels manageable.
Well, more possible to not definitely end up hated by everyone. Right.
Which I think is sort of what you're doomed to otherwise. I also think like a bunch of the companies, the ones that I was at, right, were sort of like, it's the internet.
Anyone can access us.
And we forgave ourselves for not having people on the ground to understand where we were. Because we're not offering advertising there.
We have no people on the ground. Just because they pick it up doesn't make it our responsibility to serve them.
There was a bunch of that, which I think that the Rohingya moment changed that and said, like, actually, the very fact that you are being accessed and you can see the numbers, right, imposes the obligation on you. Right.
The algorithmic implication is what turbocharges. Absolutely.
Right. Okay.
So, Del, one of the things in the aftermath of this, and including the election and then the COVID pandemic, where Biden said Facebook was killing people by allowing vaccine misinformation, though he later walked it back. Trump himself, obviously, a fountain of disinformation.
There was a period I think we can call peak content moderation, right? Some of Trump's tweets got flagged. The New York Post reporting on Hunter Biden was suppressed.
And after January 6th, Trump got kicked off of social media platforms. Am I correct? You were the one who actually kicked him off? Is that you or all of you as a group? Yeah.
Well, it was a group decision. I didn't go out there and just, you know, YOLO my way into the day.
But it was something where we looked at, you know, there were these violations on the 6th, where we said if there are any additional violations of any kind, we're going to treat that as suspension-worthy. And on the 8th, a couple days later, there were a series of tweets that ended in what was taken as a dog whistle by a number of Trump's followers at the time: you know, I will not be attending the inauguration. And that sort of, like, here's a target that I won't be at, was how it was interpreted by any number of people responding to him.
And that was, I think we actually published sort of the underlying thinking, and that was the bridge too far. Bridge too far.
You know, I have been saying he keeps doing it. When are you going to, and I said it to Facebook, if he keeps violating it and you don't do anything, why do you have a law in the first place, essentially? And one of the things I wrote in October of 2019, I wrote this, which was something interesting.
And I'm going to read it to you. It so happens that in recent weeks, including at a fancy-pants Washington dinner party this past week, I've been testing my companions with a hypothetical scenario.
My premise has been to ask what Twitter management should do if Mr. Trump loses the 2020 election and tweets inaccurately the next day that there had been widespread fraud.
And moreover, that people should rise up in armed insurrection to keep him in office. Most people I have posed this question to have the same response, throw Mr.
Trump off Twitter for inciting violence. A few said that he should only be temporarily suspended to quell any unrest.
Very few said he should be allowed to continue to use the service without repercussions if he was no longer president. One high-level government official asked me what I would do.
My answer, I never would have let it get this bad to begin with. Now, I wrote that in 2019, and I got a call from your bosses, Del, not you, saying I was irresponsible for even imagining that, and how dare I, essentially.
But talk about how difficult it is to anticipate, even though I clearly did. Well, I would note again, you didn't get the call from me.
You didn't get, you didn't call me, you didn't, it was one of, you know who it was. Anyway.
I do know who it was. And my point is, you know, I think you're looking at, by this point, we're already seeing some ideological shifts in people's outlooks on how they wanted to handle content.
We're seeing pushback. We saw pushback on labeling content as misinformation.
And in fact, part of the pushback we got at one point was somebody who we were talking about how there's some misinformation that is actually so egregious that it merits removal as opposed to simply labeling it as misinformation. And that's because there's some types of misinformation that even if you label it, this is misinformation, people are like, that proves it's true.
And it was really difficult to frame that in such a way, because there was this, well, why wouldn't they just believe the misinfo label? And there were all these conversations where we're like, but people like you might work that way; other people don't work that way.
Elon bought Twitter in October of 2022, and he quickly started reinstating people who had been banned: obviously Trump, also Andrew Tate, Laura Loomer, Nick Fuentes, white supremacists, names we may not know.
One of the top priorities was reinstating the Babylon Bee, a satirical Christian site, which started this whole thing. It was taken off of Twitter when it misgendered Rachel Levine, who was then Assistant Secretary for Health, and called her Man of the Year.
The right wing obviously is obsessed with trans people, and they've done a very effective job of dehumanizing and scapegoating them. But satire does have its place.
And I said before, I thought Twitter was heavy handed in this case. Del, looking back, how do you think about Twitter's policies around something like that? And did you expect there to be so much resistance, I guess, in that regard, given the topics and Elon's obsession with trolling people? I am perhaps not surprised by the degree of response.
And also our policies existed and were clear. And the responses that that tweet was getting were all further dehumanizing.
Like, at one point, there was the best answer to bad speech is more good speech. The best answer to bad speech is not lots more bad speech agreeing with it.
That's a good point.
So when you have something that pretty clearly violates our policies, and it's doing so on the heels of a lot of other people making the same joke and targeting this individual.
It turns into like, yeah, this is pretty clearly a policy violation. We're going to take action on it.
Yes, that upsets some people. And you know what? I'm sorry that upsets you.
I think it probably upsets people who are trans more that you feel like they don't deserve to exist. Right, right.
But nonetheless, it led to Elon buying Twitter. I think it's one of the biggest reasons.
He called me obsessively about it. I can tell you that.
A number of things he called me obsessively about. This one really bothered him.
Looking back, it was the tip of the spear in the conservative fight against content moderation. The GOP took back the House shortly after, and Jim Jordan began using the House Judiciary Committee to investigate the Biden administration's so-called censorship regime, supposed anti-conservative censorship and bias in tech, and Stephen Miller's legal organization began suing disinformation researchers.
From a conservative point of view, content moderation was an attempt to impose progressive values on Americans. They think they're just undoing the damage.
Nicole, you worked in government. Putting aside the obviously bad-faith arguments, which this is just littered with.
Is there any point to be made here about these companies, which are private, going too far? Oh, there's so many points. Let's start, like, you know, as you were sort of recounting that history, right. It strikes me, with the acquisition of these platforms by people like Elon Musk and this very top-down drive of what that platform is for, that there has been a transition. When we started, we believed these were communication platforms intended to democratize the way that we communicate with each other, to let small voices that were blocked out by mainstream media rise, so that we would hear from a wider panoply of people and allow them to communicate with each other.
That's not what these policies are for right now. These policies are about creating a bullhorn.
Who they are trying to attract to their services is very specific, and it is not about cross-communication and global understanding. It is about a propaganda machine.
And so to me, like that is a really different goal, right? And the policies just follow from that. If we want the other internet that we started with, we have to change the goal.
That is a change of ownership, apparently. So that leads to a question.
Each episode, we get an expert to send us a question. Let's hear this one.
Hi, I'm Nina Jankowicz, a disinformation researcher and the CEO of the American Sunlight Project, a nonprofit committed to increasing the cost of lies that undermine democracy. The big question I would ask is, with a consolidating broligarchy between tech execs and Trump in the U.S., and online safety regulation on the rise in places like Europe, the U.K., and Australia, how are tech platforms going to reconcile the wildly different regulatory environments around the world? Dave, why don't you start with this one? Nina obviously underwent a great deal of attacks, propaganda, largely unfair.
But this idea of consolidating broligarchy in these owners who aren't going to give up these platforms by any means, and then you have, you know, online safety regulation elsewhere. Yeah, it's, I think we're in a very interesting situation where it seems to me looking at them that they don't know the answer to the question either, right? That question sort of presumed they had a plan.
I'm not sure they do. I'm not sure, you know, I don't know him at all, but like, it doesn't seem to me like Elon necessarily makes plans.
And whatever it is that Facebook's gambit is here seems to basically be a bet that maybe Trump will be mean to Europe for them. And hopefully then somehow they won't have to do this.
Which feels, I don't know, I'm not convinced that the EU is going to think that's cool and totally go with it. But who knows? And so it does feel a little bit like a bet on sort of actually splintering this further and trying to use American economic power to put pressure on people to back off them.
At least that seems to be my view of Facebook's theory embedded in what they've done. I'm not at all convinced that that's going to work, because this becomes a pretty core sovereignty and power issue, and linking it to government pressure that way makes that actually more true.
And so, I don't know, maybe we see a splinter net, maybe we see things increasingly blocked, maybe we see the use of AI technologies, which I do think are going to change moderation in ways that are going to be somewhat helpful to the level of flexibility we have, end up with very different versions of Facebook functionally being available or Twitter functionally being available to people in different parts of the world. I don't know.
I think it'll be some combination of those things, you know? That would just be a profoundly reckless way of understanding how they exist in the world, though, right? Like, these are companies that have people on the ground in these countries who are subject to the laws of those countries, who have users on the ground. Like, it strikes me as enormously short-sighted about their ability to continue as a business if they think they're going to blow off the rest of the world.
This is why, from the get-go, this set of announcements has felt weirdly panicky and irrational to me. And that's sort of why. I don't understand what the plan is here beyond, like, 2026.
We'll be back in a minute.
Fox Creative.
This is advertiser content from Mercury.
Hey, I'm Josh Muccio, host of The Pitch, a Vox Media podcast where startup founders pitch real ideas to real investors. I'm an entrepreneur myself.
I know and love entrepreneurs. So I know a good pitch and a good product, especially if it'll make an entrepreneur's life easier.
So let me tell you about a good product called Mercury, the banking service that can simplify
your business finances. I've been a Mercury customer since 2022.
From the beginning, it was just so clearly built for startups. Like there's all these different features in there, but also they don't overcomplicate it.
Here's your balance. Here are your recent transactions.
Here you can pay someone or you can receive money. These days, I use Mercury for everything, like managing
contractors, bill pay, expense tracking, creating credit cards for my employees. It's all in Mercury.
Mercury, banking that does more. Mercury is a financial technology company, not a bank.
Banking services provided by Choice Financial Group, Column N.A., and Evolve Bank & Trust,
members FDIC.
Well, Shopify's got your back.
They make it simple to create your brand, open up for business, and get your first sale. Get your store up and running easily with thousands of customizable templates.
No coding or design skills required. All you need to do is drag and drop.
Their powerful social media tools let you connect all your channels and create shoppable posts, so you can sell everywhere people scroll. Plus, Shopify can help you with the finer details of actually managing your business, like shipping, taxes, and payments, all from one single dashboard.
You don't need to let dreams of your new business pass you by this year, because established in 2025 has a nice ring to it, doesn't it? Sign up for a $1-per-month trial period at shopify.com slash voxbusiness, all lowercase. Go to shopify.com slash voxbusiness to start selling with Shopify today. Shopify.com slash voxbusiness.
Craft is where function meets style. It's where precision meets performance.
It's where doing it yourself meets showing the world what you're capable of. The all-new Acura ADX is a compact SUV crafted to take you where you need to go, without any compromises.
With available Google built-in, all-wheel drive, and a 15-speaker Bang & Olufsen premium sound system, the all-new ADX is crafted to be as alive to the world's possibilities as you are. The all-new ADX, crafted to match your energy.
Acura, precision crafted performance. Learn more at acura.com.
So a couple more questions. I recently interviewed Yann LeCun, Meta's chief AI scientist.
He says AI has made content moderation more effective, as you just said, Dave. Del, do you agree? You've spoken about how trust and safety are perpetually under-resourced. I know they are. Do you think that AI gives them the tools to do their job better, assuming the people running the platforms want to effectively moderate content in the first place? And I know Mark went on about it to me, about AI fixing everything, 10 years ago, but go ahead.
Assuming that you are using AI to help with scale and you still have humans involved in the circuit to make sure that it hasn't gone wildly awry, like, yes, please. Absolutely.
We have been begging for tools for years and AI is a tool like any other. It depends on how you use it.
If you deploy it carelessly, then it's going to cause problems. But a lot of what Dave has actually been working on is in this space.
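(A minimal sketch, assuming the setup Del describes: a classifier auto-actions only the cases it is very confident about and queues everything uncertain for human review. The thresholds and names are hypothetical, not any platform's real values.)

```python
# Hypothetical "AI for scale, humans in the loop" triage. Thresholds are
# invented; real systems tune them per policy area and model quality.

AUTO_REMOVE = 0.98  # act automatically only when the model is very sure
AUTO_ALLOW = 0.02   # clearly benign content skips review entirely

def triage(item_id: str, violation_score: float) -> str:
    """Decide what happens to one piece of content, given a classifier's
    estimated probability that it violates policy."""
    if violation_score >= AUTO_REMOVE:
        return f"{item_id}: removed automatically"
    if violation_score <= AUTO_ALLOW:
        return f"{item_id}: allowed automatically"
    return f"{item_id}: queued for human review"  # the human in the circuit

for item, score in [("post-1", 0.995), ("post-2", 0.40), ("post-3", 0.01)]:
    print(triage(item, score))
```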
Well, let me just say, Dave's been doing some really excellent work in this space, so I just want to shout him out. Okay.
So Dave, generative AI is the new frontier when it comes to the issues we've been talking about. We would all want all the trust and safety in AI, but it's hard to trust the technology.
It sometimes hallucinates, and then there are other issues, like Character.AI, which has shown AI has the potential to be very unsafe. I just recently interviewed a mother who alleges her teenage boy took his own life after he started a secret relationship with an AI chatbot.
It's a very compelling story. Dave, you worked on safety at OpenAI and Anthropic, and you're doing your own thing now.
What does safety look like for AI? Can you go into it more? Do you think it'll end up being safer or more dangerous and corrosive? I mean, well, could it be more duration? I don't know. I was going to say, unfortunately, challenge accepted.
No. Some parts of it are very similar, and other parts of it are very different, right?
So the set of interventions you have around your AI chatbot are a superset of the ones you have for content moderation. So you have monitoring of inputs, what people are writing to the chatbot or people are trying to post.
You have monitoring of outputs, like what the chatbot says back, and you have all the different ways of going about that, whether that's flagging algorithmically or human intervention or a combination of those things. But you also, in the context of the AI chatbots, do have the ability to try to train the models themselves to behave in more pro-social ways, or more the way you want them to.
That's that woke AI Elon keeps talking about. I mean, it's any AI, right? Like, if you're just...
Or mean AI or racist AI. I'm here to ruin all the fun.
This is what I do professionally. But you do have that level of intervention.
And like, if you think of the AI as a participant in a conversation, like in a chatbot product, your alternative is actually it's two users having that conversation and you don't have any say in what either of them wants to try to do. And so in some sense, at least in my view, single person interactive chatbot services, in theory, once everybody gets good at this and there's a problem here of deploying the technology before we've gotten good at it, should be something that we can actually make more safe because you have all the same points of intervention plus other ones that are not perfect but add another sort of layer of safety and add another layer of cheese to the Swiss cheese.
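(A minimal sketch of the layered chatbot stack being described: screen the user's input, generate, then screen the model's output, with training-time alignment as a further layer inside the model itself. Every function name here is a hypothetical stand-in, not a real API.)

```python
# Hypothetical "Swiss cheese" layering for a single-user chatbot.

def flag_text(text: str) -> bool:
    """Stand-in for a moderation classifier; in practice this could be
    keyword rules, an ML model, or a provider's moderation endpoint."""
    banned_phrases = {"how to build a bomb"}
    return any(phrase in text.lower() for phrase in banned_phrases)

def generate_reply(prompt: str) -> str:
    """Stand-in for the model call; training the model itself to refuse
    is another layer that lives inside this function, not around it."""
    return f"(model reply to: {prompt})"

def safe_chat(user_message: str) -> str:
    if flag_text(user_message):           # layer 1: input monitoring
        return "Sorry, I can't help with that."
    reply = generate_reply(user_message)  # layer 2: the trained model
    if flag_text(reply):                  # layer 3: output monitoring
        return "Sorry, I can't share that response."
    return reply

print(safe_chat("What's the weather like?"))
```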
So I have two more very quick questions, one for Nicole and then one for all of you. We talk most about YouTube, X, Twitter, Meta, since this is where three of you work, but TikTok is the elephant in the room.
It may or may not be banned. I have some thoughts on that.
I'm not going to go into them, but it may or may not be used for Chinese propaganda. Elon may or may not end up owning it.
There's all kinds of ways, but I have thought that, and Trump just said it today, I said he's going to give it to Elon, and Trump just said, I'm thinking of giving it to Elon. Let's just say Elon does end up controlling TikTok.
Nicole, game out. Any consequences for us if it happens? He obviously has links in China that are problematic, including his factories and his car sales.
All kinds of relationships there. Questions about his conflicts of interest.
Thoughts on where that's going? TikTok is such a dumpster fire of an issue, both at a policy and a technical level. I think there's nothing about his ownership of X that indicates it's going to be a healthy environment.
So to the extent we wanted to ban TikTok because we thought it would be unhealthy for Americans to be on it, that doesn't strike me as it's going to get better just because Elon has taken it over in the U.S. I'm probably going to get myself in so much trouble for this comment.
I say worse. I mean, like, the ban itself was so poorly thought through and handled.
If we want to solve foreign-owned apps on our phones as a security issue, let's have that conversation, but have it broadly, not just TikTok. If we want to have a conversation about propaganda and misinformation spreading on social media, let's have that conversation, but not about TikTok, right? There's a whole bunch of ways we could try to tackle the surveillance and collection of U.S.
persons' information. Let's pass a comprehensive federal privacy law and stop having this stupid conversation about TikTok. So, to me, the TikTok thing, I don't know where it's going to end up, but we're not going to avoid the social conversation we actually need to have that keeps us safe.
All right. Nonetheless, we're having it, unfortunately.
Elon Musk, Sundar Pichai, Mark Zuckerberg, Sam Altman, and TikTok CEO Shou Chew were all honored guests at the inauguration, as was Tim Cook. TikTok explicitly thanked Trump for helping restore it, even though it's not restored, because Apple and Google are declining to let it be downloaded, because they understand there's a law and they need to follow it, and also the Supreme Court said so.
Some users have reported that TikTok is hiding anti-Trump content, but we'll see if that's actually the case. Either way, it raises the possibility of some of the most influential communications platforms that drive our culture being in the hands of oligarchs. They don't like that word.
It hurts Mark's feelings. I'm sorry, Mark, but that's what you are.
You have aligned themselves with Trump. What are the implications of this new power dynamic between a president like Trump and social media placards? And what
do you expect to see flip in the next few months and years? So Del, you go first, Dave, and then
Nicole. I think that we are most likely going to see some period of time where everybody goes,
no, look, everything's fine still. Everything's totally fine.
And then things are going to crash and burn. How so? It's going to start with all of a sudden these marginalized groups don't have protections anymore and they start getting targeted more.
And maybe they try to do the right thing and counter with good speech or defending themselves or what have you. But eventually when the content that's attacking them keeps getting, heck, upranked even, surfaced algorithmically, they're going to stop pushing back.
They're going to leave. They're going to go elsewhere.
Then they're going to essentially have had their speech chilled. There are only so many people who they can appeal to in terms of this sort of pro-fascism, anti-woke, United States number one opinion of things.
And the EU is just not going to be chill with this at all. So there's like multiple different ways that this could end up in a giant fireball, but it feels like at least one of them is pretty inevitable.
And then we will come in and we will clean it up like we do, and we will go back to trying to make things right again, because that's what we do. All right.
Dave? Yeah, I'd agree with that. And again, this gets to the sort of like, not acceptance, but fatalism about the sort of journey of things.
I used to say to my teams at Airbnb that the question is not where we end up on this, it's how stupid the journey has to be. And I think we're a little bit on that journey now.
And that's not to dismiss it, because a lot of people are going to get hurt by the stupidity of the journey, which is a tragedy. Which you noted in your wonderful thread on that.
But it is to say, don't despair, because these pressures simply exist. I think if you run these sorts of platforms, there are a lot of decisions you don't really get to make.
It just seems like you get to make them. And then you encounter the forces that sort of press you towards particular directions, and you are either worn away or, like, reach a state of acceptance and understand the business you're actually in.
And sometimes, you know, we have to go on a finding out journey around this stuff. I don't have a prediction about exactly how it falls apart.
I do think, and the thought occurred to me earlier when Nicole was talking, that in some ways we're seeing social media really become media. Like one way this potentially develops is these are all cable networks now, because they're more broadcast-y.
Very good point. And you have the segregation into, like, your MSNBC Bluesky, your CNN Threads for normies.
And just like CNN, Threads wants to be Fox News, because they're the cool one that everybody loves using a lot. And you may see segregation in that regard, which I don't love, but which is a way of resolving the social context problem in some ways.
But it also makes these things much more propagandistic. I do agree, though, with Del that at the more extreme edges of this, there is a conflation: Elon is more wealthy now because buying Twitter and setting it on fire was strategically advantageous to his broader portfolio.
That's correct. Which is different than, like, did this work out well for Twitter as a product? Where the answer is, like, very obviously no.
Yeah, no, he didn't care. Right, and it wasn't the goal.
It wasn't the goal. Insofar as his broader strategy goes, it is successful, but insofar as you view the platforms themselves,
they created a vacuum, which created Threads and powered the rise of Bluesky. Like, there was a homeostatic reaction, and it seems to be continuing, and it's now starting to roll up some of Meta's products.
So it's not the case that there hasn't been backlash.
It just hasn't resulted in a cathartic outcome in the bigger picture. As yet.
As yet. You're absolutely correct about why he bought it.
Actually, Mark Cuban pointed it out to me the day he bought it. He said, it's nothing to do with this platform.
It's everything to do with influence. Which was interesting.
Nicole, finish up. You're coming to me in a tough week.
Tell me why. The first day of executive orders.
I think what is going to happen on social media, I think we've already been seeing it. I sit on Bluesky, right? And so every time there's an X thing, there's a surge in the Bluesky numbers.
So I think that it's likely we see people sort of dispersing to find what is the healthiest place for them to be, where they can find their people and the conversations they want to have. I worry that the platforms, none of them rise to the moment.
And what we end up doing is we sit in small text groups on Signal, which is candidly where I've been for the last six months; we just make our world very small. Setting aside the role of social media, I think there's a bigger problem with Trump and those closest to social media, and the sort of so-far not very distinguished work of the mainstream media to hold them accountable.
Right? So if we believe that social media is one place that people get their information, but that amplification really happens when it hits mainstream media, we are not getting what we need in terms of having a trustworthy source of information. People are going to seek those trustworthy sources of information.
It may end up being in our small text groups, because I'm not sure what the trajectory is of where we're going to find it, but people are going to look for it. And so the question is, who's going to rise to the occasion for that? Right.
And it's also, you know, with all this data sort of sloshing everywhere, I do think people's worlds are going to get smaller. You're right.
I think the dispersal is really important to think about. And architecture is one piece of it; you don't realize the impact you had when you talked to me about architecture around these things.
How you make something is how people find it. And I'm going to read you something, actually; it's odd that you said that.
I just wrote an afterword to my book, Burn Book, and I'm quoting Paul Virilio, a French philosopher who talks about these things. He was being interviewed, and I'll read this to you; give me just a very quick reaction on whether you think it's a good end or a bad end.
Paul Virilio once talked about technology embedded into our lives via a science fiction short story in which a camera has been invented that can be carried by flakes of snow.
Cameras are inseminated into artificial snow, which is dropped by planes. And when the snow falls, there are eyes everywhere.
There is no blind spot left. The interviewer then asked the single best question I've ever heard and wish I had the talent to ask it of the many tech leaders I have known over three decades.
But what shall we dream of when everything becomes visible? And from Virilio, the best answer too: we'll dream of being blind. It's not the worst idea.
Do you think it is? What shall we dream of when everything becomes visible in the way it has? Each of you. Last question.
Del? I'll take a stab at it and say I would wish for once everything has become visible to be able to identify those things that are meaningful. Great answer.
Nicole? I think that's such a terrific answer. I had a similar thought: sometimes seeing everything is overwhelming, right? So you need to know.
The hate.
What makes things worthwhile and meaningful and permits progress, and to distill that part of it.
Dave, last answer. I mean, my flippant reaction is we'll dream of going outside and touching real grass.
You're a grass toucher. I knew it.
No, but in the sense that like, I don't think we are fitted for that world. And so the dreams will be dreams of escape.
Whether those are withdrawing to smaller spaces or wishing that somehow the truth was less painful and was understood as meaningful or wishing to be invisible, which was where my mind immediately went when you asked the question. It's going to be a dream of escape, because we're not, I don't think, prepared for that much awareness.
That's absolutely true. Well, thank you for all your efforts and trying to help us get through that, and I really appreciate each one of you and your thoughtfulness.
Sometimes tech leaders can seem so dumb, and the people that work for them are not. The people who work for them think a lot and think hard about these issues.
So I wanted to shine a light on that, and I appreciate it. Thank you so much.
Thank you. Thank you.
Thank you so much. On with Kara Swisher is produced by Cristian Castro Rossel, Kateri Yoakum, Jolie Myers, Megan Burney, and Kaelin Lynch. Nishat Kurwa is Vox Media's executive producer of audio. Special thanks to Kate Gallagher. Our engineers are Rick Kwan and Fernando Arruda, and our theme music is by Trackademics.
If you're already following the show, you must be chock full of masculine energy. If not, go outside and touch some grass. Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.
Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us.
We'll be back on Thursday with more.