How Big Tech Sets the Agenda in Trump’s America
Support for this podcast and the following message comes from Sutter Health.
From life-changing transplants to high blood pressure care, Sutter's team of doctors, surgeons, and nurses never miss a beat.
And with cardiac specialty centers located in the community, patients can find personalized heart care that's close to home.
Learn more at Sutterhealth.org.
Sling has more of the live sports, news, and entertainment channels you love and less of the ones you don't, so you save hundreds on TV.
And with rewards on Sling, you could win up to $10,000 in cash and other prizes just for watching 30 minutes of TV.
Customize your channel lineup or watch for free and get rewarded for doing it.
Sling lets you do that.
Visit Sling.com to learn more and get started.
Restrictions apply.
This is the New Yorker Radio Hour, a co-production of WNYC Studios and the New Yorker.
Welcome to the New Yorker Radio Hour.
I'm Evan Osnos, sitting in today for David Remnick.
I'm based in Washington for the New Yorker, and I can tell you that some of the best political reporting on Trump's second term has actually come from a tech magazine, Wired.
First, Wired published a series of scoops about Elon Musk's reign in the White House.
And even though Musk has moved on, Doge is still having a massive and disruptive influence on our government and on many people's lives and jobs.
It could be startling to realize how much the tech industry is setting the agenda for American politics right now.
Donald Trump, who after all used social media to triumph in politics, has also become a major promoter and beneficiary of cryptocurrency.
And meanwhile, AI is at the top of every serious conversation about the world and how it's changing.
Wired has been covering all of this really well.
So I recently sat down with its global editorial director, Katie Drummond.
Drummond is also the co-host of Wired's podcast called Uncanny Valley.
Katie, we are talking about six months into this administration, and we'll get to what Wired has explained recently as Doge 2.0 in just a second.
But I want to start by going back for a moment to when this effort began,
because Doge has been a hard target for journalists to pin down.
It's operating very much in the shadows by design.
It's not subject to a lot of the usual disclosure rules at other agencies.
But Wired has really dug into this over the course of the administration very successfully.
I mean, you found patterns, you've found individuals.
It seems like the experience of covering tech turned out to be unusually helpful to covering politics right now.
And I want to know how that became clear to you.
How did you and your colleagues confront this puzzle of covering Doge?
I would say without giving myself too much credit, because as we've talked about, I am Canadian, so I don't like to do that.
But, you know, I think it's been very clear to all of us and certainly very clear to me as a journalist for the last several years that the technology industry is now the locus of power in the United States, right?
And arguably around the world, right?
The CEOs of major tech companies hold so many of the keys that determine the way we all live our lives, certainly the way our country is governed.
And what became clear to me, I think, in the run-up to the 2024 election, even before Elon Musk really jumped in sort of in July of 2024, was that that overlap between technology, the technology industry, and politics and federal politics would be a major storyline.
We were prepared because we hired a politics team.
So we just said, we have to start covering this in a more deliberate way.
But we had the benefit of already having really strong coverage of the tech industry.
We have a business team helmed by Zoe Schiffer, who actually wrote a book, a fantastic book, about Elon Musk and his takeover of Twitter.
When you read a lot of our Doge reporting, you'll see several bylines on those stories.
And that's because we had business reporters working with politics reporters to cover any sort of given news event.
Was there an actual confluence of information coming from the West Coast and information coming from Washington?
Did you ever find that these two were leading you to common destinations in a way?
Absolutely.
And I think we had a first-mover advantage on what turned into a very, very big story, right?
The story of Doge.
We got a flood of tips and information from people inside of these federal agencies, right, who were watching this happen, who were interacting with Doge operatives, Doge workers inside their agencies, and felt like they had to say something.
And then sources, you know, close to and around Musk, and sort of adjacent to this almost like right-wing faction of Silicon Valley, were talking to us and helping us understand where Silicon Valley and the tech industry stood vis-a-vis the administration and Doge.
You've actually hit on what is kind of the essential point of encounter, perhaps, in this current era of politics, because there are the civil servants who represent the traditional backbone of American governance.
And then you have this new group of disruptors, as they would describe themselves, coming in.
And each one regards the other as a fundamental obstacle to human flourishing.
You were already alert to the fact that something was about to happen in Washington that local inhabitants, and I'm talking to you, of course, as somebody who lives here, really didn't fully understand.
I think that's right.
It was Meta and Mark Zuckerberg many, many years ago who used the very cliched slogan, move fast and break things.
And that is very much something that many of these people embody.
And so you start moving fast and breaking things and sending emails asking for, you know, bulleted lists of productivity from people who have been civil servants for two decades.
I mean, that is an extreme culture clash.
This has also shown us something important about the ideological underpinnings of Silicon Valley in this period.
You know, you've covered the industry for years and you've seen the ebb and flow of ideas.
This goes all the way back to when you started at Wired back in about 2009, I think, right?
And back then, of course, the cliche about tech was it's fairly left-leaning, it's Obama-aligned, you know, even if it was always quite hyper-market-oriented.
And now, of course, we see this much more complicated picture, a much more overt conservative streak, to put it mildly.
And my question is: was it always there and underappreciated?
Was it quiet?
Or has it grown really in this period?
And if so, why?
I mean, obviously, the tech industry and Silicon Valley are not a monolith.
There are certainly many, many people living and working in San Francisco, in Silicon Valley in tech, who are very left-leaning, who are looking at what's happening with great horror.
And obviously, right, then there is the more conservative streak, the Marc Andreessens of the world, who certainly have taken that set of philosophical ideas and sort of that set of politics and really taken it to the extreme in recent years.
What I'm seeing from leaders in the tech industry is less about an overt set of political beliefs, right?
Like we are liberal or we are leftist or we are right-wing or we are right-leaning, we're all in on the GOP.
It's much more craven and cynical and opportunistic than that.
It is, we are going to move in the direction that is in the best interests of our business, the best interests for our shareholders, you know, for our board, for our own pocketbooks, for our own desire to be dominant.
We are going to go wherever we need to go to realize our ends.
And so, if that means we will be sitting front row center at Donald Trump's inauguration in the year 2025, then that is what that means.
And so, I think the change that I've seen since maybe 2016 to now is less about, oh, all of a sudden a bunch of them have just shifted to the right.
It's much more, oh, they feel very much empowered to follow their own interests and the interests of their companies and their industry wherever that may lead them, even if it is unsavory or perhaps unethical or perhaps ethically dubious at the very least.
They are opportunistic to the nth degree.
And that is what we're seeing.
That's immensely important.
If that's the case, and I think the evidence out there certainly supports that, I want to understand how that happened.
Not to glamorize the past.
Let's remind ourselves that Bill Gates, after all, was brought before Congress.
There was an effort to try to break up Microsoft back in its day for monopolistic tendencies.
But the way in which you described, I think, this period is very persuasive.
And I'm curious what you think was driving it.
How did this theory of what they would describe as shareholder responsibility, where they're taking care of their investors, take on this fundamentalist form?
You know, I remember working at Wired the first time around.
So this was like 2008, 2009.
Wired in that moment was great.
It was a great Wired.
It was so optimistic in its view of what technology was doing to the world.
And this is not exclusive to Wired.
I mean, this was tech journalism overall in that moment.
And I think it spoke to the view of the vast majority of average American citizens, which was, wow, social media is here.
Look at what it has the potential to do.
Look at how the internet can connect all of us.
What an incredible global utopia we may one day soon live in, where we can all spend time on Facebook or Twitter sharing ideas, building bridges, resolving conflict, da-da-da-da-da-da-da.
Silicon Valley was telling a great story.
Everyone was buying it.
These companies grew to be tremendously powerful.
The consequences of their actions and the consequences of what they built became very clear to all of us, right, as the years went on and on and on.
They felt that backlash, right?
They felt the media quote unquote turning on them, the public turning on them.
They felt this shift in sentiment.
A lot of those tech leaders took that very personally.
I mean, they felt that backlash and it led us to where we are today, which is,
well, you know, if I don't sort of have the hearts and minds of the press and of the general public, fuck them.
That sort of turn around 2016, as the promise of social media started to crumble and we all saw what was really there and what the real consequences were, I think that that backlash led Silicon Valley leaders to this almost feudal sense of,
well, I hold all the keys, I have all the money, I already have all of your eyeballs, I have all your data,
so I will do with it what I wish.
Consequences come what may.
And the reality is, other than sort of public perception of a company like Meta or Mark Zuckerberg not being particularly favorable, what are the consequences for these companies, right?
They know that they can operate with relative impunity, and they are now lining themselves up next to a president who will allow that to continue to happen, right?
Who will not take great effort to regulate AI, to regulate, I mean, God forbid we think about regulating social media 20 years too late. They are lining themselves up with an administration that can create the most hospitable environment for them because they are, frankly, you know, tired of hearing from the press, tired of hearing from the public, and have learned their lesson from those backlashes that they worked through.
And I think just feel that they are of a size and a scale and a level of power that
they can operate in this sort of very cynical way, in this very opportunistic way.
Now, we've just had this amazing moment when Elon Musk was essentially defenestrated from Washington by first, you know, his collapsing relationship with the president, but even more importantly, almost by the public in the sense that Tesla sales collapsed.
You had his favorability go through the floor.
Do you get the sense that the tech community takes from that any lesson of, oh, we seem to have crossed some threshold the public couldn't accept?
Or do you think they just say disruption has always generated backlash and we must continue forth?
No, I mean, look, I think there were a lot of lessons that they could have learned from what went wrong with Doge, with Musk.
But from everyone that I've talked to, and we have some fantastic politics reporters who are talking to, you know, sources inside the GOP as well as sources in Silicon Valley.
If they learned anything, ultimately it was: be quieter.
Like Elon Musk is so loud.
I mean, he was so out in public.
He was so out in front.
You don't need to be in President Trump's meetings with his cabinet, with your son.
Like, don't be weird.
Elon Musk flamed out because he was wildly overexposed and it caused all kinds of headaches for himself and for the administration.
I mean, he said crazy stuff.
He said, we're going to cut $2 trillion out of this budget without understanding that there weren't trillions of dollars to feasibly cut in the first place.
But ultimately, if anything, if I'm being totally honest, I think that the tech industry in Silicon Valley, if they've learned anything from what Elon Musk was able to accomplish, it's that this is open season.
Like this is an invitation from the president, from his administration, for these tech elites to ascend to wherever they want to in this country, provided they play by the GOP and Trump's rules.
I'm speaking with Katie Drummond of Wired.
This is the New Yorker Radio Hour.
I'm Evan Osnos.
More in a moment.
The New Yorker Radio Hour is supported by AT&T.
There's nothing better than feeling like someone has your back and that things are going to get done without you even having to ask, like your friend offering to help you move without you even having to offer pizza and drinks first.
It's a beautiful thing when someone is two steps ahead of you, quietly making your life easier.
Staying connected matters.
That's why in the rare event of a network outage, AT&T will proactively credit you for a full day of service.
That's the AT&T Guarantee.
Credit for fiber downtime lasting 20 minutes or more, or for wireless downtime lasting 60 minutes or more caused by a single incident impacting 10 or more towers.
Must be connected to impacted tower at onset of outage. Restrictions and exclusions apply.
See att.com/guarantee for full details.
AT&T. Connecting changes everything.
The headlines never stop, and it's harder than ever to tell what's real, what matters, and what's just noise.
That's where Pod Save America comes in.
I'm Tommy Vietor, and every week I'm joined by fellow former Obama aides Jon Favreau, Jon Lovett, and Dan Pfeiffer to break down the biggest stories, unpack what they mean for the future of our democracy, and add just enough humor to stay sane along the way.
You'll also hear honest, in-depth conversations with big voices in politics, media, and culture like Rachel Maddow, Gavin Newsom, and Mark Cuban that you won't find anywhere else.
New episodes drop every Tuesday and Friday with deep dives every other weekend.
Listen wherever you get your podcasts, watch on YouTube, or subscribe on Apple Podcasts for ad-free episodes.
This is the New Yorker Radio Hour.
I'm Evan Osnos, a staff writer at The New Yorker, and I'm in for David Remnick this week.
I've been speaking with Katie Drummond, the global editorial director of Wired.
Wired's always been a major voice in tech journalism, but these days, and I think especially since Trump took office again, there's hardly any daylight between technology and politics and economics.
Wired has really risen to the challenge of covering these intersections.
Katie Drummond started at Wired as an intern, and she worked at Vice and Bloomberg and other publications before taking the helm at Wired in 2023.
So we'll get back to our conversation, which was recorded for The New Yorker's Political Scene podcast.
Katie, I want to talk about AI.
We can't have any conversation apparently in 2025 without bringing that up.
But in all seriousness, it's connected to some pretty big societal questions that are just over the horizon, or, I would argue, actually already upon us.
The big difference here, it seems to me, between this generation of technology and what's come before, whether it was the mechanization of agriculture or electricity, is the speed of diffusion, how fast this is sweeping through our lives.
Just the sums of money involved are eye-watering already, the amount of money that AI is hoovering up, and we're about to see big changes.
How are you sensing that these companies are talking about their role in the societal implications of this?
Ultimately, they see their role and they see what's happening with AI as an inevitability.
And I think that that's interesting because they are not saying or thinking, well, we're creating this incredible technology and we look forward to seeing whether or how or to what end it's adopted.
They are seeing this, talking about this, positioning themselves and positioning their companies in a way that very overtly says, this is here and more is coming and there's nothing that you can do about it.
And are they right about that?
Is there an inevitability?
They would argue, of course, well, look, if we don't do it, China's going to do it, and so we have to get there first, and we have to claim the moral high ground, and so on.
I have a lot of reasons to be skeptical of that argument. But this idea of inevitability, and I've heard them voice it, seems directly at odds with the other thing that they said for years, which was that we must make sure that the worst consequences and powers of AI are contained and prevented.
How did they suddenly go from saying this thing is a dangerous entity to saying it's coming for us, so let's just try to manage the downside?
Yeah, I mean, I think that a lot of that catastrophizing, not all of it, but a lot of it, was marketing.
They want these models and they want this technology to sound as big and daunting and powerful and impressive and scary as they possibly can, right?
I think that that is by design.
Like, I think it's important if you are spending billions of dollars and raising billions of dollars to make what you do sound not only inevitable, but really, really, really powerful.
Their PR tactics change as the market changes.
And I think the doomsday scenario stuff was a very effective way to inculcate, to establish to Americans and to the rest of the world and to the CEO of every company who is like sipping AI Kool-Aid somewhere in their office right now, that this is serious stuff.
You know, a lot of these AI leaders talk using that sort of dystopian doomsday language, or they talk about the potential crises that AI will unleash, whether it's like 50% unemployment, or, you know, Sam Altman said, AI is going to usher in this era of, you know, widespread rampant fraud because of, you know, the ability to imitate a person.
A lot of that language and that hyperbole masks the fact that these individuals have a stake in exactly the scenarios that they are outlining.
So, Sam Altman oversees something called World ID as part of his enormous empire, which is designed to use biometrics to literally scan your iris and give you a number.
I mean, I can't believe, like, I should start writing novels.
So, he's talking about this dystopian future, how convenient that he already runs a company that has a solution.
When you listen to people talk about artificial intelligence, you always have to ask yourself, what is their motive?
What is their incentive?
Do they run an AI company or are they an independent researcher with an institution that is not funded by Microsoft or Google?
There is so much hype around this technology, and, like, I run Wired, I'm just going to be real with you. Even for me, it can be very hard to distinguish what's actually happening here and how much of this is just BS or just marketing.
One of the projections that you hear from Dario Amodei at Anthropic is that he expects half of all entry-level white-collar jobs to disappear in five years.
And it's sort of been a controversial subject.
But if you listen, you know, the CEO of Ford Motor Company came out and said, I expect that half of the white-collar workforce is going to disappear eventually.
And Amazon says we've probably had peak employment.
I mean, there are these indicators that something is changing in the labor force, even if it's not the sort of sci-fi dystopia that they've been marketing over the years.
How do you think about what is actually going to happen to white-collar work over the course of the next 10 or 20 years?
If I had a perfect answer to that, I would be a consultant.
I would have quit this job.
I would be so rich right now.
You know, I don't think that anybody really knows.
It is true that the nature of employment will change.
It is true that there are some jobs that I could imagine right now that in a year, do I think they'll exist?
Like, probably not, right?
Are there new jobs that will be created as all of these guys are promising?
Like, yes, of course, like, there will be new jobs in this sort of new, more automated white-collar world.
But I think it's two things.
If we're thinking about in the here and now, how do I think about it?
How do I talk to my staff about it?
You can't ignore that these tools and products exist and that they are being treated as an inevitability.
So, it is in your best interests.
If you are just graduating from college, or if you are at the midway point of your career, or wherever you are, you should spend some time with them.
Like, it would behoove you to understand how these things work, to understand how they work in the context of what you specifically do for a living.
The other piece of it that I'm seeing a lot that I think is one of the reasons I don't feel confident making sort of long-term predictions about this is that I also think we are seeing the premature elimination of some roles or the premature integration of AI into some companies and some workforces, which is happening because all of these executives go to the same conferences and then they all talk and then they go back to their offices and panic.
You know, I talked to someone a few weeks ago who works sort of adjacent to big tech.
A lot of these companies are clients of his, and he was telling me about how a lot of companies that he works with have replaced software engineers with AI and are now churning out like the buggiest, shittiest code, like terrible code to the point that they now need to pay another company to debug their software for them because they overshot, right?
Like it's not good enough yet to replace their software engineers, but they did the layoffs anyways because they needed to cut costs.
So I am waiting for some of that dust to settle.
And in the next 12 to 18 months, do I think that half of all entry-level jobs, or whatever that estimate was, will disappear? Do I think that's going to happen in the next year, year and a half?
No, I don't.
I think what we'll see in the next year to year and a half is the reality meeting the hype in terms of are companies and are corporate leaders actually seeing productivity gains?
Is this actually improving the bottom line?
Are these sort of big corporate use cases of AI as real as they have been promised they are?
Or are we not quite there yet?
And I want to give that a minute to settle before we start talking about the fact that, like, nobody's going to have a job anymore.
But, you know, there's so much interesting conversation about sort of AI and education and how that's changing the way students are learning or not learning.
I think that it is in all of our best interests, especially if you are new in your career, to become very conversant in these tools.
Yeah, you mentioned education, and I talked to a former university president recently, and I said, so, okay, you were running a school.
How would you deal with the fact that students are having AI write their papers?
And he said, Well, it's not hard from my perspective.
I think what I would do is just tell them one out of every 10 will be randomly assigned an oral exam, like a PhD student.
And if you flunk, then you really flunk.
Sounded a little Hunger Games to me, but also perhaps quite effective.
And I was curious, in the coverage of this problem of cheating in school, are you seeing us moving towards a more sustainable arrangement, something better than AI writing the papers and AI grading the papers?
I think that educators, by and large, are starting to move away from the panic and the sort of hysteria that I saw characterize the first few years of this, which was,
oh no, our students are using AI.
They're all cheating.
This is terrible.
And it becomes, well, what are you going to do about it?
Because you will have a class next year and next year and the year after and the year after.
And again, this isn't going anywhere.
So, okay, they can all write the essay using ChatGPT now.
You need to change the assignment.
I do think we are seeing educators adopt a more, shall I say, solutions-oriented approach, because the reality is like you need to change the way you conduct your curriculum.
The way you educate students has to change because the reality is a lot of the assignments, a lot of the methodologies that you used 10 years ago, are no longer going to be effective.
If any take-home assignment can be written using an LLM and the detection software, to be quite candid, is not very good and there are plenty of ways around it, how are you going to make sure that these students are learning everything they are supposed to learn instead of just learning how to use ChatGPT to write an essay, right?
Like that's obviously not the end goal.
They should learn how to use ChatGPT, but it shouldn't be just to complete the assignments that their teachers have given them.
So we're at this fascinating moment right now where it seems like there's this Game of Thrones going on among the big AI companies that reminds us a bit of how Microsoft and Google and Meta came to be these giant leviathans.
And we're also at a time when the Federal Trade Commission has been over the last few years trying to break up what it certainly sees as unfair market behavior and monopolistic enterprises.
Do you think that we're heading in a direction where that will replicate itself?
Are we just going to end up with a few giant AI companies?
Or has anything been learned that's going to keep it a bit more distributed?
You know, Evan, I'm really trying to bring some optimism to this conversation, but you're not setting me up very effectively.
Can my phone take a photo?
That's my question, Katie.
Can my, you know, let's find something that we can take some solace from.
Yeah.
Look, I know. I mean, the future that I see in the short term right now, given who's in office, given where all the money sits, where all the power sits, where all the lobbying heft sits in this country, I think that we are moving towards an ongoing monopoly of big tech.
If anything, I think what we'll see in the next sort of year to two years are some of these, many of these actually smaller AI companies or startups just being hoovered up by the bigger players, right?
Like that's inevitably where this is going.
I mean, these companies are fundraising at outrageous valuations.
Like there is so much money being pumped into a lot of these startups.
It's not sustainable.
Some will shut down.
Some will be acquired.
There will be acqui-hires.
We're already seeing that happen.
So I think ultimately we will end up with sort of a portfolio of big tech companies, whether they're the same big tech companies that we're working with now, or whether we start to see that shift a little bit as some of these AI companies become bigger and bigger and bigger.
I mean, I'm talking about OpenAI in particular and sort of what their ultimate destiny looks like.
But I think this sort of era of monolithic big tech is by no means over.
And I wish that I were giving you a different answer, because there's nothing I would love to see more than a more dynamic environment, a more competitive environment, an environment where startups and novel thinking and new ideas really have space to flourish amid the Metas and Googles and Microsofts of the world.
And I think you've actually taken us to a really important point here, which is that if you look at the history of technology and the way that it exists in broader society, you know, there are moments when it is this fundamentally optimistic realm.
And you go back to when we were first putting people on the moon, and it was reflected in the science fiction of the time.
It was all sort of utopian, and what will it be like out in the new realms?
And then we've watched as it's all gotten so much darker over the last generation.
And I just am curious, what gives you reasons for optimism, if there are any?
Do you see people out there, at the moment marginal figures, who are thinking differently or swimming against that tide, and who give you a sense that actually this is not an inevitability, that perhaps there is a voice within this movement that can turn the direction a bit?
I do.
And I would point to sort of two specific examples.
These are two people and two organizations who give me a lot of hope right now, who I think are doing really interesting work.
One is Meredith Whittaker at Signal.
They are incredibly principled and resolute about the premise of Signal, right?
Which is really robust end-to-end encrypted communication.
Period, the end, right?
That is what they do.
They do it tremendously well.
And she is an incredibly articulate voice in every room that she's in about the dangers of exactly what we just spent this time talking about.
Wired is very grateful to Signal because that's where we do most of our chatting right now.
It's also where I talk to my entire family.
I mean, if you aren't on Signal and you live in Trump's America, I would get there.
And the other is, you know, Bluesky and Jay Graber.
And I think that that's a really good example of social media and social networking being done differently and being done in a way that actually empowers the user much more than it empowers the company.
I think honestly, so many of us were at a point where the very nature of social media and how it worked and who benefited and who didn't felt set in stone.
It felt like this is the way it has always been.
This is how Mark himself designed it.
And this is just the world that we live in.
I give all of my content to this platform.
They sell all of my data.
They shove a bunch of ads into my feed.
And everybody is mean to me on here anyway.
Like, that was basically what social media was.
Congratulations, all of us.
This is the utopia that we were promised when they landed a man on the moon.
And I think that Jay Graber and Bluesky have come along and done something really different, which is a decentralized social network, and one that puts the power in the hands of the user to design their experience exactly the way they want it to be designed, to take their followers with them when they move over to another service or another platform.
It's a fantastic idea.
It's actually surprising to me that it didn't exist 10 years ago because I can't imagine now my social media reality and my world without Bluesky.
You can talk all day about, oh, everybody fled Twitter, all the lefties moved to Bluesky.
Well, Twitter is an echo chamber for far-right provocateurs and Nazis.
So I don't think that we need to think too hard about the decision to not spend time on that platform.
And I think that what Bluesky is doing, just from sort of a technological point of view and a human betterment point of view, that is a better way to run a social media company.
It just is.
So yes, there are good things happening on the internet and on your phone, few and far between.
Yeah, we'll take them where we can.
But it is, I mean, the connections to politics are really profound.
And a lot of this is really important and frankly new material for people whose noses are usually buried in politics.
And it's wonderful to have you here, Katie Drummond, Global Editorial Director of Wired.
Thank you so much.
Thank you for having me.
This was great.
Katie Drummond of Wired.
We spoke in July, and our conversation appeared on The New Yorker's podcast, The Political Scene, which I co-host in Washington along with Susan Glasser and Jane Mayer.
I'm Evan Osnos.
David Remnick will be back next week.
Thanks for joining me and have a good week.
The New Yorker Radio Hour is a co-production of WNYC Studios and The New Yorker.
Our theme music was composed and performed by Merrill Garbus of Tune-Yards, with additional music by Louis Mitchell and Jared Paul.
This episode was produced by Max Balton, Adam Howard, David Krasnow, Jeffrey Masters, Louis Mitchell, Jared Paul, and Ursula Sommer.
With guidance from Emily Botein and assistance from Michael May, David Gable, Alex Barasch, Victor Guan, and Alejandra Deckett.
And we had help this week from Amber Bruce.
The New Yorker Radio Hour is supported in part by the Charina Endowment Fund.
Hi, I'm Tyler Foggett, a senior editor at The New Yorker and one of the hosts of the Political Scene podcast.
A lot of people are justifiably freaked out right now, and I think that it's our job at The Political Scene to encourage people to stop and think about the particular news stories that are actually incredibly significant in this moment.
By having these really deep conversations with writers where we actually get into the weeds of what is going on right now and about the damage that is being done, it's not resistance in the activist sense, but I think it is resistance in the sense that we are resisting the feeling of being overwhelmed by chaos.
Join me and my colleagues, David Remnick, Evan Osnos, Jane Mayer, and Susan Glasser on the Political Scene podcast from The New Yorker.
New episodes drop three times a week, available wherever you get your podcasts.