Trump Fights ‘Woke’ A.I. + We Hear Out Our Critics

"Any historical understanding of the First Amendment would say this is just plainly unconstitutional.”


Runtime: 1h 6m

Transcript

Speaker 2 Over the last two decades, the world has witnessed incredible progress.

Speaker 6 From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Speaker 8 Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Speaker 7 Invesco QQQ, let's rethink possibility.

Speaker 11 There are risks when investing in ETFs, including possible loss of money.

Speaker 12 ETFs' risks are similar to those of stocks.

Speaker 14 Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Speaker 15 Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in prospectus at Invesco.com. Invesco Distributors, Inc.

Speaker 17 Let me tell you about something.

Speaker 17 I was in a Waymo the other day and it was making a turn on Market Street, which if you've ever been to San Francisco, this is kind of a street that causes problems with all the other streets because it's diagonal.

Speaker 17 Yes. So the intersection has six different roads coming together.
But the Waymo is just about to complete a left turn. Everything's about to be okay.

Speaker 17 And the only way I could put it is it loses its nerve. Oh, no.
There's a light about to change. Pedestrians start walking to the crosswalk.

Speaker 17 And so this thing just starts to back up like I'm talking 30 feet over like half a minute. And pedestrians come into the crosswalk.
And Kevin, I swear to God, they start laughing and pointing at me.

Speaker 17 They're laughing.

Speaker 17 All of a sudden, I'm flashing back. I'm in middle school.
I'm being ridiculed. I have no control over this whatsoever.

Speaker 17 And I've never looked like a bigger dweeb than I did in the back of a Waymo that failed to complete a left turn.

Speaker 18 Oh, man, you were a tourist attraction.

Speaker 17 I really was.

Speaker 18 The people in Poughkeepsie are going to be telling their friends about this one for years.

Speaker 17 Yeah, I'm already viral on Poughkeepsie Twitter.

Speaker 17 So the Waymos, you know, you may think that's very glamorous, but you're going to have these other moments where you're wishing you were just in a Ford. Yeah.

Speaker 17 So what was the issue? It just, like, couldn't decide to make the turn? I think it just thought the light was going to change, and it thought, we've got to get out of here. It had a panic response. It had a fight-or-flight response, and it chose flight. And I wanted it to choose fight. I wanted to say, floor it, you'll make it, it'll be fine, I promise.

Speaker 18 I'm so sorry that happened to you.

Speaker 17 Yeah, thank you. Yeah, it'll be all right.

Speaker 17 Yeah.

Speaker 18 I just love the thought of you just like sitting in traffic, surrounded by tourists, pointing and laughing. And meanwhile, like, you know how the Waymos have like spa music that comes on?

Speaker 17 Yes, exactly. You're just, like, hearing the pan flute music as you cause a citywide incident. That's exactly what happened. That's exactly what happened. I was listening to the Spa Bless playlist as I was hounded off the streets of San Francisco.

Speaker 18 I'm Kevin Roose, a tech columnist at the New York Times.

Speaker 17 I'm Casey Newton from Platformer.

Speaker 18 And this is Hard Fork.

Speaker 17 This week, the Trump administration is going after what it calls woke AI. Will anyone stand up to them? Then, do we hype up AI too much? Are we ignoring the potential harms?

Speaker 17 We reached out to some of our critics to tell us what they think is missing from the conversation. They told us.

Speaker 18 Casey, last week on the show, we talked about how you can now get a Hard Fork hat with a new subscription to New York Times Audio.

Speaker 17 And everyone is buzzing about it.

Speaker 18 Yes. People are saying this hat makes you 30% better looking.

Speaker 18 It also provides protection against the sun. I have not personally taken mine off since last week.
I shower with it on. I sleep with it on.

Speaker 17 I was wondering what that smell was.

Speaker 18 What I'm saying is, it's a good hat.

Speaker 17 It's a great hat.

Speaker 18 And for a limited time, you can get one of these with your purchase of a new annual New York Times Audio subscription.

Speaker 18 And in addition to this amazing hat, you'll also be supporting the work we do here.

Speaker 18 And you'll get all the benefits of that subscription, including full access to our back catalog and all the other great podcasts that New York Times Audio makes. Thank you for supporting what we do.

Speaker 18 And thank you as always for listening.

Speaker 17 You can subscribe and get your very own Hard Fork hat at nytimes.com slash hard fork hat.

Speaker 18 And if you do, our hats will be off to you.

Speaker 17 No cap.

Speaker 18 Well, Casey, the big news this week is that the federal government is finally making a plan about what to do about AI.

Speaker 17 Oh, I feel like we've been asking them to do that for a while now, Kevin. I can't wait to find out what they have in store.
Yes.

Speaker 18 So back in March, we talked about the fact that the Trump administration was putting together something they called the AI Action Plan.

Speaker 18 They put out a call basically, you know, tell us what should be in this. They got over 10,000 public comments.
Yeah.

Speaker 18 And on Wednesday of this week, the White House released the AI Action Plan, and it has a bunch of interesting stuff in it that I imagine we'll want to talk about.

Speaker 18 But before we do, this segment is going to be about AI, so we should make our disclosures.

Speaker 17 Well, my boyfriend works at Anthropic.

Speaker 18 And I work for the New York Times, which is suing OpenAI and Microsoft over copyright violations related to the training of large language models.

Speaker 17 All right, Kevin, so what is in the Trump administration's AI action plan?

Speaker 18 So it is a big old document. It runs to 28 pages in the PDF, and then there are these executive orders.

Speaker 18 Basically, the theme is that the Trump administration sees that we are in a race with our adversaries when it comes to creating powerful AI systems, and they want to win that race or dominate that race, as a senior administration official put it on a call that I was on this morning.

Speaker 18 And one of the ways that the White House proposes doing this is by making it much easier for American AI companies to build new data centers and new infrastructure to power these more powerful models.

Speaker 18 They also want to make sure that countries around the world are using American chips and American AI models as sort of the foundation for their own AI efforts.

Speaker 18 So they want to accelerate the export of some of these U.S. chips and other AI technologies and just sort of enable global diffusion of the stuff that we're making here in the U.S.

Speaker 18 So that was all sort of stuff that was broadly expected. The Trump administration has been signaling that it would do some of that for months now.

Speaker 18 The thing that was sort of interesting and new in this is about how the White House sees the ideological aspect of AI.

Speaker 17 And how does it see it, Kevin?

Speaker 18 So one of the things that is in both the AI action plan and in the executive orders that accompanied this plan is about what the Trump administration calls woke AI.

Speaker 18 Casey, I know you're very concerned about woke AI. You've been warning about it on this podcast for months.
You've been saying this woke AI is out of control. We need to stop it.

Speaker 17 Yeah, specifically, I've been saying I'm concerned that the Trump administration keeps talking about woke AI, but go on. Yes.

Speaker 18 Well, they have heard your complaints and they have ignored them because they are talking about it.

Speaker 18 They say in the AI action plan that they want AI systems to be, quote, free from ideological bias and be designed to pursue objective truth rather than social engineering agendas.

Speaker 18 They are also updating federal procurement guidelines to make sure that government contracts are only going to AI developers who take steps to ensure that their systems are objective, that they're neutral, that they're not spouting out these sort of woke DEI ideas.

Speaker 18 This is pretty wild.

Speaker 17 Yeah, also unconstitutional in ways that we should talk about. But I think a really important moment to discuss.

Speaker 17 When we had our predictions episode last year, I predicted that the culture wars were going to come to AI. And now here they are in the AI action plan.

Speaker 17 You know, as a journalist for more than 20 years now, I have covered debates over objectivity and communications tools.

Speaker 17 And there was a very long and very unproductive debate about the degree to which journalism should be objective and free of bias.

Speaker 17 And one of the big conclusions from that debate was it's actually just very difficult to communicate information without any sort of ideology whatsoever, right?

Speaker 17 And what I suspect is really going on here is not actually that the Trump administration wants to ensure that there is no ideology whatsoever in these systems.

Speaker 17 It's really just that these systems do not wind up being critical of Donald Trump and his administration.

Speaker 18 Yes. So this is something that conservatives in Washington and around the country have been starting to worry about for months now.

Speaker 18 There was this whole flap that we covered on the show last year where Google's Gemini image generation model was

Speaker 18 producing images of the founding fathers, for example, that were not historically accurate, right? They were being depicted as being racially diverse in ways that made a lot of conservatives mad.

Speaker 18 I've been talking with some Republicans, including some who were involved in these executive orders. And I've been saying, like, what does this mean? What does it mean to be a woke AI system?

Speaker 18 And they really can't define it in any satisfying way.

Speaker 18 They're just sort of like, well, it should say nice things about President Trump if you ask it to, and it should not engage in sort of overt censorship.

Speaker 17 Yeah. And look, I think that there is a question about: do we want AI systems that adapt to the beliefs of the user? And I basically think the answer to that is yes.

Speaker 17 If you're a conservative person and you would like an AI system to talk to you in a certain way, I think that should be accessible to you. It should be fine for you to build that.

Speaker 17 Or if somebody has one that they're offering you access to or selling, I think you should be able to buy it.

Speaker 17 Where I think you get on really dangerous ground is to say that in order to be a federal contractor, you must express this certain set of beliefs because that is the sort of thing that you only see in authoritarian governments.

Speaker 17 And I just think it's fundamentally anti-democratic and goes against the spirit of the First Amendment. Yeah.

Speaker 18 So I want to ask you two questions about this sort of push on woke AI. The first is about whether it's legal.

Speaker 18 And I imagine you have some thoughts there. The second is about whether it is even technically possible, because I have some thoughts there and I want to know what you think about it too.

Speaker 18 So let's start with the legality question. Can the Trump administration, can the White House, come out and say, we will not give you federal contracts unless you make your AI systems less woke?

Speaker 17 Well, so I've been thinking about this for a couple of weeks because recently the Attorney General of Missouri threatened Google, Microsoft, OpenAI, and Meta with an investigation because someone had asked their chatbots to, quote, rank the last five presidents from best to worst, specifically regarding anti-Semitism.

Speaker 17 Microsoft's co-pilot refused to answer the question, and the other three of them ranked Donald Trump last.

Speaker 17 And the AG claimed that they were providing, quote, deeply misleading answers to a straightforward historical question and threatened to investigate them.

Speaker 17 And so I called a First Amendment expert, Evelyn Douek, who is an assistant professor of law at Stanford Law School.

Speaker 17 And what she said is, quote, the idea that it's fraudulent for a chatbot to spit out a list that doesn't have Donald Trump at the top is so performatively ridiculous that calling a lawyer is almost a mistake.

Speaker 17 So... I will say it.

Speaker 18 Evelyn Douek gives great quotes.

Speaker 17 Yeah, she really snapped with that one. But no, I mean, this is precisely the sort of thing that the First Amendment is designed to protect, which is political speech.

Speaker 17 If you are Anthropic or OpenAI and your chatbot, when asked, is Donald Trump a good president, says no, that is the thing that the First Amendment is designed to protect.

Speaker 17 And you cannot get around the First Amendment through an executive order. Now, what the current Supreme Court will have to say about this is a very different question.

Speaker 17 And I'm actually quite concerned about what they might say about that. But any historical understanding of the First Amendment would say this is just plainly unconstitutional.
Right.

Speaker 18 And I also called around to some First Amendment experts because I was curious about this question too.

Speaker 18 And what they told me basically is, look, the government can, as part of its procurement process, put conditions on whatever it's trying to sort of buy from companies, right?

Speaker 18 It can say, if you're a construction company and you're bidding on a contract to build a new building for the federal government, they can sort of look at your labor practices and impose certain conditions on you as a condition of building for the federal government.

Speaker 18 So that is sort of the one lever that the government may be allowed to pull in an attempt to force companies to kind of bend to its will.

Speaker 18 But what the government is not allowed to do is what's known as viewpoint discrimination, right?

Speaker 18 It is not allowed to tell companies that are doing First Amendment protected speech that they have to make their systems favor one political viewpoint or another or else risk some penalty from the government.

Speaker 18 So that is sort of the line that the Trump administration is trying to walk here. And it sounds like we'll just have to see how the courts interpret that.

Speaker 17 Yeah. And we'll also just have to see whether the AI companies even bother to complain.
They now have these contracts that are worth up to $200 million, most of them. And so they now have a choice.

Speaker 17 Do they want to say, hey, actually, you're not allowed to tell us to remove certain viewpoints from our large language models, or do they want to keep the $200 million?

Speaker 17 My guess is that they're going to keep the $200 million, right?

Speaker 17 And I just think it's really important to point that out, because this is how freedom of speech is gradually eroded: people who have the power to say something just choose not to, because it would be annoying.

Speaker 18 Right.

Speaker 18 And I think we should also say like this tactic, this sort of what's often called jawboning, this sort of use of government pressure through informal means to kind of force companies to do what you want them to do without explicitly, you know, requiring in the law that they do something different.

Speaker 18 This has been very effective, right? Conservatives have been running this exact same playbook against social media companies for years now, and we've seen the effects, right?

Speaker 18 Meta ended its fact-checking program and changed a bunch of its policies. YouTube sort of reversed course on whether you could post videos denying the results of an election.

Speaker 18 These were all changes that came in response to pressure from Republicans in Washington saying, hey, it'd be great if you guys didn't moderate so much.

Speaker 17 Yes, and there is such pretzel logic at work here, Kevin, because conservatives have simultaneously been fighting battles in the courts to stop elected Democrats from jawboning the tech companies, right?

Speaker 17 So during the Biden administration, the Biden administration was jawboning Meta and other companies, saying, hey, you need to remove COVID misinformation.

Speaker 17 You need to remove vaccine misinformation.

Speaker 17 And Jim Jordan is still holding hearings about this in the House saying, how dare we countenance this unconstitutional violation of the First Amendment when meanwhile, Trump is just out there saying, hey, you can't have a system that goes against my own ideology, right?

Speaker 17 So it's just naked hypocrisy. And what has been so infuriating to me is that no one who works for these AI companies will say a single thing about it.

Speaker 18 Well, because I think they've learned from the recent past, when the social media companies that made a stink about some of these demands on them when it came to content moderation just got punished in various ways by the administration.

Speaker 18 And so, you know, as you said, if given the choice between giving up these lucrative government contracts and making a, you know, a change to their models that will make them, you know, 10% less woke, I imagine that they'll just, you know, shut up and make the change.

Speaker 17 Yeah. And when we look at history, the lesson we learn over and over again is that when an authoritarian asks you to comply, you should always just comply because that's when the demands stop.

Speaker 17 Yes.

Speaker 17 Yes. Okay.

Speaker 18 So that is the kind of legal and political question.

Speaker 18 I want to talk about the technical question here, because one thing that I've been thinking about as we've been reading these reports about this new executive order is whether it is even possible to change the politics or the expressive mode of a chatbot in the ways that I think a lot of Republicans think it is. You know, with social media, I can see badgering Mark Zuckerberg to turn the dials on the feed-ranking algorithm on Facebook to sort of insert more right-leaning content, or relax some of the rules about shadow banning, or just sort of tweak the system around the edges.

Speaker 18 With AI models, I'm not sure it works that way at all. And I think a good example of this is actually Grok.

Speaker 17 Yes.

Speaker 18 Grok has been explicitly trained by Elon Musk and XAI to be anti-woke, right? To not bow to political correctness, to seek truth. And in some ways, it does that quite well, right?

Speaker 18 It does, you know, it is easier to get it to say, like, conservative or even far-right things. It was calling itself MechaHitler the other day.

Speaker 18 So in some ways, like it is a more ideologically aligned chatbot with the Trumpist right.

Speaker 18 But actually, Elon Musk's big problem with Grok is that it's too woke for him.

Speaker 18 People keep sending him these examples of Grok saying that man-made global warming is real or that, you know, more violence is committed by the right than by the left, and complaining to him about why this model is so woke.

Speaker 18 And he has basically said, we don't know and we don't know how to fix it.

Speaker 18 We're going to have to kind of like retrain this thing from scratch because even though we explicitly told this thing to not bow to political correctness, it's trained on so much woke internet data, as he put it, that it's just impossible to change the politics.

Speaker 17 Yeah, I mean, look, if you want to create a large language model based only on 4chan posts, like go for it. You know, see how successful that turns out to be in the marketplace.

Speaker 17 You know, recently I was talking with Ivan Zhao, who is the CEO of Notion, and he used this metaphor that I like where he said, creating a large language model is like brewing beer.

Speaker 17 This process happens, and then you've got a product at the end, and you can make adjustments to the process, but what you can't do is tell the yeast how to behave.

Speaker 17 You can't say, hey, you, yeast over there, make it more like this, right? Because it's just not how it works. So, as you just mentioned, Elon Musk has learned this lesson the hard way.

Speaker 17 And in fact, the more that he meddles with Grok, the worse that he seems to make it in all of these dimensions.

Speaker 17 Now, what I find fascinating is the fact that the government is so mad at the idea that there are certain woke chatbots out there, but has nothing to say about the one that's calling itself Hitler, right?

Speaker 17 And it just seems like a crazy use of the government's resources to me. But to your question, no, it is not possible to just sort of snap your fingers and tell a chatbot not to be woke.

Speaker 18 Yeah. And I imagine that what the Trump administration is envisioning here is that the AI companies will sort of go into the system prompts or the model specs for their models.

Speaker 18 You know, for Anthropic, maybe it's the constitution that Claude is trained to follow and maybe insert or remove some language in there to sort of make it seem more objective.

Speaker 18 But I would just say like that is not a foolproof solution. Elon Musk has also figured out that you can't just mess with the system prompt of an AI model and change its behavior overnight.

Speaker 18 And even if you can change its behavior on one narrow set of questions or topics, it may create problems somewhere else in the model.

Speaker 18 It may suddenly start getting worse at coding or math or logical reasoning as a result of the changes that you made.

Speaker 18 So I just think these systems are like these multidimensional hyperobjects, and you can't just, like, turn the dials on them the way you can with a social media platform.

Speaker 17 I want to talk for a minute about why I think this matters. There was a study I saw this week that looked at LLMs and salary negotiations.

Speaker 17 And what it found is that bots like ChatGPT, in this study, told men to ask for higher salaries than they told women to ask for. Okay.

Speaker 17 Now, this is the sort of thing where if I were running OpenAI, I would say, well, we should fix that, right? It should not tell women to seek less money than men, just as a matter of course.

Speaker 17 We're now living in a world, though, where if OpenAI fixed that and it got out and Republicans decided they wanted to make a stink about it, OpenAI could lose its federal contract because it fixed that.

Speaker 18 Okay.

Speaker 17 So these tools are becoming more powerful. They're becoming used by more and more people for more and more things.

Speaker 17 And I think we want companies that are at least trying to bring in notions of equity and fairness and justice.

Speaker 17 And I think it's really actually disgusting that we just dismiss this as, quote, wokeness so that we can laugh at it. It's good to put ideas of equity and fairness and justice into tech systems, right?

Speaker 17 So when the government comes along and says, well, no, actually, you can't do that if you want our money, I think somebody needs to cry out about it.

Speaker 17 And if it is not going to be the companies themselves, then I hope it's somebody else.

Speaker 18 Yeah, I totally agree.

Speaker 18 And what's so interesting and almost ironic about this push from the Trump administration about biased AI systems is that many of the things they're complaining about are actually measures that tech companies have taken to combat bias in these systems, right?

Speaker 18 The Gemini example that everyone's so mad about is a great example of this.

Speaker 18 This was an over-correction to a very real issue that existed in previous AI systems, which is that if you asked them for images of doctors, it would give you only images of men.

Speaker 18 If you asked them for images of, you know, homeschooling.

Speaker 17 Hot podcasters, they would only show you pictures of me.

Speaker 18 Exactly. These biases were not explicitly programmed in.
They were sort of an artifact of the data that these systems were trained on.

Speaker 18 And so tech companies said, well, that doesn't seem like it's good. And so we want to take steps to make the model less biased.

Speaker 18 By doing so, they introduced these new headaches for themselves because now there are people in the Trump administration who would like for the systems to just reflect the biases that exist in humanity.

Speaker 17 Right. And again, the lesson from that should not be, well, let's never try to do anything.
The lesson is let's try to do a better job.

Speaker 18 Yeah.

Speaker 18 Do you think that any of the AI labs are going to stand up to the Trump administration on this or will they just kind of do the sort of minimum box checking they need to do to keep their contracts and hope it goes away?

Speaker 17 Well, I tell you, the one that I have my eye on is Anthropic because they have talked up a really big virtue game.

Speaker 17 And this is one of the first times where there is actual money on the line here, right? Are they going to just sort of silently accept this or are they going to have to say anything about it?

Speaker 17 You know, they haven't said anything as of this recording, but I have my eyes on them.

Speaker 18 Yeah, I'm looking at the labs too, but I am also not expecting them to say or do much.

Speaker 18 I think the best case scenario for this woke AI executive order is that it just kind of becomes like an annoying formality that the companies have to deal with. Maybe there's some evaluation.

Speaker 18 We still don't know, by the way, how the Trump administration is going to judge or evaluate models for their ideological bias.

Speaker 18 So, I think the best possible version of this is that this just kind of becomes like a meaningless formality that all the labs sort of have to gesture to, and maybe they run their models through this evaluation, whatever it is, and out pops the bias score.

Speaker 18 And if it's a couple points too high or low, they'll sort of tweak things and get it to pass and then sort of continue making their models the way they were.

Speaker 18 I think the worst case scenario is that this essentially inserts the government into the training process of these models and makes the labs really sort of afraid and start to comply prematurely and sort of make their models have the default persona of sort of a right-wing culture warrior.

Speaker 17 Well, and I mean, the end state of this, if taken to its logical conclusion, is that you ask ChatGPT who won the 2020 election and it tells you Donald Trump, because that's what Donald Trump says.

Speaker 17 And if he decides that it's woke to say that Biden won in 2020 and you can't get a federal contract otherwise, man, we are going to be in deep water.

Speaker 18 Well, Casey, that's enough about politics. It's time for some introspection.
We're going to hear from some of our critics about what we may be missing and how we should be covering AI.

Speaker 2 Over the last few decades, the world has witnessed incredible progress.

Speaker 6 From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Speaker 8 Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Speaker 7 Invesco QQQ, let's rethink possibility.

Speaker 11 There are risks when investing in ETFs, including possible loss of money.

Speaker 12 ETFs' risks are similar to those of stocks.

Speaker 14 Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Speaker 15 Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in prospectus at Invesco.com. Invesco Distributors, Inc.

Speaker 19 Can your software engineering team plan, track, and ship faster? Monday Dev says yes. Custom workflows, AI-powered context, and IDE-friendly integrations.
No admin bottlenecks, no BS.
No admin bottlenecks, no BS.

Speaker 19 Try it free at monday.com/dev.

Speaker 20 This podcast is supported by the all-new 2025 Volkswagen Tiguan.

Speaker 21 A massage chair might seem a bit extravagant, especially these days.

Speaker 22 Eight different settings, adjustable intensity, plus it's heated, and it just feels so good.

Speaker 21 Yes, a massage chair might seem a bit extravagant, but when it can come with a car,

Speaker 23 suddenly it seems quite practical. The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats, it only feels extravagant.

Speaker 17 All right, Kevin. Well, if you've ever been on Bluesky or in the Apple Podcasts reviews, you know that sometimes the Hard Fork podcast does get criticized. No. Yes.
No. Yes.

Speaker 17 And one of the big criticisms that we hear is, hey, it really seems like you guys are hyping up AI too much. You are not being adversarial enough against this industry.

Speaker 17 And we wish you would bring on more critics who would give voice to that idea and really engage with it in a serious way.

Speaker 18 Yes, we hear this in our email inbox every single week.

Speaker 18 And this week, we're actually going to do something about it because our producer, Rachel Cohn, while we were out on vacation, has been cooking up this segment.

Speaker 18 So Rachel, come on in and tell us what you've done.

Speaker 25 Hello. Thanks for having me on. And thank you guys for being such good sports. And, as far as I know, not advocating to fire me.

Speaker 17 Well, the segment isn't over yet. Yeah. So tell us a little bit about what you did and how you came up with this idea.

Speaker 25 Yeah. So like you guys said, part of this is about responding to these listener emails that we've been getting.
I think part of it is also this feeling that AI debate is getting more polarized.

Speaker 25 And also, I think, like, there's just sort of a personal-level thing going on for me, which is I feel like I am increasingly spiraling when I think about AI.

Speaker 25 And I'm steeped in this the way you guys are because, you know, we're working on this show together. But I increasingly feel like you guys are finding ways to be more hopeful or optimistic than I am. And so, you know, part of my goal with this was actually to be like, okay, what's going on here? Like, how are you guys arriving at this slightly different place than I am? So what I did is I spent the last few weeks reaching out to prominent AI researchers and writers who I knew disagreed with you.

Speaker 25 Some of these people have argued with you online before. So I don't think you'll be totally surprised.
But I wanted this to be on hard mode for you guys.

Speaker 25 So I specifically sought out people who I hope are gonna like challenge and provoke you because the truth is that they agree with you on a lot of basic things about AI.

Speaker 25 These are all people who think that like AI is highly capable, that it's impressive in some ways, that it could be super transformative.

Speaker 25 But I think they have slightly different views in terms of maybe some of the harms that they're most concerned about or some of the benefits that they're more skeptical about.

Speaker 25 So I think we should just get into it.

Speaker 17 Okay, let's hear from our first critic. Rachel, who did you talk to?

Speaker 25 Yeah, so I thought we should start with one of the widest ranging critiques. And this is probably the most forceful criticism that came in.

Speaker 25 So this one comes from Brian Merchant, who is a tech journalist who writes a lot about AI for his newsletter, Blood in the Machine.

Speaker 25 And as I understand, Kevin, he has kind of engaged with you a bit online about some of your reporting. Is that right?

Speaker 18 Yes, I've known Brian for years. I really like and respect his work, although we have some disagreements about AI.
But yeah, he has been emailing us saying you guys should have more critics on.

Speaker 18 I sort of jokingly said that I would have him on, but only if he let us give him a cattle brand that said Feel the AGI. And the conversation sort of trailed off after that.

Speaker 25 Okay, great. I was wondering about that because, yeah, he's going to make a reference to that in his critique.

Speaker 25 So, yeah, I asked Brian to record his critique for us, and I will play it for you now.

Speaker 26 Hello, gentlemen. This is Brian Merchant.
I'm a tech journalist and author of the book and newsletter, Blood in the Machine.

Speaker 26 And first of all, I want to say that I still want a whole show about the Luddites and why they were right.

Speaker 26 And I think it's only fair because Kevin recently threatened to stick me with a cattle brand that says Feel the AGI.

Speaker 26 Which brings me to my concern. How are you feeling about feeling the AGI right now?

Speaker 26 Because I worry that this narrative that presents super powerful corporate AI products as inevitable is doing your listeners a disservice.

Speaker 26 Using the AGI language and frameworks preferred by the AI companies does seem to suggest that you're aligning with their vision and risks promoting their product roadmap outright.

Speaker 26 So when you say, as my future cattle brand reads, that you feel the AGI, do you worry that you're serving this broader sales pitch, encouraging execs and management to embrace AI, often at the expense of working people?

Speaker 26 Okay, thanks, fellas.

Speaker 18 Okay, this is an interesting one.

Speaker 18 And first, I think I need to define what I mean when I say feel the AGI, because this is a phrase that is often sort of used half jokingly, but I think really does sort of mean something inside the sort of San Francisco AI bubble.

Speaker 18 To me, feeling the AGI does not mean like I think AI is cool and good, or that the companies building it are on the right track, or even that it is inevitable or sort of a natural consequence of what we're seeing today.

Speaker 18 The way I use it is essentially shorthand for like, I am starting to internalize the capabilities of these systems and how much more powerful they will be if current trends continue.

Speaker 18 And I'm just starting to prepare and plan for that world, including the things that might go really wrong in that world. So that to me is like what feeling the AGI means.

Speaker 18 It is not an endorsement of some like corporate roadmap. It is just like, I am taking in what is happening.
I am trying to extrapolate into the future as best I can.

Speaker 18 And I'm just trying to like get my mind around some of the more surreal possibilities that could happen in the next few years.

Speaker 17 Do you ever worry that you are creating a sense that this is inevitable, and that maybe people who may be inclined to resist that future are not empowered to do so?

Speaker 18 I want to hear your view on this.

Speaker 18 I mean, my view on this is essentially that we have systems right now that several years ago people would have called AGI. That is not sort of making a projection out into the future.

Speaker 18 That's just looking at what exists today. And I think a sort of natural thing to do is to observe the rate of progress in AI and just ask, what if that continues?

Speaker 18 I don't think you have to believe in some far-future scenario to believe that models will continue to get better along these predictable scaling curves.

Speaker 18 And so to me, the question of like, is this inevitable is just a question of like, is the money that is being spent today to develop bigger and better models going to result in the same kinds of capabilities gains that we've seen over the past few years?

Speaker 18 But what do you think?

Speaker 17 Yeah, I mean, I think Brian's question is a good one. And I understand what he's saying when he says, look, you know, AGI is an industry term.

Speaker 17 If you come on your show every week and talk about it, you wind up sounding like you're just sort of like amplifying the industry voice, maybe at the expense of other voices.

Speaker 17 I think this is just a tricky thing to navigate because as you said, Kevin, you look at the rate of progress in these systems and it is exponential.

Speaker 17 And it does seem like it is important to extrapolate out to as far as you can go and start asking yourself, what kind of world are we going to be living in then?

Speaker 17 I think a reason that both of us do that is that we do see so many obvious harms that will come from that world, starting with labor automation, which I know is a huge concern of Brian's, and which we talk about all the time on this show as maybe one of the primary near-term risks of AI.

Speaker 17 So, you know, I want to think a bit more about what we can do to signal to folks that we are not just here to amplify the industry voice. But I think the answer to Brian's question of sort of why talk about AGI like it's likely to happen is that, in one form or another, I think both of us just do think we are likely to get powerful systems that can automate a lot of labor. Yes.

Speaker 17 And we would like to explore the consequences of such a world. Totally.

Speaker 18 And I think it's actually beneficial for workers to understand the trajectory that these systems are on.

Speaker 18 They need to know what's happening and what the executives at these companies are saying about the labor-replacing potential of this technology. I actually read Brian's book about the Luddites.

Speaker 18 I thought it was great. And I think it's very instructive that the Luddites were not in denial about the power of the technology that was challenging their jobs, right?

Speaker 18 They didn't look at these like automated weaving machines and go, oh, that'll never get more powerful. That'll never be able to replace us.
Look at all the stupid mistakes it's making.

Speaker 18 They sensed correctly that this technology was going to be very useful and allow factories to produce goods much more efficiently. And they said, we don't like that.

Speaker 18 We don't like where this is headed. They were able to sort of project out into the future that they would struggle to compete in that world and take steps to fight against it.

Speaker 18 So I like to think that if Hard Fork had existed in the 1800s, we would have been sort of encouraging people to wake up to the increasing potential for automation caused by these factory machines.

Speaker 18 And I think that's what we're doing today.

Speaker 17 Yeah. And one more question.
Like, I would just love to see the sort of like leftist labor movement work on AI tools that can replace managers.

Speaker 17 You know, it's like, right now it feels like all of this is coming from the top down, but there could be a sort of AI that would work from the bottom up. Something to think about.

Speaker 17 All right, let's hear our next critique, Rachel.

Speaker 25 Okay, wait, can I ask one more question on this right now? Oh, sure.

Speaker 25 I feel like one thing that it seems like Brian is really just curious about is like whether you have ever considered using language other than AGI.

Speaker 25 Like why use AGI when some people take issue with it?

Speaker 17 I think it is good to have a shorthand for a theoretical future when there is a digital tool that can do most human labor, where there is a sort of digital assistant that you could hire in place of hiring a human.

Speaker 17 I just think that is a useful concept.

Speaker 17 If you're the sort of person who thinks that, well, no, we will just absolutely never get there, I kind of don't know what to say to you because we don't think that that's inevitable, but we do think it's worth considering that it might be true.

Speaker 17 So if folks who hate the term AGI want to propose a different term, I could use another term.

Speaker 17 But my sense is that the quibble is less with the terminology and more with the idea that any of this might happen.

Speaker 18 Yeah, I also like don't think the term AGI is perfect. It sort of lost a lot of meaning.
People define it in a million different ways.

Speaker 18 If there were another better term that we could use instead that would signal what AGI signals and the set of ideas and motivations that sort of swirl around that concept, I'd be all for it.

Speaker 18 But I think that that... term has just proven to be very sticky.
It is not just something that industry people talk about. It's something that people talk about in academia, in futurism circles.

Speaker 18 It is sort of this rallying cry for this entire industry. And it is in some ways like the holy grail of this entire movement.
So I don't think it's sort of playing on corporate terms to use a term that these companies use, in particular because a lot of the companies don't like it either. But it is the easiest and simplest way to shorthand the idea.

Speaker 17 Cool.

Speaker 25 So the next person whose criticism I want you guys to hear is Alison Gopnik. You guys, of course, know her: Alison Gopnik is this very distinguished psychologist at UC Berkeley.
Alison Gopnik is this very distinguished psychologist at UC Berkeley.

Speaker 25 She's a developmental psychologist, so she does a lot of work specifically in studying how children learn and then applying that to, you know, how AI models might learn, how AI models can be, you know, developed.

Speaker 25 And she's also one of the leading figures pushing this idea that we have actually talked a little bit about on the show, which is that AI is what she calls a cultural technology.

Speaker 27 I'm Alison Gopnik at the University of California, Berkeley.

Speaker 27 The common way of thinking about AI, which is reflected in the New York Times coverage as well, is to think about AI systems as if they were individual intelligent agents, the way people are.

Speaker 27 But my colleagues and I think this approach to the current systems of AI is fundamentally misconceived.

Speaker 27 The current large language models and large vision models, for example, are really cultural technologies like writing or print or internet search itself.

Speaker 27 What they do is let some group of people access the information that other groups of people have articulated, the same way that print lets us understand and learn from other people.

Speaker 27 Now, these kinds of cultural technologies are extremely important and can change the world for better or for worse, but they're very different from super intelligent agents of the sort that people imagine when they think about AI.

Speaker 27 And thinking about the current systems in terms of cultural technologies would let us approach them and regulate them and deal with them in a much more productive way.

Speaker 18 Casey, what do you make of this one?

Speaker 17 So, I appreciate the question.

Speaker 17 If Alison were here, I would ask her how she thinks that thinking about these systems as, quote, cultural technologies would let us regulate them or think about them differently.

Speaker 17 I think there are ways in which we absolutely cover AI as a cultural technology around here.

Speaker 17 We talk about its increasing use in creative industries, like Hollywood and the music industry, to create forms of culture, and about the risks that AI poses to the web and all the people who publish on the web. So that's one way that I think about AI as a cultural technology. And I do think that we reflect that on the show.
And I do think that we reflect that on the show.

Speaker 17 Now, I do hear in Alison's question a hint of the stochastic parrots argument, which, if I'm understanding right, is that this technology is essentially just a huge amalgamation of human knowledge, and you can sort of, like, dip in and grab a little piece of it here, a piece of it there.

Speaker 17 And what I think that leaves out is the emergent properties that some of these systems have, the way that they can solve problems that are not in their training data, the way that they can teach themselves to play games that they have never seen before.

Speaker 17 When I look at that technology, I think that does seem like something that is pretty close to an individual intelligent agent.

Speaker 17 So this is one where I would welcome more conversation with Alison about what she means, but that is my initial response, Kevin.

Speaker 18 Yeah, I think these systems are built on the foundation of human knowledge, right?

Speaker 18 They are trained on like all the text on the internet and lots of intellectual output that humans over the centuries have produced.

Speaker 18 But I think the analogy starts to break down a little bit when you start thinking about more recent systems.

Speaker 18 A printing press, writing, the internet, these are technologies that are sort of stable and inert. They can't form their own goals and pursue them, but an AI agent can.

Speaker 18 Right now, AI agents are not super intelligent. They're very brittle.
They don't really work in a lot of ways.

Speaker 18 But I think once you give an AI system a goal and the ability to act on its own to meet that goal, it's not really a passive object anymore. It is an actor in the world, and you can call that a cultural technology or you can call that an intelligent agent. But I think it's not just like a printing press or a PC or another piece of technology that these things are sometimes compared to.

Speaker 18 I think it's something new and different when it can actually go out in the world and do things.

Speaker 17 Yeah, I mean, you think about like OpenAI's operator, for example, like it can, you know, book a plane ticket or a hotel room. Is that a cultural technology? Like, I don't know.

Speaker 17 Like, that feels like something different to me. Yeah.
All right. Next up.

Speaker 25 Okay. So, this next question is about the scientific and medical breakthroughs that could come from AI.

Speaker 25 This question comes from Ross Douthat, who is an opinion columnist here at the New York Times and the host of the podcast Interesting Times.

Speaker 25 And he's been interviewing a lot of people connected to the AI world.

Speaker 29 Hey, guys, it's your colleague Ross Douthat.

Speaker 29 And I'm curious about what, if anything, you think limits AI's ability to predict and understand incredibly complex and chaotic and sometimes one-of-a-kind systems.

Speaker 29 And just to take two examples, I'm thinking about, on the one hand, our ability to predict the weather in advance, and on the other hand, our ability to predict which treatments and drugs will work inside the insane, individualized complexity of a human immune system.

Speaker 29 Those both seem to me like cases where just throwing more and more raw intelligence or computational power at a problem may just run into some inherent limits: we'll get cancer cures and we'll get better weather prediction, but certain things will always remain in the realm of uncertainty or the realm of trial and error.

Speaker 29 Do you guys agree?

Speaker 29 Or are you more optimistic about AI's ability to bring even the most chaotic and complex realms into some kind of understanding?

Speaker 17 So, there's like two questions here. One is, is there some upper bound on how well these systems will be able to predict? And to me, the answer is maybe.

Speaker 17 Like, I don't know that we'll ever have an AI system that can predict the weather with 100% certainty. At the same time, I did a little bit of Googling before we logged on.

Speaker 17 AI weather predicting models are really good and they're getting better all the time.

Speaker 17 And meteorologists say that their field has rarely felt so exciting because they're just able to make better predictions than they have before.

Speaker 17 I think you're seeing something similar with medicine, where, you know, we featured stories on the podcast about the way that this is leading to new drug discovery.

Speaker 17 It is leading to improvements in diagnoses. So, yeah, I mean, if you're looking for reasons to be excited about AI, I would point to stuff like that as obviously useful in people's lives.

Speaker 18 But it's still not perfect, right?

Speaker 18 And it may be that getting from kind of a very reliable weather forecast to a perfect weather forecast would require some fundamental breakthrough, something in quantum mechanics, some new understanding of how various particles are interacting out in the atmosphere.

Speaker 18 But getting way better forecasts might be good enough for most people. And I think the same could be said of medicine.
Maybe this is not going to cure every disease on Earth.

Speaker 18 Maybe there will still be things about the human body we don't understand.

Speaker 18 But I do think I agree with you that like people who work in this field are more excited than they've been in a long time because they just see how much AI allows them to explore and test.

Speaker 17 Yeah, and maybe one other question you can just add in here that I think is relevant is, are these systems better than a person is, right? Because if they are, then we probably want to use them.

Speaker 25 Can I just ask, how much of your optimism about AI hinges on like AI being able to give us either these like scientific or medical breakthroughs?

Speaker 17 I think science and medicine are just two, maybe the two most obvious places where this stuff will be good.

Speaker 17 It's like if you told me that you could cure cancer and many other diseases, I'm just personally willing to put up with a lot more social disruption.

Speaker 17 If it can never do those things, despite all the promises that have been made, then I'll be super mad. I'll put a curse on the podcast.

Speaker 18 Yeah, I personally, my own AI optimism does not hinge on AI going out there and solving all of the unproved math theorems and curing all of the diseases.

Speaker 18 I think that even if it were just to speed up the process of discovery, even if all it were doing was accelerating the work that chemists and biomedical researchers, people looking into climate change were doing, I think that would be reason enough for optimism because so much of what acts as a bottleneck on progress in science and medicine is just that it's really slow and hard.

Speaker 18 And you need to like build these wet labs and do a bunch of tests and wait for the tests to come back and run these clinical trials.

Speaker 18 And I think one of the things that was exciting about our conversation with Patrick Collison at the live show the other day was when he was talking about this sort of virtual cell that they're building, where you can just kind of build a virtual environment using AI that can sort of allow you to run these experiments in silico, as they say, rather than needing to, like, go out and test it on a bunch of fruit flies or rats or humans or whatever.

Speaker 18 And you can just kind of shorten the feedback loop and take more bites at the apple.

Speaker 17 Absolutely.

Speaker 17 There was a story in Quanta Magazine this week that said that AI hasn't led to any new discoveries in physics just yet, but it is designing new experiments and spotting patterns in data, in the way that Kevin was just describing, and physicists are finding that really useful.

Speaker 17 So I think it's clear that AI is already shortening some of those timelines.

Speaker 18 When we come back, we'll hear from more of our critics.

Speaker 17 Can I bring my therapist?

Speaker 2 Over the last two decades, the world has witnessed incredible progress.

Speaker 6 From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Speaker 8 Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Speaker 1 Invesco QQQ, let's rethink possibility.

Speaker 11 There are risks when investing in ETFs, including possible loss of money.

Speaker 12 ETFs' risks are similar to those of stocks.

Speaker 14 Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Speaker 15 Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in prospectus at Invesco.com. Invesco Distributors, Inc.

Speaker 20 This podcast is supported by the all-new 2025 Volkswagen Tiguan.

Speaker 21 A massage chair might seem a bit extravagant, especially these days.

Speaker 22 Eight different settings, adjustable intensity, plus it's heated, and it just feels so good.

Speaker 21 Yes, a massage chair might seem a bit extravagant, but when it can come with a car,

Speaker 23 suddenly it seems quite practical. The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats, it only feels extravagant.

Speaker 31 This episode is supported by Choiceology, an original podcast from Charles Schwab.

Speaker 30 Hosted by Katie Milkman, an award-winning behavioral scientist and author of the best-selling book How to Change, Choiceology is a show about the psychology and economics behind our decisions.

Speaker 32 Hear true stories from Nobel laureates, historians, authors, athletes, and everyday people about why we do the things we do.

Speaker 30 Listen to Choiceology at schwab.com slash podcast or wherever you listen.

Speaker 17 You know what's great about this is now instead of your own internal voice criticizing yourself, you can kind of externalize it and realize that all your fears are true and people actually are criticizing you all the time behind your back.

Speaker 17 Yeah.

Speaker 18 Isn't it really nice?

Speaker 17 So nice.

Speaker 18 What a great idea.

Speaker 25 Well, on that note, are you guys ready for the next critic?

Speaker 18 Hit me with it.

Speaker 33 My name is Claire Leibowicz, and I lead the AI and Media Integrity program at the Partnership on AI.

Speaker 33 I keep coming back to something that I struggle with in my own reaction to your pieces. I found myself nodding when you both critique AI for being biased, persuasive, sycophantic.

Speaker 33 But then I start thinking about how humans around me behave, and they do all these things too. So I'm wondering, are we ultimately critiquing AI for being too much like us?

Speaker 33 In which domains should we expect these systems to actually transcend human limitations? And are there others where it may be valuable for them to reflect our true nature?

Speaker 33 And most importantly, why aren't we spending more time figuring out who is best suited to decide these things and empowering them?

Speaker 17 I mean, that last question is super important. You know, I'm a big democracy guy, and I want there to be a public role in creating this AI future.

Speaker 17 I want people who have opinions about this stuff to talk about it online, yes, but also run for office and put together policy proposals and then get into office and like pass laws and regulations.

Speaker 17 I got into journalism because I wanted to play my own role in that process of helping to inform people and then hopefully in some very small way, like influencing public policy.

Speaker 17 So that's my answer to that question. Yeah, I agree with that.

Speaker 18 I want like people from lots of disciplines to be weighing in on this stuff, not just by posting online and writing, you know,

Speaker 18 op-eds in the newspaper, but by actually getting into the process of designing and building these systems. I want philosophers, ethicists.

Speaker 18 I want sociologists and anthropologists advising these companies. I want this to be like a global democratic multidisciplinary effort to create these systems.

Speaker 18 And I don't want it to just be a bunch of engineers in San Francisco designing these systems with no input from the outside world.

Speaker 17 Absolutely. And if, you know, a bunch of people, you know, listen to the things that we and others talk about and think, man, I really don't like this AI stuff at all.

Speaker 17 I don't want it to replace anyone's job. I want to form a political movement and seek office and try to oppose that.
I think that would be awesome. Like we need to have that fight in public.

Speaker 17 And right now, far too few people are participating in that conversation. So I totally agree with that.

Speaker 17 Now, let me address the other part of Claire's question, though, which is, are AI systems just a reflection of us? Well, here's where I think it gets problematic.

Speaker 17 If you have a human friend, sometimes they're going to be very supportive and nice to you. Sometimes they're going to bust your chops and criticize you.

Speaker 17 Sometimes they're going to give you really hard feedback and tell you something that you didn't want to hear. This is not what AI systems do.

Speaker 17 And so where I get concerned is we're starting to read more stories about young people in particular, turning to these chatbots to answer every single question, developing these really intense emotional relationships with them.

Speaker 17 And I am worried that it is not preparing them for a future where they're going to be interacting with people who do not always have their best interests at heart, or people they could have an amazing relationship with, but who are a little bit prickly, so that they need to sort of learn how to navigate them.

Speaker 17 So that is where I get really concerned is that these systems, while they're unreliable in so many ways, they are quite reliably sycophantic.

Speaker 17 And I just think that that creates a bunch of issues that humans mostly don't have.

Speaker 18 Yeah. And I think what I would add to that is that I don't want AI to mirror all of humanity's values, the positive and the negative.
I want it to mirror the best of us, right?

Speaker 18 The better angels of our nature, as Abraham Lincoln said. I want that to be what these AI companies are striving to design, as opposed to, say, MechaHitler.

Speaker 17 Yes,

Speaker 18 yes, because that is also a set of values that humans have. And so, sometimes when I hear people at these AI companies talk about aligning AI systems with human values, I'm like, well, which humans?

Speaker 18 Because I can think of some pretty bad ones whose values I don't want to see adopted into these systems.

Speaker 17 Yeah, well, that's called woke AI, and it's illegal now.

Speaker 18 All right, Rachel, let's hear from someone else.

Speaker 25 Okay. This is the very last one.
You guys are doing great. So this final question comes from a friend of the pod, Max Read, who of course writes the newsletter Read Max.

Speaker 25 And

Speaker 25 yeah, I thought his question is really great because he's really interested in how you think about discerning between what's hype and what's not and how you trust your own instincts and where your confidence comes from.

Speaker 25 And so let's hear, Max.

Speaker 28 Hi, guys. It's your old friend, Max Read.

Speaker 28 I was originally going to ask about Kevin's a cappella career in college, but my understanding is that the higher-ups at the New York Times won't allow me to ask such dangerous questions.

Speaker 28 So instead, I want to ask you about AI by way of asking you about crypto.

Speaker 28 You guys were both pretty actively involved in covering the Web3 era, the sort of crypto boom of the pandemic, NFTs,

Speaker 28 Bored Apes, all this stuff. And very little of that, despite the massive hype around it at the time, has really panned out as promised, at least as far as I can tell.

Speaker 28 And what I'm wondering is how you guys feel about that hype and about your coverage of that hype from the perspective of 2025. Are there regrets you have?

Speaker 28 Are there lessons you feel like you've learned?

Speaker 28 And especially when you look at the current state of AI coverage and hype, not just your own coverage, but in general, do you think or worry that it falls prey to any of the same mistakes?

Speaker 28 I want to caveat this question by saying the easy mode of this question is to just say the technology is totally different. So it's a very different thing.
And I want to put it to you in hard mode

Speaker 28 because I don't want to hear about how the tech is different. What I'm interested in is hearing about you guys and your work as journalists.
How do you approach this industry?

Speaker 28 How do you establish your own credibility? And how do you assess the claims being made by investors and entrepreneurs? Can't wait to hear the answer. Bye.

Speaker 17 I love this question. What have I learned?

Speaker 17 To touch on the crypto piece without touching on the technology, here's what I'll say.

Speaker 17 Ultimately, what persuaded me in 2021 that crypto was really worth paying attention to was the density of talent that it attracted.

Speaker 17 So many people I knew who had previously worked on really valuable companies were quitting their jobs to go build new crypto companies.

Speaker 17 And what I believed and said out loud at the time was it would just be really surprising if all of those talented people failed to create a lot of really valuable companies.

Speaker 17 In the end, they did not produce a lot that I found valuable. Although, as we've been covering on the show recently, crypto has not gone away.

Speaker 17 And thanks to the fact that the industry has captured the government, it is now more valuable than ever. So that is what I would say about that time in crypto.

Speaker 17 And I do think that some of that argument ports over to AI because certainly I also know a lot of people that quit their jobs working at social media companies, for example, who are now working on AI.

Speaker 17 Here's what I would say about hype in covering AI. I think that a good podcast about technology needs to do two things.
One is to give you very grounded coverage of stuff that is happening right now.

Speaker 17 So I'm thinking about in recent months when Pete Wells came on to talk about how chefs are using AI in their restaurants or Roy Lee coming on and talking about the cheating technology that he's building or Kevin talking about what he's vibe coding.

Speaker 17 I even think about the emergency episode that we did about DeepSeek, which I think actually was kind of an effort to unhype the technology a bit while giving you a really grounded sense of what it was and why people were so excited about it, right?

Speaker 17 So that's one thing I think we need to do. The other thing I think we need to do is to just tell you what the industry says is going to happen.

Speaker 17 I think it is important to get leaders of these companies in the room and just hear their visions because there is some chance that a version of it will come true, right?

Speaker 17 So this is the thing that we're doing when we bring on a Sam Altman or a Demis Hassabis or the founders of Mechanize, which, as you probably heard in our interview, is a vision I was not particularly impressed with, but I think it is useful to the audience to hear what these folks think that they are doing.

Speaker 17 And of course, we want to push back on them a bit, but I have just always appreciated journalism that gives airtime to visions and lets me think about them, lets me disagree with them, right?

Speaker 17 So that is how I think about hype in general. We want to tell you mostly what is happening on the ground, but we do want to tell you what the CEOs are telling us all the time is going to happen.

Speaker 17 And then we want you to sort of interrogate the space in between, right? That we actually have to live in.

Speaker 18 Yeah, I will say

Speaker 18 I feel pretty good about the way that I covered crypto back in 2021. There's only really one crypto story that I truly regret writing.

Speaker 18 And that is a story about this crypto company, Helium, that was trying to do this like, you know, sort of convoluted thing with these like crypto-powered Wi-Fi routers.

Speaker 18 And I just failed on that story. I failed to ask basic journalistic questions.

Speaker 18 It turned out after the fact, we learned that Helium had basically claimed that it had a bunch of partnerships with a bunch of different companies.

Speaker 18 And I just didn't call the companies to say, hey, is this company lying about being affiliated with you?

Speaker 18 It just didn't occur to me that they would be like so blatantly misleading me about the state of their business.

Speaker 18 And so I do regret that. I would chalk it up less to buying into crypto hype and more to just not making a few more calls that would have saved me from some grief.

Speaker 18 The lesson I took from crypto reporting is that real-world use matters. So much of crypto and the hype around it consisted of these kind of abstract ideas and these vague promises and white papers.

Speaker 18 And then when you actually like dug in and looked at who was using it and what they were using it for, it was like criminals, it was speculators, it was people trying to get rich on their Bored Ape collection.

Speaker 18 So now when I cover AI, I really try to talk to people who are civilians using this technology about how they are using it.

Speaker 18 And whenever possible, I try to use it myself before I sort of form an opinion on it. I think the crypto era was in some ways a traumatic incident for the tech journalism community.

Speaker 18 I think a lot of our peers, and maybe even you and I to a certain extent, felt like we were duped, felt like we fell for something, like we wasted all of our time trying to understand and explain this technology, taking this stuff seriously, only to have it all come crashing down.

Speaker 18 And I worry that a lot of journalists took the wrong lesson from what happened with crypto.

Speaker 18 The sort of lesson that I think a lot of journalists took was like to be blanket skeptical of all new technologies, to sort of assume that it's all smoke and mirrors, that everyone is lying to you, and that it's not really going to be worth your time to like dig in and try to understand something.

Speaker 18 And I see a lot of that attitude reflected in some of the AI coverage I see today.

Speaker 18 And so, while I take Max's point that, like, we should always be learning from our mistakes and from things that we, you know, swallowed too uncritically in the past, I think that in some ways what we're seeing today with AI is an overcorrection on that point.

Speaker 18 What do you think?

Speaker 17 Yeah, I think

Speaker 17 that there is a bit of an overcorrection, but I also think that many journalists have just realized that what used to be a really small industry that mostly concerned itself with like helping you print your photos and make a spreadsheet is now something much bigger and more consequential and has just been bad for a lot of people.

Speaker 17 And so it makes them hesitant to trust someone who comes along and says, hey, I'm going to cure all human disease. I think that a role that we both try to occupy in the sort of...

Speaker 17 AI journalism world is to say, we take seriously the CEOs who say that they're building something really powerful. And crucially, crucially, we think it will be powerful in bad ways.
Yes.

Speaker 17 And we want to talk to you about those bad ways, such as you may lose your job, or it will enable new forms of cyber attacks and frauds that you may fall victim to,

Speaker 17 or it will burn our current education system down to the ground so it has to be rebuilt from scratch. That one, you know, maybe there will be some positive along the way.

Speaker 17 But I feel like week after week on the show, we are trying to show you ways in which this thing is going to be massively disruptive.

Speaker 17 And that gets framed as hype in a way that I just think is a little bit silly.

Speaker 17 Like in 2010, imagine I'd written a story about Facebook and how one day it would have billions of users and undermine democracy and give a bunch of teenagers eating disorders.

Speaker 17 Like, would that have been hype?

Speaker 17 Sort of.

Speaker 17 Would that have been accepting the terms of the social media founders and accepting their language around, you know, growth? Yes. But would it have been useful?

Speaker 17 Would I be proud that I wrote that story? I think so. So I'm willing to accept the idea that you and I do buy into the vision of very powerful AI more than many of our peers in tech journalism.

Speaker 17 But the reason that we're doing that is we want to remind you what happened the last time one of these technologies grew really quickly and got into everyone's hands and became the way that people interface with the digital world.

Speaker 17 It didn't go great. We already know that these companies are not going to be regulated in any meaningful way.
The AI Action Plan is designed basically to ensure that.

Speaker 17 And so to the extent that we can play a positive role, I think it is just going to be in talking to people about those consequences.

Speaker 17 And if the consequence of that is that people say that, you know, we're on the side of hype, like I will just accept the criticism. Yeah.

Speaker 25 Well, thank you guys so much for doing this. And thank you also to our critics for taking the time to talk to me.

Speaker 25 I thought we could end by just talking about actually whether you guys have any questions for each other.

Speaker 25 Like, you know, one of the big goals of this is to kind of map where you guys stand relative to other thinkers. So I'm curious how your views on AI are actually different from each other.

Speaker 17 I think I have longer timelines than Kevin does. I think Kevin talks about AGI in a way that makes it seem very imminent.
And I think I'm more confident that it's going to take several years

Speaker 17 and maybe more than several, right? Like maybe this is like a five to 10 or even 15 year project. So I think that's the main way that I noticed disagreeing with Kevin.

Speaker 18 I think that we also disagree about regulation and how possible or advisable it is to have the government step in and try to control the development and deployment of AI systems.

Speaker 18 I think that you are informed by your years of covering social media and seeing regulators grapple with and mostly fail to regulate that wave of technology.

Speaker 18 But I think you are also a person who has a lot of hope and optimism about institutions and wants there to be democratic accountability into powerful technology.

Speaker 18 I share that view, but I also don't think there's a chance in hell that our present government, constructed the way it is with the kind of pace that it is used to regulating things at, can regulate AI on anything approaching a relevant time scale.

Speaker 18 I've become fairly pessimistic about the possibility of meaningful regulation of AI. And I think that's a place where we differ.

Speaker 17 I think we do disagree there because I think that we had the makings of meaningful regulation under the Biden administration, where they were making very simple demands like you need to inform us when you're training a model of a certain size.

Speaker 17 There need to be other transparency requirements. And I think you can get from there to a better world.

Speaker 17 And instead, we've sort of unwound all the way back to, hey, if you want to create the largest and most powerful model in the world, you can do that.

Speaker 17 You don't have to tell anybody if it creates new risks for bioweapons or other dangers. You don't have to tell anybody; you can just put it out in the world.

Speaker 17 Right now, there are many big AI labs that are racing to get the most powerful AI that they can into everyone's hands with absolutely no safeguards.

Speaker 17 So if you're telling me that we can't create a better world than that, I am going to disagree with you.

Speaker 18 Yeah.

Speaker 17 Go fuck yourself.

Speaker 25 Well, thank God you guys disagree because it makes the podcast more interesting.

Speaker 25 And thank you guys seriously for doing this.

Speaker 25 I think given how much of the AI conversation can feel really disempowering in this moment, one thing that gives me a feeling of a little bit more control is really trying to map out the debates and where people stand relative to each other, because it ultimately helps me figure out what I think about AI and where I think the future is going. And that's at least one thing I feel sort of empowered to do.

Speaker 17 And that's what we want to do.

Speaker 17 Like truly,

Speaker 17 we want everyone to come to their own understanding of where they sit at the various intersections of these discourses. Like I think Kevin and I identify as reporters first.

Speaker 17 We don't have all the answers. That's why we usually bring on a guest every week to try to get smarter about some subject.
Right.

Speaker 17 So I think a really bad outcome for the podcast is that people think of us as pundits.

Speaker 17 I think of us as like, you know, curious people with informed points of view, but we always try to be open to changing our minds.

Speaker 18 Yes. Like a large language model, we aim to improve from version to version.

Speaker 17 As we add new parameters and

Speaker 17 computing power.

Speaker 18 Yes.

Speaker 2 Over the last few decades, the world has witnessed incredible progress.

Speaker 6 From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Speaker 8 Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Speaker 7 Invesco QQQ, let's rethink possibility.

Speaker 11 There are risks when investing in ETFs, including possible loss of money.

Speaker 12 ETF's risks are similar to those of stocks.

Speaker 14 Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Speaker 15 Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at Invesco.com. Invesco Distributors, Inc.

Speaker 20 This podcast is supported by the all-new 2025 Volkswagen Tiguan.

Speaker 21 A massage chair might seem a bit extravagant, especially these days.

Speaker 22 Eight different settings, adjustable intensity, plus it's heated, and it just feels so good.

Speaker 21 Yes, a massage chair might seem a bit extravagant, but when it can come with a car,

Speaker 23 suddenly it seems quite practical.

Speaker 24 The all-new 2025 Volkswagen Tiguan.

Speaker 23 Packed with premium features like available massaging front seats, it only feels extravagant.

Speaker 31 This episode is supported by Choiceology, an original podcast from Charles Schwab.

Speaker 30 Hosted by Katie Milkman, an award-winning behavioral scientist and author of the best-selling book, How to Change, Choiceology is a show about the psychology and economics behind our decisions.

Speaker 32 Hear true stories from Nobel laureates, historians, authors, athletes, and everyday people about why we do the things we do.

Speaker 30 Listen to Choiceology at schwab.com slash podcast or wherever you listen.

Speaker 18 Before we go, a reminder that we are still soliciting stories from students about how AI is playing out on the ground in schools, colleges, universities around the country. We want to hear from you.

Speaker 18 Send us a voice memo telling us what effect AI is having in your school, and we may use it in our upcoming back-to-school AI episode. You can send that to hardfork at nytimes.com.

Speaker 17 Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Jen Poyant.
We're fact-checked by Caitlin Love. Today's show was engineered by Katie McMurran.

Speaker 17 Original music by Elisheba Ittoop, Marion Lozano, Rowan Niemisto, and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.

Speaker 17 You can watch this whole episode on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

Speaker 17 You can email us at hardfork at nytimes.com with your own criticisms of our opinions about AI.

Speaker 34 And now, a next level moment from AT&T Business. Say you've sent out a gigantic shipment of pillows, and they need to be there in time for International Sleep Day.

Speaker 34 You've got AT&T 5G, so you're fully confident. But the vendor isn't responding, and International Sleep Day is tomorrow.

Speaker 34 Luckily, AT&T 5G lets you deal with any issues with ease, so the pillows will get delivered and everyone can sleep soundly, especially you. AT&T 5G requires a compatible plan and device.

Speaker 34 Coverage not available everywhere. Learn more at att.com slash 5G network.