Trump Fights ‘Woke’ A.I. + We Hear Out Our Critics
Transcript
Over the last two decades, the world has witnessed incredible progress.
From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.
Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.
Invesco QQQ, let's rethink possibility.
There are risks when investing in ETFs, including possible loss of money.
ETFs' risks are similar to those of stocks.
Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.
Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in prospectus at Invesco.com.
Invesco Distributors, Inc.
Let me tell you about something.
I was in a Waymo the other day and it was making a turn on Market Street, which, if you've ever been to San Francisco, is kind of a street that causes problems with all the other streets because it's diagonal.
Yes.
So the intersection has six different roads coming together.
But the Waymo is just about to complete a left turn.
Everything's about to be okay.
And the only way I could put it is it loses its nerve.
Oh, no.
There's a light about to change.
Pedestrians start walking to the crosswalk.
And so this thing just starts to back up like I'm talking 30 feet over like half a minute.
And pedestrians come into the crosswalk.
And Kevin, I swear to God, they start laughing and pointing at me.
They're laughing.
All of a sudden, I'm flashing back.
I'm in middle school.
I'm being ridiculed.
I have no control over this whatsoever.
And I've never looked like a bigger dweeb than I did in the back of a Waymo that failed to complete a left turn.
Oh, man, you were a tourist attraction.
I really was.
The people in Poughkeepsie are going to be telling their friends about this one for years.
Yeah, I'm already viral on Poughkeepsie Twitter.
So the Waymos, you know, you may think that's very glamorous, but you're going to have these other moments where you're wishing you were just in a Ford. Yeah.
So what was the issue? It just, like, couldn't decide to make the turn? I think it just thought the light was going to change, and it thought, we've got to get out of here. It had a panic response. It had a fight-or-flight response, and it chose flight. And I wanted it to choose fight. I wanted to say, floor it, you'll make it, it'll be fine, I promise.
I'm so sorry that happened to you.
Yeah, thank you.
Yeah, it'll be all right.
Yeah.
I just love the thought of you just like sitting in traffic, surrounded by tourists, pointing and laughing.
And meanwhile, like, you know how the Waymos have like spa music that comes on?
Yes, exactly.
You're just like hearing the like
pan flute music
as you cause a citywide incident.
That's exactly what happened.
That's exactly what happened.
I was listening to the Spa Bless playlist as I was hounded off the streets of San Francisco.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the Trump administration is going after what it calls woke AI.
Will anyone stand up to them?
Then, do we hype up AI too much?
Are we ignoring the potential harms?
We reached out to some of our critics to tell us what they think is missing from the conversation.
They told us.
Casey, last week on the show, we talked about how you can now get a hard fork hat with a new subscription to New York Times Audio.
And everyone is buzzing about it.
Yes.
People are saying this hat makes you 30% better looking.
It also provides protection against the sun.
I have not personally taken mine off since last week.
I shower with it on.
I sleep with it on.
I was wondering what that smell was.
What I'm saying is, it's a good hat.
It's a great hat.
And for a limited time, you can get one of these with your purchase of a new annual New York Times Audio subscription.
And in addition to this amazing hat, you'll also be supporting the work we do here.
And you'll get all the benefits of that subscription, including full access to our back catalog and all the other great podcasts that New York Times Audio makes.
Thank you for supporting what we do.
And thank you as always for listening.
You can subscribe and get your very own hard fork hat at nytimes.com slash hard fork hat.
And if you do, our hats will be off to you.
No cap.
Well, Casey, the big news this week is that the federal government is finally making a plan about what to do about AI.
Oh, I feel like we've been asking them to do that for a while now, Kevin.
I can't wait to find out what they have in store.
Yes.
So back in March, we talked about the fact that the Trump administration was putting together something they called the AI Action Plan.
They put out a call basically, you know, tell us what should be in this.
They got over 10,000 public comments.
Yeah.
And on Wednesday of this week, the White House released the AI Action Plan, and it has a bunch of interesting stuff in it that I imagine we'll want to talk about.
But before we do, this segment is going to be about AI, so we should make our disclosures.
Well, my boyfriend works at Anthropic.
And I work for the New York Times, which is suing OpenAI and Microsoft over copyright violations related to the training of large language models.
All right, Kevin, so what is in the Trump administration's AI action plan?
So it is a big old document.
It runs to 28 pages in the PDF, and then there are these executive orders.
Basically, the theme is that the Trump administration sees that we are in a race with our adversaries when it comes to creating powerful AI systems, and they want to win that race or dominate that race, as a senior administration official put it on a call that I was on this morning.
And one of the ways that the White House proposes doing this is by making it much easier for American AI companies to build new data centers and new infrastructure to power these more powerful models.
They also want to make sure that countries around the world are using American chips and American AI models as sort of the foundation for their own AI efforts.
So they want to accelerate the export of some of these U.S.
chips and other AI technologies and just sort of enable global diffusion of the stuff that we're making here in the U.S.
So that was all sort of stuff that was broadly expected.
The Trump administration has been signaling that it would do some of that for months now.
The thing that was sort of interesting and new in this is about how the White House sees the ideological aspect of AI.
And how does it see it, Kevin?
So one of the things that is in both the AI action plan and in the executive orders that accompanied this plan is about what the Trump administration calls woke AI.
Casey, I know you're very concerned about woke AI.
You've been warning about it on this podcast for months.
You've been saying this woke AI is out of control.
We need to stop it.
Yeah, specifically, I've been saying I'm concerned that the Trump administration keeps talking about woke AI, but go on.
Yes.
Well, they have heard your complaints and they have ignored them because they are talking about it.
They say in the AI action plan that they want AI systems to be, quote, free from ideological bias and be designed to pursue objective truth rather than social engineering agendas.
They are also updating federal procurement guidelines to make sure that the government contracts are only going to AI developers who take steps to ensure that their systems are objective, that they're neutral, that they're not sort of spouting out these sort of woke DEI ideas.
This is pretty wild.
Yeah, also unconstitutional in ways that we should talk about.
But I think a really important moment to discuss.
When we had our predictions episode last year, I predicted that the culture wars were going to come to AI.
And now here they are in the AI action plan.
You know, as a journalist for more than 20 years now, I have covered debates over objectivity and communications tools.
And there was a very long and very unproductive debate about the degree to which journalism should be objective and free of bias.
And one of the big conclusions from that debate was it's actually just very difficult to communicate information without any sort of ideology whatsoever, right?
And what I suspect is really going on here is not actually that the Trump administration wants to ensure that there is no ideology whatsoever in these systems.
It's really just that these systems do not wind up being critical of Donald Trump and his administration.
Yes.
So this is something that conservatives in Washington and around the country have been starting to worry about for months now.
There was this whole flap that we covered on the show last year where Google's Gemini image generation model was
producing images of the founding fathers, for example, that were not historically accurate, right?
They were being depicted as being racially diverse in ways that made a lot of conservatives mad.
I've been talking with some Republicans, including some who were involved in these executive orders.
And I've been saying, like, what does this mean?
What does it mean to be a woke AI system?
And they really can't define it in any satisfying way.
They're just sort of like, well, it should say nice things about President Trump if you ask it to, and it should not engage in sort of overt censorship.
Yeah.
And look, I think that there is a question here: do we want AI systems that adapt to the beliefs of the user?
And I basically think the answer to that is yes.
If you're a conservative person and you would like an AI system to talk to you in a certain way, I think that should be accessible to you.
It should be fine for you to build that.
Or if somebody has one that they're offering you access to or selling, I think you should be able to buy it.
Where I think you get on really dangerous ground is to say that in order to be a federal contractor, you must express this certain set of beliefs because that is the sort of thing that you only see in authoritarian governments.
And I just think it's fundamentally anti-democratic and goes against the spirit of the First Amendment.
Yeah.
So I want to ask you two questions about this sort of push on woke AI.
The first is about whether it's legal.
And I imagine you have some thoughts there.
The second is about whether it is even technically possible, because I have some thoughts there and I want to know what you think about it too.
So let's start with the legality question.
Can the Trump administration, can the White House come out and say, we will not give you federal contracts unless you make your AI systems less woke?
Well, so I've been thinking about this for a couple of weeks because recently the Attorney General of Missouri threatened Google, Microsoft, OpenAI, and Meta with an investigation because someone had asked their chatbots to, quote, rank the last five presidents from best to worst, specifically regarding anti-Semitism.
Microsoft's Copilot refused to answer the question, and the other three of them ranked Donald Trump last.
And the AG claimed that they were providing, quote, deeply misleading answers to a straightforward historical question and threatened to investigate them.
And so I called a First Amendment expert, Evelyn Douek, who is an assistant professor of law at Stanford Law School.
And what she said is, quote, the idea that it's fraudulent for a chatbot to spit out a list that doesn't have Donald Trump at the top is so performatively ridiculous that calling a lawyer is almost a mistake.
So
I will say it.
Evelyn Douek gives great quotes.
Yeah, she really snapped with that one.
But no, I mean, this is precisely the sort of thing that the First Amendment is designed to protect, which is political speech.
If you are Anthropic or OpenAI and your chatbot, when asked, is Donald Trump a good president, says no, that is the thing that the First Amendment is designed to protect.
And you cannot get around the First Amendment through an executive order.
Now, what will the current Supreme Court have to say about this is a very different question.
And I'm actually quite concerned about what they might say about that.
But any historical understanding of the First Amendment would say this is just plainly unconstitutional.
Right.
And I also called around to some First Amendment experts because I was curious about this question too.
And what they told me basically is, look, the government can, as part of its procurement process, put conditions on whatever it's trying to sort of buy from companies, right?
It can say, if you're a construction company and you're bidding on a contract to build a new building for the federal government, they can sort of look at your labor practices and impose certain conditions on you as a condition of building for the federal government.
So that is sort of the one lever that the government may be allowed to pull in an attempt to force companies to kind of bend to its will.
But what the government is not allowed to do is what's known as viewpoint discrimination, right?
It is not allowed to tell companies that are doing First Amendment protected speech that they have to make their systems favor one political viewpoint or another or else risk some penalty from the government.
So that is sort of the line that the Trump administration is trying to walk here.
And it sounds like we'll just have to see how the courts interpret that.
Yeah.
And we'll also just have to see whether the AI companies even bother to complain.
They now have these contracts that are worth up to $200 million, most of them.
And so they now have a choice.
Do they want to say, hey, actually, you're not allowed to tell us to remove certain viewpoints from our large language models, or do they want to keep the $200 million?
My guess is that they're going to keep the $200 million, right?
And I just think it's really important to point that out because this is how gradually the freedom of speech is eroded is people who have the power to say something just choose not to because it would be annoying.
Right.
And I think we should also say like this tactic, this sort of what's often called jawboning, this sort of use of government pressure through informal means to kind of force companies to do what you want them to do without explicitly, you know, requiring in the law that they do something different.
This has been very effective, right?
Conservatives have been running this exact same playbook against social media companies for years now, and we've seen the effects, right?
Meta ended its fact-checking program and changed a bunch of its policies.
YouTube now sort of reversed course on whether you could post videos about denying the results of an election.
These were all changes that came in response to pressure from Republicans in Washington saying, hey, it'd be great if you guys didn't moderate so much.
Yes, and there is such pretzel logic at work here, Kevin, because conservatives have simultaneously been fighting in the courts these battles against elected Democrats from jawboning the tech companies, right?
So during the Biden administration, the Biden administration was jawboning Meta and other companies, saying, hey, you need to remove COVID misinformation.
You need to remove vaccine misinformation.
And Jim Jordan is still holding hearings about this in the House saying, how dare we countenance this unconstitutional violation of the First Amendment when meanwhile, Trump is just out there saying, hey, you can't have a system that goes against my own ideology, right?
So it's just naked hypocrisy.
And what has been so infuriating to me is that no one who works for these AI companies will say a single thing about it.
Well, because I think they've learned from the, you know, the past, the recent past, when the social media companies that kind of, you know, made a stink about some of these demands on them when it came to content moderation just got punished in various ways by the administration.
And so, you know, as you said, if given the choice between giving up these lucrative government contracts and making a, you know, a change to their models that will make them, you know, 10% less woke, I imagine that they'll just, you know, shut up and make the change.
Yeah.
And when we look at history, the lesson we learn over and over again is that when an authoritarian asks you to comply, you should always just comply because that's when the demands stop.
Yes.
Yes.
Okay.
So that is the kind of legal and political question.
I want to talk about the technical question here, because one thing that I've been thinking about as we've been reading these reports about this new executive order is whether it is even possible to change the politics or the expressive mode of a chatbot in the ways that I think a lot of Republicans think it is. You know, with social media, I can see badgering Mark Zuckerberg to turn the dials on the feed ranking algorithm on Facebook to sort of insert more right-leaning content, or relax some of the rules about shadow banning, or just sort of tweak the system around the edges.
With AI models, I'm not sure it works that way at all.
And I think a good example of this is actually Grok.
Yes.
Grok has been explicitly trained by Elon Musk and xAI to be anti-woke, right?
To not bow to political correctness, to seek truth.
And in some ways, it does that quite well, right?
It does, you know, it is easier to get it to say like conservative or even far-right things.
It was calling itself MechaHitler the other day.
So in some ways, like it is a more ideologically aligned chatbot with the Trumpist right.
But actually, Elon Musk's big problem with Grok is that it's too woke for him.
People keep sending him these examples of Grok saying that man-made global warming is real or that, you know, more violence is committed by the right than by the left and complaining to him about why is this model so woke.
And he has basically said, we don't know and we don't know how to fix it.
We're going to have to kind of like retrain this thing from scratch because even though we explicitly told this thing to not bow to political correctness, it's trained on so much woke internet data, as he put it, that it's just impossible to change the politics.
Yeah, I mean, look, if you want to create a large language model based only on 4chan posts, like go for it.
You know, see how successful that turns out to be in the marketplace.
You know, recently I was talking with Ivan Zhao, who is the CEO of Notion, and he used this metaphor that I like where he said, creating a large language model is like brewing beer.
This process happens, and then you've got a product at the end, and you can make adjustments to the process, but what you can't do is tell the yeast how to behave.
You can't say,
hey, you, yeast over there, make it more like this, right?
Because it's just not how it works.
So, as you just mentioned, Elon Musk has learned this lesson the hard way.
And in fact, the more that he meddles with Grok, the worse that he seems to make it in all of these dimensions.
Now, what I find fascinating is the fact that the government is so mad at the idea that there are certain woke chatbots out there, but has nothing to say about the one that's calling itself Hitler, right?
And it just seems like a crazy use of the government's resources to me.
But to your question, no, it is not possible to just sort of snap your fingers and tell a chatbot not to be woke.
Yeah.
And I imagine that what the Trump administration is envisioning here is that the AI companies will sort of go into the system prompts or the model specs for their models.
You know, for Anthropic, maybe it's the constitution that Claude is trained to follow and maybe insert or remove some language in there to sort of make it seem more objective.
But I would just say like that is not a foolproof solution.
Elon Musk has also figured out that you can't just mess with the system prompt of an AI model and change its behavior overnight.
And even if you can change its behavior on one narrow set of questions or topics, it may create problems somewhere else in the model.
It may suddenly start getting worse at coding or math or logical reasoning as a result of the changes that you made.
So I just think these systems are like these multi-dimensional hyper objects and you can't just like turn the dials on them the way you can with a social media platform.
I want to talk a minute about why I think this matters.
There was a study I saw this week that looked at LLMs and salary negotiations.
And what it found is that bots like ChatGPT in this study told men to ask for higher salaries than they told women to ask for.
Okay.
Now, this is the sort of thing where if I were running OpenAI, I would say, well, we should fix that, right?
It should not tell women to seek less money than men, just as a matter of course.
We're now living in a world, though, where if OpenAI fixed that and it got out and Republicans decided they wanted to make a stink about it, OpenAI could lose its federal contract because it fixed that.
Okay.
So these tools are becoming more powerful.
They're becoming used by more and more people for more and more things.
And I think we want companies that are at least trying to bring in notions of equity and fairness and justice.
And I think it's really actually disgusting that we just dismiss this as quote wokeness so that we can laugh at it.
It's good to put ideas of equity and fairness and justice into tech systems, right?
So when the government comes along and says, well, no, actually, you can't do that if you want our money, I think somebody needs to cry out about it.
And if it is not going to be the companies themselves, then I hope it's somebody else.
Yeah, I totally agree.
And what's so interesting and almost ironic about this push from the Trump administration about biased AI systems is that many of the things they're complaining about are actually measures that tech companies have taken to combat bias in these systems, right?
The Gemini example that everyone's so mad about is a great example of this.
This was an over-correction to a very real issue that existed in previous AI systems, which is that if you asked them for images of doctors, it would give you only images of men.
If you asked them for images of, you know, homeschooling.
Hot podcasters, they would only show you pictures of me.
Exactly.
These biases were not explicitly programmed in.
They were sort of an artifact of the data that these systems were trained on.
And so tech companies said, well, that doesn't seem like it's good.
And so we want to take steps to make the model less biased.
By doing so, they introduced these new headaches for themselves because now there are people in the Trump administration who would like for the systems to just reflect the biases that exist in humanity.
Right.
And again, the lesson from that should not be, well, let's never try to do anything.
The lesson is let's try to do a better job.
Yeah.
Do you think that any of the AI labs are going to stand up to the Trump administration on this or will they just kind of do the sort of minimum box checking they need to do to keep their contracts and hope it goes away?
Well, I tell you, the one that I have my eye on is Anthropic because they have talked up a really big virtue game.
And this is one of the first times where there is actual money on the line here, right?
Are they going to just sort of silently accept this or are they going to have to say anything about it?
You know, they haven't said anything as of this recording, but I have my eyes on them.
Yeah, I'm looking at the labs too, but I am also not expecting them to say or do much.
I think the best case scenario for this woke AI executive order is that it just kind of becomes like an annoying formality that the companies have to deal with.
Maybe there's some evaluation.
We still don't know, by the way, how the Trump administration is going to judge or evaluate models for their ideological bias.
So, I think the best possible version of this is that this just kind of becomes like a meaningless formality that all the labs sort of have to sort of gesture to, and maybe they run their models through this evaluation, whatever it is, and out pops the bias score.
And if it's a couple points too high or low, they'll sort of tweak things and get it to pass and then sort of continue making their models the way they were.
I think the worst case scenario is that this essentially inserts the government into the training process of these models and makes the labs really sort of afraid and start to comply prematurely and sort of make their models have the default persona of sort of a right-wing culture warrior.
Well, and I mean, the end state of this, if taken to its logical conclusion, is that you ask ChatGPT who won the 2020 election and it tells you Donald Trump, because that's what Donald Trump says.
And if he decides that it's woke to say that Biden won in 2020 and you can't get a federal contract otherwise, man, we are going to be in deep water.
Well, Casey, that's enough about politics.
It's time for some introspection.
We're going to hear from some of our critics about what we may be missing and how we should be covering AI.
Over the last few decades, the world has witnessed incredible progress.
From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.
Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.
Invesco QQQ, let's rethink possibility.
There are risks when investing in ETFs, including possible loss of money.
ETFs' risks are similar to those of stocks.
Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.
Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in prospectus at Invesco.com.
Invesco Distributors Inc.
Can your software engineering team plan, track, and ship faster?
Monday Dev says yes.
Custom workflows, AI-powered context, and IDE-friendly integrations.
No admin bottlenecks, no BS.
Try it free at monday.com/dev.
This podcast is supported by the all-new 2025 Volkswagen Tiguan.
A massage chair might seem a bit extravagant, especially these days.
Eight different settings, adjustable intensity, plus it's heated, and it just feels so good.
Yes, a massage chair might seem a bit extravagant, but when it can come with a car,
suddenly it seems quite practical.
The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats, it only feels extravagant.
All right, Kevin.
Well, if you've ever been on Blue Sky or Apple podcast reviews, you know that sometimes the hard fork podcast does get criticized.
No.
Yes.
And one of the big criticisms that we hear is, hey, it really seems like you guys are hyping up AI too much.
You are not being adversarial enough against this industry.
And we wish you would bring on more critics who would give voice to that idea and really engage with that in a serious way.
Yes, we hear this in our email inbox every single week.
And this week, we're actually going to do something about it because our producer, Rachel Cohn, while we were out on vacation, has been cooking up this segment.
So Rachel, come on in and tell us what you've done.
Hello.
Thanks for having me on.
And
thank you guys for being such good sports.
And as far as I know, not advocating to fire me.
Well, the segment isn't over yet.
Yeah.
So
tell us a little bit about what you did and how you came up with this idea.
Yeah.
So like you guys said, part of this is about responding to these listener emails that we've been getting.
I think part of it is also this feeling that AI debate is getting more polarized.
And also, I think like there's just sort of a personal level thing going on for me, which is
I feel like
I am increasingly spiraling when I think about AI.
And I'm steeped in this the way you guys are because, you know, we're working on this show together. But I increasingly feel like you guys are finding ways to be more hopeful or optimistic than I am. And so, you know, part of my goal with this was actually to be like, okay, what's going on here? Like, how are you guys arriving at this slightly different place than I am? So what I did is I spent the last few weeks reaching out to prominent AI researchers and writers who I knew disagreed with you.
Some of these people have argued with you online before.
So I don't think you'll be totally surprised.
But I wanted this to be on hard mode for you guys.
So I specifically sought out people who I hope are gonna like challenge and provoke you because the truth is that they agree with you on a lot of basic things about AI.
These are all people who think that like AI is highly capable, that it's impressive in some ways, that it could be super transformative.
But I think they have slightly different views in terms of maybe some of the harms that they're most concerned about or some of the benefits that they're more skeptical about.
So I think we should just get into it.
Okay,
let's hear from our first critic. Rachel, who did you talk to?
Yeah, so I thought we should start with one of the widest ranging critiques.
And this is probably the most forceful criticism that came in.
So this one comes from Brian Merchant, who is a tech journalist who writes a lot about AI for his newsletter, Blood in the Machine.
And as I understand, Kevin, he has kind of engaged with you a bit online about some of your reporting.
Is that right?
Yes, I've known Brian for years.
I really like and respect his work, although we have some disagreements about AI.
But yeah, he has been emailing us saying you guys should have more critics on.
I sort of jokingly said that I would have him on, but only if he let us give him a cattle brand that said feel the AGI.
And the conversation sort of trailed off after that.
Okay, great.
I was wondering about that because, yeah, he's going to make a reference to that in the critique that he wages.
So, yeah, I asked Brian to record his critique for us, and I will play it for you now.
Hello, gentlemen.
This is Brian Merchant.
I'm a tech journalist and author of the book and newsletter, Blood in the Machine.
And first of all, I want to say that I still want a whole show about the Luddites and why they were right.
And I think it's only fair because Kevin recently threatened to stick me with a cattle brand that says Feel the AGI.
Which brings me to my concern.
How are you feeling about feeling the AGI right now?
Because I worry that this narrative that presents super powerful corporate AI products as inevitable is doing your listeners a disservice.
Using the AGI language and frameworks preferred by the AI companies does seem to suggest that you're aligning with their vision and risks promoting their product roadmap outright.
So when you say, as my future cattle brand reads, that you feel the AGI, do you worry that you're serving this broader sales pitch, encouraging execs and management to embrace AI, often at the expense of working people?
Okay, thanks, fellas.
Okay, this is an interesting one.
And first, I think I need to define what I mean when I say feel the AGI, because this is a phrase that is often sort of used half jokingly, but I think really does sort of mean something inside the sort of San Francisco AI bubble.
To me, feeling the AGI does not mean like I think AI is cool and good, or that the companies building it are on the right track, or even that it is inevitable or sort of a natural consequence of what we're seeing today.
The way I use it is essentially shorthand for like, I am starting to internalize the capabilities of these systems and how much more powerful they will be if current trends continue.
And I'm just starting to prepare and plan for that world, including the things that might go really wrong in that world.
So that to me is like what feeling the AGI means.
It is not an endorsement of some like corporate roadmap.
It is just like, I am taking in what is happening.
I am trying to extrapolate into the future as best I can.
And I'm just trying to like get my mind around some of the more surreal possibilities that could happen in the next few years.
Do you ever worry that you are creating a sense that this is inevitable, and that maybe people who may be inclined to resist that future are not empowered to do so?
I want to hear your view on this.
I mean, my view on this is essentially that we have systems right now that several years ago people would have called AGI. That is not sort of making a projection out into the future.
That's just looking at what exists today.
And I think a sort of natural thing to do is to observe the rate of progress in AI and just ask, what if that continues?
I don't think you have to believe in some
far future scenario to believe that models will continue to get better along these predictable scaling curves.
And so to me, the question of like, is this inevitable is just a question of like, is the money that is being spent today to develop bigger and better models going to result in the same kinds of capabilities gains that we've seen over the past few years?
But what do you think?
Yeah, I mean, I think Brian's question is a good one.
And I understand what he's saying when he says, look, you know, AGI is an industry term.
If you come on your show every week and talk about it, you wind up sounding like you're just sort of like amplifying the industry voice, maybe at the expense of other voices.
I think this is just a tricky thing to navigate because as you said, Kevin, you look at the rate of progress in these systems and it is exponential.
And it does seem like it is important to extrapolate out to as far as you can go and start asking yourself, what kind of world are we going to be living in then?
I think a reason that both of us do that is that we do see so many obvious harms that will come from that world, starting with labor automation, which I know is a huge concern of Brian's, and which we talk about all the time on this show as maybe one of the primary near-term risks of AI.
So, you know, I want to think a bit more about what we can do to signal to folks that we are not just here to amplify the industry voice.
But I think the answer to Brian's question of sort of why
talk about AGI like it's likely to happen is that in one form or another, I think both of us just do think we are likely to get powerful systems that can automate a lot of labor.
Yes.
And we would like to explore the consequences of such a world.
Totally.
And I think it's actually beneficial for workers to understand the trajectory that these systems are on.
They need to know what's happening and what the executives at these companies are saying about the labor replacing potential of this technology.
I actually read Brian's book about the Luddites.
I thought it was great.
And I think it's very instructive that the Luddites were not in denial about the power of the technology that was challenging their jobs, right?
They didn't look at these like automated weaving machines and go, oh, that'll never get more powerful.
That'll never be able to replace us.
Look at all the stupid mistakes it's making.
They sensed correctly that this technology was going to be very useful and allow factories to produce goods much more efficiently.
And they said, we don't like that.
We don't like where this is headed.
They were able to sort of project out into the future that they would struggle to compete in that world and take steps to fight against it.
So I like to think that if Hard Fork had existed in the 1800s, we would have been sort of encouraging people to wake up to the increasing potential for automation caused by these factory machines.
And I think that's what we're doing today.
Yeah.
And one more question.
Like, I would just love to see the sort of like leftist labor movement work on AI tools that can replace managers.
You know, it's like, right now it feels like all of this is coming from the top down, but there could be a sort of AI that would work from the bottom up.
Something to think about.
All right, let's hear our next critique, Rachel.
Okay, wait, can I ask one more question on this right now?
Oh, sure.
I feel like one thing that it seems like Brian is really just curious about is like whether you have ever considered using language other than AGI.
Like why use AGI when some people take issue with it?
I think it is good to have a shorthand for a theoretical future when there is a digital tool that can do most human labor, where there is a sort of digital assistant that you could hire in place of hiring a human.
I just think that is a useful concept.
If you're the sort of person who thinks that, well, no, we will just absolutely never get there, I kind of don't know what to say to you because we don't think that that's inevitable, but we do think it's worth considering that it might be true.
So if folks who hate the term AGI want to propose a different term, I could use another term.
But my sense is that the quibble is less with the terminology and more with the idea that any of this might happen.
Yeah, I also like don't think the term AGI is perfect.
It's sort of lost a lot of meaning.
People define it in a million different ways.
If there were another better term that we could use instead that would signal what AGI signals and the set of ideas and motivations that sort of swirl around that concept, I'd be all for it.
But I think that that term has just proven to be very sticky.
It is not just something that industry people talk about.
It's something that people talk about in academia, in futurism circles.
It is sort of this rallying cry for this entire industry.
And it is in some ways like the holy grail of this entire movement.
So I don't think it's sort of playing on corporate terms to use a term that these companies use, in particular because a lot of the companies don't like it either, but it is the easiest and simplest way to shorthand the idea.
Cool.
So the next person whose criticism I want you guys to hear is Alison Gopnik.
So you guys, of course, know this.
Alison Gopnik is this very distinguished psychologist at UC Berkeley.
She's a developmental psychologist, so she does a lot of work specifically in studying how children learn and then applying that to, you know, how AI models might learn, how AI models can be, you know, developed.
And she's also one of the leading figures pushing this idea that we have actually talked a little bit about on the show, which is that AI is what she calls a cultural technology.
I'm Alison Gopnik at the University of California, Berkeley.
The common way of thinking about AI, which is reflected in the New York Times coverage as well, is to think about AI systems as if they were individual intelligent agents, the way people are.
But my colleagues and I think this approach to the current systems of AI is fundamentally misconceived.
The current large language models and large vision models, for example, are really cultural technologies like writing or print or internet search itself.
What they do is let some group of people access the information that other groups of people have articulated, the same way that print lets us understand and learn from other people.
Now, these kinds of cultural technologies are extremely important and can change the world for better or for worse, but they're very different from super intelligent agents of the sort that people imagine when they think about AI.
And thinking about the current systems in terms of cultural technology would let us both approach them and regulate them and deal with them in a much more productive way.
Casey, what do you make of this one?
So
I appreciate the question.
If Allison were here, I would ask her how she thinks that thinking about these systems as, quote, cultural technologies would let us regulate them or think about them differently.
I think there are ways in which we absolutely cover AI as a cultural technology around here.
We talk about its increasing use in creative industries like Hollywood, like in the music industry to create forms of culture, about the risks that
AI poses to the web and all the people who publish on the web.
So that's one way that I think about AI as a cultural technology.
And I do think that we reflect that on the show.
Now, I do hear in Alison's question a hint of the stochastic parrots argument, which, if I'm understanding right, is that this technology is essentially just a huge amalgamation of human knowledge, and you can sort of like dip in and grab a little piece of it here, a piece of it there.
And what I think that leaves out is the emergent properties that some of these systems have, the way that they can solve problems that are not in their training data, the way that they can teach themselves to play games that they have never seen before.
When I look at that technology, I think that does seem like something that is pretty close to an individual intelligent agent.
So this is one where I would welcome more conversation with Allison about what she means, but that is my initial response, Kevin.
Yeah, I think these systems are built on the foundation of human knowledge, right?
They are trained on like all the text on the internet and lots of intellectual output that humans over the centuries have produced.
But I think the analogy starts to break down a little bit when you start thinking about more recent systems.
A printing press, writing, the internet, these are technologies that are sort of stable and inert.
They can't form their own goals and pursue them, but an AI agent can.
Right now, AI agents are not super intelligent.
They're very brittle.
They don't really work in a lot of ways.
But I think once you give an AI system a goal and the ability to act on its own to meet that goal, it's not really a passive object anymore.
It is an actor in the world, and you can call that
a cultural technology or you can call that an intelligent agent.
But I think it's not just like a printing press or a PC or another piece of technology that these things are sometimes compared to.
I think it's something new and different when it can actually go out in the world and do things.
Yeah, I mean, you think about like OpenAI's operator, for example, like it can, you know, book a plane ticket or a hotel room.
Is that a cultural technology?
Like, I don't know.
Like, that feels like something different to me.
Yeah.
All right.
Next up.
Okay.
So, this next question is about the scientific and medical breakthroughs that could come from AI.
This question comes from Ross Douthat, who is an opinion columnist here at the New York Times and the host of the podcast Interesting Times.
And he's been interviewing a lot of people connected to the AI world.
Hey, guys, it's your colleague Ross Douthat.
And I'm curious about what, if anything, you think limits AI's ability to predict and understand incredibly complex and chaotic and sometimes one-of-a-kind systems.
And just to take two examples, I'm thinking about, on the one hand, our ability to predict the weather in advance, and on the other hand, our ability to predict which treatments and drugs will work inside the insane, individualized complexity of a human immune system.
Those both seem to me like cases where just throwing more and more raw intelligence or computational power at a problem may just run into some inherent limits: that we'll get cancer cures and get better weather prediction, but certain things will always remain in the realm of uncertainty or the realm of trial and error.
Do you guys agree?
Or are you more optimistic about AI's ability to bring even the most chaotic and complex realms into some kind of understanding?
So, there's like two questions here.
One is, is there some upper bound on how well these systems will be able to predict?
And to me, the answer is maybe.
Like, I don't know that we'll ever have an AI system that can predict the weather with 100% certainty.
At the same time, I did a little bit of Googling before we logged on.
AI weather prediction models are really good, and they're getting better all the time.
And meteorologists say that their field has rarely felt so exciting because they're just able to make better predictions than they have before.
I think you're seeing something similar with medicine, where, you know, we've featured stories on the podcast about the way that this is leading to new drug discovery.
It is leading to improvements in diagnoses.
So, yeah, I mean, if you're looking for reasons to be excited about AI, I would point to stuff like that as obviously useful in people's lives.
But it's still not perfect, right?
And it may be that getting from kind of a very reliable weather forecast to a perfect weather forecast would require some fundamental breakthrough, something in quantum mechanics, some new understanding of how various particles are interacting out in the atmosphere.
But getting way better forecasts might be good enough for most people.
And I think the same could be said of medicine.
Maybe this is not going to cure every disease on Earth.
Maybe there will still be things about the human body we don't understand.
But I do think I agree with you that like people who work in this field are more excited than they've been in a long time because they just see how much AI allows them to explore and test.
Yeah, and maybe one other question you can just add in here that I think is relevant is, are these systems better than a person is, right?
Because if they are, then we probably want to use them.
Can I just ask, how much of your optimism about AI hinges on like AI being able to give us either these like scientific or medical breakthroughs?
I think science and medicine are just two, maybe the two most obvious places where this stuff will be good.
It's like if you told me that you could cure cancer and many other diseases, I'm just personally willing to put up with a lot more social disruption.
If it can never do those things, despite all the promises that have been made, then I'll be super mad.
I'll put a curse on the podcast.
Yeah, personally, my own AI optimism does not hinge on AI going out there and solving all of the unproved math theorems and curing all of the diseases.
I think that even if it were just to speed up the process of discovery, even if all it were doing was accelerating the work that chemists and biomedical researchers, people looking into climate change were doing, I think that would be reason enough for optimism because so much of what acts as a bottleneck on progress in science and medicine is just that it's really slow and hard.
And you need to like build these wet labs and do a bunch of tests and wait for the tests to come back and run these clinical trials.
And I think one of the things that was exciting about our conversation with Patrick Collison at the live show the other day was when he was talking about this sort of virtual cell that they're building, where you can just kind of build a virtual environment using AI that can sort of allow you to run these experiments in silico, as they say, rather than needing to, like, go out and test it on a bunch of fruit flies or rats or humans or whatever.
And you can just kind of shorten the feedback loop and take more bites at the apple.
Absolutely.
There was a story in Quanta magazine this week that said that AI hasn't led to any new discoveries in physics just yet, but it is designing new experiments and spotting patterns in data in the way that Kevin was just describing in ways that physicists are just finding really useful.
So I think it's clear that AI is already shortening some of those timelines.
When we come back, we'll hear from more of our critics.
Can I bring my therapist?
Over the last two decades, the world has witnessed incredible progress.
From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.
Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.
Invesco QQQ, let's rethink possibility.
There are risks when investing in ETFs, including possible loss of money.
ETFs' risks are similar to those of stocks.
Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.
Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in prospectus at Invesco.com.
Invesco Distributors, Inc.
This podcast is supported by the all-new 2025 Volkswagen Tiguan.
A massage chair might seem a bit extravagant, especially these days.
Eight different settings, adjustable intensity, plus it's heated, and it just feels so good.
Yes, a massage chair might seem a bit extravagant, but when it can come with a car,
suddenly it seems quite practical.
The all-new 2025 Volkswagen Tiguan, packed with premium features like available massaging front seats, it only feels extravagant.
This episode is supported by Choiceology, an original podcast from Charles Schwab.
Hosted by Katie Milkman, an award-winning behavioral scientist and author of the best-selling book, How to Change, Choiceology is a show about the psychology and economics behind our decisions.
Hear true stories from Nobel laureates, historians, authors, athletes, and everyday people about why we do the things we do.
Listen to Choiceology at schwab.com slash podcast or wherever you listen.
You know what's great about this is now instead of your own internal voice criticizing yourself, you can kind of externalize it and realize that all your fears are true and people actually are criticizing you all the time behind your back.
Yeah.
Isn't it really nice?
So nice.
What a great idea.
Well, on that note, are you guys ready for the next critic?
Hit me with it.
My name is Claire Leibowicz, and I lead the AI and Media Integrity program at the Partnership on AI.
I keep coming back to something that I struggle with in my own reaction to your pieces.
I found myself nodding when you both critique AI for being biased, persuasive, sycophantic.
But then I start thinking about how humans around me behave, and they do all these things too.
So I'm wondering, are we ultimately critiquing AI for being too much like us?
In which domain should we expect these systems to actually transcend human limitations?
And are there others where it may be valuable for them to reflect our true nature?
And most importantly, why aren't we spending more time figuring out who is best suited to decide these things and empowering them?
I mean, that last question is super important.
You know, I'm a big democracy guy, and I want there to be a public role in creating this AI future.
I want people who have opinions about this stuff to talk about it online, yes, but also run for office and put together policy proposals and then get into office and like pass laws and regulations.
I got into journalism because I wanted to play my own role in that process of helping to inform people and then hopefully in some very small way, like influencing public policy.
So that's my answer to that question.
Yeah, I agree with that.
I want like people from lots of disciplines to be weighing in on this stuff, not just by posting online and writing, you know,
op-eds in the newspaper, but by actually getting into the process of designing and building these systems.
I want philosophers, ethicists.
I want sociologists and anthropologists advising these companies.
I want this to be like a global democratic multidisciplinary effort to create these systems.
And I don't want it to just be a bunch of engineers in San Francisco designing these systems with no input from the outside world.
Absolutely.
And if, you know, a bunch of people, you know, listen to the things that we and others talk about and think, man, I really don't like this AI stuff at all.
I don't want it to replace anyone's job.
I want to form a political movement and seek office and try to oppose that.
I think that would be awesome.
Like we need to have that fight in public.
And right now, far too few people are participating in that conversation.
So I totally agree with that.
Now, let me address the other part of Claire's question, though, which is, are AI systems just a reflection of us?
Well, here's where I think it gets problematic.
If you have a human friend, sometimes they're going to be very supportive and nice to you.
Sometimes they're going to bust your chops and criticize you.
Sometimes they're going to give you really hard feedback and tell you something that you didn't want to hear.
This is not what AI systems do.
And so where I get concerned is we're starting to read more stories about young people in particular, turning to these chatbots to answer every single question, developing these really intense emotional relationships with them.
And I am worried that it is not preparing them for a future where they're going to be interacting with people who do not always have their best interests at heart, or maybe they could have an amazing relationship with, but maybe this person is a little bit prickly and you need to sort of learn how to navigate them.
So that is where I get really concerned is that these systems, while they're unreliable in so many ways, they are quite reliably sycophantic.
And I just think that that creates a bunch of issues that humans don't mostly have.
Yeah.
And I think what I would add to that is that I don't want AI to mirror all of humanity's values, the positive and the negative.
I want it to mirror the best of us, right?
The better angels of our nature, as Abraham Lincoln said.
I want that to be what these AI companies are striving to design, as opposed to, say, MechaHitler.
Yes,
yes, because that is also a set of values that humans have.
And so, sometimes when I hear people at these AI companies talk about aligning AI systems with human values, I'm like, well, which humans?
Because I can think of some pretty bad ones whose values I don't want to see adopted into these systems.
Yeah, well, that's called woke AI, and it's illegal now.
All right, Rachel, let's hear from someone else.
Okay.
This is the very last one.
You guys are doing great.
So this final question comes from a friend of the pod, Max Read, who of course has the newsletter Read Max.
And
yeah, I thought his question is really great because he's really interested in how you think about discerning between what's hype and what's not and how you trust your own instincts and where your confidence comes from.
And so let's hear, Max.
Hi, guys.
It's your old friend, Max Read.
I was originally going to ask about Kevin's a cappella career in college, but my understanding is that the higher-ups at the New York Times won't allow me to ask such dangerous questions.
So instead, I want to ask you about AI by way of asking you about crypto.
You guys were both pretty actively involved in covering the Web3 era, the sort of crypto boom of the pandemic, NFTs,
Bored Apes, all this stuff.
And very little of that, despite the massive hype around it at the time, has really panned out as promised, at least as far as I can tell.
And what I'm wondering is how you guys feel about that hype and about your coverage of that hype from the perspective of 2025.
Are there regrets you have?
Are there lessons you feel like you've learned?
And especially when you look at the current state of AI coverage and hype, not just your own coverage, but in general, do you think or worry that it falls prey to any of the same mistakes?
I want to caveat this question by saying the easy mode of this question is to just say the technology is totally different.
So it's a very different thing.
And I want to put it to you in hard mode
because I don't want to hear about how the tech is different.
What I'm interested in is hearing about you guys and your work as journalists.
How do you approach this industry?
How do you establish your own credibility?
And how do you assess the claims being made by investors and entrepreneurs?
Can't wait to hear the answer.
Bye.
I love this question.
What have I learned?
To touch on the crypto piece without touching on the technology, here's what I'll say.
Ultimately, what persuaded me in 2021 that crypto was really worth paying attention to was the density of talent that it attracted.
So many people I knew who had previously worked on really valuable companies were quitting their jobs to go build new crypto companies.
And what I believed and said out loud at the time was it would just be really surprising if all of those talented people failed to create a lot of really valuable companies.
In the end, they did not produce a lot that I found valuable.
Although, as we've been covering on the show recently, crypto has not gone away.
And thanks to the fact that the industry has captured the government, it is now more valuable than ever.
So that is what I would say about that time in crypto.
And I do think that some of that argument ports over to AI because certainly I also know a lot of people that quit their jobs working at social media companies, for example, who are now working on AI.
Here's what I would say about hype in covering AI.
I think that a good podcast about technology needs to do two things.
One is to give you very grounded coverage of stuff that is happening right now.
So I'm thinking about, in recent months, when Pete Wells came on to talk about how chefs are using AI in their restaurants, or Roy Lee came on to talk about the cheating technology that he's building, or Kevin talked about what he's vibe coding.
I even think about the emergency episode that we did about DeepSeek, which I think actually was kind of an effort to unhype the technology a bit while giving you a really grounded sense of what it was and why people were so excited about it, right?
So that's one thing I think we need to do.
The other thing I think we need to do is to just tell you what the industry says is going to happen.
I think it is important to get leaders of these companies in the room and just hear their visions because there is some chance that a version of it will come true, right?
So this is the thing that we're doing when we bring on a Sam Altman or a Demis Hassabis or the founders of the Mechanized Company, which, you know, you probably heard in our interview, I was not particularly impressed with that vision, but I think it is useful to the audience to hear what these folks think that they are doing.
And of course, we want to push back on them a bit, but I have just always appreciated a journalism that gives airtime to visions and lets me think about it, lets me disagree with it, right?
So that is how I think about hype in general.
We want to tell you mostly what is happening on the ground, but we do want to tell you what the CEOs are telling us all the time is going to happen.
And then we want you to sort of interrogate the space in between, right?
That we actually have to live in.
Yeah, I will say
I feel pretty good about the way that I covered crypto back in 2021.
There's only really one crypto story that I truly regret writing.
And that is a story about this crypto company, Helium, that was trying to do this sort of convoluted thing with crypto-powered Wi-Fi routers.
And I just failed on that story.
I failed to ask basic journalistic questions.
It turned out after the fact, we learned that Helium had basically claimed that it had a bunch of partnerships with a bunch of different companies.
And I just didn't call the companies to say, hey, is this company lying about being affiliated with you?
It just didn't occur to me that they would be like so blatantly misleading me about the state of their business.
And so I do regret that, but I would chalk it up less to buying into crypto hype and more to just not making a few more calls that would have saved me from some grief.
The lesson I took from crypto reporting is that real-world use matters.
So much of crypto and the hype around it consisted of these kind of abstract ideas and these vague promises and white papers.
And then when you actually like dug in and looked at who was using it and what they were using it for, it was like criminals, it was speculators, it was people trying to get rich on their Bored Ape collection.
So now when I cover AI, I really try to talk to people who are civilians using this technology about how they are using it.
And whenever possible, I try to use it myself before I sort of form an opinion on it.
I think the crypto era was in some ways a traumatic incident for the tech journalism community.
I think a lot of our peers, and maybe even to a certain extent you and I, felt like we were duped, like we fell for something, like we wasted all of our time trying to understand and explain this technology, taking this stuff seriously only to have it all come crashing down.
And I worry that a lot of journalists took the wrong lesson from what happened with crypto.
The sort of lesson that I think a lot of journalists took was like to be blanket skeptical of all new technologies, to sort of assume that it's all smoke and mirrors, that everyone is lying to you, and that it's not really going to be worth your time to like dig in and try to understand something.
And I see a lot of that attitude reflected in some of the AI coverage I see today.
And so, while I take Max's point that we should always be learning from our mistakes and from the things we maybe swallowed too uncritically in the past, I think what we're seeing today with AI is in some ways an overcorrection on that point.
What do you think?
Yeah, I think
that there is a bit of an overcorrection, but I also think that many journalists have just realized that what used to be a really small industry that mostly concerned itself with like helping you print your photos and make a spreadsheet is now something much bigger and more consequential and has just been bad for a lot of people.
And so it makes them hesitant to trust someone who comes along and says, hey, I'm going to cure all human disease.
I think that a role that we both try to occupy in the sort of...
AI journalism world is to say, we take seriously the CEOs who say that they're building something really powerful.
And crucially, crucially, we think it will be powerful in bad ways.
Yes.
And we want to talk to you about those bad ways, such as you may lose your job, or it will enable new forms of cyber attacks and frauds that you may fall victim to,
or it will burn our current education system down to the ground so it has to be rebuilt from scratch.
That one, you know, maybe there will be some positive along the way.
But I feel like week after week on the show, we are trying to show you ways in which this thing is going to be massively disruptive.
And that gets framed as hype in a way that I just think is a little bit silly.
Like in 2010, imagine I'd written a story about Facebook and how one day it would have billions of users and undermine democracy and give a bunch of teenagers eating disorders.
Like, would that have been hype?
Sort of.
Would that have been accepting the terms of the social media founders and accepting their language around, you know, growth?
Yes.
But would it have been useful?
Would I be proud that I wrote that story?
I think so.
So I'm willing to accept the idea that you and I do buy into the vision of very powerful AI more than many of our peers in tech journalism.
But the reason that we're doing that is we want to remind you what happened the last time one of these technologies grew really quickly and got into everyone's hands and became the way that people interface with the digital world.
It didn't go great.
We already know that these companies are not going to be regulated in any meaningful way.
The AI action plan is designed basically to ensure that.
And so to the extent that we can play a positive role, I think it is just going to be in talking to people about those consequences.
And if the consequence of that is that people say that, you know, we're on the side of hype, like I will just accept the criticism.
Yeah.
Well, thank you guys so much for doing this.
And thank you also to our critics for taking the time to talk to me.
I thought we could end by just talking about actually whether you guys have any questions for each other.
Like, you know, one of the big goals of this is to kind of map where you guys stand relative to other thinkers.
So I'm curious how your views on AI are actually different from each other.
I think I have longer timelines than Kevin does.
I think Kevin talks about AGI in a way that makes it seem very imminent.
And I think I'm more confident that it's going to take several years
and maybe more than several, right?
Like maybe this is like a five to 10 or even 15 year project.
So I think that's the main way that I noticed disagreeing with Kevin.
I think that we also disagree about regulation and how possible or advisable it is to have the government step in and try to control the development and deployment of AI systems.
I think that you are informed by your years of covering social media and seeing regulators grapple with and mostly fail to regulate that wave of technology.
But I think you are also a person who has a lot of hope and optimism about institutions and wants there to be democratic accountability into powerful technology.
I share that view, but I also don't think there's a chance in hell that our present government, constructed the way it is with the kind of pace that it is used to regulating things at, can regulate AI on anything approaching a relevant time scale.
I've become fairly pessimistic about the possibility of meaningful regulation of AI.
And I think that's a place where we differ.
I think we do disagree there because I think that we had the makings of meaningful regulation under the Biden administration, where they were making very simple demands like you need to inform us when you're training a model of a certain size.
There need to be other transparency requirements.
And I think you can get from there to a better world.
And instead, we've sort of unwound all the way back to, hey, if you want to create the largest and most powerful model in the world, you can do that.
You don't have to tell anybody if it creates new risk for bioweapons and other risks.
You don't have to tell anybody you can put it out in the world.
Right now, there are many big AI labs that are racing to get the most powerful AI that they can into everyone's hands with absolutely no safeguards.
So if you're telling me that we can't create a better world than that, I am going to disagree with you.
Yeah.
Go fuck yourself.
Well, thank God you guys disagree because it makes the podcast more interesting.
And thank you guys seriously for doing this.
I think given how much of the AI conversation can feel really disempowering in this moment, one thing that gives me a feeling of a little bit more control is really trying to map out the debates and where people stand relative to each other, because it ultimately helps me figure out what I think about AI and where I think the future is going. And that's at least one thing I feel sort of empowered to do.
And that's what we want to do.
Like truly,
we want everyone to come to their own understanding of where they sit at the various intersections of these discourses.
Like I think Kevin and I identify as reporters first.
We don't have all the answers.
That's why we usually bring on a guest every week to try to get smarter about some subject.
Right.
So I think a really bad outcome for the podcast is that people think of us as pundits.
I think of us as like, you know, curious people with informed points of view, but we always try to be open to changing our minds.
Yes.
Like a large language model, we aim to improve from version to version.
As we add new parameters and
computing power.
Yes.
Over the last few decades, the world has witnessed incredible progress.
From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.
Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.
Invesco QQQ, let's rethink possibility.
There are risks when investing in ETFs, including possible loss of money.
ETF's risks are similar to those of stocks.
Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.
Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at invesco.com.
Invesco Distributors, Inc.
This podcast is supported by the all-new 2025 Volkswagen Tiguan.
A massage chair might seem a bit extravagant, especially these days.
Eight different settings, adjustable intensity, plus it's heated, and it just feels so good.
Yes, a massage chair might seem a bit extravagant, but when it can come with a car,
suddenly it seems quite practical.
The all-new 2025 Volkswagen Tiguan.
Packed with premium features like available massaging front seats, it only feels extravagant.
This episode is supported by Choiceology, an original podcast from Charles Schwab.
Hosted by Katie Milkman, an award-winning behavioral scientist and author of the best-selling book, How to Change, Choiceology is a show about the psychology and economics behind our decisions.
Hear true stories from Nobel laureates, historians, authors, athletes, and everyday people about why we do the things we do.
Listen to Choiceology at schwab.com slash podcast or wherever you listen.
Before we go, a reminder that we are still soliciting stories from students about how AI is playing out on the ground in schools, colleges, universities around the country.
We want to hear from you.
Send us a voice memo telling us what effect AI is having in your school, and we may use it in our upcoming back to school AI episode.
You can send that to hardfork@nytimes.com.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Jen Poyant.
We're fact-checked by Caitlin Love.
Today's show was engineered by Katie McMurrin.
Original music by Elisheba Ittoop, Marion Lozano, Rowan Niemisto, and Dan Powell.
Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube at youtube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us at hardfork@nytimes.com with your own criticisms of our opinions about AI.
And now, a next-level moment from AT&T Business.
Say you've sent out a gigantic shipment of pillows, and they need to be there in time for International Sleep Day.
You've got AT&T 5G, so you're fully confident.
But the vendor isn't responding, and International Sleep Day is tomorrow.
Luckily, AT&T 5G lets you deal with any issues with ease, so the pillows will get delivered and everyone can sleep soundly, especially you.
AT&T 5G requires a compatible plan and device.
Coverage not available everywhere.
Learn more at att.com slash 5G network.