Meta Goes MAGA Mode + A Big Month in A.I. + HatGPT

“I think this set of changes that the company announced this week are the most important series of policy changes that they have made in the past five years.”


Transcript

In business, they say you can have better, cheaper, or faster, but you only get to pick two.

What if you could have all three?

You can with Oracle Cloud Infrastructure.

OCI is the blazing fast hyperscaler for your infrastructure, database, application development, and AI needs where you can run any workload for less.

Compared with other clouds, OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking.

Try OCI for free at oracle.com slash NYT.

Oracle.com/NYT.

Casey, we're back.

We're back in the studio, Kevin.

So, dirty secret: we recorded our predictions episode that ran last week back in 2024, before we left for the holiday break.

We are just now coming back from a multi-week break.

How are you doing?

How was your break?

I'm doing great.

We recorded that episode so long ago that when I listened to it, all the predictions were fresh to me.

I was so excited to hear what we were going to say.

But I'm doing good.

I had a really nice break.

And of course, I'm excited to be back.

But what about you, Kevin?

Well, I had kind of a disaster happen to me over this break, which was that I got robbed on Christmas.

Wait, wait, wait.

Was it the Grinch?

You know, the citizens of Whoville are still looking for the suspect.

No, who robbed you?

How'd you get robbed?

Well, I wasn't home, luckily, but someone broke into my house.

Wait, like, what do they take?

So, still sort of sorting through.

We just got back,

but it appears that the thief or thieves took some jewelry, some electronics.

But weirdly, and this is sort of the tech angle here,

they did not take the Apple Vision Pro.

Not even a robber wants one of those.

Which makes sense because robbers typically only want to take what is valuable, Kevin.

And

it's not clear what they would actually do with a Vision Pro.

You know, also keep in mind, if you're a robber, you're out there, you're moving through the world, you're breaking into homes.

You can't have that giant thing on your face.

You know, you sort of need to maintain clear vision.

Yes.

So to speak.

Yes.

Let me ask you this.

Even though all your items were stolen, did you look at your family and your dogs and think, you know what?

At the end of the day, I got my family and that's all that really matters.

I did.

And I don't know why you're saying it with such

sentimentality.

I was looking for a nice sentimental.

Honestly, that was sort of the moral of this robbery, much the same as the moral of the Grinch who stole Christmas, which is that the real household items are families.

Exactly.

And so, you know, if you get robbed again, maybe don't worry about it.

Was it you?

I'm changing the subject.

We're moving on.

Okay.

Where were you on Christmas?

I'm Kevin Roose, a tech columnist at the New York Times.

I'm Casey Newton from Platformer.

And this is Hard Fork.

This week, Meta goes MAGA.

We break down the company's surrender to the right on speech issues.

Then, why 2025 is shaping up to be a huge year in AI.

And finally, some HatGPT.

Call that a HatGPT.

Well, Casey, I think we better talk about Meta.

We better do it, Kevin, because I never met a bigger story for this podcast.

Yes.

So the big news this week in the world of social media is that Meta is making a, I would say, pretty calculated, and "transparent" is another word people have used, play to ingratiate itself with the incoming Trump administration by sort of surrendering to the demands of right-wing speech critics and changing a bunch of things about the way its platform works.

I think this is a very big story, not just because of what it represents about Meta, but because it is the biggest and most prominent example of a Silicon Valley tech company sort of positioning itself for the second Trump term.

And I think it's going to have very big implications for speech on the internet, for the rise of misinformation online, and potentially for the future of Meta itself.

Yeah, absolutely.

I think that while we have talked about speech policies on Meta basically as long as we've been doing this podcast, this set of changes that the company announced this week are the most important series of policy changes that they have made in the past five years, easily.

Yeah.

So let's run down what's actually been happening over at Meta.

So over the past week, there have been three main things that people are pointing to as being all part of this effort to kind of curry favor with the incoming Trump administration.

The first was that last week, Meta's global policy chief, Nick Clegg, a former British deputy prime minister who had served in that role for a number of years, stepped down and was replaced by Joel Kaplan.

Joel Kaplan is a longtime Republican operative going back to the George W.

Bush administration who's been working at Meta in their policy division for a while now and has sort of become the unofficial liaison between Mark Zuckerberg and the Washington right.

That's right.

And then this week on Monday, Meta announced that it was appointing three new board members, including Dana White, who is the CEO of UFC, the Ultimate Fighting Championship.

Dana White, not known as a particular expert on social media governance, but definitely a close friend and ally of Donald Trump and someone who can presumably act as a liaison between Meta and the Trump administration.

Yeah, so just sort of staffing that bench up with more Trump friends.

And then the big one came on Tuesday when Meta announced that it was ending its fact-checking program and replacing it with an X-style community notes feature.

The company also said it was redoing its rules to allow more speech and less censorship.

It's going to dial up the amount of, quote, civic content, that's sort of Meta's term for political content and current events content in their feeds.

And said that they were moving their content review operations from California to Texas to avoid the appearance of political bias.

There were some other details in there that we can talk about, including some changes to the way that its content moderation automated services will work.

But basically, this was a laundry list of things that right-wing critics of social media platforms had been asking for for years.

And Meta sort of stood up and said, we're going to do all of it.

Yeah.

Or another way of putting it, Kevin, is just that they accepted wholesale the Republican critique of Facebook's speech policies, right?

And they actually used the same words that Republicans would use.

You know, in a previous time, we only used the word censorship to apply to state action to actually prohibit speech.

Some people would say it doesn't actually apply to private companies just sort of policing online forums.

But Mark Zuckerberg said, no, effectively, you're right.

We do do a bunch of censorship.

We're doing too much censorship and we're going to stop doing censorship.

Yeah.

So the reasons that Mark Zuckerberg gave and that Joel Kaplan gave when he went on Fox and Friends to announce these changes, which was a very deliberate decision and one that I probably don't have to explain the meaning of to our listeners.

But the reasons that Mark Zuckerberg and Joel Kaplan gave for these changes were that Meta had been doing some soul-searching and basically had discovered that its former policies created too much censorship, and that they were going to return to the company's roots as a platform for free expression.

I was really struck by just the way that they completely backed down here.

They accepted the critique and they seemingly are terrified of what the Trump administration could mean for them and for Mark Zuckerberg personally if they do not comply in advance with everything that Republicans have said about them for years.

Keep in mind that none of these critiques are new.

They were made throughout the first Trump administration, and Facebook stood up against them.

And they said, we're actually going to try to find a middle path here.

We are going to try to do what we can to preserve free expression while also trying to make this a really safe and inclusive space for as many people as we can.

And in 2025, at the start of the year, Mark Zuckerberg came forward and he said, no, not anymore.

We're done with that.

Everything that the Republicans have been saying about us is true.

And so we are going to lean into their version of what a social network should be.

And so I'd like to play just some of what Zuckerberg said in the reel he posted on Instagram announcing these changes.

…censor just 1% of posts, that's millions of people.

And we've reached a point where it's just too many mistakes and too much censorship.

The recent elections also feel like a cultural tipping point towards once again prioritizing speech.

So we're going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms.

I was just struck by how craven and cynical it felt like Mark Zuckerberg in particular was being about this.

I mean, he sounded like Elon Musk, to be totally honest.

He used phrases like legacy media with this kind of like dripping disdain, which is a phrase that Elon Musk and his friends love to use in describing

the mainstream media.

He also did use this word censorship that he has avoided studiously for years in describing the content moderation work that every social network, including all of Meta's social networks, do as a matter of business.

So it just sounded like a total capitulation, a total giving in to the demands of his most ardent right-wing critics.

More than that, Kevin, he also threw his own contractors under the bus.

And let's hear that clip.

After Trump first got elected in 2016, the legacy media wrote non-stop about how misinformation was a threat to democracy.

We tried in good faith to address those concerns without becoming the arbiters of truth.

But the fact checkers have just been too politically biased and have destroyed more trust than they've created, especially in the U.S.

He says that the fact checkers had just proven to be too biased, gives no evidence for that, no examples, just sort of says that these fact checkers, all of whom follow this very rigorous code for how they do their work, just sort of asserted, oh, they've been super biased.

So who knows what that meant.

He also,

as you pointed out, says that they're going to move their moderation teams to Texas to avoid bias.

Well, first of all, I can tell you they have had moderators in Texas for many years, basically for as long as they've had moderators.

They've also put moderators in red states for years.

In 2019, I visited Facebook moderation sites in Arizona and Florida, right?

So there's absolutely nothing new about this, but he is throwing his moderators under the bus.

And the worst part about it to me is that he is suggesting that the moderators were the ones making decisions about policy when in fact that person was Mark Zuckerberg.

So if Mark Zuckerberg wants to talk about the perception of bias around Facebook policy, he should reckon with the fact that he is the policymaker in chief over there.

Right.

So what do you think the most impactful part of these changes is?

Because, you know, for all of the talk about the end of the fact-checking program over at Meta, my sense is that the fact-checking program, for all the good people who worked very hard on it, really only ever touched a very tiny fraction of the content shared on Meta's platforms.

It was a pretty ragtag effort that never really had as much of an impact as I think the fact-checking community would have liked, in part because of the way that Meta restricted it.

So, I don't know that the average user of Facebook or Instagram is actually going to notice the fact that their fact-checking has disappeared.

But what do you think that the biggest impact on users will be?

Well, so let me speak to the fact-checking first because, in some ways, I agree with you.

I don't know about you, I rarely encountered one of these fact-checks on Facebook.

On the other hand, I am someone who believes in harm reduction, and fact-checkers did look at millions of pieces of content that were getting presumably hundreds of millions or billions of views.

And there were empirical studies done that showed that overall people came to have fewer false beliefs if they saw those fact checks.

So to the extent that people saw them, they were effective.

And I think that there was a case to continue doing them, particularly if you want to be a good steward of a network that you have built that billions of people are using every day.

And it's important to you that they have a good experience on that platform and don't come away from it stupider than when they started.

But I don't actually think that that's the most important thing that they announced.

I think it's something else.

And I'm going to point to something that Mark Zuckerberg said in his reel.

Let's hear that clip.

We used to have filters that scanned for any policy violation.

Now we're going to focus those filters on tackling illegal and high severity violations.

And for lower severity violations, we're going to rely on someone reporting an issue before we take action.

So what does that mean?

What it means is, whereas before, Meta used to rely on automated systems to catch all sorts of things, not just illegal things, but also just stuff that was annoying or hurtful, stuff that was a little bit bullying, harassment.

I called you a name.

I called you a slur.

Meta would catch that stuff in advance and maybe not show it to you, maybe take some sort of disciplinary action against the person who sent that.

What Zuckerberg is saying here is we are not the content moderators anymore.

You are, Facebook user, Instagram user.

We are now enlisting you in the fight.

And we're going to leave it to you.

If you see a slur on our platform, you go ahead, report that, and then maybe we'll take a look.

And I think that this is a really big deal.

So yesterday, I wound up talking to a bunch of people who either work at Meta or used to work there.

And I talked to one person who just said that they were extremely worried about what this meant because they had seen in so many countries around the world where Meta has traditionally done much worse moderation than it does in the United States, where by not taking action against these lower severity violations, right?

Stuff that was not obviously illegal, they had just seen violence fomented again and again.

They had seen harassment against women.

They had seen abuse against LGBTQ people.

And Zuckerberg in his reel said, look, we are going to have more bad stuff on the platform.

But he doesn't go the second step to what does that actually mean?

Well, what it actually means is people could get hurt.

People could die.

So I want to be very clear about that.

This is not, you know, two like pointy-headed intellectuals like, you know, sitting in their podcast studio saying, oh, no, you know, Facebook isn't a safe space anymore for the college students.

What I'm saying is that violence has been fomented on Facebook before and it will be fomented on Facebook again.

And as a result of these changes, more people are going to be hurt.

So that to me is the biggest consequence of these actions.

Yeah, I think this reporting thing that you bring up is so interesting because, you know, as we know, a lot of the worst stuff on Facebook happens in groups, happens in sort of semi-private spaces with hundreds or thousands of members.

And so now I think Meta is essentially saying that it will be up to the members of those groups to report any violative content that they want to be moderated rather than having these sort of proactive scanners going around.

And you might say, what's the big deal about that?

Well, if you're in a stop the steal group or a QAnon conspiracy group or a group that's plotting an insurrection at the Capitol,

which members of that group are going to be reporting each other for violating Facebook's rules?

I don't think that's a thing that's going to happen.

And so I think what we're going to end up with is just

a much more sort of unmoderated mess over at Facebook and Instagram and all the other meta platforms.

You know, when I was talking to employees this week, one of them pointed out to me what a sort of strange step backwards this is in this respect.

For so many years, Mark Zuckerberg bragged about how automation was the future of content moderation.

And he boasted about the systems that they were building that were getting better every single quarter at detecting the hate speech, detecting the bullying, and making this a sort of better place for his community.

And now, instead of saying, we're going to lean into this even more, we're going to make these filters better, he said, we're going to stop using them and we're going to go back to human beings who don't even work for us or have any training or expertise, right?

This is an abandonment of his technological project in favor of something that is obviously inferior.

So to me, that is one of the big twists here is Mark Zuckerberg walking away from the very good technology that he built.

Yeah, that's a really good point.

So what else in these changes caught your eye?

Yeah.

So, look, you know, some of our listeners, Kevin, may use Facebook or Instagram and just wonder, you know, what's it going to be like now that these changes have been made?

So, I thought maybe it would be good to go through some of the offensive things that you can now say on Facebook and Instagram if you want and not get in trouble.

So, for example, I'm gay.

You can now tell me that I have a mental illness, Kevin.

You could go right onto Facebook and tell me that I'm mentally ill for being gay.

You can say that I don't belong in the military.

You could tell trans people.

I mean, you don't belong in the military.

But I have other reasons.

And that's important.

Yes, nothing to do with your sexuality.

No.

I'm a terrible shot.

Okay.

There are some other changes.

Yes.

You, you know, so look, if you, if you want to say offensive things about trans people, like, you know, they can't use the bathroom of their choice.

If you want to blame COVID-19 on Chinese people or some other ethnic group, you can just do that on Facebook and Instagram now.

And Mark Zuckerberg says, well, that's sort of more in keeping with the mainstream discourse. Those are the words he uses.

That is in keeping with the mainstream discourse.

And I look at that and I think, oh, like the standard on Facebook now is that it's just going to feel like a middle school playground, right?

All of this stuff is stuff that I used to hear when I was 12 years old at Washington Middle School.

Maybe not the trans bathroom stuff that was sort of still yet to come.

Everything else I heard in seventh grade, and that is the new standard that Mark Zuckerberg has set for his project.

Yes, he's saying, I would like the discourse on my platforms to more closely resemble the dialogue in a Borat movie.

Yeah.

Yeah.

Which is satirical in the Borat case, but is, you know, very serious.

Yes.

And look, it's easy for me to joke about it.

Look, if you want to tell me I'm mentally ill for being gay, like, I can handle that.

But, you know, if you're 14 years old and queer and it's people in your high school that are calling you that on Instagram, we've seen over and over again that these kids harm themselves.

And one of the things I find so crazy about this series of decisions, Kevin, is that right now, 41 states and D.C. are suing Meta over the terrible child safety record it has on its platform.

And my understanding is that these changes apply to younger users just as they apply to everyone else.

And so these classifiers that once used to try to find bullying and abuse and harassment against young people, they're no longer going to be automatically enforced.

And it is going to be up to, I guess, the other kids in school to say, hey, it looks like my friend is being bullied over here on Instagram.

So that just seems like they're opening up a huge amount of liability for themselves.

Right.

And I think we should say, like, it is not just right-wing culture warriors who have complained about excessive moderation on Meta's platforms, right?

People on the left complain that their pro-Palestinian speech is being targeted for takedowns.

And that's true, by the way.

Like, those are not just like phony complaints.

Like, it is absolutely true that Meta has overenforced in some cases.

Right.

But what's so interesting as I'm hearing you explain the details of some of these changes and how they are revising their rules is that they all seem to be pointed in one direction. It's like, let's let people on the right mock people on the left in more ways.

Yeah, absolutely. And again, you know, I sort of wrote in my newsletter that a younger and more capable version of Mark Zuckerberg truly did handle this differently. And the way he handled it was like, oh, we're over-enforcing in this way, let's improve the classifier, right? Let's adopt a technological solution to this problem. But what they said this week is, we're done trying to fix any of it, right? We are just abandoning the project altogether.

Yeah. So that is a lot about the what of these changes. I want to talk now about the why of these changes.

I think there is an obvious explanation, the one that has been popular among the critics that I've been reading and talking to over the past couple of days, is the political opportunism angle, which is, you know, this is Mark Zuckerberg's attempt to kind of ingratiate himself with the Trump administration.

It's all business, it's all strategy, it's all cynical, and probably all temporary until the next administration comes in.

What do you make of that explanation for why these changes were made now?

So I think that there is a lot of truth to it.

I think another factor that is in there, and we've talked about this on the show a bit, is that trying to be a good Democrat just didn't really get Mark Zuckerberg anything.

You know, after the 2016 U.S. presidential election and the huge backlash against Meta in particular that it created, Zuckerberg tried to say, whoa, whoa, whoa, okay, I hear that you're super mad.

I'm going to try to fix this.

And so they went out and they built all these fancy machine learning classifiers to try to improve the service.

And at the end of the day, I don't think Democrats really liked him 1% better than they did before he did any of that.

So you have to remember that politics is transactional and people vote for people who they think they can get things out of.

By the end of 2024, I think it was very clear to Mark Zuckerberg, he was truly not going to get one thing out of the Democrats.

But then along comes Donald Trump.

And Donald Trump has this really interesting relationship with Elon Musk, where, you know, Elon Musk used to be kind of a liberal guy too, had a bunch of sort of bog standard liberal positions.

But, you know, then he, you know, changed his views for whatever reason, gave a bunch of money to Trump.

Trump said, hey, I like this guy.

I'm going to give him every political advantage that he wants.

And Mark Zuckerberg is a pretty smart guy.

And he thought, oh, well, you know what?

Maybe I could do the same thing.

Right.

Right.

I mean, I think the one thing that we know about the values of Mark Zuckerberg and Meta is that they are an extremely efficient organism at self-preservation, right?

They will do anything to stay relevant and stay ahead.

They will copy features.

They will change the name of the damn company.

We know that Mark Zuckerberg's own views on speech are very flexible.

They tend to sort of shift as the political winds shift.

But I also think there's another potential why here, which is about Mark Zuckerberg personally and his own shifting political allegiances.

I've been talking recently with some folks who know Mark Zuckerberg or who have worked with him in the past.

And what they have said to me is that this is a man who is following a very conventional, sort of former-Democrat-turned-Republican arc, right?

He is a man, he's 40 years old, he's sort of approaching middle age, he's very into these kind of male-coded hobbies like mixed martial arts.

He spends a lot of time, you know, talking with Joe Rogan and,

you know, hanging out with Dana White.

And he's just sort of enmeshed in this kind of manosphere

outside of work.

And he's also been the target of a lot of criticism from especially the left.

And one thing that we know about successful men who get targeted by left-wing opprobrium is that they often respond to that by becoming sort of disaffected former liberals who embrace the right because there they feel like they're getting a more fair treatment.

So I just want to put that out there.

I can't prove this theory, but some people who know Mark Zuckerberg have floated it to me that he has actually become personally

quite red-pilled or conservative over the last few years.

Now, obviously, he's not Elon Musk.

He's not broadcasting his political opinions on social media dozens of times a day.

He's been more careful about sort of signaling which team he's on.

But I just offer this as a theory because I think we're starting to see more evidence that his own views may have shifted quite a bit, independent of what's good for Meta.

Yeah, I mean, I think that there was a version of all of this that was less extreme, and that if Zuckerberg himself were more truly liberal or progressive in his heart, we would not have seen these changes.

So, I do think that the changes that they announced this week offer some evidence for what you just said.

Also, my colleagues, Mike Isaac and Teddy Schleifer, reported last year that Mark Zuckerberg has begun referring to himself as a classical liberal, which if you've ever watched a right-wing YouTube video, is what every former liberal who has now become a Republican says.

They call themselves classical liberal.

So, I'll just put that out there.

That is a code word.

So, okay,

last question about the implications of these changes.

Do you think that we are going to see an exodus of liberal and progressive users from meta platforms the way that we did from X after Elon Musk took it over?

Well, it depends on how all of these changes play out, and we're just not going to know for a while.

My assumption is that Meta will continue to do a significantly better job at moderation than X does.

It's a much bigger company, it has more infrastructure in place.

And so, I don't think you're going to get this sort of overnight transformation you got with Elon Musk.

Also, you know, Facebook and Instagram, they're just, like, structured very differently than X is.

Like Zuckerberg, I don't think can really take over those platforms like in terms of the actual posts that you're seeing in the feed the same way that Elon does.

So, you know, I would be somewhat surprised by that.

On the other hand, if Facebook and Instagram do truly come to feel like seventh-grade playgrounds at recess and the sort of discourse just gets much rougher and coarser,

I do think you're going to see people walking away from it because while we almost only ever discuss content moderation in terms of the politics of it, the truth is there's a huge commercial demand for it.

People do not want to spend time on networks that are full of violence and harassment and abuse and gore and porn.

And that is the main reason why all of these companies build systems to remove those things or suppress them.

So the real question, I think, Kevin, is how far ultimately does Zuckerberg go in this direction?

Because whatever the politics might be, the vast majority of his users just want a safe and friendly place to hang out online.

Yeah.

Okay.

So that is where we are with Meta today and what some of the implications will be.

Do you have any more predictions about where this will all head?

I have a really fun one for you, Kevin.

Yes.

So Meta has told its partners in this fact-checking partnership that it has been funding for the past several years that their contracts will end in March.

So in March, the fact checks on Meta properties are going to end.

The community notes product that Meta is planning to build, which is essentially a volunteer content moderation system, that's going to take a little bit longer to build.

So that means, Kevin, that you and I can look forward to fact-free spring on Facebook.

Let's go.

We can truly say the craziest things and not one person is going to be able to stop us.

And let me just say, I'm cooking up some whoppers.

The things I'm about to say on Facebook and Instagram, let's just say you're going to want to follow me.

Yeah.

So follow Casey over at Threads.

Yeah.

And let's just say, start piling up the drafts now.

Yeah.

Because the purge is coming and you're ready.

I'm ready for the purge.

When we come back, OpenAI's o3 forges a new path forward for AGI.

Okay, we'll go with that.

In business, they say you can have better, cheaper, or faster, but you only get to pick two.

What if you could have all three?

You can with Oracle Cloud Infrastructure.

OCI is the blazing fast hyperscaler for your infrastructure, database, application development, and AI needs where you can run any workload for less.

Compared with other clouds, OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking.

Try OCI for free at oracle.com slash nyt.

Oracle.com/nyt.

Over the last two decades, the world has witnessed incredible progress.

From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Invesco QQQ, let's rethink possibility.

There are risks when investing in ETFs, including possible loss of money.

ETFs' risks are similar to those of stocks.

Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in prospectus at Invesco.com.

Invesco Distributors, Inc.

Can your software engineering team plan, track, and ship faster?

Monday Dev says yes.

With Monday Dev, you get fully customizable workflows, AI-powered context, and integrations that work within your IDE.

No more admin bottlenecks, no add-ons, no BS.

Just a frictionless platform built for developers.

Try it for free at monday.com slash dev.

You'll love it.

Well, Casey, we have more news from Over the Break about one of our favorite topics, AI.

Boy howdy, it was a huge couple of weeks for AI, Kevin, during a time of year when normally the news cycle gets pretty slow.

I was wondering about that because usually in December, people are sort of getting ready to go on holiday break.

The news kind of trails off, but not this year.

The AI labs were sort of trampling all over each other to try to get their big news out before the end of the year.

Yeah, and I think it was led by OpenAI, which of course announced their 12 Days of Shipmas, where they tried to announce something big, something small every day for 12 days. And, you know, they did wind up ending on something pretty important, I think.

Yes. So this is all moving very fast. There's a lot to catch up on today, and I want to take some time to really dig into what happened and what we can expect for the first few months of the new year.

But before we get into all that, Casey, you have something to tell us.

I do.

So, Kevin, of course, our listeners' trust is of paramount importance to us.

And so I wanted to let folks know about something that happened in my life that I just think I want to be upfront about, which is that at the end of 2023, I met a man who had many wonderful qualities.

One of those qualities that I loved was that he worked for a company I'd never heard of, which meant fine, I can keep doing my job as normal.

But as of this week, Kevin, my wonderful boyfriend, started a job at a company we talk about sometimes on the show.

He is a software engineer at Anthropic.

Is his name Claude?

You know, many people have written to me asking me if I fell in love with Claude.

And while I do find it to be very useful for some things, no, this was a human man that I am currently in love with.

I've met him.

He's real.

I can't confirm.

He's wonderful.

But yes, you are disclosing that you have this new, let's call it an entanglement because this is a company that you and I talk about that you also cover in Platformer.

And so we just wanted our listeners to know that this is happening out in the world and in your life.

And that, um, you know, is there anything more you want to say about this?

Yeah, I mean, people have some questions about this.

Like, you know,

I did not play any role in my boyfriend getting this job.

Anthropic didn't know about our relationship before this happened.

Of course, you know, we have since told them about this.

I do plan to continue writing, reporting about Anthropic because I think it's a really important company.

But whenever I do that, I'm going to remind you that this relationship exists.

A couple other things that I would say, you know, my boyfriend and I do not have any financial entanglements.

We do not currently live together.

But, you know, I'm also going to commit to updating folks as that changes.

Basically, I'm going to try to do the same job that I always do, try to bring the same like skeptical, critical eye that I do to everything.

But I'm also just going to remind you that I have this relationship.

But, you know, if you have questions about that, email the show, hardfork at nytimes.com.

I will try to, you know, answer any respectful questions that I can about this.

Yeah.

Now, Casey, I will just editorialize and add a little bit here to your disclosure, which I think is,

you know, laudable.

And I'm glad you're doing it.

I'm glad you did it in your newsletter.

I'm glad you're doing it on the podcast.

I have known you for a long time.

I have known how hard you have tried to avoid dating men who work in the technology industry.

I truly have.

I mean, for more than 10 years, Kevin, I would be on apps like Tinder and I would see that somebody cute worked at a Google, a Meta, a Twitter, you name it.

And I would just always swipe left because I thought, I don't need that drama in my life.

You know, I don't need that complication.

Which is tough in San Francisco because everyone works in tech.

It is a very small town.

And the number of sort of eligible bachelors out there who do not work at one of the companies you cover limits your dating pool considerably.

It really did.

And it sort of explains why I was mostly single for the last 10 years.

And I thought, well, I finally found something that sort of gets me out of it.

But, you know, sometimes life just has other plans for you and you kind of have to to roll with the punches.

Yeah.

So here I am.

Well, anyway, thank you, Casey, for that disclosure.

I think transparency is very important.

We are obviously going to keep talking about developments in AI at Anthropic and elsewhere, but we will also put this disclosure in, sort of the way we do when we talk about OpenAI and the fact that The New York Times Company is suing OpenAI and Microsoft, alleging copyright violations.

Yeah.

And you know, when I disclosed this in my newsletter this week, Kevin, one reader actually replied that they thought it was cute that I would now have a disclosure to go along with the disclosure that you do every week.

So we're sort of now one for one.

Well, let's proceed to the real meat of this segment, which is about AI news.

Because so many things happened.

Truly.

So let's start by talking about OpenAI.

We've already made the disclosure.

Don't have to do that one again.

This was a big month for OpenAI.

On December 20th, they announced a new model called o3.

This was a successor to o1.

Funnily, they skipped o2 in the naming process because of a lawsuit threat from O2, the telecom company.

I'm not sure if it was a threat.

They said they did it out of respect, but yes, presumably there would have been some sort of legal problems.

Yes, yeah.

So they skipped right over o2 to o3.

This model is not yet available for users, but they did give a kind of preview of it to some researchers, and they also talked about how it had performed on some benchmark evaluations.

Casey, tell us about o3.

What is o3?

So o3 is a large language model, Kevin, like you would already find in ChatGPT, but it is built in a different way, and it's known as a reasoning model.

And the reasoning models are a little bit different.

A main way that they are different is how they are trained.

So they are trained to try to be better at handling logical operations and structured data.

The second big way that they are different is that when you make a query, you know, you type into the little box whatever you want it to do, the reasoning model takes longer to go over it.

It uses more computing power.

It will take multiple passes through the data and it will really try to bring true reasoning to what it is looking at.

And so the result of taking more time, doing more passes, being structured in a slightly different way, is that it can perform a lot better on very complicated tasks.

And what OpenAI found with o3 is that they were actually able to get way further on some of the hardest benchmarks ever designed for LLMs than anything that had come before.

Yes, we talked a little bit about this idea of test time inference or test time compute back when we discussed o1, their previous reasoning model.

But this is basically a different step than the classic pre-training step of building a large language model.

This is something that happens when the user makes the query.

Instead of just spitting out an answer right away, it goes through the secondary test time step.

And that is something that researchers were very excited about when o1 came out.

They thought, okay, maybe if we are tapping out the limits of the pre-training step, maybe there is a kind of new scaling law developing around this test time or inference compute.

And maybe if we pour more resources into that step, the models will get better along a different axis.

And so what people were very excited about when o3 was announced was that it looks like that actually worked.

Yes.

And now this stuff is not yet in the hands of everyday users, but OpenAI did enter this o3 model in this really fascinating public competition known as the ARC Prize.

You know the ARC Prize, Kevin?

Yes.

So the basic idea with the ARC Prize is they try to come up with problems that would be insanely difficult for an LLM to solve.

And one of the ways that they're difficult, by the way, is that they are original problems.

So these problems are known to not be in the training data of any of these models.

Because of course, one of the criticisms of the LLMs is essentially, oh, well, you already have all that data stored, right?

You just essentially did a quick search.

So this prize says, no, no, no, we're not going to let you search.

You actually are going to have to show that you can reason your way through something really difficult.

So, this ARC-AGI-1 public training set has been around since at least 2020.

And at that time, Kevin, GPT-3, a previous OpenAI model, got a 0%.

Okay.

So, just four or five years ago, we were at 0%.

In 2024, last year,

GPT-4o got to 5%.

Okay.

With o3, it gets to 75.7% in one evaluation where the limit was you could only spend $10,000 on computing power.

In a second test, where they let OpenAI spend as much money as they wanted, which we actually think was like more than a million dollars, o3 hit 87.5% on this benchmark.

So something that was essentially impossible through all of 2024, almost instantly, we have now hit 87.5% on that benchmark.

And that is essentially the only public data we have about how good this thing is.

But man, did that get people's attention?

Yeah, it got people's attention.

I also saw a lot of people paying attention to o3's performance on something called Codeforces.

This is a programming competition benchmark.

And this is sort of one way that these AI companies try to assess how good their models are at coding.

OpenAI's o3 received a rating on Codeforces of 2727.

That is roughly equivalent to about the 179th best human competitive coder on the planet.

And just for context, Sam Altman, in presenting this result, mentioned that only one programmer at OpenAI has a rating higher than 3,000 on Codeforces.

So, why does this matter?

Well, you think about some of the discussion that was happening at the end of 2024, Kevin, and you started to hear people saying, we are hitting a scaling wall.

This was the phrase, right?

And the idea was the techniques that we used to build the previous LLMs, we're just sort of running out of the low-hanging fruit.

And it's going to require some sort of conceptual breakthrough in order for them to continue improving.

And O3 comes along and effectively does just that.

And what I think is so important about these benchmarks and why, you know, we want to take some time today going through them is there's a lot of questions and criticism right now that is justified around how much are these things being hyped up, right?

You know, we know that the companies love to hype up their products and tell us, you know, how incredible they are, but the benchmarks are something objective that you can actually use to measure their performance.

And so when you have one of those benchmarks saying that there is now a model that is better than all but 179 people on Earth, well, it seems like we might be getting pretty close to superintelligence.

Because what is superintelligence, if not a system that is better than every human at something?

Yeah.

And I would just...

add to that a little bit of a caveat, which is that these so-called reasoning models, they seem, from what we know about them so far, to be very good at the kinds of tasks that you can design what are called reward functions for, which are things that have sort of a definite right answer, right?

Coding, either the code runs or it doesn't.

Math has a definite right and wrong answer.

So in these domains where you can kind of give the reinforcement learning model a goal and the indicator of whether it is right or wrong in pursuing that goal, it tends to do very well.

But if you asked it, what is the meaning of true love?

It would never know.

It wouldn't know the first thing about it.

And I think that's beautiful.

Right.

So I think for the short term, like the next year or two, we're going to have these early reasoning models that are very good and potentially even superhuman at some tasks, the kinds of tasks that have sort of definite right and wrong answers.

But for other things like, you know, fiction writing or life coaching or sort of these vaguer tasks that don't necessarily have one right and one wrong answer, they may not advance much beyond what we see today.

Yeah.

And, you know, some people will use that as an excuse to say, well, then this doesn't matter that much.

And I would just point out that at some point in your life, you're probably going to go see a surgeon.

And that surgeon might be not that great of a painter.

And it's not actually going to change the fact that the surgery that you got was very valuable, right?

So I think it's important to think more in terms of what these things are capable of in the moment than what they are not capable of.

Yes.

The other thing from OpenAI that we should talk about quickly is that Sam Altman wrote a new blog post on January 5th called Reflections,

basically talking about some of his thoughts about the two years since ChatGPT was released.

And the big headline from this blog post is that Sam Altman is claiming now that OpenAI knows how to build AGI, that the artificial general intelligence that people have been speculating about for years now, that OpenAI has been sort of hinting at, that they are within sight of that goal and that he believes it could happen very quickly.

And they are already starting to look past AGI to ASI, to artificial superintelligence.

So Casey, what did you make of this blog post?

Well, so I spent, you know, basically a day trying to figure out what exactly does Sam mean when he says that they know how to build AGI.

And another thing that happened this week, Kevin, is that Sam did an interview with Josh Tarangle at Bloomberg.

And one of the things that he tells Josh is, I'm going to quote, I don't have deep, precise answers there yet, but if you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, okay, that's AGI-ish.

My interpretation based on conversations that I had this week is this actually is the destination that everyone has in mind for 2025.

This is where the race is going.

You are going to see all the big AI labs race to try to release a virtual AI coworker.

And if they can do that, and if the coworker is pretty good, then they're going to say, this is actually what AGI is.

Because at that moment, you can hire a sort of virtual entity to do some task or series of tasks in your company that you no longer need a person for.

That is where this entire thing has been driving the whole time.

Yeah, I agree.

And I think that it is just, it is not necessarily something that we need to accept uncritically, right?

Sam Altman is a person with his own goals and motives, and OpenAI...

And reward functions.

And we should, you know, maybe apply some discount to what he says about his projections for AI, because he does have a vested stake in the outcome. But I think we should also just use this as sort of a, you know, sticking our finger in the wind of what conversations are happening in the AI scene in San Francisco. People here, I cannot emphasize this enough, are very sincere and very genuine about the fact that they believe that we are going to get AGI, or something like it, very, very soon, possibly this year.

Yeah, and when you look at the improvement in these models that we saw in December alone, I think you have to take them seriously.

Yes.

Okay.

Moving on from OpenAI.

Another thing that happened in December is that Google released Gemini 2.0, the new version of its flagship AI model.

And Casey, have you tried it yet?

What do you make of it?

You know, I have not tried it yet, Kevin, because it is not in the sort of consumer-brand Gemini that I pay for.

The exception is this new feature they have called Deep Research, where you can ask Gemini to sort of go and read the web and prepare a little report for you about something.

I think I've only used it one time.

It seemed okay.

To be candid with you, I have not followed the 2.0 stuff as closely, because it just hasn't seemed as shocking or impressive as the OpenAI stuff.

Have you?

I've played around a little bit with Gemini 2.0, mostly in a series of demos that I got at Google before it came out.

Some of what has been in there is sort of catching up with other models.

Google also released a Gemini 2.0 Flash Thinking mode, which was their first kind of attempt at an inference-time compute reasoning model, similar to o1 and o3 from OpenAI.

I have not played around with Gemini deep research mode yet, but I've heard people talking about how cool it is.

So I'm excited to try that out.

But people whose judgment I trust about this stuff say that this is basically Google sort of announcing that it is on the same trajectory as OpenAI and all the other companies that are its peers and rivals, that it is going to be scaling up very quickly in 2025, and that we should look forward to more there.

Yes.

Although there was a post on X that went viral this week where someone asked Google, does corn get digested?

And all of the image results are of AI slop that

appear to be diagrams of corn and just make no sense whatsoever.

And it's extremely funny.

So maybe it'll be patched by the time this comes out.

But if not, just go ahead and do an image search for: does corn get digested?

And you'll get a sense of where Google's AI search skills are at.

Got it.

So in conclusion, Google is cooking in the AI department,

but not much of this has gotten out into consumers' hands yet.

And so I think that will be the question for 2025 is, is this stuff actually as good as Google says it is?

Yeah.

All right.

The third and final story that we're going to catch up on today from Over the Break is something out of a Chinese company called DeepSeek.

DeepSeek is a Chinese AI company.

It's actually run by a Chinese hedge fund called High-Flyer.

And right around Christmas, as my house was getting robbed, they released a new model called DeepSeek v3 that ranks up there with some of the world's leading chatbots and caught a lot of people's attention.

Yeah.

And look, I have not used this one yet, but there's a few things to know about this one.

One is that it's really big.

It has more than 680 billion parameters, which makes it significantly bigger than the largest model in Meta's Llama series, which I would say up to this point has been sort of the gold standard for open models.

That one has 405 billion parameters.

But the really, really important thing about DeepSeek is that it apparently was trained at a cost of $5.5 million.

And so what that means is you now have an LLM that is about as good as the state-of-the-art that was trained for a tiny fraction of what something like a LLAMA or a GPT was trained for.

I saw some speculation from this great blogger, Simon Willison, who said it seems like the export controls that the U.S. is placing on chips are actually inspiring these Chinese developers to get much better at optimizing.

And indeed, you now have this state-of-the-art model for $5.5 million.

So this is a huge step toward the proliferation of LLMs everywhere.

Yeah, let me just back up and go a little more slowly through what you just described because I think it's really

I was trying to go really slowly.

I need it slower.

I don't need, I need the deep research mode here.

Okay.

So one of the big questions over the past five or so years is about the Chinese AI industry and where they are relative to the leading frontier AI labs in the U.S.

and whether we need to be doing more to kind of slow them down.

And if we even can slow them down, or if this stuff is just kind of common knowledge that as soon as someone invents a new way of doing AI, it spreads throughout the world and there's not much you can do to stop it.

One of the things that we've done in the United States was to pass something called the CHIPS Act, along with a set of controls that basically limited which AI chips you could export to China.

And we put a lot of faith in the ability of these restrictions to effectively constrain the Chinese AI industry.

If they couldn't get the latest chips out of NVIDIA and other companies, they wouldn't be able to build models that were competitive with the state-of-the-art US models.

And that was one way that we were going to sort of try to keep our national advantage.

What DeepSeek, I think, has showed, or at least what they have hinted at, is the possibility that China is actually not that far behind.

Because this model, whatever you think about it, I have not tried it myself, but according to its benchmarks, it is up there in many respects with the latest and greatest models from companies like OpenAI and Google and Anthropic.

It is, according to some measures, the highest ranking open source or open weights model that we have.

And it does not appear to have needed the latest and greatest hardware to be trained on.

According to the report that DeepSeek put out, they trained this new model v3 at an estimated cost of about $5.5 million.

And they did it not on the leading-edge NVIDIA H100 or A100 chips that all the big AI labs use, but on a different version of NVIDIA chips known as the H800, which is basically just a less capable version of the state-of-the-art chips from NVIDIA.

And so I think what this all boils down to

is the conclusion that regulating AI by limiting access to hardware is just going to be much more complicated than we thought.

One interpretation would be that you actually can't stop China from building state-of-the-art foundation models and that our regulatory regime just isn't going to cut it when it comes to keeping the U.S.

ahead of China.

What do you make of that?

So, I mean, the first thing I would say is I do get a little bit nervous when people frame the debate this way because I think a lot of the people who try to frame the like AI story as a race between the United States and China are like sort of very hawkish and like leading us to a potential conflict that I would rather avoid.

And it also presupposes that all of the American companies have to race as fast as they can and they have to build AGI as fast as they can, even if it means cutting corners on safety, because otherwise, you know, this looming specter of China and everything that could happen.

So I just would sort of say we don't necessarily have to do that.

We can choose to still, you know, move somewhat deliberately and with caution here.

But do I think that this shows that it is going to be harder to prevent China from developing extremely high-end models, and that regulating this is going to be more complicated?

Yes, absolutely.

All right.

Casey, that is a small fraction of what happened in AI while we were gone.

But probably the most important things.

I think we covered most of what really mattered.

And if there's one thing that we can be sure of in 2025, it's that we are going to be very busy talking about more AI changes and progress.

You know, somebody was telling me that if like 2023 was a year that made everybody say, oh my gosh, AI is going so fast.

And 2024 was a year that felt very business as usual.

2025 is a year where we could be going back to, oh my gosh, AI is going so fast.

And then maybe it'll just feel like that all the time forever.

Isn't that a pleasant thought?

Yeah.

So anyway, happy new year.

AI Vertigo.

Forever.

Forever.

When we come back: 2025's first game of HatGPT.

In business, they say you can have better, cheaper, or faster, but you only get to pick two.

What if you could have all three?

You can with Oracle Cloud Infrastructure.

OCI is the blazing fast hyperscaler for your infrastructure, database, application development, and AI needs where you can run any workload for less.

Compared with other clouds, OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking.

Try OCI for free at oracle.com slash NYT.

Oracle.com slash NYT.

Every Vitamix blender has a story.

I have a friend who's a big cook.

Every time I go to her house, she's making something different with her Vitamix, and I was like, I need that.

To make your perfect smoothie in the morning or to make your base for a minestra verde or potato leek soup.

I can make things with it that I wouldn't be able to make with a regular blender because it does the job of multiple appliances and it actually has a sleekness to it that I like.

Essential by design, built to last.

Go to Vitamix.com to learn more.

That's Vitamix.com.

Gun injuries are the leading cause of death for children and teens in the United States.

Some people avoid talking about gun violence because they don't think they can make a difference, but every conversation matters.

When it comes to gun violence, we agree on more than we think.

And having productive conversations about gun violence can help protect children and teens.

Learn how to have the conversation at agreetoagree.org.

Brought to you by the Ad Council.

Well, Kevin, from time to time, we like to check in on some of the wilder headlines from the world of tech in a segment we call HatGPT.

Yes!

In HatGPT, of course, we take headlines, we put them into a hat, we fish headlines out, discuss them for a bit, and when one or the other of us gets bored, we simply say, stop generating.

We have not done a HatGPT in a while, and there's been so much that I'm excited to see what's in the hat.

Me too.

Well, let's, why don't you go ahead and get us started?

Okay,

I'll pick first.

Okay.

All right.

This one is called Meta Kills AI-Generated People like Proud Black Queer Mama.

This is from Futurism.

So this was sparked by an interview that a Meta executive gave to the Financial Times at the end of 2024, basically talking about their plans to let users create a bunch of AI profiles and sort of fake people and get them to share generated content on Meta platforms.

And then people began discovering the existence of these older AI-generated profiles that Meta had started up back in 2023.

And Washington Post columnist Karen Atia posted on Blue Sky about one AI-generated profile in particular that was described as a proud black queer mama of two and truth teller named Liv.

And Karen started chatting with this chat bot.

She then posted her chat on Blue Sky and Meta summarily killed Liv and many of its other older AI personas.

You know, this whole thing was so silly, and I think there's been a lot of just backlash against Facebook for this one, because this truly is a case where you wonder, why are they doing any of this?

Yes.

You know, and I think the answer would probably be that they saw Character AI have some success by letting people chat with all of these different sorts of characters.

But I think where Character AI succeeded was they let you pretend like you were talking to Luke Skywalker or a Spider-Man or characters that were very personally meaningful to you.

Meta just made up a bunch of essentially generic humans and said go nuts and had them say generic things.

And it just felt incredibly creepy to people, I think.

Yeah, I think this is a case of an idea that needs to be taken out back and dispensed with.

But Meta is not giving up on the idea of AI generated personas.

In fact, they have signaled that they intend to put more AI-generated personas inside all of their apps.

And I'm just fascinated to see what fresh horrors emerge when that happens.

Here's what I hope.

I hope that at some point Meta will be able to detect when you're harassing or abusing someone, which is, of course, now allowed under their new rules.

And they just actually route you to an AI so that the AI can sort of absorb all of your prejudice and bigotry.

It might be a nice solution.

I like that, like an AI punching bag.

Exactly.

Yeah.

Okay, stop generating.

All right.

I feel like normally when it's my turn to pick, I get to shake the hat.

But for some reason, this week,

you've decided you want to shake the hat.

Okay.

I'm just going to shake the hat, as is my right.

All right.

Here's one.

Apple agrees to pay a $95 million settlement in a Siri privacy lawsuit.

Kevin, this is from Chris Velasco at the Washington Post.

Apple has agreed to end a five-year legal battle over user privacy related to its virtual assistant, Siri, with a $95 million payout to affected customers, according to a preliminary settlement.

Apparently, Kevin, Siri was a bit overzealous in listening for wake words like Siri.

So when it thought it was being called into action, it would start recording audio that it wasn't supposed to.

And a number of those clips somehow ended up in the hands of third-party contractors.

Back in 2019, The Guardian reported on Apple contractors regularly hearing confidential medical information, drug deals, and of course, recordings of couples having sex.

So if a judge signs off on the settlement, anyone who qualifies can submit a claim for up to five Siri-enabled devices for a max payout of $20 per device.

So I guess my question to you is, would you be willing to let Apple listen to you have sex for $100?

Because let me just say, I'd go for it.

No, I think my price is a little higher than that.

No.

But Casey, I saw this one making the rounds because people said, oh, finally, they're admitting that they listened to you through the microphone in your iPhone, which has been, of course, a favorite conspiracy theory of people, including critics of Meta, for years now.

There's no proof that that is true.

What this essentially seems to be saying is it's not that this was sort of an omnipresent listening Siri that was listening when it shouldn't be.

It's that, you know, obviously Siri.

needs to be listening sort of ambiently in order to tell when a user says, hey, Siri.

That's right.

And I'm sorry if we just woke up your Siri on your iPhone and you're no longer listening to this podcast because I just said that.

But this is essentially saying it sounds like that it was a little miscalibrated to where it was listening more than it needed to be to sort of listen for that wake word or that it was recording more audio than it needed.

Yeah.

And I don't care about the actual incident, Kevin.

And here's the reason.

In the 14 years that Siri has existed, I think it's correctly understood me about four times.

This is not a technology that ever knows what I'm talking about for any reason.

Siri could take an hour-long recording of me and have no idea what to do with it.

So, I don't care about that aspect.

What I do care about is this is just going to fuel the most annoying conspiracy theory in tech, which is that all the tech companies are secretly listening to you.

So, yeah, we're just going to see a lot more conspiracies around this, and it is super unfortunate because, again, this is only Siri we're talking about, it doesn't know anything.

Yeah, it's not that serious.

Stop generating.

Okay,

This one is from The Athletic: Netflix's WWE Investment and the Future of Live Events on the Platform.

Quote, we're learning as we go.

Starting January 6th, the story says the WWE's popular weekly wrestling show, Raw, will stream exclusively on Netflix in the United States.

This is part of a decade-long agreement worth a reported $5 billion.

And Casey, as Hard Fork's resident WWE fan and expert, why don't you take this one on?

Well, Kevin, I mean, did you watch?

No, I did not.

Well, you missed something huge, which is that Roman Reigns beat his cousin, Solo Sikoa, in a tribal combat match, winning back the Ula Fala and becoming the one tribal chief of World Wrestling Entertainment.

Is that true?

That is all true.

It was a great match.

It was a really fun show.

And I think it looked great.

You know, WWE positioned this as a really huge thing for them.

And it is.

It's also huge for Netflix.

You know, from WWE's perspective, now they can be in something like 280 million homes around the globe.

For Netflix, they get to experiment with some of this live programming, which they've been dipping their toes into.

Of course, there's a lot of speculation about whether they might soon go after more traditional sports.

So maybe they want to get a big football deal, a big baseball deal.

And so I'm very interested to see how these two things work together.

And I'm very interested to see who Cody Rhodes will be fighting at WrestleMania this year.

So yeah.

I did see that. I mean, obviously, they did the big Jake Paul-Mike Tyson fight that was on Netflix.

I also saw on Christmas Day, they had some live football on Netflix.

That's right.

Do you think this is hastening the death of cable TV?

Or do you think it's just that was sort of already happening and this is just Netflix trying to pick up the pieces?

I absolutely do.

You know, I watch, in addition to WWE, another wrestling promotion, AEW.

And the reason that I had my YouTube TV account, which cost me something like $80 a month, was so that I could watch AEW programming, because that is only available on cable.

Well, guess what, Kevin?

AEW started streaming on Max.

And so I was able to cut the cord once again.

And now I am fully streaming again.

So yes, as these sort of live events that have these, you know, intense weird fandoms move from traditional cable to streaming, it absolutely becomes a moment where more people cut the cord.

Now, this is a little bit of a tangent, but I did have an interesting moment over the break where we were stuck in a motel in Lake Tahoe, and our iPad that we use to sometimes entertain our child had run out of battery.

And so I was forced to turn on the hotel TV and try to explain to my two-year-old son the concept of linear TV.

And Casey, it blew his freaking mind.

I was like, so on this screen, you can watch Bluey sometimes,

but not all the time.

And you can't pick a specific episode.

And then about twice an episode, they're going to interrupt the episode to try to sell you toys.

And he was just so confused by the concept of linear TV that I thought,

you know, this industry probably does not have a long time left.

No, it doesn't.

Your child knows.

Yeah.

Yeah.

All right.

We'll stop generating.

Now, oh, this was a fun one.

So the YouTuber MegaLag posted a video on December 21st titled Exposing the Honey Influencer Scam.

And ever since, Kevin, YouTube has been overtaken by discussion of what Honey did.

Yeah, this in the world of YouTube creators was probably the biggest news story of the year.

And I don't think I've heard much about it outside of YouTube because of the sort of insular way that platform works.

But essentially, this was a massive scandal among major YouTubers over the holidays.

Maybe we should just sort of explain what happened for people who are not glued to YouTube 24-7.

I think we should.

So, Honey is a company that was acquired by PayPal a while back, and they are a browser extension.

And the idea is before you go to checkout online, before you make an online purchase, you click the Honey button, and Honey will scan the landscape for the best coupon.

Because, you know, often if you have a coupon code, you can get a little discount.

And so, Honey went out to a bunch of YouTubers and signed these deals, and they said, Hey, please go ahead and promote Honey.

And the reason that this is important is that these sorts of coupon codes are a big part of the creator economy.

We've talked on this show in the past about affiliate links.

A lot of the internet is built on companies that sell things, giving a little kickback to people who talk about their things.

Right.

And I think before we say what the allegations against Honey are, we should just, like, set the scene for people who are not YouTube heads.

Honey was maybe the most prominent advertiser on major mainstream YouTube channels.

I mean, I would say that Honey sponsorships propped up YouTubers and YouTube content creation in a similar way that, like, online mattresses propped up the podcast industry for a couple of years.

Like major, major YouTube influencers, you know, David Dobrik, Emma Chamberlain, the Paul brothers, Marques Brownlee, these people, you know, many of them had major deals with Honey to sort of underwrite their channels.

That's right.

They were basically ubiquitous.

It was hard to watch a lot of YouTube a couple years ago without running into Honey ad after Honey ad.

Right.

So what are the allegations that MegaLag published?

Well, it's two things.

One is that, and this is just sort of hiding in plain sight on Honey's website, Honey will actually go to online retailers and charge those retailers money to keep their best codes out of the Honey database.

So let's say you have your online store and you have like a crazy 80% coupon that you gave out.

Honey will say, oh, we'll make sure that no Honey user actually ever sees that coupon code.

So Honey is straightforward about that, but it's obviously a terrible user experience, right?

Right.

Because the way Honey works, like, in a nutshell, is there are these coupon codes.

People, you know, there are sites where you can go look up coupon codes before you buy something, try to find, you know, a 10% or 20% off coupon.

Honey will basically go out and scour the internet for these codes for you.

and then automatically apply them to your purchase in your browser for basically any e-commerce website that has these codes.

That's right.

So it saves you a little money while you're out shopping.

That's right.

And if that had been all that Honey was doing, this wouldn't have been a scandal.

But then there was the second allegation from MegaLag, Kevin, and that was that when people would see products in these influencer videos and they would go to buy them, those shopping carts would often get the creator's affiliate link inserted.

So the creator would then get a kickback, which is of course the whole point: creators like to work with companies that share affiliate links so they can get a little bit of money.

And the allegation is that Honey was going in at the end of this process and replacing the creator's affiliate link with Honey's affiliate link.

So Honey got to keep all of the affiliate revenue and cut the creators out of the process.

So let's just walk through this step by step.

Okay.

So I am watching a major YouTuber's video.

You're watching the Hard Fork channel.

I'm watching the Hard Fork channel.

We don't actually have affiliate links in our videos, but say we did, say we're out there, you know, we've got an online mattress company that we have a promo deal with.

And every time you go and buy a mattress and enter the code hard fork at checkout, you get 10% off.

The allegation was that Honey, in the instances where a user went to go buy a mattress from our affiliate link, if they used Honey in their browser, Honey would find that affiliate link and replace it with the Honey affiliate link.

And so instead of getting a kickback on that sale ourselves, that money would instead go to Honey.

That is exactly right.

And so people are quite mad about this.

There's a channel called Legal Eagle that is suing them, and I know nothing about Legal Eagle, but I have to say, that sounds exactly like what a YouTube channel named Legal Eagle would do, which is to sue one of its advertisers.

When The Verge asked PayPal, by the way, about all of this, PayPal said, quote, honey follows industry rules and practices, including last-click attribution.

And what I take that to mean is that the industry rules and practices are horrible.

And Honey is not doing one thing to try to improve on them in any way.

So, you know, this was really a case where creators took a look at the situation and they said, I don't think so, honey.

And that's a Las Culturistas reference.

And I would just say that I think this is a case of, like, people just really being naive about how the internet works.

You know, Honey was so profitable and popular that PayPal, you know, acquired it.

And YouTubers just really thought they were out there providing these coupon codes to people out of the goodness of their hearts.

And I just want to say, bless your heart, if you thought that's what Honey was about.

YouTubers are telling Honey to mind their own beeswax.

And with that, I'll stop generating.

Okay.

Last one.

LA tech entrepreneur nearly misses flight after getting trapped in RoboTaxi.

Passenger Mike Johns was reportedly riding in an autonomous Waymo car on the way to the Phoenix airport when the vehicle began driving around a parking lot repeatedly, circling eight times as he was on the phone seeking help from the company.

Did you see this video?

I did see this.

This was so wild.

So he initially believed it was a prank, he told The Guardian.

And then he sort of gets on the phone with the support person at Waymo as he's inside this car that is just circling the parking lot.

And it won't let him out.

And as a result, he almost missed his flight.

You know, I think this is every Waymo support person's fantasy: one day you just pick a random Waymo and you just start driving it around in circles in the parking lot with no explanation.

Maybe you're like teaching your kid how to drive or something like that.

No, this would obviously be somewhat disconcerting, but it is also hilarious.

And I have to say, if I made a list of like the 10 worst things that ever happened to me in an Uber, for example, driving around in a circle eight times would not make the top 10.

Yeah, I've almost missed my flight several times because of Uber drivers just thinking they know a better way to the airport.

So, yes, I would say we shouldn't make light of this.

People are placing their lives in Waymo's hands when they get into one of these autonomous cars.

And I did see some people saying, see, this is why I would never trust a self-driving taxi.

And I do think it's worth taking these incidents seriously.

At the same time, no one was hurt.

This was a case of clearly some, like, little software glitch or some issue with the map.

I don't think they ever got to the bottom of what happened here.

Look, here's another way of thinking about it.

Maybe this is a Final Destination situation where, you know, if the Waymo had gotten immediately on the freeway, maybe there would have been a terrible accident.

But something in the training said, No, we need to stay in this parking lot.

We're going to drive around in eight circles, and that will sort of reset the timeline and ensure that Mike makes it safely to the airport.

Something to think about.

Do you know how, like, airport Wi-Fi sometimes makes you watch an ad before you can get the free Wi-Fi?

Yeah.

This is giving me like an evil business idea, which is like, oh, you want to get out of your Waymo and make your flight?

Time to click over to Honey.

Complete your purchase with Honey if you want us to stop circling this parking lot.

God, someone out there is taking notes.

I'm so sorry.

All right, stop generating.

That is HatGPT.

Casey, it is so good to be back with you in the studio doing one of our favorite games.

Hats off to you, Kevin.

And hats off to all of our listeners.


Every Vitamix blender has a story.

I have a friend who's a big cook.

Every time I go to her house, she's making something different with her Vitamix, and I was like, I need that.

To make your perfect smoothie in the morning or to make your base for a minestra verde or potato leek soup.

I can make things with it that I wouldn't be able to make with a regular blender because it does the job of multiple appliances, and it actually has a sleekness to it that I like.

Essential by design, built to last.

Go to Vitamix.com to learn more.

That's Vitamix.com.

Gun injuries are the leading cause of death for children and teens in the United States.

Some people avoid talking about gun violence because they don't think they can make a difference, but every conversation matters.

When it comes to gun violence, we agree on more than we think.

And having productive conversations about gun violence can help protect children and teens.

Learn how to have the conversation at agreetoagree.org.

Brought to you by the Ad Council.

Hard Fork is produced by Whitney Jones and Rachel Cohn.

We're edited this week by Rachel Dry.

We're fact-checked by Caitlin Love.

Today's show was engineered by Chris Wood.

Original music by Elisheba Ittoop, Rowan Niemisto, and Dan Powell.

Our executive producer is Jen Poyant.

Our audience editor is Nell Gallogly.

Video production by Ryan Manning and Chris Schott.

You can watch this whole episode on YouTube at youtube.com/hardfork.

Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

You can email us at hardfork at nytimes.com with something really mean that you can say on Facebook now.
