Bad Apple + The Rise of the AI Empire + Italian Brain Rot

1h 11m
“I have rarely read a judge who is so obviously angry at a tech company”

Transcript

Over the last two decades, the world has witnessed incredible progress.

From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Invesco QQQ, let's rethink possibility.

There are risks when investing in ETFs, including possible loss of money.

ETF risks are similar to those of stocks.

Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at invesco.com.

Invesco Distributors Incorporated.

Well, Casey, have you heard the exciting news this week?

Which news, Kevin?

The Golden Globes are adding a podcast category.

I did not hear that.

Yeah, that just came out.

So yet another award.

We're not going to win.

Well, I don't know about that because if I know one thing about the Golden Globes, it's that until very recently, it seems like you could just bribe them directly to win.

I don't know if that's still true, but we should look into it.

Yeah, what does it cost to win a Golden Globe these days?

I don't know, a few hundred dollars.

Check's in the mail. Wait, unless there are tariffs. Topical!

Okay, now we're definitely not winning. Now we're not winning because I accused them of corruption. Oh, listen, we speak the truth on this podcast. We do.

I don't care what it costs me.

I'm Kevin Roose, a tech columnist at the New York Times.

I'm Casey Newton from Platformer.

And this is Hard Fork.

This week, the scathing court ruling that forced Apple to give up some control over its App Store and could send an executive to jail.

Then, author Karen Hao joins us to discuss her new book on the history of OpenAI and the hidden costs of reaching massive scale.

And finally, it's time for me to teach Kevin about the joys of Italian brain rot.

Mamma mia.

Casey, have you noticed a smell in the air over San Francisco this week?

Many smells in the air, Kevin.

Well, the smell I'm talking about, Casey, is the smell of freedom.

Because in the last week, Apple has lost its iron grip on the iOS App Store thanks to a ruling by a judge.

Commerce is legal in America again, Kevin.

Yes, so we're going to talk about this today.

Apple has been forced to make some big changes to its app store by a lawsuit that was brought by Epic Games, the maker of Fortnite.

A judge ruled last week that Apple had not complied with an earlier injunction.

And we will get into all of that.

But first, I just want you to make the case that this matters to normal people.

Why should the average person with an iPhone care about what Apple's rules for its app store are?

Well, to me, it actually starts with the Kindle app, Kevin.

Lots of people love to read on their phones and tablets.

And I think most people I know in my life have had the experience of opening up the Kindle app or the Amazon app, thinking, I want to buy that book.

And then there's just kind of a big blank spot where you're expecting to see the buy button.

And Apple is the reason for that blank spot.

They charge such a high commission on ebooks that Amazon and other companies cannot profitably sell them.

And so since the dawn of the App Store, in order to buy a book on your phone, something that should be very easy has required you to open up a browser, log into an Amazon account, navigate that whole system.

And Amazon is not alone.

Many, many developers have had to go through similar contortions just to be able to sell their products and still make any kind of profit.

Yeah, this is the so-called Apple tax of up to 30% that developers have to pay when they want to charge for apps or purchases within their apps.

And for many years, Apple has not only levied this tax, but they have also made it impossible for those developers to direct users off of Apple's platforms to say, hey, if you want a better deal on this Spotify subscription or this Netflix subscription or this purchase of an iPhone game, you can actually go on the web and get a better deal there because there we don't have to pay Apple's 30% fee.

That has not been allowed.

And so Epic Games, which makes Fortnite, brought a lawsuit years ago to try to get those policies changed.

And in 2021, a judge in California named Yvonne Gonzalez Rogers ruled that Apple had violated California's law against unfair competition.

She ordered Apple to allow apps to provide users with links to pay developers directly for their services.

And that way they could avoid paying Apple's 30% commission.

And after that ruling, Apple did go and make some changes, but apparently they didn't do a good enough job.

No, and I would say this has been apparent to most people who've been following this.

I think we've talked about this on the show.

Apple did what is often called malicious compliance: doing the absolute least while dragging its feet and kicking and screaming the whole time.

Yeah, so we're going to talk about some of that malicious compliance, but let's just say straight up, this was a scathing opinion.

I have rarely read a judge who is so obviously angry at a tech company for doing what they did.

No, this was the kind of speech that you typically only see on a Bravo reality show.

Yes.

So Judge Gonzalez Rogers not only accused Apple of this kind of malicious compliance, but she also accused them of outright lying to the court under oath.

She referred both Apple and their vice president of finance, Alex Roman, for potential criminal prosecution for perjury.

And we should just read the last paragraph of the order from Judge Gonzalez Rogers, which is truly the mic drop moment.

She writes, quote, Apple willfully chose not to comply with this court's injunction.

It did so with the express intent to create new anti-competitive barriers, which would, by design and effect, maintain a valued revenue stream, a revenue stream previously found to be anti-competitive.

That it thought this court would tolerate such insubordination was a gross miscalculation.

As always, the cover-up made it worse.

For this court, there is no second bite at the apple.

Period.

But you know what?

It kind of was a second bite at the apple, because she bit them the first time and then they didn't do it.

So she had to bite them again.

Yes.

So let's just talk for a second about some of the details that were revealed in this judge's opinion that have come out about how Apple tried to skirt compliance with this earlier 2021 injunction.

Yeah, well, and this was well known to all of the developers, but if you wanted to use an external sales system in the App Store, you still had to pay Apple a commission.

And that commission was 27%, just 3% less than what they were already paying Apple.

And of course, these companies have to pay the payment provider.

So basically, Apple created a system where you were actively disadvantaged in multiple ways from trying to operate outside of the app store.

Yes.

So I knew that Apple was charging a commission for apps that would send people out. Like, if you're Spotify and you want people to be able to subscribe to your app on the internet, pay a lower price, and pay you directly rather than going through Apple, you could do that under Apple's sort of revised rules. But Apple would actually charge you a 27% commission, which, by the time you added credit card fees on top of that, would probably be more than the 30% that they would charge you.

So this was clearly a case of Apple trying to say, well, go ahead and use this other system, but it's not actually going to save you any money.

No.

And what I did not realize until I read Judge Gonzalez Rogers' opinion here was that Apple would not just collect those commissions if you went directly from an iOS app to the web to buy a subscription or a service, but if you went a week later, they would be able to track that you had gone to the web from the iOS app and they would still charge the developer that commission.

Yeah.

It was absolutely outrageous.

It was insane.

And it was also not the only thing that Apple did to try to dissuade iOS users from going to external links to buy goods and services outside of their payment system.

Casey, what is a scare screen and how did Apple use this?

The scare screen was a pop-up that you would see when a user did actually try to click out of the app store to make a purchase using an external system.

And while these were not the exact words, Kevin, here was the vibe.

Hey, loser, looks like you're trying to do something stupid.

You're probably going to die.

Do you want to try it anyway?

And believe it or not, Kevin, when people saw a message that had that vibe, most of them just chose not to click it.

Yeah.

And what was so amazing about this was that Apple, I guess, had tried to protect some of its private company communications from being seen by the judge in this case by claiming some sort of attorney-client privilege.

But the judge said, no, no, no, out with it.

Let's see those emails.

And so we have, in this opinion, lots of emails between Apple executives, including Tim Cook, the CEO, talking about the very specific language to put on this scare screen and how to make it even scarier so that users would be less inclined to go outside of Apple's ecosystem and make a purchase.

Yes, and these internal documents showed that the company would lose minimal revenue or no revenue at all from this, right?

That they built a system that was maximally designed to protect their revenue, which was contrary to the judge's order, which she wrote in the spirit of increasing competition and other companies' revenue.

Yeah, so to put it mildly, Judge Gonzalez Rogers did not find any of this charming in the least.

And she also directly accused at least one Apple executive of lying outright under oath about what it had done.

Casey, explain the perjury charge here.

Yeah, so this perjury charge was leveled against Alex Roman, the vice president of finance at Apple.

And among other things, she focuses on this moment where he testifies that until January 16th, 2024, which is when Apple's revised system went into effect, Apple had no idea what fee it would impose on purchases that linked out of the App Store.

He testified that the decision to impose a 27% fee was made that day, which is just like so obviously untrue.

And of course, during the legal proceedings, business documents revealed that the main components of the plan were determined in July of 2023.

So basically, this guy got caught red-handed, and the judge is going to punish him for it.

Yeah.

And so, effective immediately, according to Judge Gonzalez Rogers' order, Apple has to drop these commissions, these 27% fees on these external links.

And Apple, as of last week, had officially updated its App Store guidelines to allow those links out of the app in the U.S.

But Casey, what are the implications of this?

And how are other developers that put stuff on iPhones reacting?

So developers are reacting by implementing the links that they've always wanted to have.

So in the Kindle app, for example, you will now see a Get Book button.

You'll tap it and it'll kick you out immediately into a browser where you can complete a purchase.

Spotify, Patreon are also doing something like this.

This is not a perfect solution.

Like, you can't actually just buy a book inside the Kindle app yet, for reasons that actually aren't entirely clear to me.

Maybe we'll get there.

But on the whole, we are essentially removing the restrictions that prevent outside businesses from communicating with their customers, telling them about deals, telling them about their websites.

Just these sort of like very onerous restrictions on the speech of these other companies have been wiped out.

Yes.

And I think that gets to why these arcane and somewhat small-seeming changes to the rules governing Apple's App Store really are important.

Apple has been for many years this sort of godlike gatekeeper on any company that wants to make things for the billion-plus iPhones out there.

They have made extremely strict and specific rules about how developers can and can't build their apps and sell products and services to customers.

They have effectively been a landlord over the entire digital services economy.

And I think judging from this opinion, they have really abused that power and now they are getting slapped on the wrist for it.

Yeah.

And I think it has been to their own detriment, Kevin.

You know, Apple's view is that these developers should feel lucky that they get to sell in the app store at all, when in reality, a big reason that we buy iPhones is because of the apps that are there.

If you took the Amazon app and the Spotify app and the Patreon app and all these other apps off of the iPhone, people would start considering alternatives, right?

And so I think that the balance between the developers and Apple had just gotten completely skewed.

And Apple has not been recognizing the value of what those developers are bringing to iOS.

Yeah.

So you think this ruling is a good thing?

I think it is absolutely a good thing.

I think it has been long overdue.

And I hope it is upheld after Apple appeals, which it is going to do.

But what do you think?

Yeah, I mean, I think it's an open question.

So Apple's defense of these App Store rules has always been some version of like, we're protecting our customers, right?

If we let people, you know, sideload apps onto the iPhone in a way other than through the App Store,

people will put all kinds of dangerous malware and stuff on the iPhone and you'll be sorry.

If we let people pay for things on external websites, then people will run all kinds of scams and people will be taken advantage of.

And so by implementing these rules, we're really protecting our customers.

It's for your own benefit, essentially.

And I think it'll be really interesting to see if when these restrictions are gone, people actually do say, we wish that Apple were taking a more active role here.

We want some of these restrictions back.

Or if the net result is just going to be that people have more choice and they pay a little less for stuff because the developers making that stuff are not having to pay 30% of their revenue to Apple.

Well, I think that's going to be the case.

You know, this whole argument that, you know, Apple maintains this pristine vigilant control over the app store, I think has always been mostly a fantasy.

You know, think about in the early days of like ChatGPT, before there was an app, you know, you would go onto the app store and you would search for ChatGPT.

You would see a dozen plus apps that were all just clearly misrepresenting themselves as OpenAI, that were some of the most revenue generating apps in the entire app store.

Apple could have stepped in to prevent that.

They didn't.

I'll give you a more recent example.

One of the best video games of the year is called Blue Prince, P-R-I-N-C-E.

All of the gaming bloggers love it.

I've been playing it.

I've been loving it myself.

The day it came out, somebody just ripped it off and just uploaded it onto the App Store and was selling it for, I don't know, 10 bucks or something.

Why didn't Apple know that?

They are not paying the attention to the App Store that they tell you they are paying.

Yeah.

I mean, to me, the most interesting part of this, as with a lot of these antitrust trials going on right now, was just seeing the internal communications at these companies.

And, you know, in this ruling, there are all these like fascinating excerpts from these emails and messages between Apple executives sort of talking about the various plans that they had to sort of circumvent this injunction and charge this 27% fee.

They had all these code names like Project Michigan or Project Wisconsin, so that they could talk about this stuff in a way that would not be obvious that they were doing some sort of price fixing.

And it just makes you realize these giant tech monopolies did not end up that way by accident, right?

They have had to work very hard for a very long time to prevent competition, to keep their market power and their dominance.

And I don't know, man, there's just something really depressing about that.

Like these are companies that used to succeed by making good things that people loved.

And in some respects, they still do that, but they also spend just a ton of time.

Their top executives are in these meetings talking about whether the fee should be 27% or some other number.

And it just makes you realize like they have really lost the plot here.

Absolutely.

Well, let me try to cheer you up a little bit then, Kevin, because I think there actually is a negative consequence for these folks of just growing their profits so big on the basis of this extremely easy money where, you know, they just make every developer pay this very high rent to them.

And that is Apple has been missing the boat on next generation technologies.

We know that they invested billions of dollars into a car project that they could never figure out and had to abandon, right?

We know that they are struggling to figure out how to do anything with AI and have had to walk back a bunch of claims recently in a really embarrassing way.

We know that the Vision Pro, their most recent hardware initiative, is not taking off, in part because developers do not want to make apps for it, because they have not been able to get rich making apps for it, right?

So all of this stuff is just adding up in a way where Apple's decisions really are coming back to haunt it.

And while it remains a giant and I'm sure will for a very long time, we are starting to see some little cracks in its armor.

Yes.

And yet Apple just reported its earnings for the last quarter.

It made $95.4 billion in revenue, up 5% year over year.

So despite the fact that they are missing all of these new innovations and trends, that they're late on generative AI, and that they haven't succeeded with the Vision Pro in the way that they had hoped, they are still doing quite well as a company.

So I don't know that this is actually coming back to bite them in the way that we might hope it would.

Well, I mean, let's see what happens.

You know, the idea behind these rules was never to make Apple a tiny company that was struggling to get by.

It was just to get them to share a very small portion of the wealth with a large number of developers.

Like, you know, Apple has done a ton of incredible innovative things.

They deserve to be rewarded for that.

They deserve to take some sort of commission from the apps in the app store, right?

But this has been about trying to create a more level playing field for other developers out there.

And, you know, if the end result of this is that Apple is still pretty rich and profitable, I think that will actually make the point that the judge is making, which is that there is no need for Apple to engage in the sort of shenanigans it's been up to.

Yeah, I think the best outcome possible here is that all the big developers that can afford to develop their own payment systems for their apps, or send people to external websites to buy things, do that, and they start charging way less than 27% for it, and that Apple is ultimately forced to improve its own payment system, to maybe reduce its fees, to, in other words, compete. That is what all of this is about: forcing Apple, a company that has not had to compete for the affections of iOS developers in a long time, to finally step up and do something different.

You know, keep in mind, even Microsoft, which was sued for anti-competitive behavior, you know, back in the early 2000s, they never said we want to take a 30% cut of every software program sold on Windows.

They actually left a lot of money on the table and it helped that ecosystem to thrive, right?

I would like to believe something similar could happen here.

When we come back, we'll talk to author Karen Hao about her new book on OpenAI and the costs of building such big models.


Can your software engineering team plan, track, and ship faster?

Monday Dev says yes.

Custom workflows, AI-powered context, and IDE-friendly integrations.

No admin bottlenecks, no BS.

Try it free at monday.com/dev.

Huge savings on Dell AI PCs with Intel Core Ultra processors are here.

And they are newly designed to help you do more faster.

They can generate code, edit images, multitask without lag, draft emails, summarize documents, create live translations, and even extend your battery life.

That's the power of Dell AI with Intel inside.

Upgrade today by visiting dell.com/deals.

Well, Casey, it's a day ending in Y, so there's some OpenAI drama making the rounds this week.

Yeah, although I don't know if this is so much drama as the company is trying to retreat from drama, Kevin.

Yes, so OpenAI announced on Monday of this week that it was no longer trying to get out from under the control of its nonprofit board.

That was something that a lot of people, including Elon Musk, had objected to.

A lot of former OpenAI employees and others in the AI field had said, hey, wait a minute, you can't do that.

You've still got to have this nonprofit board controlling you.

And OpenAI, after hearing from some attorneys general that they were not happy about this plan, has retreated.

So what is the new plan, Casey, and how is it different from the old plan?

So, the old plan was basically the nonprofit is going to no longer have any control over the for-profit enterprise.

It's going to go be a separate thing, it's going to invest in various AI-related causes and philanthropies.

Under the new plan, the nonprofit is going to retain control over the for-profit.

So, basically, the status quo is going to be in effect, Kevin, except for a couple of key changes.

One is that what is now a limited liability company is going to become what they call a public benefit corporation.

And a PBC, as they are called, has a responsibility not just to think about shareholders like Microsoft and SoftBank and everybody else who owns a chunk of OpenAI, but also to think about the general public, right?

So that's sort of one important idea that's there.

The other big idea is that the nonprofit is currently set to get some unlimited amount of profits if OpenAI does eventually become a trillion-dollar company.

That's not going to be the case anymore.

Under this new model, the for-profit is going to give some stake to the nonprofit, but after that, it's going to be a very normal tech company.

Everybody who owns shares, all of the employees, they can get unlimited upside.

And the more money that OpenAI makes, the more money that they can make too.

Right.

So these profit caps that OpenAI had previously had in place where investors like Microsoft were sort of limited to earning some multiple of the amount that they put in and no more, those caps are now going away.

Yeah, they put on their thinking caps and they said, we're getting rid of the profit caps.

Well, it just goes to your point that you've been making on this show for years now, which is that OpenAI is a very weird company.

Yes, and I have to say, when Sam Altman wrote a letter to employees this week, the first sentence of the letter was, quote, OpenAI is not a normal company and never will be.

And I felt so seen.

Somebody's been listening to Hard Fork.

And in other OpenAI corporate news, the company announced late Wednesday that its board member, Fidji Simo, would leave her job as CEO of Instacart to become the company's new CEO of Applications, overseeing its business and product divisions.

So we are not going to do a whole segment about the OpenAI corporate conversion story this week.

Because we love you too much.

Yeah, we love our listeners too much.

We would not subject you to that.

But we are going to talk about it, and many other things related to OpenAI, with Karen Hao.

Karen Hao is a reporter who has been covering OpenAI and the AI industry for years now.

And she has a book that's coming out later this month called Empire of AI, where she writes about Sam Altman and OpenAI and what she calls the dreams and nightmares of this very strange company.

Yeah, and you know, but by the way, I think she should already start working on a sequel and call it The Empire Strikes Back.

Something to think about.

Yes, and this is a very buzzy book.

People in Silicon Valley and at the AI companies have been sort of nervously waiting for it.

Karen is very unsparing in her descriptions of AI companies and the AI industry.

I would not say it is a book that the AI industry will think is flattering, but it's an important conversation to have because I think it's got a lot of people talking.

Absolutely.

And before we do that, Kevin, do we have anything we want to disclose?

Well, let me make mine first.

My boyfriend works at Anthropic.

Kevin, you're coming out.

I'm so happy for you.

No, I work at the New York Times company, which is suing OpenAI and Microsoft for alleged copyright violation.

Interesting.

And my boyfriend works at Anthropic.

Yours too?

Yes.

Anyways, let's bring in Karen.

Karen Hao, welcome to Hard Fork.

Thanks so much for having me.

So I imagine your book is sitting there behind you on the shelf.

It's all printed up.

It's ready to go.

And then this very week, OpenAI puts out a story saying, hey, maybe we're going to change our structure around again.

Why the heck not?

So what's it like trying to write a self-contained book about a company that just never stops making news?

Tiring.

Yeah, but you know, like, honestly, people have been asking me this question a lot: like, how do you even write a book at book scale?

Because usually it's like months on end before it goes to print.

And I think sometimes the news is actually a little bit distracting in that, yes, there are a lot of changes happening.

Yes, things are evolving really fast, but there are some fundamentals that are kind of ever-present.

And so I tried to keep the book focused on the things that don't change so much.

Yeah, well, and among other things, this book is a history of OpenAI.

So maybe let's go back all the way to the beginning.

What was this company like when you started writing about it?

So I started writing about OpenAI in 2019, and I went to the office to embed with them for three days as the first journalist to profile what had just become a newly minted company.

So right before I started covering it, it was still structured as a nonprofit, and it had this explicit goal that it should be a counterbalance to for-profit companies.

And it sort of became clear to me during my time at the company then that the idea that this was a bastion of idealism and transparency and was going to be totally open and share all of its technologies to the world and not at all be beholden to any kind of commercialization was already going away.

And there were a lot of kind of early signs of that that I picked up on while I was there.

Just there was a lot of secrecy for a company that purported to be incredibly transparent.

And there was a lot of competitiveness, which to me suggested that, like, if you're going to be competitive and you want to specifically reach AGI first, you are going to have some really hard trade-offs with this transparency mission and this, like, open-everything-up-to-the-public mission.

So I've talked to some people at OpenAI who have said that they felt quite burned by some of your early coverage of them, like they were expecting something different than what they got.

And you write in the book that after you published your story on them, they stopped talking to you for three years.

I'm just curious, like what you think surprised them about your coverage or if they should have been surprised given some of the questions you were asking.

I think they were surprised because they gave me a lot of access, and they thought that I would sort of adopt a lot of the narrative that they were giving me.

And to be honest, like, I kind of came in without really a lot of expectations. It was actually my first-ever company profile, and I was going in kind of just with an open mind of, okay, like, this company presents itself as this, like, ethical lighthouse.

And, like, let's try to understand a little bit: how do they organize themselves, and how do they try to achieve the goals that they've set out to do?

And I just found that they couldn't quite articulate what their vision was, what their plan was, what AGI was.

And I think the prioritization of the problems that they were saying they were focusing on just didn't quite feel right to me.

Like, I pointed out to them that there were environmental issues that were starting to become more and more of a concern as AI models were scaling larger and larger.

And, you know, Ilya said to me, he was like, yes, of course, that's a concern, but when we get to AGI, climate change will be solved.

And that was just like, okay, that's kind of a, you know, it's like, it's like a cop-out card to just be like, well, when we get to the thing that we don't know how to define, all the problems that we might have created along the way will just like magically disappear.

And so that's when I started being like, I think we need to scrutinize this company more and just be more cautious about taking all the things that they say at face value.

Right.

I mean, it sort of sounds like a microcosm of the arguments that have taken place for the last few years among the AI safety crowd and the AI ethics crowd.

That, you know, the AI safety people, they're worried about existential risk and bioweapons and

malicious use of these systems.

And the AI ethics crowd are much more worried about like issues like bias and environmental concerns and things like that.

So I'm just, I want to make sure I'm characterizing it fairly.

You yourself are coming from more of the perspective of the AI ethics crowd and that you think we should be paying more attention to immediate harms of these models rather than trying to avert some future harms.

Yeah, so I would call it, like, the AI accountability crowd. And the reason why I use the term accountability instead of ethics is because I think accountability acknowledges that there's a huge power dynamic happening here, where, like, the developers of these technologies have an extraordinary amount of power that they've accrued and amassed, and are continuing to accrue and amass, based on this narrative that they need all of these resources to build so-called AGI, right?

So, I definitely come from that perspective.

And I think that if we take seriously the present-day harms of what is happening now, that will help us not get to future harms because we will be more thoughtful about how we develop AI systems today so that they don't end up having like wild detrimental effects in the future.

And I think, like, this idea that we don't really know how bad AGI might be, or, like, what the catastrophic scenarios are,

is not quite right in that we have already so much evidence right now of like how AI is affecting people in society.

And also, like, AI is harming people literally right now.

So, like, we need to address that.

We need to document that.

We need to change that.

One of the central arguments of your book is that OpenAI, and the AI industry in general, has become an empire.

It's the title of your book, Empire of AI, and that it has done so by exploiting people and resources around the world for its own benefit.

Sketch that argument for us.

Yeah.

So if we think about empires of old, the long, centuries-long history of European colonialism, they effectively went around the world, laid claim to resources that were not their own, but they designed rules that suggested that they suddenly were.

They exploited a lot of labor, as in, they didn't pay, or paid extremely little for, the labor that ultimately helped to fortify the empire.

And all of that like resource extraction and labor exploitation went and accrued benefits to the empire.

And they did this all under like a justification of a civilizing mission.

They're ultimately doing this to bring progress and modernity to the rest of the world.

And we're literally seeing empires of AI effectively do the same thing.

And what I say in the book is like they are not as overtly violent as empires of old.

We've had 150 years of like social mores and progress.

So there isn't that kind of overt violence today.

But they are doing the same thing of laying claim to resources that are not their own.

That includes like the labor of a lot of artists and a lot of writers.

That includes all the data that people have put online, which they've just scraped into these internet-scale data sets.

That includes exploiting

labor of the people who they contract to help clean their models and annotate the data that goes into their models.

That also includes, like, labor exploitation in the sense of the technologies they are building: OpenAI literally says their definition of AGI is to create AI systems that will be able to outperform most humans at economically valuable work.

That is a labor automation machine.

So they're also exploiting labor in the sense that they're creating these AI systems that will dramatically make it more difficult for workers to kind of demand rights.

And they're doing it under this civilizing mission where they're saying, like, ultimately, this is for the benefit of all of humanity.

But what we're seeing is that's, you know, not true.

When you go far away from Silicon Valley, when you go to places like the Global South, when you go to rural communities, impoverished communities, marginalized communities, they really feel, like, the brunt of this AI development, this extraction and this exploitation.

And they're not at all receiving any of the supposed benefits of this accelerating AI quote-unquote progress.

Let's talk about some of that

extraction of natural resources.

This is one of the things that your book gets into that I think doesn't get discussed a lot in the context of AI.

Tell us about some of your reporting and what you saw.

Yeah, so I ended up spending a lot of time in Latin America and also in Arizona to kind of understand the just sheer amount of computational infrastructure that is now being built to support the generative AI paradigm and the quest to AGI.

And these are, you know, massive data centers and supercomputers that are being plopped into communities that initially accept this kind of infrastructure either because they don't know about it, because companies enter these communities as, like, shell companies and aren't transparent about actually putting this infrastructure there,

or they're sort of persuaded into it because there seems to be like a really positive economic case where a company comes in and says, we're going to give you like hundreds of millions of dollars to build this data center here and it's going to create a bunch of jobs.

And what they don't say is that, like, the jobs are not permanent.

They're talking about construction jobs.

And once the construction jobs are over, there's actually not that many jobs for running the data center.

And these data centers, they consume an enormous amount of power and they consume an enormous amount of water because they need to be cooled when they're training, you know, these models 24/7.

And this infrastructure is permanent.

So once it gets put there, even if a city doesn't have that kind of energy anymore or the water to provide to these data centers, they can't really roll it back.

And in Chile, I was, like, with activists who had been fighting tooth and nail to try and keep these data centers from literally taking all of their drinking water.

And these companies were also entering communities in Uruguay, where I was spending time as well, during a drought, where people were literally drinking bottled water if they could afford it, or drinking contaminated water if they could not, because there was not enough fresh drinking water to go around. And that was when Google decided to build a data center there. So when I say that the current AI development paradigm is creating a lot of harms at a mass scale,

Like that's the kind of stuff that I'm referring to.

Yeah.

I mean, part of empire building is about exerting political power, right?

I'm curious why the governments in Chile and Uruguay are okay with this.

Like, like, what is the mechanism through which they're deciding to grant all of this power to these AI companies?

A lot of governments learn that they have to serve the global north if they want to get more investment and more jobs and more opportunity into their country.

And in the AI case, it ends up not being a good bargain, but a lot of them don't know that up front.

And so they think that if they can open up their land, their water, their energy to these companies, that somehow they will get more investment, more high quality, like white collar jobs in the future.

Like I was talking with politicians who said that they hoped that if they allowed a data center, then eventually, you know, Microsoft would bring in like an office with like software engineering jobs nearby their data center.

And so that's kind of the reason why they end up doing this.

And Chile has like a really interesting history in particular in that they have dealt with just like centuries of extraction.

Most recently, they've become like a huge provider of lithium for the lithium boom.

And so they sort of

have developed this mentality over time that like this is what they do.

Like they open up their natural resources to

these multinationals and that somehow this will convert into economic growth, broad-based economic growth through people.

But unfortunately, it doesn't really.

Well, I want to push back on that a little bit because I think if I'm being like sort of trying to be sympathetic to the people, the politicians, the communities that are accepting this stuff, I think there's a case to be made that it is actually helping them.

Maybe not in terms of direct GDP or economic growth.

But like the World Bank recently did a randomized control trial with students in Nigeria who were given access to GPT-4 for AI-assisted tutoring and found that it boosted their test scores significantly and that the gains were especially big among girls who were behind in their classes.

So like, as I'm hearing you talk about the exploitation taking place, I'm thinking, well, maybe there is something that they're getting in return.

Maybe there is something worth it to them.

Maybe this technology can, in some instances, help level the playing field between poor countries in the global south and places like America.

And maybe there's a deal to be had where it's like, okay, you wanna like extract our lithium, you wanna build a data center in our country?

Sure, but you have to give all of our students free access to ChatGPT Pro or something like that.

Is there any sort of fair exchange that you can imagine that would help these people?

So I think this question is kind of premised on the idea that like we have to make these trade-offs in order to get that kind of gain.

Like

we have to give you our lithium in order to like have some kind of educational boost from ChatGPT.

And like that's kind of a premise that I just don't agree with.

I think that there are ways to develop AI that gives you the gains without this kind of extraction.

So like the reason why I call it Empire of AI in the book is in part to point out that this is not the only pathway to AI development.

These companies have chosen a very particular pathway of AI development that is predicated on absolutely massive amounts of scale, massive amounts of resources, massive amounts of data.

Well, that's how you get the models to be general and good and to be able to work in all kinds of different languages.

Is there another path? You're suggesting there's another path?

Like, what is the path other than through scale?

So we don't necessarily know what it is yet, but it isn't being explored at all.

And there are already signs that there can be other ways to get to these more general capabilities without that scale.

DeepSeek is a really interesting example of this.

I think there are also a lot of problems with DeepSeek, but DeepSeek demonstrated that, even in a resource-constrained environment, you can actually develop models that have more generality.

And so, I mean, this is what science is.

Like, you have to discover, kind of, the frontiers of what we don't know yet.

And the industry has fallen into this very specific scaling paradigm that they know works, but it has so many externalities with it that it's ultimately not actually achieving what OpenAI says its mission is: to benefit all of humanity.

And so, like, if we constrained the problem to think, like, how can we get more positives out of this technology without having all of that negative harm, I think there would actually be more innovation that would come out, like true innovation that would come out that would be more beneficial.

Karen, one thing that is very clear in your book is that you are not a fan of the big general purpose AI models.

You call them monstrosities built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources.

Is there any way for people to engage ethically with these models in your view, or is it all fruit from a poison tree?

I think the way that they're being developed right now,

me personally, I do think that it's fruit from a poison tree.

Do you use ChatGPT at all?

Not really.

No.

Have you ever?

Yes, I have.

Did you?

I'm just curious because, like, writing a book, I'm doing it now, and I'm finding a lot of uses for AI.

And I'm just curious, like, this is a very well, you know, thoroughly researched book.

Was it helpful?

Were any AI tools used in the creation of this book?

So no generative AI tools, but I did use predictive AI tools.

So, I used Google reverse image search to try and figure out the price of OpenAI's furniture because they had some like really nice chairs.

And I was trying to explain like

the level of upgrade that happened when they went from like a nonprofit in one office to this new like Microsoft-backed

capped profit entity in this other office.

And when I like ran the reverse image search through, it came up.

It was, like, Brazilian designer chairs that were, like, $10,000 each. Um, yeah. So, I mean, like, I do use predictive AI, but I did not use generative AI for this book, other than to just, like, understand how the tool works and, like, test its new features. But I, like, never used it for, like, getting research or organizing thoughts or anything like that.

Because at the end of the day, I'm writing a book about OpenAI. Like, I'm not going to willingly hand a bunch of my data about, like, what I'm thinking about and what I'm researching to OpenAI in the process.

And that's where you and Kevin are different.

So I want you to, I want you guys to interact about this a little bit because Karen, let me tell you, if Kevin can use generative AI to do something, he's doing it.

Okay.

Like there's gonna be a lot of generative AI that's going into the making of this book you're writing, right?

Well, in the research phase, because I found that it's not that good at like composing.

Right.

And, but it is, like, super, super useful for doing, like, give me a history of the term AGI and where it originated, and who were the first people to use it, and how it evolved over the years, and how has every lab defined it in all of their various publications.

Like, that kind of thing would have taken me weeks before, and now it's like minutes.

Right.

So, Karen, make your case that Kevin should stop doing that.

So, I'm not going to make that case, but what I'm going to say is

this is like the perfect use case for these tools because

like, these companies are constantly testing their tools on, like, AI topics. That is, like, the thing that they stress test their tools on. And so if there were any topic in the world that these chatbots would be particularly good at talking about, it would be AI and AGI. And so, Kevin, like, move forward. Fire away.

No, but so here's another thing that I wanted to ask you, Karen, because I think this is another place where we sort of disagree. You are very skeptical about the claims that the AI labs are making about AI safety or the concept of AGI.

And I guess I'm trying to understand that argument.

My view on these folks is that they are sincere, that they are sincere when they worry about AI posing risks to humanity.

I think that's why they're investing tons of money into AI safety and trying to work on things like interpretability, figuring out how these language models work.

Is your view that they are sincere but just wrong about AI being an existential threat possibly, or that they don't believe it at all and that they're just kind of using AI safety as a smokescreen or an excuse for, you know, sort of raising money and continuing to build their models?

I think it totally depends on who you're talking about.

So in general, I think

there are a lot of people that are incredibly sincere about believing in these problems.

I don't have any doubt about that.

I talked with a lot of them for my book.

And, you know, like I talked to people who were like, their voice was quivering while they were telling me about being really, really scared about the demise of humanity.

Like, you know, like that, that's a sincere belief and sincere reaction.

I think there are other people who

pretend that they believe in this as, you know, the smokescreen.

But I think by and large, like, a lot of these people do truly believe, and

their heart is where their mouth is, and they are trying to do good by the world.

My critique is that

this

particular worldview is just really narrow.

It's just really, really narrow and like a product of like being in Silicon Valley, which is like one of the wealthiest epicenters of one of the wealthiest countries in the world.

Like, of course, you are going to have the luxury to think about these like really far off problems that don't have to do with things that are literally harming and affecting people all around the world today.

And it's not that I don't think we should focus on any research to these problems.

Like, that's not what I'm saying.

But I think the sheer amount of resources that are going to prioritizing these problems over present-day problems is just, like, not at all proportional to what the problem landscape literally is in reality.

Yeah.

So when people like Sam Altman or Dario Amodei or Demis Hassabis say that we are, you know, a couple years away from something like AGI or even superintelligence.

Your view is that that just has no reflection on reality or that we should cross that bridge when we come to it and pay attention to the stuff that we can actually observe in the world now?

So

I think it like

It also depends on how they define AGI.

Like when OpenAI says that they are two years away from potentially automating away most labor, I could believe that they're on a path to systems that would appear to do so in two years and then lead to a lot of company executives deciding to hire the AI instead of hiring workers.

If we're talking about AGI in another definition, then I mean, it would have to be like on a case-by-case, like, how are they defining AGI and what is their time scale?

But

do I think that OpenAI has high conviction to try and create a labor-automating machine, and that they have the resources to start making a dent in labor opportunities for people?

Like, yes, I do.

Well, maybe let's have the kind of how do you define AGI conversation.

It's come up a few times during this conversation.

And I know there are a lot of folks who regularly remark that the definition of AGI seems really sort of amorphous and slippery to them.

You know, I have to say, like, it doesn't feel that amorphous to me.

I work with an assistant.

My assistant does customer service stuff, scheduling stuff, a little bit of sales.

If there was a tool that I could use and pay a subscription to that did those things on my behalf, I think I would say, yeah, I think that feels like AGI.

So that's kind of how I conceive of it in my mind, but I know there are so many folks out there who say, no, no, no, no, no, the definition is always changing and slippery.

And, you know, and this is a really big problem.

So, Karen, how do you feel about it?

I mean, what you are describing, like, yeah, like, if you want to define that as AGI, that's totally fine.

But I don't think that's necessarily how the companies are defining AGI, right?

Like, they are not defining it well.

But when they need to raise capital, when they need to kind of rally public support, when they need to get in front of Congress and try and

ward off regulation, the things that they say are: one day AGI will solve climate change, one day it will cure cancer.

Like, I think that the AGI system that you're describing is not exactly the AGI system that they are sketching out in that kind of broad sweeping vision that they're trying to use as justification to continue doing what they're doing.

Right.

There's a lot of hand waving that goes on when somebody says that some future AI technology is going to cure cancer.

It's leaving out many, many steps.

Well, but in partial defense of the labs here, I think, like, we have seen things like AlphaFold, which was Google DeepMind's system that solved the protein folding problem, essentially. And that was not something that they thought was going to be the end of their progress toward scientific cures for disease; that was sort of the beginning stages. And actually, if you talk to biomedical researchers, they say that was a huge deal and really did make it possible to do all kinds of new drug discoveries. And I guess that part feels a little separate to me from the AGI discussion. But it does feel like the quest for AGI, the sort of scaling up of these models, the attempt to make them more general, there have just been good things that fall out of that process.

And also some externalities that you mentioned, Karen.

But I'm just curious if you see any positive applications of the scaling hypothesis and the sort of dominant paradigm.

I don't think I have come across a positive application that I think justifies the amount of cost going into it.

And I think, to return to DeepMind's AlphaFold,

that was not a general intelligence system.

That was a task-specific system, right?

Which I advocate for.

Like, I think we need more task-specific AI systems where we give them a well-scoped problem, we curate the data, we then, you know, train the model, and then it does remarkable things.

Like, I totally agree that AlphaFold was a remarkable achievement.

And I don't think that that has much correlation with what

AGI labs are now doing with the scaling paradigm.

That's not, those are like two perpendicular tracks to me.

Yeah, the, I mean, I think it's clear that the hype is far ahead of the results right now.

We have heard a lot more about AGI curing cancer than we've actually seen progress toward curing cancer in the moment of this recording.

Now, some people believe that's going to change very soon, but I can understand why if you read a lot of headlines and you don't see cancer being cured yet, that you'd have some questions.

Yeah, and I think the other thing here is,

I mean, these companies are continuing to say that they're AGI labs, that they're pursuing AGI, but, like, they've dramatically shifted.

And now they're really just focused on like building products and services that they can charge lots of money for.

And, like, all of the maneuvering that they've tried to do to make it seem like that is exactly the same path as what they're saying is AGI?

Like,

come on.

Like, that's probably not what's happening here.

And, like, ultimately, you know, in, like, the last episode, you guys were talking about AI flattery and, like, the debacle around that, and how these companies are turning to maximizing for engagement, because this is the thing that they've realized gets them a lot of users, gets them more cash flow.

And like, that is ultimately what they're now building.

So I think what they're saying they're building and what they're building is also starting to diverge in the kind of new

era, I guess, where they need to be able to justify like a $40 billion raise.

Yeah.

Well, let's sort of bring it home here by talking about one thing that I think all three of us agree on.

You write that the most urgent question of our generation is how do we govern artificial intelligence?

I agree with you on that front, Karen.

And so let me ask, how do we govern artificial intelligence?

Please help us.

Democratically.

Yes.

So what does a more democratic way of governing AI look like?

So to me, it's like you consider the supply chain of AI development.

You have data, you have compute, you have models, you have applications.

I think at every single stage of that supply chain, there should be input from people, not just the companies.

Like, when companies decide that they're going to curate a data set to train on, there should be people that can opt in and opt out of that data set.

Not just for their own data, but maybe there are consortiums that are debating, like, what kind of publicly accessible data should or should not go into these tools.

There should be like debates about content moderation of the data, because

as I write in the book, there were a lot of moments in OpenAI's history where they kind of just debated internally, like, should we keep in

pornographic images in the data set or not?

And then they just decided it on the fly.

Like that to me is not democratic governance.

Like we should be having open public discourse about those types of decisions.

When it comes to compute, like there should be an ability for communities to even know that data centers are coming in to their communities.

And they should then be able to go to a city council meeting and actually talk with

their city council, talk with the companies about whether or not they want the data center and have like good, solid information about like what actually the long-term trajectory of hosting a data center would look like.

And when it comes to like the labor, the contract workers that are working for AI, like there should be,

you know, they should follow international human rights norms.

Because a lot of the conditions in which these workers are working do not follow international human rights norms.

So I think that's the way that I think about like all of these different stages all need to be democratic.

And when OpenAI says, like, we're going to develop democratic AI simply because we're an American company, like, that's not how it works.

Everyone actually has to participate, have agency, have a say to shape and change what is and isn't developed and how.

Well, Karen, this has been a fascinating conversation.

Really appreciate your time.

And

thanks.

Thank you so much for having me.

When we come back:

Turn your brain off.

It's time to talk about Italian brain rot.

Ooh, sounds fancy.


Can your software engineering team plan, track, and ship faster?

Monday Dev says yes.

With Monday Dev, you get fully customizable workflows, AI-powered context, and integrations that work within your IDE.

No more admin bottlenecks, no add-ons, no BS.

Just a frictionless platform built for developers.

Try it for free at monday.com/dev.

You'll love it.

Huge savings on Dell AI PCs with Intel Core Ultra processors are here, and they are newly designed to help you do more faster.

They can generate code, edit images, multitask without lag, draft emails, summarize documents, create live translations, and even extend your battery life.

That's the power of Dell AI with Intel inside.

Upgrade today by visiting dell.com/deals.

Kevin, if I were to start referring to you as Kavanini Russolini, what would that mean to you?

I would think it was some sort of mockery of my

Italian heritage.

I would never.

I would never.

What about Tralalero Tralala?

You know him?

No, I think you're having a stroke.

What about Bombardiro Crocodilo?

Okay, now this is just getting ridiculous.

Ballerina Cappuccina?

Nope.

All right.

Listen, if you or someone you love recognizes any of these terms, Kevin, you may be suffering from a case of Italian brain rot.

I'm almost afraid to ask.

I have not been following this story, although I know you were very excited to tell me about it today.

What is going on with Italian brain rot?

Do not be afraid of Italian brain rot, Kevin.

If you have been on TikTok or Instagram or YouTube over the past many weeks, you may have encountered this unique form of AI-enabled insanity.

Now, typically, I know that brain rot refers to this kind of feeling of, I don't know, cognitive decline related to excessive use of social media or something like that.

People on TikTok are always complaining about their brain rot.

But what is Italian brain rot?

Well, if you want to catch up on this, I highly recommend a story in the Times by Alisha Haridasani Gupta, which kind of catches you up.

This stuff started to emerge in January, and it really is an AI phenomenon.

You know, recently, Kevin, we've seen advances in some of these text-to-video generators.

So you might be able to, for example, create a short clip of a little coffee cup that is also a ballerina.

Well, congratulations.

You just invented Ballerina Cappuccina.

I mean, to me, like this is sort of the, the difference between this age of viral content and previous generations of viral content.

Like, I spend a lot of time on TikTok, but I have never, literally never seen anything about Italian brain rot.

And it's such a contrast to like, everyone knew the ice bucket challenge was happening, right?

Because you could see it everywhere, but things have become so like siloed and atomized that like you could tell me literally anything was happening on TikTok and that millions of people were into it.

It was the trend sweeping the youth and I would have no idea.

So, either that means I'm old or something has changed about social media.

Well, look, this is why you have to have your younger colleagues like myself come in and tell you what's happening in middle school.

You are not younger than me.

Well, spiritually, I think there's a case for it.

So, listen, there's no way to talk about Italian brain rot that improves on the experience of actually watching it.

So, let's watch a couple clips of Brain Rot, and I believe we have one queued up.

I hope I get hazard pay for this.

So, if you are not watching these, let me just describe what I just saw.

This is sort of a compilation of these Italian brain rot memes, which were all kind of like AI-generated, weird characters.

Like one of them was like a, looked like a sort of hamster poking out from a half of a coconut.

That's right.

And they're just saying these like Italian phrases.

So this is Italian brain rot?

This is Italian brain rot.

You know, you're probably grasping the Italian part because they're sort of being voiced in this over-the-top Italian accent.

And all of these sort of strange phrases that you're hearing are the names of the characters.

So I know you're probably wondering, who is Trippy Troppy Tropa Tripa?

And that's a shrimp with a cat head.

So I love this one because, you know, with a lot of meme explainers, there's, like, a lot of excavating to do of where did this come from and what is this about.

Here, it really is just what it says on the tin.

It is an Italian accent over a series of images that make you feel like you're going insane.

Yes.

And was this made by an Italian?

No.

In fact, in the Times, one of the main creators, this is the person who created Ballerina Cappuccina, was Susanu Savatudor, who is a 24-year-old from Romania, and who told the Times that this is just a form of absurd humor that really has very little to do with Italy.

But this creator just sort of created the name Ballerina Cappuccina, and they've gotten more than 45 million views on TikTok and 3.8 million likes.

God.

Now, like, at the risk of explaining a joke and thereby killing it, like, is there any point to Italian brain rot?

Is it making some sort of social commentary?

Is it trying to say, like, Italians are big users of social media and therefore are getting brain rot?

Well, so I actually do have a theory about this.

Like, I think here is what makes this feel new is that whatever this is actually does feel fresh.

And we live in a time where everything that Hollywood is giving us feels like a recycled version of something else.

We are on phase six of the Marvel cinematic universe.

And in that world where it's like, oh, and here's Ant-Man's cousin.

People are saying, F that, give me ballerina cappuccina.

It does just feel like there is some like organic hunger out there for just like really

stupid shit.

Just like really random.

Like, I was thinking about this recently.

You know, the Minecraft movie is, like, a big hit, right?

People are, it's like one of the biggest movies of the year.

And there's this moment in the movie, apparently (I've not seen it), where someone says the words chicken jockey.

Yeah.

Jack Black does, I think.

And at that moment, like teens and other young people have decided that this is the moment in the movie to like stand up and cause a ruckus.

They start throwing popcorn.

Someone actually, I saw, brought a live chicken to the theater and like held it up.

Like, this feels of a piece with chicken jockey from the Minecraft movie, in the sense that it is just absurdist. Trying to explain it actually makes you dumber in some way, and so there's a kind of appealing randomness to it.

Yeah. And by the way, I think that is actually part of being a young person: building a language that is inaccessible to people older than you, right? Like, that is sort of how the identity formation process works. Older people have no idea who Trippy Troppy Tropa Tripa is, and that is something that you can talk about with your friends that belongs to you.

What are some of the other ones?

Okay, well, so I'm glad you asked, because we haven't actually watched enough of these videos yet.

So, Kevin, I would now like to direct your attention to one Salamino penguino.

Salamino penguino, mezzo salame, mezzo penguino, tutto problema, non shivo la siamo.

This is like a penguin covered in salami, like wearing almost like a sort of headdress made out of salami.

Now, let's take a look at Glorbo.

Glorbo.

Okay, this is a crocodile or alligator with a watermelon for a body.

This is a still image with 578,000 likes.

Everybody loves Glorbo.

Is this even real Italian?

Are we sure it's real Italian?

I'm pretty sure it's not real Italian.

Let's stop that one there and then let's sort of go now.

I know what you're saying. You're thinking, Casey, these characters, they're just standing around. Like, that seems, like, super boring.

What if I were to tell you that other creators are now incorporating them into dramas, Kevin?

Oh, boy.

Let's take a look at one of those.

And this one stars Tralalero Tralala, who is a shark wearing sneakers.

And is that Ballerina Cappuccina, I see?

That is Ballerina Cappuccina, and she's with Tung Tung Tung Sahur.

Tung Tung Tung Sahur, enjoying their time together. So he leaves for the day, and oh, there comes Tralalero Tralala, the shark, and now they're kissing in bed, and oh no, Ballerina Cappuccina is pregnant.

And now it's chasing after the shark.

And that's Bumbolini Crocodini, and he sends in an airstrike.

So that was, let's just review. That was, I don't know, 10 or 15 seconds.

In that, you see two of these characters.

One of them gets into an affair, has a love child.

Her partner finds out and then sends in an airstrike to attack the sort of cheater.

So they're doing a lot in 15 seconds.

Wow.

That was not a Pixar film.

That was really something.

I feel like I'm on a very powerful psychedelic right now.

Well, you know, you mentioned earlier that, you know, in the old days, we would do things like the ice bucket challenge.

Kevin, what if I told you that some of these Italian brain rot characters are actually doing the ice bucket challenge?

No.

Yeah, let's watch that one.

My name is Chimpanzini Bananini, and I've been nominated for the USC ice bucket challenge.

This is a chimpanzee who is also a banana.

I nominate Bombombini Gusini, Trippy Troppy, and Boneca Ambalabu.

And he's nominating the other characters to do the ice bucket challenge.

This is so dumb.

Yeah, it's very funny, though. I am, like, genuinely laughing at this, but I could not explain to you why this is funny if you paid me.

Well, here, listen, I have done a little bit of comedy in my life.

And one thing that I learned in improv was that everyone goes nuts for an over-the-top Italian accent.

It's extremely funny.

All I have to do is say, make a bowl of spaghetti.

You're already laughing.

See, I don't have to do anything.

The Italian brain rot functions in much the same way, but they are taking advantage of this AI thing.

And, you know, look, we've talked earlier on this show about how these systems are being trained on other people's art without their consent.

There are some people who feel like you can never make anything truly creative or truly artistic with AI. And yet here you have this bona fide viral phenomenon that is people making extremely silly stuff using AI, and it is resonating with us. And I think this has been one of the more counterintuitive lessons of AI slop. A year or so ago, we were looking at images of Shrimp Jesus all over Facebook, and we were saying, that seems silly, I'm sure the company is going to get rid of this. No, no, no, my friend. They're going to lean into it, because there are riches that lie down this path, and Italian brain rot is, I think, the first example of that happening.

God. So I have a couple of reactions. One of them is, yes, I absolutely think that AI has utility and that there are good things that have come out of it. But seeing Italian brain rot makes me want to nuke the data centers.

Like, shut it all down.

You've gone too far.

But seriously, I do think there is something here, not just in the sort of absurdist humor of this thing. I do think there are going to be new kinds of entertainment that are birthed out of these tools, because, you know, if you wanted to make something like a ballerina with a cappuccino for a head 10 years ago, you needed to be an animator to do that, or at least have some facility with animation software.

Now you just go into an AI tool and you type, give me a ballerina cappuccina, and out comes this, like, pretty perfect animation.

Yeah, which has always been the promise of this sort of tool, by the way: it takes people who do not have those kinds of artistic skills and lets them express themselves creatively. If they can think it, they can visualize it, they can make it available to other people.

Here is my case why this is actually a good thing, Kevin.

You know, I was thinking this morning about a few years back during the height of the crypto boom when people started talking about how crypto could be used to fund these alternative worlds of entertainment, right?

Like the Bored Apes Yacht Club was going to become this mega franchise.

But what made it cool was that anybody could buy in.

Anyone could get a Slurp juice.

Anyone could get a Slurp juice, put it on a mutant ape, transform your mutant ape, et cetera.

And people didn't really get into this because, I think, nobody wanted to be involved in what was essentially like a homeowners association for creating entertainment.

But I look at Italian brain rot and I see something similar happening, where, as far as I can tell, no one has a trademark on Ballerina Cappuccina or Chimpanzini Bananini.

You could just sort of make your own version of it and put it up there, and nobody's going to issue a copyright strike. You can have these characters do whatever you want to. So it feels like there is actually a freedom in making this that people are really responding to, and so maybe we do actually get the next version of, like, crowdsourced entertainment, and it all comes out of these bizarre text-to-video makers.

I gotta say, I believe you when you say that that is a possible outcome, but my brain just goes immediately to, like, some office at Disney headquarters where they're watching these Italian brain rot memes and furiously trying to license the IP to make, like, a series of seven movies about Chimpanzini Bananini.

And I do think that there's a possibility that this becomes just like any other entertainment franchise.

It could go that way, but you know, maybe that sort of robs it of the fun of it that, you know, makes it go viral today to begin with.

I mean, they're making movies out of Minecraft.

They can make movies out of anything.

They're really running out of things to make movies out of, as far as I can tell.

So, do I lean optimistic about this?

Yes.

At the same time, do I think that if China had just sort of come up with this idea independently as a way of bringing down American civilization, it would have been a great idea?

If they were like, what if we just sort of did weird characters in Italian accents?

Could that distract all of America's middle schoolers for a year?

Probably worth doing.

How hard could it be?

This is all a CCP plot to undermine American sovereignty.

That's kind of always been the thing with TikTok.

It's like, I don't think it's a Chinese plot to destroy America, but it is working.

Well, if Ballerina Cappuccina starts singing the praises of Xi Jinping, we'll know that something grave has gone wrong.

Yeah, we'll keep our eyes on that one.


Can your software engineering team plan, track, and ship faster?

Monday Dev says yes.

With Monday Dev, you get fully customizable workflows, AI-powered context, and integrations that work within your IDE.

No more admin bottlenecks, no add-ons, no BS.

Just a frictionless platform built for developers.

Try it for free at monday.com/dev.

You'll love it.

Huge savings on Dell AI PCs with Intel Core Ultra processors are here and they are newly designed to help you do more faster.

They can generate code, edit images, multitask without lag, draft emails, summarize documents, create live translations, and even extend your battery life.

That's the power of Dell AI with Intel inside.

Upgrade today by visiting dell.com/deals.

Hard Fork is produced by Rachel Cohn and Whitney Jones.

We're edited this week by Matt Collette.

We're fact-checked by Ena Alvarado.

Today's show is engineered by Chris Wood.

Original music by Elisheba Ittoop, Diane Wong, and Dan Powell.

Our executive producer is Jen Poyant.

Video production by Sawyer Roque, Pat Gunther, and Chris Schott.

You can watch this whole episode on YouTube at youtube.com/hardfork.

Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

You can email us at hardfork@nytimes.com, or should I say, hardforkini@nytimes.com, andini.

Don't actually send a message to that email address.

Yeah, that address is not active.

Gun injuries are the leading cause of death for children and teens in the United States.

Some people avoid talking about gun violence because they don't think they can make a difference, but every conversation matters.

When it comes to gun violence, we agree on more than we think, and having productive conversations about gun violence can help protect children and teens.

Learn how to have the conversation at agreetoagree.org, brought to you by the Ad Council.