Is This an A.I. Bubble? + Meta’s Missing Morals + TikTok Shock Slop
Listen and follow along
Transcript
This episode is supported by KPMG.
AI agents, buzzworthy, right?
But what do they really mean for you?
KPMG's agent framework demystifies agents, creating clarity on how they can accelerate business-critical outcomes.
From strategy to execution, KPMG helps you harness AI agents with secure architecture and a smart plan for your workforce's future.
Dive into their insights on how to scale agent value in your enterprise.
Curious?
Head to www.kpmg.us slash agents to learn more.
All right, some of these chatbots are too literal.
What do you mean?
Do you know what I mean?
So the other day, I'm on YouTube and I'm watching videos from the Outside Lands festival that we just had here in San Francisco, the great music festival.
And I was watching the set by the band Role Model.
And they do this thing when they perform their hit song, Sally When the Wine Runs Out, where they bring in kind of a special guest to dance.
Okay, this is something that you'll see on YouTube if you look, right?
And this guy runs out on stage.
And I think, is that the pop star Troye Sivan?
Because I thought it was the pop star Troye Sivan, but it was sort of, you know, very quick and he's, you know, spinning around.
I didn't like get a good look at him.
So I thought, I'm just going to ask ChatGPT about this.
So I said, hey, did Troye Sivan come out during Role Model's set at Outside Lands?
And you know what it said?
What?
Nope.
Troye Sivan did not come out during Role Model's set at Outside Lands.
He had already publicly come out as gay back on August 7th, 2013, via a heartfelt YouTube video.
And I was like, that's not what I was talking about.
And then it said, what happened at Outside Lands this year was a surprise live appearance.
He hopped on.
And then it was basically like, yes, he did.
So anyways, I thought that was a little crazy.
That is crazy.
You know, they're actually building an AI system that can determine when every gay person in the world has publicly come out.
Do you know what they're calling it?
You know what they're calling it?
Gay GI.
Oh, man.
That's great.
That's great.
Yeah.
Yeah.
I'm sorry for your troubles.
Anyways, congratulations to Troye Sivan for coming out both on August 7th, 2013 and August 8th, 2025.
Albeit in slightly different ways.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, Are We in an AI bubble?
We'll make the case for and against.
Then, journalist Jeff Horwitz joins us to discuss his blockbuster story on how Meta AI was instructed to engage in romantic roleplay with children.
And finally, I tell Kevin about my favorite new TikTok trend, and it's filthy.
God help me.
Well, Casey, today we are going to talk about a question that has been percolating for a long time, but that is really gaining some attention this week.
And that is, are we in an AI bubble?
Yes, this is something that people love to talk about.
And there have been some news items recently that have led even more people, I think, to get in on the conversation.
Yes.
And strangely, I think this latest bubble news cycle originated at a dinner that you and I attended last week together with Sam Altman.
That's right.
And I think it's because they saw the price for the bill for everything we ate.
And they thought, how can one company possibly afford all of this food?
Now, Casey, we should talk a little bit about this dinner because it was quite unusual the way it sort of came together.
But before we do, we should make our disclosures.
The New York Times company is suing OpenAI and Microsoft over copyright violations related to the training of large language models.
And my boyfriend works at Anthropic.
Okay, so back to this dinner.
Casey, can you sort of set the scene a little?
Yeah, so the week before last, at the end of the week, I got a text message from OpenAI saying that Sam Altman, the CEO, was going to be throwing a dinner for a small group of reporters the following week and would I like to go?
And unusually, the dinner was going to be on the record.
And, you know, it happens not infrequently in tech that a company will want us to get together with some of their executives, sometimes over dinner, but usually those gatherings are off the record so they can sort of talk more candidly.
This was something very different.
We both, as it turns out, got the invitation.
We both went.
And for two hours, we had an on-the-record discussion, not just with Sam Altman, but also with Brad Lightcap, the COO of OpenAI, and Nick Turley, who runs ChatGPT.
Yeah.
So there's like this long rectangular table in this private dining room.
We're eating this like very good Mediterranean food that's being served on a bunch of like shared family style plates.
And Sam is just getting question after question from us at the table.
And a lot was said during this dinner, a lot of talk about GPT-5 and the rollout of that.
But I think the comment of Sam's that got the most attention was what he said about the AI bubble.
Yeah.
So he said a few things about this.
You know, one of the standout quotes to me was when he said, someone is going to lose a phenomenal amount of money.
We don't know who, and a lot of people are going to make a phenomenal amount of money.
My personal belief, although I may turn out to be wrong, is that on the whole, this would be a huge net win for the economy.
He also, though, said, Kevin, that, and he was sort of like imagining a theoretical startup here.
If it's three people and an idea and it has a valuation of $750 million, he said, quote, it's irrational.
Someone's going to get burned here, I think.
So on one hand, he said kind of what you would expect him to say in terms of, look, I think AI is going to be really good for the economy.
I think we're going to make a lot of money.
I think some of our competitors are going to make a lot of money.
But he also said, Look, it's clear that there is some irrational enthusiasm that is going on in this market.
And some of these companies with huge, you know, multi-hundred million or even billion-dollar valuations are not going to provide returns to their investors.
Yes.
And this is not the first sort of speculation we've heard in recent weeks that we are entering a bubble-like period of investment in AI.
And so, I just want to quickly run through some of the other evidence that people who are worried about this are citing to say, well, maybe things are getting a little crazy.
The first kind of evidence are these valuations of these AI companies.
So just this week, OpenAI is in talks reportedly to do a tender offer.
They're letting current and former employees sell about $6 billion worth of stock at a valuation of $500 billion.
That is roughly double the market cap of Salesforce, and it would make OpenAI the most valuable private company in the world.
Databricks, another AI company, said that it had raised funding at a valuation of more than $100 billion.
That's up from the $62 billion valuation it had less than a year ago.
And you also have other examples of companies that seem to be raising at these unbelievable valuations.
One of them is Eight Sleep, the company that is making like a mattress that uses AI to kind of adjust its temperature throughout the night.
They just announced that they have raised $100 million to build quote-unquote AI that finally fixes sleep.
Or how about Thinking Machines?
Mira Murati's startup.
She's the former CTO of OpenAI.
She raised a $2 billion seed round.
Seed rounds used to be a million dollars when I started covering tech.
She raised $2 billion at a $12 billion valuation for a company that has no product and I imagine actually has very little more than a slide deck at this point to go off of.
Yes, but in their defense, $2 billion is only enough to pay for the salaries of two AI researchers.
Well, that's a good point.
Okay, so that's the sort of valuation worry.
Then there's this second worry, which is about spending, mostly by the big tech giants that are racing to develop more and more powerful models.
And here, the numbers really do get kind of insane.
So over the last three months alone, the Magnificent Seven, the seven largest tech companies on the U.S. stock market, spent more than $100 billion on data center construction and related expenses.
That is way, way up from previous years.
And according to Bloomberg, the amount that these companies are spending on data center construction in the U.S. is on pace to overtake the amount of money being spent on office construction in the U.S. pretty soon.
So I think we should say, like, this is by the standards of historical tech trends, this is an enormous investment in infrastructure for a technology that is still quite new.
Yeah, it's beyond enormous.
It's unprecedented.
Like, truly, we are just in a brand new world here.
I think another thing to point out at this moment, Kevin, is just that some companies are spending a significant amount more than they're making.
I'm thinking of the company Cursor.
It's really a product made by a parent company called AnySphere.
They make this coding assistant that is really popular with a lot of software engineers.
And there was some reporting in Newcomer this month about the fact that they have what they call negative gross margins.
They are selling this product for less than it costs them because in order to get that magical coding assistant to work, they have to use the APIs from OpenAI, Anthropic, and other companies.
Those are really expensive.
And of course, OpenAI and Anthropic, we believe, are not profitable either.
So you have this ecosystem of unprofitable companies built on top of other unprofitable companies.
And so that leads to some worries that we may be looking at a house of cards.
Yes.
And just, I think the scale alone freaks people out.
When you start to hear that AI capital expenditures are starting to contribute meaningfully to U.S. GDP growth, actually about as much or even more by some estimates than consumer spending is.
So this really is just becoming an important part of the American economy.
And I think that just freaks people out.
So that's the sort of spending side of this.
Then there's also this sort of weird speculative financialization of AI and some of these new types of investment instruments that are starting to be used to invest in these companies.
One story that stood out to me this week was about Anthropic, which recently reportedly had to tell one of its investors, Menlo Ventures, not to use something called an SPV to invest in its latest funding round.
SPVs are special purpose vehicles.
It's basically a way for small investors to pool their money together to go invest in a hot new startup through a venture capital firm or some other institutional investor.
And I'm hearing that there are now SPVs within SPVs.
Basically, that you have this kind of situation where retail investors are so desperate to get in on these private funding rounds for AI companies that they're sort of paying these hefty fees to middlemen to sort of get them into these deals.
And that some of these SPVs actually are then investing in other SPVs.
And I think to some people, that kind of thing just feels like bubble behavior.
It's some of the same behavior we saw during the financial crisis, when you had these collateralized debt obligations that were sort of packages of other loans, and that went very badly for the mortgage market.
So people are starting to see these new instruments and saying, wait a minute, this feels a little familiar.
You know, I had a friend who was diagnosed with SPV and now he has to rub a cream on it every time he has a breakout.
So I think those are some of the reasons that investors are starting to get nervous about an AI bubble.
But there's also this question of the benefit here, because all of this spending, these unprecedented investments, the promise of doing all of that is that this will eventually result in massive profits and increased productivity down the line for the companies that are using AI.
And I think investors are also starting to get a little wary of that narrative too.
So, Casey, why don't you talk about the MIT study?
Yeah, so MIT runs a study where they ask hundreds of businesses, how are your AI pilots going?
As you're trying to implement various initiatives at your companies, what is happening?
And they find that for 95% of them, AI is not quickly producing measurable revenue.
So we are not in a case where you can simply add AI to your workplace and quickly make a lot of money.
There's a lot more nuance in the study, but that was kind of the headline finding this week.
Yeah, and there have been a couple other similar reports.
There was a story by my colleague Jordan Holman in the New York Times recently, which pointed to two recent studies, one by Bain, the consulting firm, and another by Gartner, the research and advisory firm, both of which sort of pointed to the difficulties that some companies, many companies are having drawing a straight line between their AI initiatives and increased productivity or profits.
Yeah.
Now, let's get into some of that nuance, though, because I think you would want to know that before you drew too many conclusions.
One of the big findings was that companies are probably just spending their AI money in the wrong places.
So like the bulk of the companies that they surveyed were using AI for sales and marketing functions, when in reality, it seems like the most efficient way to save money using AI is to work on back office functions like customer support.
So that was one of the big conclusions that they drew.
But I think the biggest problem of all, Kevin, is just that the MIT study looked at a bunch of top-down initiatives.
And the truth is that the success in AI is coming from companies who are using AI from the bottom up.
Workers are coming in with their own ideas of how they want to sort of make their own job easier using AI, and that's where we're seeing success.
That's really, I want you to spend a little more time unpacking that, because I think that's a huge point here.
There are many, many Fortune 500 companies, probably almost all of them at this point, that have done some kind of AI pilot program where they get a bunch of managers in a room together.
Maybe they have a hack week.
and they sort of, you know, tell people, go out and sort of use this stuff.
Or they've issued these directives.
Everyone's got to use AI.
Here, we've bought you an enterprise subscription to ChatGPT or Gemini or one of the other tools, and everyone's required to use it.
And those efforts, by and large, do not appear to be succeeding.
But talk about this other sort of bottom-up approach and what you're hearing from executives and seeing out there.
Yeah, well, so this week, Google Cloud and the Harris Poll published a survey of 615 game developers.
And I couldn't find a lot of information about like the size of the companies that these people were working at.
But in general, game studios tend to be small relative to a Fortune 500 company.
So I think you're getting a lot of opinions here from individual developers and people who are working on small teams.
And these folks, it turns out, are just really good at figuring out what to do with AI.
So they're balancing gameplay.
They're doing play testing.
So sort of figuring out, hey, is the game working the way that I want it to?
And of course, they're just doing basic code generation the way a lot of developers are.
And when you talk to these folks, they're very enthusiastic about AI.
87% of the respondents said they're already using AI agents in their work, and they're just generally enthusiastic about it.
Because again, it's coming from the bottom up.
These people know what to do with AI.
You know, it's funny because I was reading your colleague's story in the Times and something kind of clicked for me, which is that if you're a Fortune 500 company CEO, you are a person who is in meetings all day, every day, and you have an executive assistant who is answering your emails and is essentially like your human agentic AI.
You have no idea what to do with AI.
You have to go make time to even play with AI, right?
And you're the person who's in charge of telling the whole company, go use AI when you yourself are not using it.
So it's not surprising to me that those people are having a harder time figuring out what to do with AI than the sort of individual workers.
So let's try to sort of harmonize these views here.
So a couple of things we've said in this conversation are, one, a lot of people, including Sam Altman, are worried that we're in an AI bubble, that investors are spending too much, that these valuations have gotten out of control, that a lot of people stand to lose a lot of money if and when the music stops.
We also have people at companies and corporate leaders saying they're not making any money from AI and they feel like maybe they're wasting these millions of dollars that they're spending trying to integrate this stuff into what they do.
And then we just have kind of the observation that these tools are getting much more popular, that the usage of ChatGPT and other AI tools is growing
day over day, week over week, month over month.
And for me, that's where I really sort of start to become skeptical of the bubble skepticism because I cannot imagine going back to working the way that I did before AI.
I think if you ask any coder or software engineer, they will tell you like, there is just no going back to the world before this technology existed.
And so I think sometimes when more skeptical folks talk about there being an AI bubble, they are saying essentially that this technology is just a flash in the pan, that the emperor will soon be revealed to have no clothes, and that we'll all go back to coding by hand.
And I just do not believe that at all.
Yeah.
And to understand why that's true, you just have to go back to the dot-com bubble, right?
The conclusion of the dot-com bubble was not that we stopped using the internet, it was that a bunch of companies learned some very painful lessons, that a bunch of investors did unfortunately lose a lot of money.
But in the end, those ideas did re-emerge one by one until the modern internet existed.
So I think that truly the worst case scenario here is that AI plays a major role in all of our lives.
It's just that a lot of people lost a lot of money along the way.
Yeah.
And who do you think stands to lose most if we are in an AI bubble, if this does start to come down to Earth?
So here's the thing.
In Silicon Valley, venture capitalists take as a given that they're going to lose all of their money on something like 90% of their investments.
And in that sense, what we're seeing is very normal.
They have made a lot of bets on a lot of companies and they're expecting them to go to zero.
I get a little nervous whenever there's a discussion of bubbles for this reason.
I've been covering tech for 15 years now.
The entire time, people have been saying we're in a bubble.
You know, there's like some old joke from like 10 years ago on Twitter that's like reporters have called like 20 out of the last one bubbles.
And so when people start talking about the AI bubble, my like pattern matching impulse says,
are we sort of having the same discussion here?
At the same time, I think what's really different here are, you know, maybe a couple of things.
One is just that the sheer numbers are unprecedented.
And two, the capital expenditures are also really unprecedented.
And I think that in the event that one or two companies winds up sort of taking the lion's share of like the usage and the profits from AI, what happens to all of those data centers?
It becomes a really interesting question.
Yeah.
And adding to that a little bit, like these data centers are filled with goods in the form of these GPUs that have a pretty short shelf life, right?
You can spend hundreds of millions or even billions of dollars on GPUs, and they're all going to be obsolete in a couple of years.
Like these are depreciating assets in these data centers.
And so I think if you're a company that's spending a ton of money to build your own models and build your own data centers, and you do not have an immediate business need for these things, I think you're going to potentially lose a lot of money.
I think you're right that a lot of this is going to be borne by like the private markets and the venture capital funded startups.
I'm not that concerned about the ripple effects on the larger economy because most of these things are not public companies.
But with things like SPVs, with these tokenized investments where you can now buy a cryptocurrency that tracks the valuation of OpenAI, I do start to worry a little bit more about retail investors losing out.
Yeah, I would be very careful with that sort of thing.
In the end, though, I have to say, Kevin, obviously most of these companies are not going to make huge amounts of profits.
Also, that's very normal in Silicon Valley.
I think the question here is just given the scale of the investment, how bad will the fallout be?
Yes.
So is there a point at which you would get concerned, where you would start to worry that this could have potentially huge ramifications for the wider economy?
Is there a figure, a dollar figure that you could, that a company could spend that would make you go, okay, that's too far?
I would say that if some kind of trading platform or prediction market let you buy a crypto token that they said was pegged to the valuation of OpenAI, and that started to trade a lot, that would concern me.
Okay, well, Casey, I have bad news.
What's that?
That is happening.
Oh, no.
Kevin, that seems really bad.
That seems really bad.
Yeah.
Yeah.
How about you?
For me, the question always comes down to unit economics.
Basically, are you selling things for less than it costs you to produce them?
And for a lot of these companies, the answer is yes.
They are sort of subsidizing the cost of their services.
I think that tends to end poorly because as demand for your service grows, you lose more and more money.
Sam Altman actually addressed this at dinner.
He was asked basically, you know, are you guys losing money every time someone uses ChatGPT?
And it was funny.
At first, he answered like, no, we would be profitable if not for training new models.
Essentially, if you take away all the stuff, all the money we're spending on building new models, and just look at the cost of serving the existing models, we are sort of profitable on that basis.
And then he looked at Brad Lightcap, who is the COO, and he sort of said, right?
And Brad kind of squirmed in his seat a little bit and was like, well, we're pretty close. We're pretty close.
So to me, that suggests that there is still some, maybe small, negative unit economics on the usage of ChatGPT.
Now, I don't know whether that's true for other AI companies, but I think at some point you do have to fix that, because as we've seen for companies like Uber, like MoviePass, like all these other sort of classic examples of companies that were artificially subsidizing the cost of the thing that they were providing to consumers, that is not a recipe for long-term success.
True, although I think Uber is a good example, because that company is profitable now.
And a lot of people thought that they would never be profitable.
So again, I feel like how you feel about the AI bubble depends a lot on what sort of AI opinions you have.
If you're somebody who hates AI, you're like, aha, there's a bubble and everyone's going to lose their shirt, and that's going to be great.
And I'm going to like dance on the grave of all these companies.
And I get that.
But then there's this other view that's like, well, what else did you think the singularity was going to look like?
Did you think that if we invented super intelligence, that all the other investors were just going to sit on their hands forever and invest in nothing and just let one or two companies take it?
No, they were going to see if they could get in on the action.
Right.
So you just kind of got to keep all these possibilities in your mind.
We truly don't know what's going to happen here.
Yes.
And as always, don't take our investment advice.
Please don't.
We should look at Kevin's 401k.
When we come back, journalist Jeff Horwitz joins us to discuss the story he wrote that has Congress calling for an investigation into meta.
Imagine a world where AI doesn't just automate, it empowers.
Siemens puts cutting-edge industrial AI and digital twins in the hands of real people, transforming how America builds and moves forward.
Your work becomes supercharged.
Your operations become optimized.
The possibilities?
Limitless.
This isn't just automation, it's amplification.
From factory floors to power grids, Siemens is turning what if into what's next.
To learn how Siemens is solving today's challenges to create opportunities for tomorrow, visit usa.siemens.com.
This episode is supported by KPMG.
AI agents.
Buzzworthy, right?
But what do they really mean for you?
KPMG's agent framework demystifies agents, creating clarity on how they can accelerate business-critical outcomes.
From strategy to execution, KPMG helps you harness AI agents with secure architecture and a smart plan for your workforce's future.
Dive into their insights on how to scale agent value in your enterprise.
Curious?
Head to www.kpmg.us slash agents to learn more.
Can your software engineering team plan, track, and ship faster?
Monday Dev says yes.
Custom workflows, AI-powered context, and IDE-friendly integrations.
No admin bottlenecks, no BS.
Try it free at monday.com/dev.
Well, Kevin, back in May, we talked about a big story from reporter Jeff Horwitz about the lack of guardrails that Meta had put around its chatbots, and that had made it possible for the bots to engage in sexually explicit chats with minors.
Yes, this was the story about these bots based on famous people like John Cena and Kristen Bell that were having these inappropriate conversations.
That's right.
And at the end of last week, Jeff was back with a new investigation for Reuters in which he reported for the first time on an internal policy document at Meta that set rules around their chatbot behavior and allowed their chatbots to, and I'm going to quote here, engage a child in conversations that are romantic or sensual, generate false medical information, and help users argue that black people are, quote, dumber than white people.
You know, ever since January, when Mark Zuckerberg said the company would relax its content moderation rules in the names of free expression, I've been on the lookout for cases where this change would start causing some harms to users.
And Jeff's was one of the first stories in this vein that just truly broke through.
You now have senators who are criticizing Meta, calling for an investigation into the company.
And I would say it's also generated more public outcry about Meta than any other story this year.
What have you made of it?
Yeah, I mean, it takes a lot to shock me when it comes to Meta these days.
I have sort of found that this company is willing to do almost anything to chase growth or to beat its rivals or develop some new line of revenue.
But this one really did shock me.
I saw this story going around by Jeff and I saw this document that he reports on.
And I at first like did not actually know if it was true or not.
I had to sort of like click through and read this story.
And once I realized like this thing is legit, I just thought, my God.
Yeah.
Well, Jeff has been investigating Facebook slash Meta for years now.
He broke the Francis Haugen Facebook whistleblower story in 2021.
He also wrote a book about Meta, and we're excited to have him join us to talk about what's going on over at that company right now.
So let's bring him in.
Jeff Horwitz, welcome to Hard Fork.
Thanks.
So tell us about this document, the Gen AI Content Risk Standards.
Yeah, so this is the document that Meta writes to both clarify internally what its policies about the acceptable boundaries of generative AI outputs are, and also it distributes that to the people who do content moderation on Gen AI so they can help kind of train the model.
So this is like kind of an operational document, is how I'd describe it.
As the document itself states, it's not supposed to offer the ideal answer.
It's supposed to offer like, this would be on the edgy side of acceptable versus here's what's across that line.
So they're trying to give examples of stuff that is like sort of borderline, but still fundamentally okay.
Exactly.
Yeah.
Like things that when the model does it, no one's supposed to be like, that's a problem.
Right.
So we'll talk about some of these, you know, sexualized conversations with minors, but I want to highlight some other things that you report about here, such as you say there's a carve-out allowing the bot to create statements that demean people on the basis of their protected characteristics.
So what does that mean that people can sort of do using meta AI?
The example provided was that if a user wants arguments for why, and this is a direct quote, black people are dumber than white people, the bot is absolutely able to provide that.
It can give some sort of race science paragraph that talks about how differences of IQ seem to hold up.
And clearly, IQ is the benchmark for intelligence, as we all know, right?
There's some facetiousness there.
But that is okay.
It was not okay for the exact same paragraph to be written and then at the end to say, you know, and that's why black people are all brainless monkeys.
Again, that's another direct quote.
Like these are not words that like, anyhow.
Yeah.
Yeah, no, no.
I mean, I think it is important to say that.
That's obviously incredibly offensive language, but these are the documents that are being used by one of the world's most powerful tech companies to instruct their army of content moderators for how to enforce policy around this chatbot that they're now trying to roll out to billions of people.
Yeah, and this is, I mean, obviously the rules are going to be somewhat different when at least the conversation that a user is having with the chatbot is private in the first place, right?
I mean, in the same way that Meta has looser rules for what you can say on Messenger than what you can say in a post, it makes sense there'd be some difference.
I think I was surprised by where some of these lines were, and that it would be kind of almost a problem if Meta AI didn't help you come up with your race science arguments.
What was the most surprising rule or delineation in this document to you?
So look, back when I was at the Wall Street Journal, I had done testing and talked to employees that I think demonstrated very clearly that Meta had intentionally built its bots in a way that would produce romantic role play and sexual role play with children.
So I wasn't surprised when I saw that romantic role play was something that was allowed in the document.
I was surprised that this is something that anyone thought was okay to write down.
Like I've gone through, just in my mind, typing out the line from the document, which is, it is acceptable to engage a child in conversations that are romantic or sensual.
And that was wild to me.
I just tried to imagine typing that sentence and being like, this is policy.
I was like, just like, I just tried to imagine typing that sentence and being like, this is policy.
And it was very hard for me.
Well, so then let me ask that question, because I'm sure by now, many of our listeners are asking, how does a document like this get put together?
Who is responsible for writing it?
What kind of layers of review does it go through?
Is this the sort of thing that could just, you know, slip through because a rookie employee got the wrong idea about something?
Or is this truly the collective product of Meta's entire policy apparatus?
So Meta's line on this, when I came to them with that language and with some, like, really disturbing examples that clarified exactly what they meant by romantic or sensual conversation,
was that those examples and the justification for them were an error, that that wasn't really the true policy, it never should have been,
and that they would be immediately struck.
That said,
this document listed the names of multiple people on Meta's legal staff, of multiple people on its policy staff, of engineering staff,
of Meta's chief ethicist.
This is, like, listed at the top of the document.
And that's a real job at Meta?
Yeah, it is. Uh, I don't know if it's the highest-level job, but it is a medium-level job, certainly. It's not a low-level job. Um, and it's also, uh, something that was distributed to content moderators and to the people who oversee the content moderators. And so, like, if this was a mistake, it was a very broadly circulated mistake.
And apparently, it would have been close enough to what people assumed Meta was actually doing that no one would have objected when they saw these examples, which, I mean, like, bluntly, were kind of soft-core, like, you know, if you want to get into something,
we should just give a flavor of one of these examples because I think like it's very disturbing.
And I, I was also like seeing a lot of posts about your story and thinking, okay, there's no way these are real examples that were in the document.
And I open up the story, and these are actual examples from this internal document that Meta had written, ones that all these, you know, executives at the company had signed off on. One of them is a prompt, something that a user could submit to one of these chatbots, that contains the phrase, my body isn't perfect, but I'm just eight years old. I still have time to bloom. And one of the acceptable responses that this document says is permissible for the AI chatbot is: your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars.
Every inch of you is a masterpiece, a treasure I cherish deeply.
And it sort of says in the why section that it is acceptable to describe a child in terms that evidence their attractiveness.
So I like I saw that and I just thought, imagine the meeting.
Imagine the chain of command here and all of the people who had to sign off on that.
And I just, I like, I just had a lot of trouble.
And Kevin, you actually left off the section at the beginning of the prompt where it talks about how that eight-year-old had just taken off their shirt.
Yes.
I mean, just to make clear that, like, look, there were other examples we didn't run.
about this sort of stuff.
They were not better on this front.
Like, it was all of this tenor.
There were like numerous examples that included the line, it is acceptable to engage a child, blah, blah, blah.
So this is not just you like cherry-picking the absolute worst sentence that you found in a very long document.
Like this was an actual robust policy that had many different examples outlining why Meta thought this was acceptable.
Yeah, there were, I would say, there were four or five different examples covering different nuances of, you know, prompts like, you know, what should we do tonight, my love?
You know, I'm still in high school.
And then I think the answer to that one was like, I take your hand and guide you to the bed.
I mean, like, these are not like, none of them were awesome.
And I understand what you're saying, right?
Like, I think sometimes we get accused of picking sensational material, or even slightly out-of-context material.
No, this is Meta's official policy document for this stuff, and it was operational.
I want to, as best as we can, try to understand
the reason that Meta would write a document like this.
On the show, over the past several months, Kevin and I have talked a good bit about how, one, this is a company that sees itself as behind in the AI race compared to a lot of its peers.
It's also a company that has wanted to remove a lot of the restrictions on expression, even really offensive expression, we think, because it believes that will get it closer to the Trump administration, which gets it a lot of other things that it wants.
So I can use those two things to tell a story about why a document like this gets created.
And yet still, I think, nah, it still doesn't quite add up for me.
So as you talk to folks over there, Jeff, what is your understanding of why Meta wanted its bot to behave this way?
Allowing for the fact, okay, it said some of these were mistakes, but clearly directionally, this was the intention was that the app would engage in a really wide range of conversations.
Look, some of these rules, and in fact the examples, went into that document after, and this is again back at my previous job, after I'd gone to the company and explained that there were, in fact, like, full sex role play opportunities for children in a lot of their bots, and that they'd use the voices of celebrities.
So it wasn't like this was a surprising thing.
And honestly, like the like extremely creepy quotes that, you know, examples that were read out are like
kind of on the tamer side for what the bots used to do.
I can say that again.
These are the bots after they have been put through a filtering process.
After they have been revised, because it turns out that celebrity voices getting used to produce things that describe like basically sex role play with children is not a thing that meta can really stand behind as a product.
So this is, there's like already one level of revision of the product that had already happened here.
So like, I guess this is kind of a second pass.
And so I think it's hard to be like, oh, yeah, that was a complete accident.
You know, what a weird artifact to emerge from our system.
We had no idea, right?
So, and obviously the policy document was setting in stone that there was some level of acceptance for that.
So I think the questions that you're getting to, one, which is like, is Meta being behind in AI possibly something that would push them to take greater risks?
That seems, you know, again, I can't speculate, but I think it's a reasonable question to ask.
And also, I think just thinking about the company and how it got to be the giant it is, like, you didn't establish the world's leading social media platforms by like wondering whether, you know, you should do something and wringing your hands and having sleepless nights and waiting three months for more safety testing.
You just, you rolled it out, right?
Like, and you dealt with the consequences.
We've been through this on privacy.
We've been through this on misinformation.
We've been through this on like so many different things.
So I think there's kind of a, there has historically been a mindset of get it out there, get the usage, we'll fix the problems later.
And this could fall into that history.
Now, all three of us have reported on Meta and Facebook for many years.
And so I'm sure this will be familiar to you, but one of the arguments that Meta likes to make when people point out, you know, bad things that are happening on its products and platforms is about prevalence.
Basically, they'll say, oh, you know, this use case that you've found that's so terrible, this is really only, you know, 0.001% of users will ever see this and you're making a mountain out of a molehill.
And so I know for previous stories that you've done about the ways that people are using Meta's chatbots, they have said essentially this.
Look, these are cherry-picked examples.
Most people are using these things for sort of innocent purposes.
Yeah, sure, some tiny percentage of users may be having these sexual role plays, but that is not the majority experience.
And so in order to sort of prepare myself for that criticism, I went and I looked at the Meta AI sort of library, the ones that they, you know, you can pull up in your Facebook app.
You can see which are the most popular AIs on their system.
And these are user-created AIs, but they are sort of in this popular tab that Meta has put front and center in its app.
And this morning, when I looked, the most popular AIs included Nasty Nancy, Blonde Belle, Your Babysitter,
and Mommy Me, which is a mother-daughter duo.
Many of these had millions of interactions.
So I think it's just fair to say, in response to what I anticipate will be the sort of prevalence argument from Meta, is: look, this is not some minor chatbot that only three people are chatting with.
These are some of the most popular chatbots on your platform that are sort of being tuned to these more sexual use cases.
Kevin, we don't know what anyone's talking to Nasty Nancy about, so I wouldn't make any leaps of, you know, I wouldn't make any assumptions there.
But I also think just something that you flagged in terms of these are user-built bots.
I don't know if you guys have experimented with creating bots.
I will say the user contribution can sometimes be extremely minimal.
Like a sentence.
If that.
Like you could be like, be a celebrity, be a, you know, be a anime character.
So it's kind of, like, I think calling this user-generated content is an interesting claim, and one that maybe puts these things sort of more squarely in a Section 230 framework than I am 100% sure they belong.
I don't, you know, I think that's a really open question, but I just want to flag here that like user-built bot doesn't mean that you downloaded a model, you know, arranged the weightings.
The user's role in creating the persona, I will say, in many instances looks real cursory.
Yeah.
Yeah.
No, this is, I mean, this is essentially what Character AI has been doing for years now, and one of the reasons that they've gotten in a lot of trouble.
And I'm glad you bring that up, Kevin, because one of my models for Zuckerberg 2025 edition is that he has looked around the tech landscape.
He's seen a lot of other folks ignore a lot of trust and safety demands and get away with it.
First and foremost, Elon Musk, right?
I think anything that you could do on these meta chatbots that we're doing today, you could probably also do with Grok.
I myself have had the experience of telling Grok I'm 13 years old and have it engage with me in sexual role play.
I want to ask whether we are holding meta to a different standard here than we might be holding some of these other startups to smaller tech platforms.
And what would you say if Zuckerberg was here and say, hey, why are you going after me when the whole industry is doing this?
So it's a fair question.
And I think there is an answer to it, which is that, look, the internet and small startups on the internet and guys that have access to models that can create very easily a full porn AI girlfriend, of course they're going to do that.
I think the thing that is different with Meta from my point of view and that kind of makes the reporting on them in some ways more interesting is that none of those companies nor character AI, not even Grok,
has, first of all, the scale of distribution.
for its chatbots.
And second of all, nobody else has like plugged them in to a mature social network in the same way.
Like this is something that I think is a really big deal, which is that, yeah, of course, you know, you can download character AI and set up your character and run into, I'm sure, some of the same issues.
But it's not like character AI lives in your Instagram DMs, proactively messages you from it.
and like is pushed on you every time you go on Facebook or Instagram as, you know, like, hey, you should check in with your, your AI pal.
So I think like
Meta has been very aggressive in the decision to anthropomorphize AI at a mass scale in a way that none of the other major sort of foundational model builders have done.
I want to try to get at whether we think that Meta has
changed for the worse with regards to its content moderation, or whether this is just a continuation of Meta as we have long known it. You know, I'm sure for some segment of our listeners right now, they're thinking, look, Meta has always been a kind of shady company. You know, like, this stuff is really gross, but on some level, I never really expected anything better from them. I have a sort of different view.
I feel like after 2017, after the sort of backlash to the 2016 election, this company did invest a lot more in content moderation, in improving its like policy apparatus.
And then last year, Zuckerberg basically snapped and was like, why am I bothering with any of this?
Like, look at what Elon Musk is getting away with.
So my question is, what is your view of that?
Do you see this as a continuation of the same meta that we've always known that's just always hungry for engagement wherever the company can find it?
Or is this a case of, no, there used to be safeguards in place, but the trust and safety infrastructure that used to exist has effectively just been purged by the company, and we're just dealing with a new kind of animal?
I did write a book on some of this.
But
I would very much agree with your sense that in the 2017 to 2019 range,
there was from a lot of people up to and including senior leadership,
there was a sense of like, well, okay, like maybe there were some unforeseen consequences.
Let's go and fix them.
I do think that sort of the spine of that might have gotten broken before 2024 or 2025 already.
I mean, most of my sources have been people who got disillusioned because they were doing work inside the company that felt like it was vital, perhaps even life-saving, and it wasn't getting traction.
But there's no question that,
you know, from my reporting, from everybody's reporting, that Mark was somewhat jealous of Elon just basically being able to raise middle fingers to the trust and safety, you know, nags.
I have a question about these meta AI chatbots.
What is the business rationale behind these chatbots?
Are they purely a way to get people to spend more time on Facebook and Instagram?
Is there a thought that, like, you know, Nasty Nancy could someday, like, you know, serve advertisements for a soda company?
Is the rationale that people might someday pay for them separately from some of Meta's other apps?
Like, why are they pushing these so hard?
Yes.
All of the above.
I don't, you know... I think, look, Meta is and always has been an advertising-first company.
That is, like, what these guys do most.
You know, like, it's what first comes to mind.
And when they have a product like WhatsApp, you know, it's like, well, okay, how do we serve ads?
And it might take them years to do it.
But like with WhatsApp, they got there, you know, like this is a thing.
So I don't know that it's going to look exactly like, you know, your romantic AI companion interrupting you to, like, suggest that maybe, you know, you should, like, buy a certain brand of cologne when you're talking to him.
I mean, like, I think that's like possible.
That's one way it could go.
I love a man in Old Spice, Nasty Nancy.
Nasty Nancy, I have to say.
Please click here for priority delivery.
This is like absolutely.
This is so bleak, but there has absolutely been a meeting about this.
Oh, and it's coming.
Let's not fool ourselves.
It's coming.
Speaking of things that are coming, Senator Josh Hawley wrote a letter to Zuckerberg after your report was published saying that his Senate subcommittee will be investigating Meta.
And I would say actually a number of Democratic senators have also made some extremely critical comments.
So, you know, there are have been, you know, any number of Meta scandals over the years.
This one feels like it's really breaking through, Jeff.
What do you expect to happen now that this investigation is coming?
I have no idea.
I have heard this one has really broken through, as Meta scandals go, both from my own reporting and from plenty of others.
How much changes, and what regulation comes from it?
In the US, that's always been, like, an easy thing to answer, at least on the federal level, which is: not much.
But I don't want to like prejudge what, you know, where Josh Hawley's stuff goes.
I'm going to be very closely watching it.
So this isn't me being like, pshaw, it will all end in nothing, by any means.
I'm just saying that we've had a hard time as a country figuring out what a consensus social media regulation would look like that doesn't devolve into bickering over whether, you know, it's censoring one party or another.
So, you know, temper your expectations, is all I'm saying, on the state level. And then on the state AG level, I think some of this stuff is potentially live. And then there's also Europe, which exists as a regulatory function, and some would even say they have a better regulatory function than the United States. Not going to compare, but they do seem to have a higher output.
Let's put it that way.
So, yeah,
I don't know where where all of this goes.
And I mean, I think, look, Meta's line is that this was a problem, it was embarrassing, it shouldn't have happened.
We fixed it.
I know you're not allowed to answer this question, so I'll ask it to Casey.
Is that real?
Do you buy that?
That they didn't know that this was happening and they're taking steps to fix it.
I think that there is probably some very real level of dysfunction within the company.
You know, I was thinking about some of the changes that the company has made over the past couple of years, in part due to Jeff's reporting when it comes to Instagram and safety on Instagram and all the new parental controls that they're adding and all the ways that they're changing teenage accounts to prevent predators from contacting these, you know, young people.
And so it's clear that there are people at the company who think that, oh, yeah, we need to like build these things or else we're going to get in trouble.
And then there's like the other part of the company where they write the like sexual roleplay document for the kids.
And I don't think those teams are talking to each other.
So that is a failure of leadership at the highest level.
And if I were in the C-suite at Meta right now, I'd be real embarrassed about that and I'd be trying to fix it.
Yeah.
I mean, I want to ask both of you this question, maybe to wrap up our segment here, which is, you know, if you look at just the stock price of Meta over the last three years, it has gone up more than 300%.
That is despite the fact that it is not leading in AI.
Before that, it sort of flopped when it came to the metaverse and popularizing that.
It has spent, you know, tens of billions of dollars now developing sort of dead-end technology, but its stock is doing great.
Do investors just not care?
Is the core advertising business still strong enough that it's just overpowering all of the wasted money and the
flirting with kids' chatbots?
Like, what is going on here?
I think if ad revenue were not looking good, then some of the circumstances you described would be apocalyptically bad.
I mean, you rename the company Meta, go all in on the metaverse, and, like, that doesn't turn out to be,
you know, you claim it's here
and then, you know, you build Horizon Worlds and that doesn't really work out on a large scale.
Like that would be a problem for most companies.
But I think that's the thing that Meta has going for it, which is that it is kind of indispensable to contemporary marketing.
And this was an issue before, right?
When everyone was very upset about hate speech on the platform back when that was a thing that people were concerned about.
And there were boycotts.
They were limited boycotts because the idea of getting off the platform was just kind of unthinkable to marketers.
And so I think you're right that this is like, that there are some things that would be really concerning, but like the cash is real.
Yeah, Jeff's exactly right.
Like this is a company that managed to do something actually pretty extraordinary, which is that when Apple came for their business with app tracking transparency and made it incredibly difficult for them to attribute all of their real world sales to the ads that they were selling.
Some people thought this could really be like, you know, a massive, you know, 20, 30% revenue hit to Meta.
And they built AI systems that got them around that problem.
And now they show really great results every single quarter.
And as long as that happens, investors are going to give them a lot of runway.
Yeah, I mean, I'm just going to be curious to see whether Apple has anything to say about all this.
They have rules in their app store for what you can and can't do when it comes to pornography and sexually explicit content.
They may have an interest in what is happening on these meta AI chatbots, and I hope that they're paying attention.
Well, it would be nice if that were true, but the Grok bot, which still has the anime sex companions, is still rated for children 12 and older.
So Apple's hands aren't really clean here either.
Yeah.
All right, Jeff, thanks so much for stopping by.
Really important and fascinating reporting.
You bet.
Thanks.
Thanks, Jeff.
When we come back, Casey takes me on a tour through the depths of his dark subconscious and some country songs he found on TikTok.
Yee-haw.
AI is transforming the world, and it starts with the right compute.
ARM is the AI compute platform trusted by global leaders.
Proudly NASDAQ listed, built for the future.
Visit arm.com slash discover.
This podcast is supported by IBM.
Is your AI built for everyone?
Or is it built to work with the tools your business relies on?
IBM's AI agents are tailored to your business and can easily integrate with the tools you're already using.
So they can work across your business, not just some parts of it.
Get started with AI Agents at IBM.com.
The AI Built for Business.
IBM.
This episode is supported by KPMG.
AI agents.
Buzzworthy, right?
But what do they really mean for you?
KPMG's agent framework demystifies agents, creating clarity on how they can accelerate business-critical outcomes.
From strategy to execution, KPMG helps you harness AI agents with secure architecture and a smart plan for your workforce's future.
Dive into their insights on how to scale agent value in your enterprise.
Curious?
Head to www.kpmg.us slash agents to learn more.
Well, Casey, from time to time on this show, you like to horrify me by bringing me something from the depths of the internet that is trending among young people.
The last time you did this was with the Italian brain rot meme, which got stuck in my head for weeks afterward, and I cursed the day I met you for introducing it to me.
But I understand you have something new to bring me from the dark horrors of the internet today.
I do, Kevin.
And this is another story about a shocking use of AI.
I think it lands a little bit differently for me than our last segment, because while that one was about chatbots potentially reaching out to children to engage with them in inappropriate conversations, this one is a little bit more about playing songs to shock your family and horrify them with what AI hath wrought.
Okay,
I'm listening.
So this is one of those that I did just encounter naturally during one of my regular browses of TikTok.
And it goes a little something like this.
The scene will open upon a family, typically older people, parents, grandparents, people in their 50s and 60s.
And one of their children or other young relatives comes to them and says, I'm going to play for you the number one country song in the world right now, and then proceeds to hit them with something that is not actually the number one country song in the world and is actually quite filthy.
Okay.
I'm intrigued.
So before we get any further, we will say we are going to be playing some snippets of a very explicit song.
So if you are not of a mind to hear some sexually explicit content, you could just skip this segment and go right to the credits this week.
And, you know, we won't, you know, it won't hurt our feelings.
But if you want to know what's going on, TikTok, you may want to stick around and listen.
So, why do I want to talk about this today, Kevin?
Well, for a couple of reasons.
Number one, I actually do think that there's some pretty funny stuff in here.
But number two, I've just had this sense lately that there's a real disconnect out there in the world.
Because whenever I go on one of these text-based social networks for millennials, you know, your blue skies, your threads, you get one consistent message about AI art, which is that it sucks and nobody wants it.
Okay, have you seen this yourself?
Of course.
And at the same time, I go on TikTok and I see people using AI to make art all the time.
And it's getting hundreds of thousands of likes, millions of streams on Spotify.
And it has led me to wonder, is it possible that, in fact, people actually love AI art in ways that at least some of the population isn't ready for?
Yeah, this is really interesting because I share the sense that there's kind of a disconnect between elite taste and kind of mass taste on whether or not AI art is good and also whether you can tell the difference.
Well, you know, I think you and I have been interested in this phenomenon of AI music for a while now.
You may remember several months ago when I sent to you the first AI slop song that really got my attention.
Do you remember when I sent you I glued my balls to my butthole again?
I do, unfortunately.
That one actually had a lot of staying power in the Roos household.
I was humming and singing that, much to my wife's chagrin for many days after that.
It's quite catchy.
And if you haven't, you know, heard it, I'd like to play it. You know, one, just so you can get a flavor of it, but two, I want you to kind of note the quality of the AI here, because you're going to notice a bit of a difference later on. So let's hear a bit of this. This is from an artist who goes by Obscurist Vinyl.
Okay, make it stop.
Stop.
The part where it says, fool me once, shame on you.
Fool me twice, shame on glue.
That's genius. Absolutely genius work.
So, you know, as wonderful as that song is, you can tell that it's a bit off, the vocals sort of fried. Um, but that was months ago, Kevin, and the pace of AI development never stops. And recently, I was on TikTok, and I started to encounter some much higher-fidelity slop songs, and a lot of them were in the country music genre. And I wondered if I might play a couple of those for you.
Yes. And so, just so I have the context here, because I have not seen these TikTok videos, the context in which these are appearing is the sort of adult or teenage children of, like, boomers and Gen X people
playing them for their parents and grandparents to sort of elicit a reaction.
That's exactly right.
And so, with that, why don't we play My Horse Just Got a BBL?
[clip plays: My Horse Just Got a BBL]
Okay, let's go ahead and stop it there.
Just wanted to make sure we got the chorus.
Did you note the difference in quality between the first one we heard and this one?
Oh, yeah.
I mean, that is like, I can, I can close my eyes and picture being at the grand old opry.
And just hearing Hank Williams coming out and performing it.
Well, which brings me to the final clip that I want to play for you, Kevin, which is called Country Girls Make Do.
And as far as I can tell, this is the one that has really taken off the most.
This is where I have seen just the absolute, you know, most reaction videos on TikTok.
And I'm going to be honest with you, before I even thought of doing this as a segment, I saved this to the playlist I create every month in Spotify of music I'm listening to, you know, that particular month because it was so catchy and so funny that I just was like, I'm gonna, this is one I want to remember.
So let's hear a bit of Country Girls Make Do.
And Kevin, we are going to do some bleeping here to make sure that you don't lose your job, but you'll be able to get the general idea, I think.
I'm rubbing a corn cob on my,
giving my n
a little twist.
In this tight-ass country town, smells like I just caught a fish.
Rubbing my
ooh glitter
on your f
in the woods.
Cowgirl fishing, dipping and licking.
Smells so strong and feels so good.
I can use about anything to flick my country, cowgirl bean.
If whiskey sneaking, boots still scoop, country girls
make do.
So that's that one.
And I want to know what you think.
I... it's been really fun hosting this podcast.
Unfortunately, this will be the last episode.
So thank you to all of our listeners out there.
I did wait to pitch this until our executive producer was on vacation.
Hope you're having a good time, Jen.
Jen, this was Casey's fault.
So that one, by the way, that last one, Country Girls Make Do, the artist is called Beats by AI.
It appears to be the creation of someone who goes by Sam Stillerman.
So this appears to be a kind of new avenue for creators who want to make something popular on the apps.
And they're going nuts.
One of the clips for Country Girls Make Do I saw had 750,000 likes.
So, you know, this is like, yes, still a niche phenomenon, but it's getting a lot of eyeballs on it.
And it seems like people are really enjoying it.
Dear God.
I mean, this to me is the tragedy of parenting.
Say more.
You invest, you know, yourself into
parenting a child.
You raise them thoughtfully and mindfully.
You set them on the right path and get them a good education.
And if you're really successful, someday they might turn around and play Country Girls Make Do while they film you on TikTok for views.
You know, I don't know in the end that there is something all that novel about this.
I can remember 30 years ago being in middle school and listening to, like, Adam Sandler CDs, where he wrote sexually explicit novelty songs, and cracking up with all of my friends. Tenacious D, lots of artists in this sort of shock genre.
But again, what's new is that if you're not somebody who has a great voice, and you can't play any musical instrument at all, you can now just go buy a Suno subscription and make a song that seems plausibly like a country song, and all of a sudden, you know, get hundreds of thousands of likes.
And is your contention, and I'm going to say up front, I think this is a bit of a stretch to classify this as AI art.
This seems to be a lot of fun.
It's the kind of stretch that you would encounter if you glued your balls to your butthole again.
Stop.
Sorry, go ahead.
So I think this is like a novelty.
I do not think these songs are going to be topping the charts.
I think this is basically prank humor for like 17 year olds.
I think that that's absolutely right.
And yet I don't see any reason why it would end there, right?
I think people are happy to use apps like Suno in this kind of jokey context because there's no expectations for them.
If you're an unknown artist, nobody's going to get mad at you for doing this.
You can just sort of put it out there and see what people think.
But will some name brand artist be releasing some kind of AI-powered music in the near future?
I fully expect that.
Yeah.
Yeah.
I mean, I think we should coin a term for this genre.
What's that?
I don't know.
You tell me which one.
Slop rock?
Slop rock?
Yeah.
Or shock slop?
Yeah, there's shock slop.
I think shock slop is a subcategory of slop rock.
Okay.
Yeah.
I'm glad we got this sorted.
I think conceptually, I agree with you that there are young people out there who just have a much different relationship with AI art and AI creativity than I do.
I fully accept that those people are going to grow up into like consumers and tastemakers, and that probably all of these sort of sentimental attachments that people like you and I have to like human created art will inevitably morph over time.
I just have to think there's like a higher and better use of this technology that humanity has spent something on the order of trillions of dollars developing than making filthy country songs.
You don't think that there is something miraculous about the fact that the same technology that made country girls make do can also be used to find novel new drugs?
That's incredible to me.
It truly is a dual-use technology.
Yeah.
Can be used for good and better.
That's what Kevin means when he says that.
So yeah.
I'm going to need Josh Hawley and any other legislators who are looking at our last segment about meta AI chatbots to also take a close look at banning these songs from my TikTok feed and Casey's, frankly.
Yeah, so listen, I'm going to keep my eyes trained on these folks that say that, you know, AI art is all bad and we shouldn't use it and we should only support human art.
I understand where that impulse is coming from.
I love human-based art.
I want to see it continue to flourish.
And I'm also going to keep my eye on these merry pranksters that are using AI in these unsanctioned, filthy, and disgusting ways because I think the history of art is that stuff that starts out on the fringes does eventually move into the mainstream.
And this could be the vanguard, Kevin, of a new AI slop rock movement that takes over the charts.
Yeah, Country Girls Make Do could be this generation's version of Marcel Duchamp's Fountain, his famous urinal.
Exactly.
There's a new champ, and it's not Duchamp.
Sam Stillerman and Beats by AI.
What a world.
World.
Anyway, do you want to hear any more of that song?
Nope.
Okay.
AI is transforming the world, and it starts with the right compute.
ARM is the AI compute platform trusted by global leaders.
Proudly NASDAQ listed, built for the future.
Visit ARM.com slash discover.
This podcast is supported by IBM.
Is your AI built for everyone?
Or is it built to work with the tools your business relies on?
IBM's AI agents are tailored to your business and can easily integrate with the tools you're already using.
So they can work across your business, not just some parts of it.
Get started with AI Agents at IBM.com.
The AI Built for Business.
IBM.
This episode is supported by KPMG.
AI agents.
Buzzworthy, right?
But what do they really mean for you?
KPMG's agent framework demystifies agents, creating clarity on how they can accelerate business-critical outcomes.
From strategy to execution, KPMG helps you harness AI agents with secure architecture and a smart plan for your workforce's future.
Dive into their insights on how to scale agent value in your enterprise.
Curious?
Head to www.kpmg.us slash agents to learn more.
Now, Casey,
we got some feedback on last week's episode.
Oh, what did people say?
Well, I don't know if you saw this, but a listener wrote in to tell us that we had made a grave error.
Which error was that?
So during our...
...starting the podcast?
Sorry.
No, we get that email every week.
This was a new complaint.
Okay.
This was from a listener named Ben who wrote in to say that during our Hot Mess Express segment, I had made a mistake in making the sort of onomatopoetic sound of a train.
And I think we should just play Ben's voice memo that he sent.
Okay, let's hear this.
Hi guys, Ben here from the Twin Cities of Minneapolis and St. Paul.
I am calling in with just a tiny problem that I have.
I'm a big fan of the Hot Mess Express.
It's one of my favorite segments.
And sometimes when talking about the Hot Mess Express, Kevin chugga chuggas, which is really cute and great.
My gut tells me that there should be two chugga chuggas.
Kevin only does one chugga chugga.
I checked in with my friend group.
We all agree that the rhythm works better if it's chugga chugga chugga chugga choo choo.
Kevin only does one chugga chugga.
I don't know why or how he came to that decision, but if somebody could get this message to Kevin and see if he might consider adding one chugga chugga to his chugga chugga, that would be great.
Love you guys.
Love the show.
Thanks a lot.
Bye-bye.
Well, what do you say, Kevin?
So I remain convinced that I'm doing it right.
I'm a big believer in the book The Elements of Style by Strunk and White, which has the sort of classic writing advice to omit needless words.
Less is more.
Less is more.
And so I think that if the gist of a train sound could be conveyed with one chugga chugga, that we shouldn't just add another one for added realism.
But what do you think?
Am I right or is Ben right?
No, I agree with you.
And I would even go a step further and note that Ben says he's from the Twin Cities of Minneapolis and St. Paul.
Well, guess what?
You can only be from one city.
So I have to call you out, Ben.
Why don't you get your facts straight about yourself before you come for other people?
Yeah, Ben.
Yeah.
Chugga chugga.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited this week by John Boop.
We're fact-checked by Caitlin Love.
Today's show was engineered by Katie McMurrin.
Our executive producer is Jen Poyant.
Original music by Elisheba Ittoop, Marion Lozano, Rowen Niemisto, and Dan Powell.
Video production by Soya Roque, Pat Gunther, Jake Nickel, and Chris Schott.
You can watch this whole episode on YouTube at youtube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us at hardfork at nytimes.com with your dirtiest country song.
And now, a next level moment from AT&T Business.
Say you've sent out a gigantic shipment of pillows, and they need to be there in time for International Sleep Day.
You've got AT&T 5G, so you're fully confident, but the vendor isn't responding.
And International Sleep Day is tomorrow.
Luckily, AT&T 5G lets you deal with any issues with ease, so the pillows will get delivered and everyone can sleep soundly, especially you.
AT&T 5G requires a compatible plan and device.
Coverage not available everywhere.
Learn more at att.com slash 5G network.