California Regulates A.I. Companions + OpenAI Investigates Its Critics + The Hard Fork Review of Slop
Listen and follow along
Transcript
300 sensors, over a million data points per second.
How does F1 update their fans with every stat in real time?
AWS is how.
From fastest laps to strategy calls, AWS puts fans in the pit.
It's not just racing, it's data-driven innovation at 200 miles per hour.
AWS is how leading businesses power next-level innovation.
Casey, I heard the big news this week in tech is that Waymo is going to London.
You know, I saw that and I thought British people are going to hate it.
I had a different question, which was: how are they going to teach it to drive on the other side of the road?
That's a very good question.
Just switch the software.
Go on the other side now.
Everything you used to do, do it in reverse.
That's like the autonomous vehicle equivalent of dark mode, having to drive on the other side of the road.
You know?
It's not available at launch, but eventually they bring it up.
Do you think they have to put the steering wheels that don't do anything on the other side of the car too?
Presumably.
I'm Kevin Roose, a tech columnist for the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the first state law to regulate AI companions is here.
Will it be enough?
Then, OpenAI is waging legal battles against its critics, and ENCODE lawyer Nathan Calvin joins us to explain why the company served him with a subpoena.
And finally, it's time for the first-ever Hard Fork Review of Slop.
Grab your opera glasses, Kevin.
Well, Casey, it's been a big week for tech regulation in the state of California.
That's right, Kevin.
Everywhere I look, it's bills, bills, bills.
I'm like, what is this?
A Destiny's Child song?
Very topical 90s reference.
Listen, a lot of our listeners are seniors, and they're going to really appreciate that one.
So on Monday of this week, Governor Gavin Newsom of California signed into law a bunch of new tech-related bills that had been making their way through the state legislature in California.
And we're going to talk about them today.
And if you're not a listener of ours who lives in the state of California, you may be asking, why are you devoting an entire segment to tech regulation in California?
And Casey, what is our response to that?
Well, Kevin, I think you and I both believe that while AI has the potential to do some good, it's also clearly causing some harm.
And right now, the AI companies are operating with very minimal regulations on what they do.
And that's just been a growing source of concern.
We have talked over the past year on this show about teenagers who have died by suicide after having very tragic interactions with chatbots.
And I think there has been a growing cry for some kind of guardrails to be placed around these companies.
So that is what we are talking about today: a state that had some ideas, actually managed to pass the laws, is putting them into practice, and will hopefully rein some of these companies in.
Yeah, California is a uniquely important state in tech regulation for a couple of reasons.
One of them is a lot of the companies are based here.
They care a lot about how California regulates them.
And the laws that are passed in California tend to sort of ripple out to the rest of the country and the rest of the world.
They tend to become kind of de facto national standards.
And that's especially true at this moment, when our federal government is shut down and, even when it's operating, doesn't seem interested in passing any tech regulations.
This is what we have: state-level regulation standing in for the federal regulation that doesn't exist.
Yeah.
So today, let's talk about some of these bills that got passed and what we think they tell us about what some common sense approaches to regulating AI might look like.
Okay, so let's start with what I think may be the most important bill that has come out of this flurry of legislation, which is SB 243.
Casey, what is SB 243 and what does it do?
What SB 243 does is it requires developers to identify and address situations where users are expressing thoughts of self-harm.
So they have to have a protocol for what they're going to do if they see somebody express these thoughts.
They have to share that protocol with California's Department of Public Health, and they have to share statistics about how often they are directing their users to resources.
And then starting in 2027, Kevin, the Department of Public Health has to publish this data.
So that's a little bit longer than I would like to wait to start getting this data.
But my hope is that when that begins, we will have a very large and useful set of public health data about the actual effects of chatbots on the population of California.
So if you're somebody like me, who's really interested slash worried about what it is going to do to our society and our culture once so many people are chatting with these bots every day, this is a really big step toward understanding that.
Yeah, I think this is a good one for us to drill down on because it is a place where I think there is sort of a lot of attention and momentum around regulating.
You know, OpenAI has recently rolled out some parental controls.
Character AI, which we've also talked about on the show, now has a disclaimer on its chatbots and some additional guardrails for minors.
So I think the platforms were starting to kind of comply with these kinds of laws in advance of them actually becoming laws, but this will at least give them some formal requirements.
Yeah, and we should mention a couple more of those requirements.
In California, chatbots will now have to tell you that their output is AI generated.
Of course, you know, our savvy listeners probably already know that, but there may be some people who are chatting with ChatGPT and aren't entirely sure what's going on.
This bill does have a few additional protections for minors, including that chatbots cannot produce sexually explicit images for them.
And it's going to remind minors to take breaks if they have been chatting with ChatGPT for a really long time.
So interestingly, there was another bill that California legislators passed, which would have, I think, potentially banned ChatGPT use for minors.
And Gavin Newsom vetoed that.
He was like, that's going too far.
But this bill is kind of like one step back from that.
And I do think it adds some meaningful protections.
And no longer do we have to rely on the goodwill of an OpenAI or a Character AI to implement these things.
Now it's just in the law and it says you actually have to do this.
Now, does this law apply to all of the AI platforms or just the like really big ones with hundreds of millions of users?
So according to a legislative analysis, it will apply to basically any chatbot that can be used as a companion.
And initially, I didn't know, like, well, would that include ChatGPT?
Most people, I think, don't really think of ChatGPT as a companion.
But according to this legislative analysis, yes. And, you know, look, if you're talking to it for three hours a day, it's some kind of a companion to you.
Yeah, I think this is a case where the industry kind of understood that something was going to be done about chatbot companions in the arena of state regulation.
And they had this other proposal that they thought was too strict and stringent.
And so they kind of accepted the lesser of two evils and begrudgingly got behind this bill that actually did end up being signed into law.
That's right.
And can I talk about why I think this is important, guys?
Okay.
So just on Tuesday, we get this really interesting tweet from OpenAI CEO Sam Altman.
Okay.
This tweet got a lot of attention this week because it says at the end that in December, they're going to allow what they call verified adults to start using ChatGPT to generate erotica.
Let's set that aside for a second.
Here's what Sam says at the beginning of this long tweet.
He said, We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.
We realized this made it less useful/enjoyable to many users who had no mental health problems.
But given the seriousness of the issue, we wanted to get this right.
Now that we've been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.
And then he says that what people liked about GPT-4o, which was the model that they got in trouble over because it was so sycophantic and it encouraged people who were telling it things like, I'm not taking my medication anymore, or, I think I'm God. He said they're going to bring whatever people liked about that model back to ChatGPT.
How does this connect to the California bill?
I'm not sure that OpenAI has mitigated the serious mental health issues that came up with this.
It has been two weeks since they rolled out parental controls in ChatGPT.
Do we really have enough data to say this one is under control?
I have to say, Kevin, I was actually pretty shocked by this tweet, not for the erotica stuff, but for the GPT-4o stuff and the suggestion that we have a handle on how to guide these chatbots so that they don't hurt people.
What did you think when you saw this?
I thought they are really trying to drive up usage numbers.
They must be seeing something that is suggesting to them that people were engaging more with ChatGPT when it was more like a companion, when it was telling people more flattering and sycophantic things.
And I suspect that that is part of the reason that they are trying to sort of get that mode back.
Now, I think there's actual logic there. You know, a lot of users do want something that's going to tell them they're great. But we do not know how they have solved these safety challenges that they have supposedly solved, or what these mitigations for mental health issues are.
It's also not clear to me that things are as simple and binary as: some users have mental health problems and some don't. And, like, for the ones that don't, we're going to give them the kind of unhinged, unfiltered chatbot experience. And for the ones who do, we'll guide them onto a set of guardrails.
Like these things exist on a spectrum.
And it may not be obvious to OpenAI or even to the users themselves when people are starting to develop mental health issues as a result of talking to these chatbots.
I just, I find it so confusing because on one hand, you know, this is a company that will put up a blog post that says, we do not want to optimize for engagement.
We want you to use this thing, have it help you and let you get on with your life.
And then they release Sora, the infinite slop feed.
And then they say, we're bringing back the most sycophantic AI personality in the history of the company because we know that some of you out there might need a friend.
So it really feels like there are two wolves inside of OpenAI right now.
And I think it just makes necessary some common-sense legislation that starts to put some guardrails around this and signals to these companies that they cannot just do whatever they want.
Yeah, I mean, I guess we'll have to see how the AI companies comply with these.
I do not think that this law is going to dissuade them from trying to build the chatbot companions because that's obviously a lucrative industry for them.
But I hope it does make them pay more attention to things like safety and mental health for especially younger and more vulnerable users.
Yeah.
Now, at this point, our listeners are thinking, Kevin, you told us that there were actually a lot of bills that California passed, and I'm desperate to know what they are.
Well, Casey, buckle up.
I'm going to run through a few of the other bills here.
We won't talk about all of them in that much detail, but we have AB 621, which provides stronger protections against deepfake porn.
This law will make it possible for victims of non-consensual deepfake porn to sue platforms that facilitate the creation of that porn for up to $250,000 per violation.
Yeah.
And this is really important because a trend that we haven't talked much about on the show this year, Kevin, is that there are these really sketchy companies that make what are called Nudify apps.
They've been advertising themselves all over Facebook and Instagram somehow, and people have been using them to generate these deepfakes.
And now there is a law on the books that says, hey, we can actually come after the companies themselves.
So I think that's just obviously a good thing.
So next bill I want to talk about is AB 853.
This is the California AI Transparency Act.
And this caught my attention because it essentially requires that AI companies build into their systems tools to detect whether a given piece of content, an image, video, or audio clip, is in fact AI-generated.
Basically, they have to offer users a way to put in an image and say, hey, did you generate this image and get a reliable response back?
Yeah.
Here's what this means.
If you come to California, and you see a video of dogs playing poker, no longer will you have to wonder, are those dogs really playing poker?
There will be a watermark and you will get the answer to your question.
That's true.
About damn time.
Then there was AB 56, which was about warning labels for minor users of social media platforms about the potential mental health risks associated with using the apps.
Casey, this one's pretty wild.
Yeah, it's certainly very intrusive.
Like the law dictates how much of the screen this warning has to cover, how often it has to appear,
which is basically right when the person starts using the app.
And then again, after three hours, which is like sort of funny to me.
It's like, you can have your three hours, but then we're going to, you know, remind you that you might be cooking.
Well, we should say, like, it's not going to be a small little thing on the screen.
The law says that after three hours of use, platforms must display a 30-second non-bypassable warning covering at least 75% of the screen.
If you are a 16-year-old and you are scrolling TikTok or Instagram for more than three hours, you are going to get a giant cigarette-style warning essentially on your screen that you cannot skip or dismiss for 30 seconds.
And that is going to happen again after every additional hour after that.
So I predict there will be a lot of teens who are finding clever ways around this because teens do not like to wait for their TikToks.
I'm just curious, like, will teenagers see this and think, oh my God, I have to get off TikTok?
Or will they think, damn, I am such a badass for using this crazy, dangerous app?
Because I can see it going that way too.
Yeah.
What a rebel.
Meet me behind the school.
We're watching TikToks.
You know, semi-related, there was a study that came out this week.
It was a pretty big study.
It was of 6,000 children who were under 13.
Did you see this?
No.
What is it?
So they tracked their use of social media and they found that the more time per day kids spent on social media, the more that was associated with being bad at reading.
And so my question is, should this warning say, hey, you know, kids, be careful, this app could be bad for your mental health.
Or should it say, you are actively becoming worse at reading than everyone in your class?
I actually think that might be more effective.
Yes.
Actually, in order to bypass the warning label, you should have to do like a reading comprehension quiz based on a short story by Ernest Hemingway.
Just say, like, no more TikToks for you.
Yes.
Until you can tell me what The Old Man and the Sea was about.
I'm still trying to find that one out.
All right.
Next bill.
This one is one I wanted to get your take on.
This is AB 1043.
This is about age verification, a subject we have talked about on this show before.
This bill would require that Apple and Google, which make the two most popular mobile operating systems, verify users' ages in their app stores.
Casey, explain what this bill does and whether it's a big deal or not.
Yeah, so there are a bunch of different approaches to what they call age assurance in the business.
And the reason that this one is notable to me is that California actually took the approach that I favor, which is that when someone is setting up a device for their child, the parent inputs the age of the child, and that information is then passed along to the app store and to the developers.
And the thing that is great about that is that it seems like the most privacy protecting of all of the age assurance protocols that we've seen, right?
Other states, they want you to potentially like upload a driver's license, right?
You're providing a lot of really personal data.
Some of that is being held by third parties.
All that stuff is subject to, you know, data breaches and who knows what else.
In California, it's just like, hey, you're the parent.
You're the guardian.
You tell us how old your kid is and we will make sure that they don't download an app that they're not supposed to have.
Right.
So instead of what's happening today, which is that every app asks you to sort of say how old you are when you sign up and create an account, and it just kind of works on the honor system.
This would essentially force Apple and Google to handle it. When you're getting a new iPhone or a new Android phone and your parents are helping you set it up, they put in your birthday. And as a 16-year-old, it shows, okay, I am a minor, I am 16 years old. And then your phone will pass that information to every app that is installed on that phone.
Is that more or less correct?
Exactly.
Now, you said you favored this solution.
Are you taking credit for this bill?
No, I'm not taking credit. It also wasn't my idea.
Like other smart people, you know, have been talking about this for a while, but I've written about it in the past.
And this is what I said I thought we should see happen.
And, you know, every once in a while in a democracy, you get to see something you actually want happen.
And it's a lovely thing when that happens.
Every once in a while.
Once in a while, you have a good idea.
Yeah, cherish it.
Cherish the moment.
All right.
One more bill we should talk about because this is the one that has actually gotten most of the attention and a lot of the lobbying dollars.
This is SB 53.
This was actually signed into law last month.
This is the Transparency in Frontier Artificial Intelligence Act.
This is the sort of successor bill to SB 1047, which we've talked about in the show before.
That bill was vetoed by Governor Newsom last year.
This new bill is essentially a watered-down version of that bill.
It establishes some basic transparency requirements for the biggest AI companies, what they call large frontier developers.
It requires them to publish information about their safety standards and create a new mechanism to report potential critical safety incidents to the California state government.
It also establishes some whistleblower protections for people inside the companies who may want to disclose some significant risks posed by their models.
And this one did pass and was signed into law.
Casey, do you think this is a big deal?
I think it's great that we have some transparency requirements.
I think it's great that we have some whistleblower protections.
When I think about the things regarding AI development that concern me the most, this bill does not speak to them.
But I feel like the main reaction that I've read to this bill is a bunch of people saying, yeah, this couldn't hurt.
You know, like, that's kind of how this feels.
It's like, yeah, it's fine.
Right.
It feels pretty toothless to me.
And it also is basically codifying something that a lot of the companies are already doing anyway.
I think of the large frontier developers, all or nearly all of them already publish things that would put them into compliance with this law.
So I like the idea of not just relying on voluntary self-regulation, but this seems like a pretty weak bill, you know, weak enough that most of the AI industry didn't feel like it was worth opposing.
There were industry groups that lobbied against it, but I think for the most part, they said, okay, well, this is better than the one that we tried to kill last time.
Yeah.
Okay, so that is a bunch of information about these California state AI laws and social media laws.
When we kind of step back and zoom out here, does this give you any thoughts about how AI regulation and tech regulation in general is going?
I think in some ways it's going better than I expected, Kevin.
You know, we covered the past decade of lawmakers twiddling their thumbs, wondering how social media ought to be regulated.
It took too long.
Some of those efforts have finally gotten off the ground at the state level, but after a lot of harm was done.
In the case of AI, we are earlier in that kind of epoch of tech, but already we've seen California and other states make some pretty decisive moves to build some guardrails and create some transparency requirements.
And I think that's a really good thing.
We're going to have to see how effective these things are.
But I just want to say we need something like this.
It is important that this week OpenAI came out and said, despite everything that has happened this year with their chatbots and mental health, they are going to hit the accelerator on making them more personable, more sexual, and more powerful.
That will continue to have reverberations, and we need state lawmakers paying attention to that.
We need federal lawmakers paying attention to it.
Be realistic.
I can't talk to you when you're being hysterical.
Like I, what this makes me feel like is, God, I wish we had a Congress that could do something about this.
Like, I really, I am sympathetic to the AI companies on this one point.
I do not think that state level regulation is the best way to do this.
I do not think it is good or efficient to have 50 individual states all kind of coming up with their own bills and trying to pass them and then have the AI companies have to like look at all the 50 states and decide how they're going to build systems that comply with all of those.
Like that does not feel like a good solution to me.
But for that to not be the default path here, we are actually going to need Congress to step in and do something at the federal level.
And right now our government is shut down, so I don't have high hopes.
But I think that in the absence of Congress getting its act together and deciding to do something federally, what we're going to end up with is a bunch of states doing what California has done here and just trying their best to get some rules on the books while they can.
Yeah, I agree with that.
I would add that Senator Josh Hawley is currently circulating a draft bill that would ban AI companions for minors.
Who knows how far that will make it?
But I do think that there are a significant number of members of Congress who would like to see something like this happen.
The question, of course, as ever, is whether they can get something across the finish line.
Yeah.
All right, Casey, that is what's happening in California.
When we come back, we'll talk about how this legislative fight got personal for one AI lawyer.
Millions of players.
One world.
No lag.
How's it done?
AWS is how.
Epic Games turned to AWS to scale to more than 100 million Fortnite players worldwide, so they can stay locked in with battle-tested reliability.
AWS is how leading businesses power next-level innovation.
This podcast is supported by Bank of America Private Bank.
Your ambition leaves an impression.
What you do next can leave a legacy.
At Bank of America Private Bank, our wealth and business strategies can help take your ambition to the next level.
Whatever your passion, unlock more powerful possibilities at privatebank.bankofamerica.com.
What would you like the power to do?
Bank of America, official bank of the FIFA World Cup 2026.
Bank of America Private Bank is a division of Bank of America, N.A., Member FDIC, and a wholly owned subsidiary of Bank of America Corporation.
The University of Michigan was made for moments like this.
When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.
From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible.
Wherever we go, progress follows.
For answers, for action, for all of us, look to Michigan.
See more solutions at umich.edu/look.
Well, Casey, there's another big story involving the law and AI this week that we wanted to chat about.
And it involves this behind-the-scenes beef that has been going on between OpenAI and some of its biggest critics.
Yeah, so there are really two big legal battles that this story is at the intersection of.
One is the battle over OpenAI trying to convert itself into a for-profit enterprise.
Right now, OpenAI is famously a non-profit.
This has created many issues for the company over the past several years.
They want to be sort of a more normal money-making enterprise.
And this is opposed by lots of people.
Some of the people that oppose it are OpenAI's direct competitors, including Elon Musk and Mark Zuckerberg.
And OpenAI has been pretty aggressive in going after groups that they believe might be connected to those two.
The second battle is about SB 53, the bill that we just talked about.
It was just signed into law by California Governor Gavin Newsom, and it establishes some basic transparency requirements and whistleblower protections for people who work at AI labs.
There were a lot of groups that lobbied both for and against this one.
ENCODE was one of the groups that lobbied for it.
And so those are the kind of two big legal battles that were happening next to each other.
But today's story, Kevin, takes place right in between both of them.
Yes.
So our guest today, Nathan Calvin, is the vice president of state affairs and general counsel at ENCODE.
They are a small AI policy nonprofit.
They were started several years ago by a high school student.
Fun fact.
What were you doing in high school?
I wasn't starting AI safety nonprofits.
Debate team, crushing it.
Anyway.
Anyway, they have become one of these groups that is submitting briefs and lobbying lawmakers on a lot of these AI-related bills and efforts.
They have also been very vocally opposed to the restructuring of OpenAI as a for-profit.
And what seemed to happen here in Nathan's telling was that one night as this legislative process was ongoing, a sheriff's deputy showed up at his house and delivered a subpoena from OpenAI demanding that he produce all kinds of personal communications, including anything related to not just the restructuring, but SB 53, this bill that they had been advocating on behalf of.
Yeah.
So this surprised folks because they do still identify as this kind of mission-driven company that's trying to create AI to benefit all of humanity.
I think it's generally understood that during these legal battles, there are going to be people who lobby for and against, and that's just part of the process.
But now one of those people who was doing the lobbying on behalf of his nonprofit finds himself with a legal battle of his own.
And that got a lot of folks talking, including some people who worked at OpenAI who criticized their own employer for its behavior.
So that seemed like something that it'd be worth understanding more about, Kevin.
Yes, this has been a subject of hot debate and conversation within OpenAI, as well as around the broader AI industry.
And we wanted to talk to Nathan about his experience.
But before we do that, since this is, after all, a story involving AI and legal battles, I should note that my employer, the New York Times, is engaged in its own legal battle.
They are suing OpenAI and Microsoft over alleged copyright violations.
And my boyfriend works at Anthropic, but so far we've managed to avoid any legal battles.
So counting our blessings on that one.
Well, check the mail when you.
Oh, no.
All right.
Let's bring in Nathan Calvin.
Nathan Calvin, welcome to Hard Fork.
Pleasure to be here.
You know, many-time listener, first-time caller.
I think I said that wrong, but anyway, very glad to be here.
So, just to set the scene for our listeners, you're in Washington, D.C.
It's a Tuesday night.
I'm assuming it was, you know, a normal weekday.
You and your wife were sitting down to dinner, and then you got a knock on your door.
Tell the story from there.
So, when I opened the door, there was a sheriff's deputy who was there to serve me a subpoena from OpenAI asking for different communications and documents related to a piece of AI safety legislation I was working on, as well as about our criticism of OpenAI's restructuring to a for-profit.
One thing I will say, just in terms of the timeline, because it's come up in some of the back and forth, is that on the Saturday before, I had gotten a call while I was visiting my mom and nephew saying that someone was trying to get into my apartment to serve me papers.
And I said, I'm not there right now.
Anyway, they finally did come on Tuesday.
And so when Jason Kwan, the chief strategy officer at OpenAI, said something in his comment about how, you know, I should have known this was coming: I did know they were trying to serve me, but I didn't know about any of the details.
And I didn't know they would be coming that exact night.
Now, I want to get to all of that, but first, you know, I have not been served with a subpoena.
Casey's been arrested many times, so he's familiar with how these...
But never convicted.
He's familiar with how these things go down.
But are they literally handing you a packet of paper like in the movies?
Or what does it look like to be served with a subpoena from OpenAI?
Yeah.
It is just a stack of papers.
Again, I am a lawyer, but I did not know that sheriff's deputies are the ones who, at least some of the time, serve subpoenas in D.C.
I later learned that that's not incredibly unusual, but it certainly was, you know, surprising from my perspective.
To be clear, the guy was perfectly nice. And to some degree, after I had heard on Saturday that someone was trying to serve me papers, by the time it actually happened and they were at my door, there was a little bit of, okay, now I can figure out what is actually happening. Honestly, the days between hearing that it was coming and it actually happening were some of the most stressful. Once it arrived, it was like, okay, now I can at least figure out what we're dealing with and how to respond.
I mean, when you got into AI advocacy, was this on your radar as something that was likely to happen, that people would be saying, okay, you've got to show us all the emails you've been sending about this?
No, I mean, I don't know. My mom worked for the American Academy of Pediatrics for 25 years and was involved in litigation against tobacco companies. They came and took all of her papers out of her office at some point, and she had told me, you know, never write any emails you're not comfortable with having read back to you later.
Oh, wow.
So, you know, I'm actually way better prepared for this than the average person.
Yeah.
I think that's fair.
Yes.
And indeed, indeed.
Did you understand immediately why OpenAI was subpoenaing you?
What was your sort of initial response when you actually started reading these papers and understanding what they were after?
Yeah.
I mean, in some ways, there had been a little bit of a preceding escalation before I received the subpoena. You know, I had some sense that we were doing lots of advocacy and public communications and writing things to the attorneys general about this issue, and I was getting some sense that this was getting on their nerves.
You know, I will say that, like, when they asked, there's part of me that's still thinking, like, okay, maybe this is just a good faith question.
And they're trying to figure out, like, maybe, you know, we are secretly funded and controlled by Musk or Meta or something.
When I was reading through the subpoena, though, and I got to the part where it said all of your communications about SB 53, a bill we were working on, then I started to think, this doesn't really feel like they are just asking good-faith questions.
Again, I don't know for a fact what's in their heads, and I can't say it. But my impression of it was not that.
What you're saying is, it would sort of make sense to me if, for whatever reason, they were serving me a subpoena and saying, hey, are you funded by Elon Musk?
Is that why you're trying to block our for-profit conversion?
But when they came to you and said, give us all the emails you've been sending about this bill that you're working on, that just kind of felt sort of out of scope.
Yes, it did.
And one other thing I will add as well is, again, I was expecting that maybe a subpoena would come. But when I had talked to other orgs I was aware of that had been subpoenaed, it had been to their organization, just to their Delaware registered agent. You just get an email that, you know, your Delaware registered agent got a subpoena. It wasn't people coming to their, you know, fifth-story apartment building at 7 p.m. or whatever.
And so that was another aspect that did just feel kind of eyebrow raising.
And it just really does leave a bad taste in my mouth.
Right.
I mean, there's one explanation that is like the sort of uncharitable explanation, which is that OpenAI is trying to sort of bully and intimidate any sort of nonprofits that are critical of its restructuring plan.
There's another explanation, which I want to get your take on, which is that these are fair questions to ask.
We don't have a lot of transparency.
We have a lot of dark money sort of flooding into fights about tech regulation these days, and it's worth asking questions about who is behind those efforts.
And I guess we should just sort of dispense with the central claim here.
Nathan, let me just ask you straight up, are you or ENCODE working with or being funded by either Elon Musk or Mark Zuckerberg themselves or people or entities associated with them or their companies?
So we are not funded by Elon Musk or Mark Zuckerberg.
If you go on our website, it says that we have received funding from the Future of Life Institute, which was one that was mentioned in their subpoena.
Future of Life Institute several years ago got a donation from Musk, but they are not Musk.
And we said this in our communications back with them.
Like, I have never talked to Musk.
Like, Musk is not directing our activities.
It's false.
I don't know. We submitted a filing asking the FTC to open an investigation into xAI and spicy Grok and their things. And I will happily say on air that I think xAI's safety practices are in many cases far, far worse than OpenAI's. So, again, that central claim is just false.
What about Mark Zuckerberg?
Any relationship with him or Meta?
None.
Zero.
And again, like, I think our partners who work on these issues know us: we are an organization that focuses on AI safety and kids' safety issues.
Like, we are just constantly at war with Meta.
The idea that Meta is backing us is just, it feels, again, I realize not everyone has the context and knows who we are, but it's just like completely laughable.
You do have a list of donors or funders on ENCODE's website.
You say this is, we're generously supported by, and then you list a bunch of organizations, including the Omidyar Network; the Archewell Foundation, which is Harry and Meghan's foundation; and the Survival and Flourishing Fund, which is a kind of effective altruism-linked philanthropy funded primarily by Jaan Tallinn.
So you do provide some transparency about who your funders are.
Why do you think that wasn't enough for OpenAI?
Why do you think they still had questions about Elon Musk or Mark Zuckerberg?
I think to some degree, you'll have to ask them.
I'm not sure.
I mean, again, there's also one thing to say here, which is that there is no general right for them to know about all of our funders. And again, the subpoena did not ask about the Omidyar Network, because the Omidyar Network is not relevant to their litigation in any way.
Like the role of a subpoena is to get relevant information for the litigation you are engaged in, not to just like ask whatever questions you would like the answers to from other private organizations.
Like, you know, we would love to send a subpoena to OpenAI and be like, tell us all the details of what you're planning to do in the restructuring. And like, are you going to disempower the nonprofit in the ways people are worried about, or whatever?
But like, we don't have a right to do that.
Like, that's not a question we can just ask them, even though we might like to.
And so, what we did is, you know, we put out like a public letter asking them a bunch of questions.
Like, OpenAI can go to the press and say, you know, we want transparency about these things.
Again, they do have the right to ask us about Elon because they are in litigation about this.
And again, if they had just reached out to us at our corporate address and said, are you funded or directed by Elon, and we explained no and proved to them no, and then they moved on, like, I would understand that.
And I think that is a fair thing: Elon is attacking them and trying to destroy them, and they want to make sure that there aren't efforts that are being covertly supported and directed by him.
But I just can't emphasize enough how far away what actually happened was from that narrow question that they were entitled to ask.
So, I mean, as you reflect on this experience, do you feel like this was intimidation?
Do you think that OpenAI is trying to penalize organizations for speaking up either against the for-profit conversion or for AI regulation?
Yeah, I mean, to some extent, it's a question of intent, but...
And I don't know what's inside their heads.
And so I want to be careful about that.
But I believe that that is what they were doing.
That is my best guess.
And that was how I received it.
And I would like there to be another explanation for this. You know, I thought it was possible when I put this out that maybe they would say, hey, this was a misstep.
Our lawyers went a bit far.
We didn't really actually mean to add the thing about 53.
Like, that's not what they said.
They like doubled down and said that, you know, we think we are entitled to this.
And I think that that just is very important to note.
And I will just say another thing that I don't think we've mentioned, which is that even some folks within OpenAI, for instance Joshua Achiam, who was speaking in his personal capacity, put out a fairly long thread talking about the fact that what I was describing in my thread, you know, doesn't look great.
Yeah, but that was the unofficial response from someone at the company who was sort of breaking from the company itself.
We've also seen Jason Kwan, as you mentioned, the chief strategy officer at OpenAI.
He wrote a lengthy thread arguing that you and ENCODE were sort of only giving part of the picture, that ENCODE doesn't disclose their funding, and that this is not about SB 53.
Jason said, quote, we did not oppose SB 53.
And they said that basically this was sort of a tempest in a teapot.
There was also a quote that a lawyer for OpenAI, Ann O'Leary, gave to the SF Standard saying, we welcome legitimate debate about AI policy, but it is essential to understand when nonprofit advocacy is simply a front for a competitive commercial interest.
What do you make of the official OpenAI response to what you claim?
So one thing is, you know, I think Jason focuses on the fact that we became involved with the lawsuit between Elon and OpenAI by filing an amicus brief arguing that it was in the public interest for OpenAI to remain a nonprofit.
Geoffrey Hinton also made some positive comments about our amicus and showed support for our arguments.
He's also someone who, by the way, has called for Elon Musk to lose his fellowship in the Royal Society and is really not a fan of Musk.
If you want another example that not everyone who is critical of OpenAI's restructuring is a Musk fan.
Yeah. Also, on the point that they did not oppose SB 53: it is true that they never put out something saying they formally opposed it. But their global affairs head, Chris Lehane, did send a letter to Governor Newsom, at a time when SB 53 was in pretty heated discussion, saying that he believes the correct path for California is to have an exemption from its AI frameworks for any company that signs on to an agreement with the federal government for testing, or that says it will be adhering to the EU AI code of practice, which in practice means a complete exemption from the California law.
So, I mean, you can say that advocating for you and a bunch of your fellow companies to be completely exempted is not the same as opposing it.
You know, you can ask a linguist whether that's fair, but I think it still is important context that he did not discuss.
What now?
Are you going to send OpenAI the information that they're asking for? Are you planning any more transparency around your funding or your advocacy efforts? What's the next shoe to drop here?
So we sent them our objections and responses, where we laid out, for the areas that were relevant, like, for instance, our communications with or funding received from Elon, that those didn't exist, and where we said that the other pieces of information were not relevant.
They never responded to that.
They could have filed a motion to compel, saying to the judge that we have to turn them over, but they didn't do that.
My view, again, I don't know this for sure, is that they didn't do that because they realized a judge would not grant that motion, because the materials were not, in fact, relevant.
I think there are fair discussions about transparency.
I mean, I think it's fair if some of our donors want to be private.
And when you're donating to C4s, you have the right to give money privately.
We have listed on our site a lot of our donors.
And I think you get a clear impression of the different types of motivations that people have who are funding us. But I think this kind of larger discussion about what the appropriate transparency is for folks involved in the advocacy process is very different. I don't think that's what OpenAI cares about here or why they're asking about this. And even in the subpoena, which was an overreach in many ways, they don't talk about, you know, the Omidyar Network, which, again, is listed on our website as a funder; we're not hiding that fact. Because it's not relevant to their litigation with Musk.
But you said there are donors that you don't list on your website who want to remain private.
Would you like to tell us who they are or how much they're giving you?
Not here.
Okay.
Just checking.
I have to ask.
Fair, fair.
I mean, I will say this about the list: they're not Musk or Zuckerberg.
Yeah, they're not Musk or Zuckerberg.
We don't take money from Frontier AI companies.
Yeah, I don't know.
I will say that.
Yeah, and I think it's reasonable to advocate that all of these groups should be required to disclose much more about who funds them.
But I think that should apply equally to organizations that are pushing for the other side of things here.
I think that's fair.
Yeah.
I think that's a fair discussion to have.
I'm just not sure OpenAI is like the one to make that argument.
So as you look back on this episode, how has it changed the way that you think about OpenAI?
I genuinely have a lot of positive feelings about OpenAI and think that they do many things genuinely better than their peers, like Meta or xAI, for instance.
And I think that, for instance, some of their safety research and system cards are things that they have even improved on in recent months and have done a genuinely good job of.
And I think that there is some of a feeling among some people at OpenAI that they get disproportionate criticism relative to their peers.
And I think that there is some truth in that.
One thing I'll say is, I don't know, if one of their peers had been the one to show up at my house and serve me a subpoena, I would have spoken out about that too. But it was OpenAI that was the one that did it.
And also I think there's some aspect that OpenAI is a non-profit and they are a non-profit that has a mission to ensure that AGI benefits all of humanity.
And, you know, they are in the process of trying to weaken and get around that legal mission and be able to consider profit more in their decisions. And I think this episode, and also things like the discussion about whether to allow not-safe-for-work porn or whatever on ChatGPT, the way they released Sora 2, their kid-safety practices, all sorts of these other things, show that they are not a normal for-profit company.
They are at least for now a non-profit that is dedicated to this mission above profit.
And I do think that means that they should be held to a higher standard.
Yeah, I mean, I'll just say, like, it's not like Elon Musk is the only person who opposes this restructuring plan.
Like, the whole AI safety, you know, community has been up in arms about this for years now.
It's very unpopular.
Yes.
Yeah.
I am just curious what you make of the difference between Joshua's statement and Jason's statement, and some of this continued evolution and pressure you have as OpenAI transitions from more of a research organization focused on some of these loftier ideals to trying to move to the next stage of what it wants to do.
I mean, I think it just speaks to a very real tension within the company, which is that there are a lot of people there who believe in the stated mission, who want to create this very beneficial AI.
And then you also have a lot of people who come from other giant tech companies who see this primarily as a competition about winning and being first and making the most money.
And people who come from those kind of companies are not above, you know, waging lawfare to get what they want.
So I'll be curious to see kind of how that shakes out in the coming months.
It does seem like it's that second group, the kind of big company group that is currently steering the company.
And I wonder if that's going to continue.
But I will say, in addition to that, I think that's right.
Your story, Nathan, has caused more consternation and soul-searching among people at OpenAI than I think anything since the Daniel Kokotajlo story about these non-disparagement agreements that they were forcing people to sign, or else they would claw back their vested equity in the company.
That was a big deal to people at OpenAI.
And this is a big deal to people at OpenAI.
I have been talking to people.
It's not just Josh who is saying this stuff.
I think there's a lot of soul searching going on inside the company about this question of are we still the good guys?
Are we transitioning to something we no longer support?
And so I think there's going to be some internal qualms about this and probably other stories to come.
But most of them probably won't break out into the open the way this has.
Nathan, thank you so much for coming on and explaining all this to us.
Thanks, Nathan.
Thank you.
Just wanted to note that we reached out to OpenAI after this interview asking about this question of intimidation, and they responded with a statement from Jason Kwan reiterating, quote: Elon has opposed our restructure for obvious competitive reasons, and ENCODE joined in.
Organizations that suddenly emerge or shift priorities to join Elon raise legitimate questions about coordination and funding, which the subpoena seeks to clarify.
Our questions have still not been answered, and we still don't transparently know who is funding these organizations.
When we come back, an old woman falls off a very high shelf.
Is it real or is it fake?
No, it's the Hard Fork Review of Slop.
300 sensors.
Over a million data points per second.
How does F1 update their fans with every stat in real time?
AWS is how.
From fastest laps to strategy calls, AWS puts fans in the pit.
It's not just racing, it's data-driven innovation at 200 miles per hour.
AWS is how leading businesses power next-level innovation.
This podcast is supported by Bank of America Private Bank.
Your ambition leaves an impression.
What you do next can leave a legacy.
At Bank of America Private Bank, our wealth and business strategies can help take your ambition to the next level.
Whatever your passion, unlock more powerful possibilities at privatebank.bankofamerica.com.
What would you like the power to do?
Bank of America, official bank of the FIFA World Cup 2026.
Bank of America Private Bank is a division of Bank of America, N.A., Member FDIC, and a wholly owned subsidiary of Bank of America Corporation.
Know the feeling when AI turns from tool to teammate?
If you're Rovo, you know.
With Rovo, you can streamline your workflow and power up your team's productivity.
Find what you need in a snap with Rovo Search.
Connect Rovo to your favorite SaaS apps to get the personalized context you need.
And Rovo is already built into Jira and Confluence.
Discover Rovo by Atlassian and streamline your workflow with AI-powered search, chat, and agents.
Get started with Rovo, your new AI teammate, at rovo.com.
Well, Casey, over the last few weeks on our show, we've been talking a lot about slop.
We have, and it seems like the more we talk about it, the more of it appears all over the internet.
Yes, it is taking over the internet.
And for that reason, we thought we should introduce a new segment that we are calling the hard fork review of Slop.
The Hard Fork Review of Slop.
Oh my God, that's so perfect.
That's beautiful.
You know, this, I would say, is generally a STEM podcast.
We care a lot about science and technology and engineering, not as much math, but we also care about the arts.
And so we thought, why don't we carve out some time on the show to talk about some of the new achievements in AI art that we're seeing out there on the internet, and also sort of bring our critical eye to them and, you know, put them in conversation with the culture.
Yes, we have critics out there for books and movies and music and video games.
And I think slop is like an emerging genre of cultural production, most of which is bad, but some of which may actually be good.
And so we need to stand here amid the floodgates, sort of filtering out the bad slop and letting the good slop get through.
Okay, I just want to signal up front.
I didn't actually bring anything good; I didn't know that was part of the assignment.
I have one good one, but we'll save it for the end.
All right, fair enough.
So, Casey, tell me about the slop that you have been looking at, and then I will tell you about some slop that I've found.
That's great.
Well, Kevin, maybe to just kind of warm us up, we can look at some of the slop that I think of as CoComelon for adults, just kind of pure visual stimulation, no ideas in it whatsoever.
And this kind of slop you can find on TikTok if you search for glass fruit cutting.
Have you seen any of the glass fruit cutting?
No.
Okay, let's see if we can queue one of these up.
Some of these are in the sort of like the ASMR realm.
Ooh.
This man is cutting into a coconut with a knife, but the coconut is glass.
Oh, the kiwi is glass.
Oh,
that's.
Oh, I don't like that sound.
That just has like nails on a chalkboard vibes for me.
This is what's replacing reading in our schools.
I mean, literally.
Now, what I liked about this one, Kevin, is it's glass pancakes with a beautiful maple syrup.
You really hate that sound, Kevin.
That's a jump scare sound for me.
Oh, we got a doughnut, a glass donut.
We'd love to see a glass doughnut.
Oh, just cutting through the glass bowl of cereal there.
The physics here are actually kind of impressive, right?
Like it is showing the reflections of the knife in the glass.
It looks vaguely realistic.
Yeah.
It's weird because, like, the food looks delicious and beautiful, but it's glass, so it's off-putting.
Yeah, like it just sort of doesn't make any sense.
And so it hypnotizes your brain into this sense of, I don't know what I'm watching.
I don't want to look away.
Yes.
And I'm going to stop thinking words.
Yes.
It's sort of like the spiritual successor to those like crush videos where they'll just have like the hydraulic press and they'll just like press down on like seven objects.
Yeah.
And now instead of just wasting those objects, we can waste water and electricity.
All right.
What do you have?
So I have an example from the news.
Actually, this one comes to us from DirecTV, which has just struck a partnership with an AI company called Glance that will allow people with DirecTV Gemini devices to put themselves inside of 30-second AI generated videos.
Basically, if you step away from your TV to go get a snack or go to the bathroom, you might come back and find that you are in the ad on the TV.
And Casey, let's watch an example of this.
So this kind of shows how it works.
You connect it to your TV, you put in your photo, tap a couple buttons, and it generates your looks.
This process is already so absurd.
And then boom, there you are in a blazer.
Now, Casey, I thought the point of advertisements was to show clothes on people who are more attractive than me
to entice me to buy them.
Why would I want to see clothes ads with me in them?
I can't answer that question, honestly.
This one is so funny to me because the process you have to go through to do this is so complicated. I basically cannot imagine a single person doing this.
Like the way that this works is that if you have one of these TVs and you leave it idle for 10 minutes, AI takes over, which is like, it's kind of like, you know, if the bus goes below 55 miles an hour, it explodes.
This is that, but for like AI advertising, and then after it shows you these images, then it's up to you to go scan a QR code and take a photo of yourself.
Like, no one who is watching TV wants to do any of this at all. So it's a very silly process.
And, you know, I mean, in the demonstration, the photos look fine.
They look fine.
They look fine.
Yeah.
But let me ask you this.
Do you not know what you look like with a jacket on?
You know what you look like with a jacket on.
Let me just say this.
What are we doing here?
That is my review of this.
What are we doing here?
We are selling advertising technology.
Okay.
So now I just want to show one that made me laugh.
I call this one Woman on the Walmart Shelf, if we want to cue this one up.
I saw this one on TikTok, although it does have the Sora watermark on it.
And I think this speaks to the ability of AI slop to just kind of create a classic pratfall physical-comedy situation.
This one involves what looks like store security cam footage of an older woman on a very high shelf inside a Walmart.
And there's a police officer who's looking up at her as our story begins.
Ma'am, please come down from there.
You want me to come down?
Yes, ma'am.
And she does kind of does a header off the shelf and crashes into the police officer.
So Kevin, what did that one make you feel?
There's a lot there.
There's a lot of layers to this onion.
Is this a one-off or is there a larger genre of older people falling off the top shelf at the grocery store onto a police officer?
It's a whole interconnected cinematic universe with sort of, you know, these very rich sort of characterizations.
The vocal performances are really amazing.
So I encourage you to get into it.
Beautiful stuff.
No, this is a one-off, Kevin.
I've never seen anything else related to it.
Yeah, I'm not worried that people are going to start throwing themselves off the shelves of grocery stores to sort of mimic the trend here.
This one feels pretty harmless to me.
And I appreciate inspiring older people to do things like climbing up to the top shelf of the grocery store.
I mean, look, these days, anytime I see a Sora video that isn't like misappropriating the likeness of Martin Luther King Jr., I say that's a win for Slop.
Yes, this one, I think, pretty harmless.
All right, what else you got?
Well, this next one, Casey, was not harmless because it involved America's queen, Dolly Parton.
Oh, no.
Leave her alone.
Basically, some sicko out there has been generating AI images of Dolly Parton looking very sick, including at least one image of Reba McEntire visiting Dolly Parton on her deathbed, which went around on the internet and led to a bunch of rumors that Dolly Parton, God forbid, was dying.
Oh no.
See, I hate this.
Yeah, I don't like this either.
Let's watch Reba's video summarizing the whole thing.
You tell them, Dolly, that AI mess has got us doing all kind of crazy things.
You're out there dying.
I'm out here having a baby.
Well, both of us know you're too young and I'm too old for any of that kind of nonsense.
But you better know I'm praying for you.
I love you with all my heart, and I can't wait to see you soon.
Love you.
Wait, just to be clear, what you showed me was real, not slop.
That is Reba McEntire's actual Instagram account.
That is Reba McEntire's actual Instagram account.
She does show some of the slop images, of Reba at Dolly's deathbed, inside the video.
And Dolly responded with another real video from her real social media account saying, quote, I ain't dead yet.
So, Casey, what do you make of this one?
I mean, this is so bad, you know?
So many of the fears around misinformation have been that there will come a time when you can't tell what is true and what is false.
And the better that image generation software gets, the more of these little viral hoaxes we're going to see going around.
So this is super bad.
I'm truly trying to understand: what kind of person do you have to be to say, today is the day that I create a rumor that Dolly Parton has died, and I'm going to use Sora to prove it?
Truly.
It is like mind-boggling to me.
If you wanted to turn the public against AI and against AI-generated content, the most effective thing you could do would be to go after Dolly Parton, who everyone, literally everyone loves.
No, I hope Jolene finds whoever did this and does a number on him.
Let's just say Dolly Parton's lawyers are going to be working more than nine to five.
This next one is sort of a narrated journey.
We are returning to Walmart for this one.
And this creator is very interested in the use of AI to create, like, art on products.
You know, I've seen some of these at a craft store, where there are, like, framed pictures of what has clearly been AI generated.
In this case, she picks up some butter cookies at Walmart and makes a pretty convincing case that it is slop art.
And I enjoyed this journey.
Let's see how it looks here.
This is bad.
I didn't think it could get worse, but you guys were right.
The butter cookie tins at Walmart are way worse than the popcorn tins.
Because why is Santa throwing ass?
Why is he squatting on a table?
Why does he look like he's about to twerk?
What is his hand doing?
What is... Santa has the fattest ass in this?
Look how wonky that is.
And what is this wall full of random things?
Like, can you make out what any of that is actually supposed to be? It looks like there's cobwebs on the roof. Whether that's intentional or not, I don't know. It's just, like, random shapes on the wall. Slop. No.
All right, we can probably stop it there.
I have to say, this video made me feel very naive, because I did not realize that there were mass-produced products in, like, Walmart stores that are AI generated.
Oh yeah, yeah.
And I also love that there are now, like, slop detectives who are just going out there, vigilante style, and investigating the slop on the shelves of their local Walmart.
That's beautiful to me.
We need more citizen participation.
Honestly, it could be a segment for our show, you know: Slopvestigations.
Mmm.
Let me ask.
Wait, let me ask a question.
Yeah.
If you're shopping and you pick up an object and you see that, you know, there's slop art, does that affect the way that you want to buy it or not buy it one way or the other?
No.
Okay.
I mean, I think there's like a whole like category of art that basically doesn't matter, which is like the stuff on the cookie tin, right?
The stuff at Walmart.
No one is winning any prizes for that.
No one is reaching any new heights of creativity.
Basically, this is just a way for the butter cookie manufacturer to save a couple bucks and not have to hire an illustrator or use some stock art from the internet.
And do you think they're passing the savings on to us, the customers?
Probably not.
Probably not.
That's probably going right to their bottom line.
That's unfortunate.
Yes.
What about you?
Would you be less likely to buy something if slop had been used in its advertising?
I mean, maybe, you know, because I think it speaks to a kind of cheapness and a lack of care.
And so if I were buying like a heart defibrillator and I saw that there was slop art on the box, I would say, I don't know if I could trust these people.
What about butter cookies from Walmart?
Are you going for quality when you're buying butter cookies from Walmart?
I want only the... I want brown butter, if there were going to be butter cookies.
Butter is a great flavor, but it needs something else.
You know what I mean?
Okay.
So for Casey, only the artisanal images of Santa with a huge ass.
Small batch, huge ass, Santa, butter cookies, please.
Okay.
One more example of slop that I want to tell you about today, Casey, and get your opinions on.
This one is what I would consider a good slop.
This is slop that is being made toward a noble cause, which is preventing the AI apocalypse.
Now, Casey, you might think to yourself, how could this happen?
How could AI slop be used to ward off the AI apocalypse?
I was just about to ask you that.
Well, this is a company called Hyperstition.
It was founded by Andrew Cote and Aaron Silverbook.
And basically, this is a company that is trying to counteract all of the sci-fi stories and narratives out there about AI going rogue and killing people, which, the hypothesis goes, make their way into the training data for these AI systems and actually make them more likely to go rogue.
It gives some ideas.
It gives them some ideas.
And so Andrew Cote said, what if we combated this by writing a bunch of AI-generated novels about AIs and humans getting along really well?
And then we fed that into the training data for the AI systems to kind of give them some more good examples to follow.
All right.
Kind of a convoluted explanation, but sure, why not?
So this company has just gotten a grant.
I read about this on Astral Codex Ten.
They just got a grant to create 5,000 AI-generated novels.
And they're trying to have these novels each be around 80,000 words.
And they're going to enlist the public's help to generate them.
And you can buy credits, about $4 a book, to generate these.
And then they're going to try to feed these into the language models, to give the models good scenarios to think about and maybe make them more likely to act on those.
Wait, why does the public get involved if the works are all AI generated?
I think they want it to reflect a diverse set of scenarios and characters.
Basically, they want just people to sort of get involved in this and make it as diverse as possible.
All right.
Well, do we have any examples we can see?
No.
Great.
So what do you make of this attempt to use slop for the benefit and potentially the salvation of humanity?
Here's what I'm going to say.
If it turns out that the thing that is needed to prevent human extinction from AI is a massive infusion of slop into the training data, I'll be very surprised.
I'll be very surprised if that was the difference maker.
I share your skepticism.
I think the default outcome from this project is that it probably doesn't save us from the AI apocalypse.
I think a funny secondary effect would be if one of these like 5,000 slop novels goes on to become a huge bestseller and like becomes the literary craze that takes over the country.
Do I think that's likely?
No, but it could happen.
Well, as we mentioned earlier in the show, it doesn't seem like people are reading all that much these days.
But, you know, maybe all of this will eventually be fed into a NotebookLM video presentation that folks can watch.
Yes.
All right.
That is it for the hard fork review of slop, and we welcome your submissions for future installments.
If you spot some slop that is worthy of cultural interrogation by some of our nation's foremost slop critics, please send it over to us at hardfork at nytimes.com, along with a brief explanation of the effect it had on you, how it moved you.
Yeah, and it can't just be like, look at this weird thing.
Like, I want to see slop that made you feel something.
Yeah.
And the next time you see a Santa with a suspiciously large posterior.
Call us.
Call us.
Email us.
We want to know about it.
We want to see it.
And we want to see it.
He has a folder on his MacBook that's just photos and images of Santa with a very large...
I love a thick Santa.
And I salute them, sir.
See you on Christmas, big guy.
The hard fork review of slop.
This podcast is supported by Bank of America Private Bank.
Your ambition leaves an impression.
What you do next can leave a legacy.
At Bank of America Private Bank, our wealth and business strategies can help take your ambition to the next level.
Whatever your passion, unlock more powerful possibilities at privatebank.bankofamerica.com.
What would you like the power to do?
Bank of America, official bank of the FIFA World Cup 2026.
Bank of America Private Bank is a division of Bank of America NA member FDIC and a wholly owned subsidiary of Bank of America Corporation.
Know the feeling when AI turns from tool to teammate?
If you're Rovo, you know.
With Rovo, you can streamline your workflow and power up your team's productivity.
Find what you need in a snap with RovoSearch.
Connect Rovo to your favorite SaaS apps to get the personalized context you need.
And Rovo is already built into Jira and Confluence.
Discover Rovo by Atlassian and streamline your workflow with AI-powered search, chat, and agents.
Get started with Rovo, your new AI teammate, at rovo.com.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Jen Poyan.
This episode was fact-checked by Will Peischel.
Today's show was engineered by Chris Wood.
Original music by Elisheba Ittoop, Diane Wong, Rowan Niemisto, and Dan Powell.
Video production by Sawyer Roquet, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube along with that slop at youtube.com slash hard fork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
You can email us at hardfork at nytimes.com with the slop that made you stop.
Spend more time interviewing candidates who check all your boxes.
Less stress, less time, more results now with Indeed sponsored jobs.
And listeners of this show will get a $75 sponsored job credit to help get your job the premium status it deserves at Indeed.com slash NYT.
Just go to Indeed.com slash NYT right now and support our show by saying you heard about Indeed on this podcast.
Indeed.com slash NYT. Terms and conditions apply.
Hiring, do it the right way with Indeed.