California Regulates A.I. Companions + OpenAI Investigates Its Critics + The Hard Fork Review of Slop

Could this be a blueprint for the nation?


Runtime: 1h 5m

Transcript

Speaker 1 300 sensors, over a million data points per second.

Speaker 3 How does F1 update their fans with every stat in real time?

Speaker 5 AWS is how.

Speaker 7 From fastest laps to strategy calls, AWS puts fans in the pit.

Speaker 8 It's not just racing, it's data-driven innovation at 200 miles per hour.

Speaker 2 AWS is how leading businesses power next-level innovation.

Speaker 9 Casey, I heard the big news this week in tech is that Waymo is going to London.

Speaker 10 You know, I saw that and I thought British people are going to hate it.

Speaker 9 I had a different question, which was: how are they going to teach it to drive on the other side of the road?

Speaker 9 That's a very good question. Just switch the software.

Speaker 9 Go on the other side now. Everything you used to do, do it in reverse.

Speaker 10 That's like the autonomous vehicle equivalent of dark mode is when you have to drive on the other side of the road.

Speaker 9 You know?

Speaker 10 It's not available at launch, but eventually they bring it up.

Speaker 9 Do you think they have to put the steering wheels that don't do anything on the other side of the car too? Presumably.

Speaker 9 I'm Kevin Roose, a tech columnist for the New York Times. I'm Casey Newton from Platformer.
And this is Hard Fork.

Speaker 10 This week, the first state law to regulate AI companions is here. Will it be enough?

Speaker 10 Then, OpenAI is waging legal battles against its critics, and ENCODE lawyer Nathan Calvin joins us to explain why the company served him with a subpoena.

Speaker 9 And finally, it's time for the first-ever Hard Fork Review of Slop.

Speaker 10 Grab your opera glasses, Kevin.

Speaker 9 Well, Casey, it's been a big week for tech regulation in the state of California.

Speaker 10 That's right, Kevin. Everywhere I look, it's bills, bills, bills.
I'm like, what is this? A Destiny's Child song?

Speaker 9 Very topical 90s reference.

Speaker 10 Listen, a lot of our listeners are seniors, and they're going to really appreciate that one.

Speaker 9 So on Monday of this week, Governor Gavin Newsom of California signed into law a bunch of new tech-related bills that had been making their way through the state legislature in California.

Speaker 9 And we're going to talk about them today.

Speaker 9 And if you're not a listener of ours who lives in the state of California, you may be asking, why are you devoting an entire segment to tech regulation in California?

Speaker 9 And Casey, what is our response to that?

Speaker 10 Well, Kevin, I think you and I both believe that while AI has the potential to do some good, it's also clearly causing some harm.

Speaker 10 And right now, the AI companies are operating with very minimal regulations on what they do. And that's just been a growing source of concern.

Speaker 10 We have talked over the past year on this show about teenagers who have died by suicide after having very tragic interactions with chatbots.

Speaker 10 And I think there has been a growing cry for some kind of guardrails to be placed around these companies.

Speaker 10 So that is what we are talking about today is a state that had some ideas that actually managed to pass the laws and is putting them into practice and will hopefully rein some of these companies in.

Speaker 9 Yeah, California is a uniquely important state in tech regulation for a couple of reasons. One of them is a lot of the companies are based here.
They care a lot about how California regulates them.

Speaker 9 And the laws that are passed in California tend to sort of ripple out to the rest of the country and the rest of the world. They tend to become kind of de facto national standards.

Speaker 9 And especially at this moment where our federal government is shut down and even when they're operating, don't seem to be interested in passing any tech regulations.

Speaker 9 This is what we have, is the state level regulation sort of standing in for the federal regulation that doesn't exist. Yeah.

Speaker 10 So today, let's talk about some of these bills that got passed and what we think they tell us about what some common sense approaches to regulating AI might look like.

Speaker 9 Okay, so let's start with what I think may be the most important bill that has come out of this flurry of legislation, which is SB 243. Casey, what is SB 243 and what does it do?

Speaker 10 What SB 243 does is it requires developers to identify and address situations where users are expressing thoughts of self-harm.

Speaker 10 So they have to have a protocol for what they're going to do if they see somebody express these thoughts.

Speaker 10 They have to share that protocol with California's Department of Public Health, and they have to share statistics about how often they are directing their users to resources.

Speaker 10 And then starting in 2027, Kevin, the Department of Public Health has to publish this data. So that's a little bit longer than I would like to start getting this data.

Speaker 10 But my hope is that when that begins, we will have a very large and useful set of public health data about the actual effects of chatbots on the population of California.

Speaker 10 So if you're somebody like me who's really interested slash worried about what it is going to do to our society and our culture once so many people are chatting with these bots every day, this is a really big step toward understanding that.

Speaker 9 Yeah, I think this is a good one for us to drill down on because it is a place where I think there is sort of a lot of attention and momentum around regulating.

Speaker 9 You know, OpenAI has recently rolled out some parental controls.

Speaker 9 Character AI, which we've also talked about on the show, now has a disclaimer on its chatbots and some additional guardrails for minors.

Speaker 9 So I think the platforms were starting to kind of comply with these kinds of laws in advance of them actually becoming laws, but this will at least give them some formal requirements.

Speaker 10 Yeah, and we should mention a couple more of those requirements.

Speaker 10 In California, chatbots will now have to tell you that their output is AI generated.

Speaker 10 Of course, you know, our savvy listeners probably already know that, but there may be some people who are chatting with ChatGPT and aren't entirely sure what's going on.

Speaker 10 This bill does have a few additional protections for minors, including that chatbots cannot produce sexually explicit images for them.

Speaker 10 And it's going to remind minors to take breaks if they have been chatting with ChatGPT for a really long time.

Speaker 10 So interestingly, there was another bill that California legislators passed, which would have, I think, potentially banned ChatGPT use for minors. And Gavin Newsom vetoed that.

Speaker 10 He was like, that's going too far. But this is kind of like one step back.
And I do think adds some meaningful protections.

Speaker 10 And no longer do we have to rely on the goodwill of an OpenAI or a Character AI to implement these things. Now it's just in the law and it says you actually have to do this.

Speaker 9 Now, does this law apply to all of the AI platforms or just the like really big ones with hundreds of millions of users?

Speaker 10 So according to a legislative analysis, it will

Speaker 10 apply to basically any chatbot that can be used as a companion. And initially, I didn't know, like, well, would that include ChatGPT?

Speaker 10 Most people, I think, don't really think of ChatGPT as a companion.

Speaker 10 But according to this legislative analysis, yes, like, and you know, look, if you're talking to it for three hours a day, it's some kind of a companion to you.

Speaker 9 Yeah, I think this is a case where the industry kind of understood that something was going to be done about chatbot companions in the arena of state regulation.

Speaker 9 And they had this other proposal that they thought was too strict and stringent.

Speaker 9 And so they kind of accepted the lesser of two evils and begrudgingly got behind this bill that actually did end up being signed into law. That's right.

Speaker 10 And can I talk about why I think this is important, guys?

Speaker 10 Okay. So just on Tuesday, we get this really interesting tweet from OpenAI CEO Sam Altman.
Okay.

Speaker 10 This tweet got a lot of attention because it says at the end that in December, they're going to allow what they call verified adults to start using ChatGPT to generate erotica.

Speaker 10 Let's set that aside for a second. Here's what Sam says at the beginning of this long tweet.
He said, We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.

Speaker 10 We realized this made it less useful/enjoyable to many users who had no mental health problems. But given the seriousness of the issue, we wanted to get this right.

Speaker 10 Now that we've been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases. And then he says that

Speaker 10 what people liked about GPT-4o, which was the model that they got in trouble over because it was so sycophantic and it encouraged people who were telling it things like, I'm not taking my medication anymore or I think I'm God.

Speaker 10 They said they're going to bring whatever people liked about that model back to ChatGPT.

Speaker 10 How does this connect to the California bill? I'm not sure that OpenAI has mitigated the serious mental health issues that came up with this.

Speaker 10 It has been two weeks since they rolled out parental controls in ChatGPT. Do we really have enough data to say this one is under control?

Speaker 10 I have to say, Kevin, I was actually pretty shocked by this tweet, not for the erotica stuff, but for the GPT-4o stuff and the suggestion that we have a handle on how to guide these chatbots so that they don't hurt people.

Speaker 10 What did you think when you saw this?

Speaker 9 I thought they are really trying to drive up usage numbers.

Speaker 9 They must be seeing something that is suggesting to them that people were engaging more with ChatGPT when it was more like a companion, when it was telling people more flattering and sycophantic things.

Speaker 9 And I suspect that that is part of the reason that they are trying to sort of get that mode back. Now, I think there's actual logic there.

Speaker 9 I think there's, you know, a lot of users do want something that's going to tell them they're great, but we do not know how they have solved these

Speaker 9 safety challenges that they have supposedly solved or these mitigations for mental health issues.

Speaker 9 It's also like, it's not clear to me that things are just as simple and binary as like some users have mental health problems and some don't.

Speaker 9 And like for the for the ones that don't, we're gonna give them the kind of like unhinged, unfiltered chatbot experience. And for the ones who do, we'll like guide them onto a set of guardrails.

Speaker 9 Like these things exist on a spectrum.

Speaker 9 And it may not be obvious to OpenAI or even to the users themselves when people are starting to develop mental health issues as a result of talking to these chatbots.

Speaker 10 I just, I find it so confusing because on one hand, you know, this is a company that will put up a blog post that says, we do not want to optimize for engagement.

Speaker 10 We want you to use this thing, have it help you and let you get on with your life. And then they release Sora, the infinite slop feed.

Speaker 10 And then they say, we're bringing back the most sycophantic AI personality in the history of the company because we know that some of you out there might need a friend.

Speaker 10 So it really feels like there are two wolves inside of OpenAI right now. And I think it just makes necessary

Speaker 10 some common sense legislation that starts to put some guardrails around this and signals to these companies that they cannot just do whatever they want.

Speaker 9 Yeah, I mean, I guess we'll have to see how the AI companies comply with these.

Speaker 9 I do not think that this law is going to dissuade them from trying to build the chatbot companions because that's obviously a lucrative industry for them.

Speaker 9 But I hope it does make them pay more attention to things like safety and mental health for especially younger and more vulnerable users. Yeah.

Speaker 10 Now, at this point, our listeners are thinking, Kevin, you told us that there were actually a lot of bills that California passed, and I'm desperate to know what they are.

Speaker 9 Well, Casey, buckle up. I'm going to run through a few of the other bills here.

Speaker 9 We won't talk about all of them in that much detail, but we have AB 621, which provides stronger protections against deep fake porn.

Speaker 9 This bill, this law, will make it possible for victims of non-consensual deep fake porn to sue platforms that facilitate the creation of that porn for up to $250,000 per violation.

Speaker 10 Yeah.

Speaker 10 And this is really important because a trend that we haven't talked much about on the show this year, Kevin, is that there are these really sketchy companies that make what are called Nudify apps.

Speaker 10 They've been advertising themselves all over Facebook and Instagram somehow, and people have been using them to generate these deepfakes.

Speaker 10 And now there is a law on the books that says, hey, we can actually come after the companies themselves. So I think that's just obviously a good thing.

Speaker 9 So next bill I want to talk about is AB 853. This is the California AI Transparency Act.

Speaker 9 And this caught my attention because it essentially requires that AI companies build into their systems tools to detect whether content (images, video, or audio) is in fact AI-generated.

Speaker 9 Basically, they have to offer users a way to put in an image and say, hey, did you generate this image and get a reliable response back? Yeah. Here's what this means.

Speaker 10 If you come to California, and you see a video of dogs playing poker, no longer will you have to wonder, are those dogs really playing poker?

Speaker 10 There will be a watermark and you will get the answer to your question.

Speaker 9 That's true.

Speaker 10 About damn time.

Speaker 9 Then there was AB 56, which was about warning labels for minor users of social media platforms about the potential mental health risks associated with using the apps. Casey, this one's pretty wild.

Speaker 10 Yeah, it's certainly very intrusive. Like the law dictates how much of the screen this warning has to cover, how often it has to appear,

Speaker 10 which is basically right when the person starts using the app. And then again, after three hours, which is like sort of funny to me.

Speaker 10 It's like, you can have your three hours, but then we're going to, you know, remind you that you might be cooking.

Speaker 9 Well, we should say, like, it's not going to be a small little thing on the screen.

Speaker 9 The law says that after three hours of use, platforms must display a 30-second non-bypassable warning covering at least 75% of the screen.

Speaker 9 If you are a 16-year-old and you are scrolling TikTok or Instagram for more than three hours, you are going to get a giant

Speaker 9 cigarette warning essentially on your screen that you cannot skip or get off your screen for 30 seconds. And that is going to happen again after every additional hour after that.

Speaker 9 So I predict there will be a lot of teens who are finding clever ways around this because teens do not like to wait for their TikToks.

Speaker 10 I'm just curious, like, will teenagers see this and think, oh my God, I have to get off TikTok? Or will they think, damn, I am such a badass for using this crazy, dangerous app?

Speaker 10 Because I can see it going that way too. Yeah.

Speaker 9 What a rebel.

Speaker 9 Meet me behind the school. We're watching TikToks.

Speaker 10 You know, semi-related, there was a study that came out this week. It was a pretty big study.
It was of 6,000 children who were under 13. Did you see this?

Speaker 9 No. What is it?

Speaker 10 So they tracked their use of social media and they found that the more time per day kids spent on social media, the more that was associated with being bad at reading.

Speaker 10 And so my question is, should this warning say, hey, you know, kids, be careful, this app could be bad for your mental health.

Speaker 10 Or should it say, you are actively becoming worse at reading than everyone in your class? I actually think that might be more effective.

Speaker 9 Yes. Actually, in order to bypass the warning label, you should have to do like a reading comprehension quiz based on a short story by Ernest Hemingway.
Just say like,

Speaker 9 no more TikToks for you. Yes.
Until you can tell me what the old man in the sea was about. I'm still trying to find that one out.

Speaker 9 All right. Next bill.
This one is one I wanted to get your take on. This is AB 1043.
This is about age verification, a subject we have talked about on this show before.

Speaker 9 This bill would require that Apple and Google, which make the two most popular mobile operating systems, verify users' ages in their app stores.

Speaker 9 Casey, explain what this bill does and whether it's a big deal or not.

Speaker 10 Yeah, so there are a bunch of different approaches to what they call age assurance in the business.

Speaker 10 And the reason that this one is notable to me is that California actually took the approach that I favor, which is that when

Speaker 10 someone is setting up a device for their child, the parent inputs the age of the child and that information is then passed along to the app store and to the developers.

Speaker 10 And the thing that is great about that is that it seems like the most privacy protecting of all of the age assurance protocols that we've seen, right?

Speaker 10 Other states, they want you to potentially like upload a driver's license, right? You're providing a lot of really personal data. Some of that is being held by third parties.

Speaker 10 All that stuff is subject to, you know, data breaches and who knows what else. In California, it's just like, hey, you're the parent.
You're the guardian.

Speaker 10 You tell us how old your kid is and we will make sure that they don't download an app that they're not supposed to have. Right.

Speaker 9 So instead of what's happening today, which is that every app asks you to sort of say how old you are when you sign up and create an account and it just kind of works on the honor system.

Speaker 9 This would essentially force Apple and Google to, when you're getting a new iPhone or a new Android phone and your parents are helping you set it up, they kind of like, you know, put in your birthday.

Speaker 9 And as a 16-year-old, it shows, okay, I am a minor, I am 16 years old. And then it passes, your phone will like pass that information to every app that is trying to be installed on that phone.

Speaker 9 Is that more or less correct? Exactly. Now, you said you favored this solution.
Are you taking credit for this bill?

Speaker 10 No, I'm not taking it. It also wasn't my idea.
Like other smart people, you know, have been talking about this for a while, but I've written about it in the past.

Speaker 10 And this is what I said I thought we should see happen. And, you know, every once in a while in a democracy, you got to see something you actually want.
And it's a lovely thing when that happens.

Speaker 10 Every once in a while.

Speaker 9 Once in a while, you have a good idea. Yeah, cherish it.

Speaker 10 Cherish the moment.

Speaker 9 All right. One more bill we should talk about because this is the one that has actually gotten most of the attention and a lot of the lobbying dollars.
This is SB 53.

Speaker 9 This was actually signed into law last month. This is the Transparency in Frontier Artificial Intelligence Act.

Speaker 9 This is the sort of successor bill to SB 1047, which we've talked about in the show before. That bill was vetoed by Governor Newsom last year.

Speaker 9 This new bill is essentially a watered-down version of that bill. It establishes some basic transparency requirements for the biggest AI companies, what they call large frontier developers.

Speaker 9 It requires them to publish information about their safety standards and create

Speaker 9 a new mechanism to report potential critical safety incidents to the California state government.

Speaker 9 It also establishes some whistleblower protections for people inside the companies who may want to disclose some significant risks posed by their models.

Speaker 9 And this one did pass and was signed into law. Casey, do you think this is a big deal?

Speaker 10 I think it's great that we have some transparency requirements. I think it's great that we have some whistleblower protections.

Speaker 10 When I think about the things regarding AI development that concern me the most, this bill does not speak to them.

Speaker 10 But I feel like the main reaction that I've read to this bill is a bunch of people saying, yeah, this couldn't hurt.

Speaker 9 You know, like, that's kind of how this feels. It's like, yeah, it's fine.

Speaker 10 Right.

Speaker 9 It feels pretty toothless to me. And it also is basically sort of codifying something that a lot of the companies are already doing anyway.

Speaker 9 I think of the large frontier developers, all or nearly all of them already published things that would sort of put them into compliance with this law. So

Speaker 9 I like the idea of not just relying on voluntary self-regulation, but this seems like a pretty weak bill that was, you know, weak enough that most of the AI industry didn't feel like it was worth opposing.

Speaker 9 Although there were industry groups that lobbied against it, but I think for the most part, they said, okay, well, this is better than the one that we tried to kill last time. Yeah.

Speaker 9 Okay, so that is a bunch of information about these California state AI laws and social media laws.

Speaker 9 When we kind of step back and zoom out here, does this give you any thoughts about how AI regulation and tech regulation in general is going?

Speaker 10 I think in some ways it's going better than I expected, Kevin. You know, we covered the past decade of lawmakers twiddling their thumbs, wondering how social media ought to be regulated.

Speaker 10 It took too long. Some of those efforts have finally gotten off the ground at the state level, but after a lot of harm was done.

Speaker 10 In the case of AI, we are earlier in that kind of epoch of tech, but already we've seen California and other states make some pretty decisive moves to build some guardrails, create some transparency requirements.

Speaker 10 And I think that's a really good thing.

Speaker 10 We're going to have to see how effective these things are. But I just want to say we need something like this.

Speaker 10 It is important that this week OpenAI came out and said, despite everything that has happened this year with their chatbots and mental health, they are going to hit the accelerator on making them more personable, more sexual, and more powerful.

Speaker 10 That will continue to have reverberations, and we need state lawmakers paying attention to that.

Speaker 9 We need federal lawmakers paying attention to it. Be realistic.
I can't talk to you when you're being hysterical.

Speaker 9 Like I, what this makes me feel like is, God, I wish we had a Congress that could do something about this. Like, I really, I am sympathetic to the AI companies on this one point.

Speaker 9 I do not think that state level regulation is the best way to do this.

Speaker 9 I do not think it is good or efficient to have 50 individual states all kind of coming up with their own bills and trying to pass them and then have the AI companies have to like look at all the 50 states and decide how they're going to build systems that comply with all of those.

Speaker 9 Like that does not feel like a good solution to me. But for that to not be the default path here, we are actually going to need Congress to step in and do something at the federal level.

Speaker 9 And right now our government is shut down, so I don't have high hopes.

Speaker 9 But I think that in the absence of Congress getting its act together and deciding to do something federally, what we're going to end up with is a bunch of states doing what California has done here and just trying their best to get some rules on the books while they can.

Speaker 10 Yeah, I agree with that.

Speaker 10 I would add that Senator Josh Hawley is currently circulating a draft bill that would ban AI companions for minors.

Speaker 10 Who knows how far that will make it through?

Speaker 10 But I do think that there are a significant number of members of Congress who would like to see something like this happen.

Speaker 10 The question, of course, as ever, is whether they can get something across the finish line. Yeah.

Speaker 9 All right, Casey, that is what's happening in California. When we come back, we'll talk about how this legislative fight got personal for one AI lawyer.

Speaker 11 Millions of players.

Speaker 12 One world.

Speaker 12 No lag.

Speaker 12 How's it done?

Speaker 12 AWS is how.

Speaker 11 Epic Games turned to AWS to scale to more than 100 million Fortnite players worldwide, so they can stay locked in with battle-tested reliability.

Speaker 11 AWS is how leading businesses power next-level innovation.

Speaker 13 This podcast is supported by Bank of America Private Bank.

Speaker 14 Your ambition leaves an impression. What you do next can leave a legacy.

Speaker 16 At Bank of America Private Bank, our wealth and business strategies can help take your ambition to the next level.

Speaker 17 Whatever your passion, unlock more powerful possibilities at privatebank.bankofamerica.com. What would you like the power to do?

Speaker 16 Bank of America, official bank of the FIFA World Cup 2026.

Speaker 19 Bank of America Private Bank is a division of Bank of America and a member FDIC and a wholly owned subsidiary of Bank of America Corporation.

Speaker 20 The University of Michigan was made for moments like this.

Speaker 20 When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.

Speaker 20 From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible. Wherever we go, progress follows.

Speaker 20 For answers, for action, for all of us, look to Michigan. See more solutions at umich.edu/look.

Speaker 9 Well, Casey, there's another big story involving the law and AI this week that we wanted to chat about.

Speaker 9 And it involves this behind-the-scenes beef that has been going on between OpenAI and some of its biggest critics.

Speaker 10 Yeah, so there are really two big legal battles that this story is at the intersection of. One is the battle over OpenAI trying to convert itself into a for-profit enterprise.

Speaker 10 Right now, OpenAI is famously a non-profit. This has created many issues for the company over the past several years.
They want to be sort of a more normal money-making enterprise.

Speaker 10 And this is opposed by lots of people. Some of the people that oppose it are OpenAI's direct competitors, including Elon Musk and Mark Zuckerberg.

Speaker 10 And OpenAI has been pretty aggressive in going after groups that they believe might be connected to those two. The second battle is about SB 53, the bill that we just talked about.

Speaker 10 It was just signed into law by California Governor Gavin Newsom, and it establishes some basic transparency requirements and whistleblower protections for people who work at AI labs.

Speaker 10 There were a lot of groups that lobbied both for and against this one. ENCODE was one of the groups that lobbied for it.

Speaker 10 And so those are the kind of two big legal battles that were happening next to each other. But today's story, Kevin, takes place right in between both of them.

Speaker 9 Yes. So our guest today, Nathan Calvin, is the vice president of state affairs and general counsel at ENCODE.
They are a small AI policy nonprofit.

Speaker 9 They were started several years ago by a high school student. Fun fact.
What were you doing in high school? I wasn't starting AI safety nonprofits.

Speaker 10 Debate team, crushing it.

Speaker 9 Anyway.

Speaker 9 Anyway, they have become one of these groups that is submitting briefs and lobbying lawmakers on a lot of these AI-related bills and efforts.

Speaker 9 They have also been very vocally opposed to the restructuring of OpenAI as a for-profit.

Speaker 9 And what seemed to happen here in Nathan's telling was that one night as this legislative process was ongoing, a sheriff's deputy showed up at his house and delivered a subpoena from OpenAI demanding that he produce all kinds of personal communications, including anything related to not just the restructuring, but SB 53, this bill that they had been advocating on behalf of.

Speaker 9 Yeah.

Speaker 10 So this surprised folks because they do still identify as this kind of mission-driven company that's trying to create AI to benefit all of humanity.

Speaker 10 I think it's generally understood that during these legal battles, there are going to be people who lobby for and against, and that's just part of the process.

Speaker 10 But now one of those people who was doing the lobbying on behalf of his nonprofit finds himself with a legal battle of his own.

Speaker 10 And that got a lot of folks talking, including some people who worked at OpenAI who criticized their own employer for its behavior.

Speaker 10 So that seemed like something that it'd be worth understanding more about, Kevin.

Speaker 9 Yes, this has been a subject of hot debate and conversation within OpenAI, as well as around the broader AI industry. And we wanted to talk to Nathan about his experience.

Speaker 9 But before we do that, since this is, after all, a story involving AI and legal battles, I should note that my employer, the New York Times, is engaged in its own legal battle.

Speaker 9 They are suing OpenAI and Microsoft over alleged copyright violations.

Speaker 10 And my boyfriend works at Anthropic, but so far we've managed to avoid any legal battles. So counting our blessings on that one.

Speaker 9 Well, check the mail when you. Oh, no.

Speaker 9 All right. Let's bring in Nathan Calvin.

Speaker 9 Nathan Calvin, welcome to Hard Fork.

Speaker 21 Pleasure to be here. You know,

Speaker 21 many-time listener, first-time caller. I think I said that wrong, but anyway, very glad to be here.

Speaker 9 So, just to set the scene for our listeners, you're in Washington, D.C.

Speaker 9 It's a Tuesday night. I'm assuming it was, you know, a normal weekday.
You and your wife were sitting down to dinner, and then you got a knock on your door. Tell the story from there.

Speaker 21 So, when I opened the door, there was a sheriff's deputy who was there to serve me a subpoena from OpenAI asking for different communications and documents related to a piece of AI safety legislation I was working on, as well as about our criticism of OpenAI's restructuring to a for-profit.

Speaker 21 One thing I will say just in terms of the timeline, because it's come up in some of the back and forth, is that on the previous Saturday, I had gotten a call while I was visiting my mom and nephew saying that someone was trying to get into my apartment to serve me papers.

Speaker 21 And I said, I'm not there right now. Anyway, they finally did come on

Speaker 21 Tuesday. And so when Jason Kwan, the chief strategy officer at OpenAI, I think in his comment, said something about, you know, I should have known this was coming.

Speaker 21 I did know they were trying to serve me, but I didn't know about any of the details. And I didn't know they would be coming that exact night.

Speaker 9 Now,

Speaker 9 I want to get to all of that, but first, you know, I have not been served with a subpoena. Casey's been arrested many times, so he's familiar with how these...

Speaker 10 But never convicted.

Speaker 9 He's familiar with how these things go down. But are they literally handing you a packet of paper like in the movies? Or what does it look like to be served with a subpoena from OpenAI?

Speaker 21 It is just a stack of papers. I did not know, again, I am a lawyer, but I didn't know that sheriff's deputies are the ones who at least some of the time serve subpoenas in D.C.

Speaker 21 I later learned that that's not incredibly unusual, but it certainly was, you know, surprising from my perspective. I don't know, to be clear, the guy was perfectly nice.

Speaker 21 I don't know, just to some degree, like after I had heard on Saturday that someone was trying to serve me papers, by the time it kind of actually happened and they were at my door, there was a little bit of like, okay, now I can figure out what is actually happening.

Speaker 21 And honestly, the days between hearing that it was coming and it actually happening were some of the most stressful.

Speaker 21 And it's like, okay, now I can figure out at least what we're dealing with and,

Speaker 21 you know, how to respond.

Speaker 10 I mean, when you got into AI advocacy, was this on your radar as something that was likely to happen, that people would be saying, like, okay, you've got to, like, show us all the emails you've been sending about this?

Speaker 21 No, I mean, I don't know. Like, I do feel like,

Speaker 21 I don't know. My mom worked

Speaker 21 for the American Academy of Pediatrics for 25 years and was involved in litigation against tobacco companies.

Speaker 21 And they came and took all of her papers out of her office at some point, and she had told me, you know, never, never write any emails

Speaker 21 You're not, you know, comfortable with having read back to you later or something. And so, oh, wow.

Speaker 21 So, you know, like, I'm actually, like, way better prepared for this than the average person.

Speaker 9 Yeah.

Speaker 9 I think that's fair.

Speaker 21 Yes. And indeed, indeed.

Speaker 9 Did you understand immediately why OpenAI was subpoenaing you? What was your sort of initial response when you actually started reading these papers and understanding what they were after?

Speaker 21 Yeah. I mean, in some ways, there had been a little bit of a

Speaker 21 preceding escalation before

Speaker 21 I received the subpoena. You know, there was,

Speaker 21 I had some sense that, you know, we were

Speaker 21 doing, you know, lots of advocacy and public communications and writing things to the attorneys general about this issue. And I, you know, was getting some sense that this was getting on their nerves.

Speaker 21 You know, I will say that, like, when they asked, there's part of me that's still thinking, like, okay, maybe this is just a good faith question.

Speaker 21 And they're trying to figure out, like, maybe, you know, we are secretly funded and controlled by Musk or Meta or something.

Speaker 21 When I was reading through the subpoena, though, and I got to the part where it said all of your communications about SB 53, a bill we were working on, then I started to think, this doesn't really feel like they are just asking good faith questions.

Speaker 21 There's a, again, and I don't, I don't know for a fact what's in their head and I can't say it, but my, my impression of it was, was not that.

Speaker 10 What you're saying is, it would sort of make sense to me if for whatever reason they were serving me a subpoena and saying, hey, are you funded by Elon Musk?

Speaker 10 Is that why you're trying to block our for-profit conversion?

Speaker 10 But when they came to you and said, give us all the emails you've been sending about this bill that you're working on, that just kind of felt sort of out of scope.

Speaker 21 Yes, it did. And one other thing I will add as well is, again, I was expecting maybe that a subpoena would

Speaker 21 come. But when I had talked to a previous organization, like, you know, the other orgs I was aware of that had been subpoenaed, it had been to their organization and just, like, their Delaware registered agent.

Speaker 21 And you just like get an email that, you know, your Delaware registered agent got a subpoena. Like it wasn't people coming to their, you know, fifth story apartment building at 7 p.m.
or whatever.

Speaker 21 And so that was another aspect that did just feel kind of eyebrow raising. And so I just think it's, it just really does leave a bad taste in my mouth.

Speaker 9 Right.

Speaker 9 I mean, there's one explanation that is like the sort of uncharitable explanation, which is that OpenAI is trying to sort of bully and intimidate any sort of nonprofits that are critical of its restructuring plan.

Speaker 9 There's another explanation, which I want to get your take on, which is that these are fair questions to ask. We don't have a lot of transparency.

Speaker 9 We have a lot of dark money sort of flooding into fights about tech regulation these days, and it's worth asking questions about who is behind those efforts.

Speaker 9 And I guess we should just sort of dispense with the central claim here.

Speaker 9 Nathan, let me just ask you straight up, are you or ENCODE working with or being funded by either Elon Musk or Mark Zuckerberg themselves or people or entities associated with them or their companies?

Speaker 21 So we are not funded by Elon Musk or Mark Zuckerberg.

Speaker 21 If you go on our website, it says that we have received funding from the Future of Life Institute, which was one that was mentioned in their subpoena.

Speaker 21 Future of Life Institute several years ago got a donation from Musk, but they are not Musk. And we said this in our communications back with them.
Like, I have never talked to Musk.

Speaker 21 Like, Musk is not directing our activities. It's false.
I don't know. We submitted a complaint

Speaker 21 asking the FTC to open an investigation into xAI and spicy Grok and their things.

Speaker 21 And I will happily say on air that I think that, like, xAI's safety practices are in many cases far, far worse than OpenAI's. So, again, that central claim is just false.

Speaker 9 What about Mark Zuckerberg? Any relationship with him or Meta?

Speaker 21 None. Zero.
And again, like, I think our partners who work on the issue, like, I don't know, we are an organization that focuses on AI safety and kids' safety issues.

Speaker 21 Like, we are just constantly at war with Meta.

Speaker 21 The idea that Meta is backing us is just, it feels, again, I realize not everyone has the context and knows who we are, but it's just like completely laughable.

Speaker 9 You do have a list of donors or funders on ENCODE's website.

Speaker 9 You say this is, we're generously supported by, and then you list a bunch of organizations, including the Omidyar Network; the Archewell Foundation, which is Harry and Meghan's foundation; and the Survival and Flourishing Fund, which is a kind of effective altruism-linked philanthropy

Speaker 9 funded primarily by Jaan Tallinn. So you do provide some transparency about who your funders are.

Speaker 9 Why do you think that wasn't enough for OpenAI? Why do you think they still had questions about Elon Musk or Mark Zuckerberg?

Speaker 21 I think to some degree, you'll have to ask them.

Speaker 21 I'm not sure. I mean, again, there's also one thing to say here, which is that there is no general right for them to know about all of our funding.

Speaker 21 And again, like, the subpoena did not ask about the Omidyar Network, because the Omidyar Network is not relevant to their litigation in any way.

Speaker 21 Like the role of a subpoena is to get relevant information for the litigation you are engaged in, not to just like ask whatever questions you would like the answers to from other private organizations.

Speaker 21 Like, you know, we would love to send a subpoena to OpenAI and be like, tell us all the details of what you're planning to do in the restructuring.

Speaker 21 And like, are you going to disempower the nonprofit in the ways it perceives whatever? But like, we don't have a right to do that.

Speaker 21 Like, that's not a question we can just ask them, even though we might like to. And so, what we did is, you know, we put out like a public letter asking them a bunch of questions.

Speaker 21 Like, OpenAI can go to the press and say, you know, we want transparency about these things. Again, they do have the right to ask us about Elon because they are in litigation about this.

Speaker 21 And again, I think if they had just reached out to us at our corporate address and said, Are you funded or directed by Elon?

Speaker 21 And, you know, we explained no and proved to them no, and then they moved on. Like, I would understand that.

Speaker 21 And I think that that is a fair thing, given that Elon is attacking them and trying to destroy them.

Speaker 21 And they want to make sure that there are efforts that are not covertly being supported and directed by him.

Speaker 21 But I just, like, can't emphasize enough how far away what actually happened was from, like, that narrow question that they were entitled to ask.

Speaker 10 So, I mean, as you reflect on this experience, do you feel like this was intimidation?

Speaker 10 Do you think that OpenAI is trying to penalize organizations for speaking up either against the for-profit conversion or for AI regulation?

Speaker 21 Yeah, I mean,

Speaker 21 to some extent, it's a question of intent, but... And I don't know what's inside their heads.
And so I want to be careful about that. But I believe that that is what they were doing.

Speaker 21 That is my best guess. And that was how I received it.
And I would like there to be another explanation for this.

Speaker 21 And if it really, you know, I thought it was possible when I put this out that maybe they would say, you know, hey, this was a misstep. Our lawyers went a bit far.

Speaker 21 We didn't really actually mean to add the thing about 53. Like, that's not what they said.
They like doubled down and said that, you know, we think we are entitled to this.

Speaker 21 And I think that that just is very important to note.

Speaker 21 And I will just say another thing that I don't think we've mentioned is that, you know, even for some folks within OpenAI, for instance, Joshua Achiam,

Speaker 21 who was speaking in his personal capacity, but put out a fairly long thread talking about the fact that what I was describing in my thread, you know,

Speaker 21 doesn't look great.

Speaker 9 Yeah, but that was the unofficial response from someone at the company who was sort of breaking from the company itself.

Speaker 9 We've also seen Jason Kwan, as you mentioned, the chief strategy officer at OpenAI. He wrote a lengthy thread arguing that you and ENCODE were sort of only giving part of the picture, that

Speaker 9 ENCODE doesn't disclose their funding, and that this is not about SB 53. Jason said, quote, we did not oppose SB 53.

Speaker 9 And they said that basically this was

Speaker 9 sort of a tempest in a teapot.

Speaker 9 There was also a quote that a lawyer for OpenAI, Ann O'Leary, gave to the SF Standard saying, we welcome legitimate debate about AI policy, but it is essential to understand when nonprofit advocacy is simply a front for a competitive commercial interest.

Speaker 9 What do you make of the official OpenAI response to what you claim?

Speaker 21 So

Speaker 21 one thing is,

Speaker 21 you know, I think Jason focuses on the fact that we became involved with the lawsuit between Elon and OpenAI by filing an amicus brief arguing that it was in the public interest for OpenAI to remain a nonprofit.

Speaker 21 Geoffrey Hinton also made some positive comments about our amicus and showed support for our arguments.

Speaker 21 He's also someone who, by the way, has called for Elon Musk to lose his status with the Royal Society and is really not a fan of Musk. If you want another example that not everyone who is critical of

Speaker 21 OpenAI's restructuring is a Musk fan.

Speaker 21 Yeah, I mean, also on the point of "did not oppose SB 53": it is true that they never put out something saying that they formally opposed it. But their global affairs head, Chris Lehane, did send a letter to Governor Newsom, at a time when SB 53 was under pretty heated discussion, saying that he believes the correct path for California is to have an exemption from its AI frameworks for any company that signs on to an agreement with the federal government for testing, or that says it will adhere to the EU AI Code of Practice, which in practice means a complete exemption from the California law.

Speaker 21 So, I mean, you can say that advocating for you to be completely exempted, and a bunch of your fellow companies to be completely exempted, is not the same as opposing it.

Speaker 21 You know, like you can ask a linguist for whether that's fair, but you know, I think it still is important context that he did not discuss.

Speaker 9 What now? Are you going to send OpenAI the information that they're asking for?

Speaker 9 Are you planning to do any more transparency around your funding or your advocacy efforts, what's the next shoe here?

Speaker 21 So we sent them our objections and responses, where we laid out, for the areas that were relevant, like, for instance, our communications with or funding received from Elon, that those didn't exist, and said that the other pieces of information were not relevant.

Speaker 21 They never responded to that. They could have filed a motion to compel, saying to the judge that we have to turn them over, but they didn't do that.

Speaker 21 My view, again, I don't know this for sure, is that they didn't do that because they realized a judge would not grant that motion, because the materials were not, in fact, relevant.

Speaker 21 I think there are fair discussions about transparency. I mean, I think there's fair things if some of our donors want to be private.

Speaker 21 And when you're donating to C4s, you have the right to give money privately. We have listed on our site a lot of our donors.

Speaker 21 And I think we're, you know, I think you get a clear impression of the different types of motivations that people have who are funding us but i i think this kind of like larger discussion about like what the transparency appropriate transparency is for folks involved in the advocacy process is very different from like i don't think that's like what open ai cares about here or why they're asking about this um and even even in the subpoena which was an overreach in in in many ways like they don't talk about you know the omidio foundation which again is listed on our website as a funder we're not hiding that fact um because it's not relevant to their litigation with musk but you But you said there are donors that you don't list on your website who want to remain private.

Speaker 9 Would you like to tell us who they are or how much they're giving you?

Speaker 21 Not here.

Speaker 9 Okay. Just checking.
I have to ask.

Speaker 21 Fair, fair. I mean, I think the list of the...

Speaker 21 they're not Musk or Zuckerberg. Yeah, they're not Musk or Zuckerberg.
We don't take money from frontier AI companies.
I will say that.

Speaker 9 Yeah, and I think it's a reasonable thing to advocate for that all of these groups should be required to disclose much more about who funds them.

Speaker 9 But I think that should apply equally to organizations that are pushing for the other side of things here. I think all of the I think that's fair.

Speaker 21 Yeah. I think that's a fair discussion to have.
I'm just not sure OpenAI is like the one to make that argument.

Speaker 10 So as you look back on this episode, how has it changed the way that you think about OpenAI?

Speaker 21 I genuinely... have a lot of positive feelings about OpenAI and think that they do many things genuinely better than their peers.
For instance, like Meta or xAI.

Speaker 21 And I think that, for instance, some of their safety research and system cards are things that

Speaker 21 they have even improved on in recent months and have done a genuinely good job of.

Speaker 21 And I think that there is some of a feeling among some people at OpenAI that they get disproportionate criticism relative to their peers. And I think that there is some truth in that.

Speaker 21 One thing I'll say is, like, I don't know, if one of their peers had been the one to show up at my house and give me a subpoena, I would have spoken up about that too.

Speaker 21 But it was OpenAI that was the one that did it.

Speaker 21 And also I think there's some aspect that OpenAI is a non-profit and they are a non-profit that has a mission to ensure that AGI benefits all of humanity. And,

Speaker 21 you know, they are in the process of trying to weaken and get around that legal mission and be able to consider profit more in their decisions.

Speaker 21 And I think this episode, and also things like, you know, the discussion about whether to allow, you know, not-safe-for-work, you know, porn or whatever on ChatGPT, or to release Sora 2 in the way they released it, and, you know, their kid safety practices and all sorts of these other things, like, they are not a normal for-profit company.

Speaker 21 They are at least for now a non-profit that is dedicated to this mission above profit. And I do think that means that they should be held to a higher standard.

Speaker 9 Yeah, I mean, I'll just say, like, it's not like Elon Musk is the only person who opposes this restructuring plan.

Speaker 9 Like, the whole AI safety, you know, community has been up in arms about this for years now. It's very unpopular.
Yes. Yeah.

Speaker 21 I am just

Speaker 21 curious what you make of kind of the difference between

Speaker 21 Joshua's statement and Jason's statement and kind of some of this like continued evolution and pressure you have between OpenAI kind of transitioning from more of a research organization focused on some of these loftier ideals to trying to move to the next stage of what it wants to do.

Speaker 10 I mean, I think it just speaks to a very real tension within the company, which is that there are a lot of people there who believe in the stated mission, who want to create this very beneficial AI.

Speaker 10 And then you also have a lot of people who come from other giant tech companies who see this primarily as a competition about winning and being first and making the most money.

Speaker 10 And people who come from those kind of companies are not above, you know, waging lawfare to get what they want. So I'll be curious to see kind of how that shakes out in the coming months.

Speaker 10 It does seem like it's that second group, the kind of big company group that is currently steering the company.

Speaker 10 And I wonder if that's going to continue.

Speaker 9 But I will say, in addition to that, I think that's right.

Speaker 9 Your story, Nathan, has caused more consternation and soul searching among

Speaker 9 people at Open AI than I think anything since the Daniel Cocatello

Speaker 9 story about these non-disparagement agreements that they were forcing people to sign or else they would claw back their vested equity in the company. That was a big deal to people at OpenAI.

Speaker 9 And this is a big deal to people at OpenAI. I have been talking to people.
It's not just Josh who is saying this stuff.

Speaker 9 I think there's a lot of soul searching going on inside the company about this question of are we still the good guys? Are we transitioning to something we no longer support?

Speaker 9 And so I think there's going to be some internal qualms about this and probably other stories to come. But most of them probably won't break out into the open the way this has.

Speaker 9 Nathan, thank you so much for coming on and explaining all this to us.

Speaker 10 Thanks, Nathan.

Speaker 9 Thank you.

Speaker 9 Just wanted to note that we reached out to OpenAI after this interview asking about this question of intimidation, and they responded with a statement from Jason Kwan reiterating, quote: Elon has opposed our restructure for obvious competitive reasons.

Speaker 9 And Encode joined in. Organizations that suddenly emerge or shift priorities to join Elon raise legitimate questions about coordination and funding, which the subpoena seeks to clarify.

Speaker 9 Our questions have still not been answered, and we still don't transparently know who is funding these organizations.

Speaker 10 When we come back, an old woman falls off a very high shelf. Is it real or is it fake? No, it's the hard fork review of Slop.

Speaker 1 300 sensors.

Speaker 2 Over a million data points per second.

Speaker 3 How does F1 update their fans with every stat in real time?

Speaker 5 AWS is how.

Speaker 6 From fastest laps to strategy calls, AWS puts fans in the pit.

Speaker 4 It's not just racing, it's data-driven innovation at 200 miles per hour.

Speaker 2 AWS is how leading businesses power next-level innovation.

Speaker 13 This podcast is supported by Bank of America Private Bank.

Speaker 14 Your ambition leaves an impression. What you do next can leave a legacy.

Speaker 16 At Bank of America Private Bank, our wealth and business strategies can help take your ambition to the next level.

Speaker 17 Whatever your passion, unlock more powerful possibilities at privatebank.bankofamerica.com. What would you like the power to do?

Speaker 16 Bank of America, official bank of the FIFA World Cup 2026.

Speaker 19 Bank of America Private Bank is a division of Bank of America NA member FDIC and a wholly owned subsidiary of Bank of America Corporation.

Speaker 22 Know the feeling when AI turns from tool to teammate?

Speaker 16 If you're Rovo, you know.

Speaker 22 With Rovo, you can streamline your workflow and power up your team's productivity. Find what you need in a snap with Rovo Search.

Speaker 22 Connect Rovo to your favorite SaaS apps to get the personalized context you need. And Rovo is already built into Jira and Confluence.

Speaker 22 Discover Rovo by Atlassian and streamline your workflow with AI-powered search, chat, and agents. Get started with Rovo, your new AI teammate, at rovo.com.

Speaker 9 Well, Casey, over the last few weeks on our show, we've been talking a lot about slop.

Speaker 10 We have, and it seems like the more we talk about it, the more of it appears all over the internet.

Speaker 9 Yes, it is taking over the internet. And for that reason, we thought we should introduce a new segment that we are calling the hard fork review of Slop.

Speaker 9 The Hard Fork Review of Slop.

Speaker 10 Oh my God, that's so perfect. That's beautiful.
You know, this, I would say, is generally a STEM podcast.

Speaker 10 We care a lot about science and technology and engineering, not as much math, but we also care about the arts.

Speaker 10 And so we thought, why don't we carve out some time time on the show to talk about some of the new achievements in AI art that we're seeing out there on the internet and also sort of bring our critical eye to them and, you know, put them in conversation with the culture.

Speaker 9 Yes, we have critics out there for books and movies and music and video games.

Speaker 9 And I think slop is like an emerging genre of cultural production, most of which is bad, but some of which may actually be good.

Speaker 9 And so we need to stand here amid the floodgates, sort of filtering out the bad slop and letting the good slop get through.

Speaker 10 Okay, I just want to signal up front: I didn't actually bring anything good. I didn't know that was part of the assignment.

Speaker 9 I have one good one, but we'll save it for the end. All right, fair enough.
So, Casey, tell me about the slop that you have been looking at, and then I will tell you about some slop that I've found.

Speaker 9 That's great.

Speaker 10 Well, Kevin, maybe to just kind of warm us up, we can look at some of the slop that I think of as CoComelon for adults, just kind of pure visual stimulation, no ideas in it whatsoever.

Speaker 10 And this kind of slop you can find on TikTok if you search for glass fruit cutting. Have you seen any of the glass fruit cutting?

Speaker 9 No.

Speaker 10 Okay, let's see if we can queue one of these up. Some of these are in the sort of like the ASMR realm.

Speaker 9 Ooh.

Speaker 10 This man is cutting into a coconut with a knife, but the coconut is glass.

Speaker 10 Oh, the kiwi is glass.

Speaker 9 Oh,

Speaker 9 that's.

Speaker 9 Oh, I don't like that sound. That just has like nails on a chalkboard vibes for me.

Speaker 9 This is what's replacing reading in our schools. I mean, literally.

Speaker 10 Now, what I liked about this one, Kevin, is it's glass pancakes

Speaker 10 with a beautiful maple syrup.

Speaker 10 You really hate that sound, Kevin.

Speaker 9 That's a jump scare sound for me.

Speaker 9 Oh, we got a doughnut, a glass donut.

Speaker 10 We'd love to see a glass doughnut.

Speaker 10 Oh, just cutting through the glass bowl of

Speaker 10 cereal there.

Speaker 9 The physics here are actually kind of impressive, right?

Speaker 9 Like it is showing the reflections of the knife in the glass. It looks vaguely realistic.
Yeah.

Speaker 10 It's weird because, like, the food looks delicious, beautiful even, but it's glass, so it's off-putting. Yeah, like it just sort of doesn't make any sense.

Speaker 10 And so it hypnotizes your brain into this sense of, I don't know what I'm watching. I don't want to look away.
Yes.

Speaker 10 And I'm going to stop thinking words. Yes.

Speaker 9 It's sort of like the spiritual successor to those like crush videos where they'll just have like the hydraulic press and they'll just like press down on like seven objects. Yeah.

Speaker 10 And now instead of just wasting those objects, we can waste water and electricity.

Speaker 9 All right. What do you have?

Speaker 9 So I have an example from the news.

Speaker 9 Actually, this one comes to us from DirecTV, which has just struck a partnership with an AI company called Glance that will allow people with DirecTV Gemini devices to put themselves inside of 30-second AI generated videos.

Speaker 9 Basically, if you step away from your TV to go get a snack or go to the bathroom, you might come back and find that you are in the ad on the TV.

Speaker 9 And Casey, let's watch an example of this. So this kind of shows how it works.
You connect it to your TV, you put in your photo, tap a couple buttons, and it generates your looks.

Speaker 10 This process is already so absurd.

Speaker 9 And then boom, there you are in a blazer. Now, Casey, I thought the point of advertisements was to show clothes on people who are more attractive than me

Speaker 9 to entice me to buy them.

Speaker 9 Why would I want to see clothes ads with me in them?

Speaker 10 I can't answer that question, honestly.

Speaker 10 This one is so funny to me because

Speaker 10 the process you have to go through to do this is so complicated.

Speaker 10 I basically cannot imagine a a single person doing this. First of all, you already have a TV that is like working against you, right?

Speaker 10 Like the way that this works is that if you have one of these TVs and you leave it idle for 10 minutes, AI takes over, which is like, it's kind of like, you know, if the bus goes below 55 miles an hour, it explodes.

Speaker 10 This is that, but for like AI advertising, and then after it shows you these images, then it's up to you to go scan a QR code and take a photo of yourself.

Speaker 10 Like, no one who is watching TV wants to do any of this at all.

Speaker 10 So

Speaker 10 it's a very silly process. And, you know, I mean, in the

Speaker 10 demonstration, the photos look fine. They look fine.
They look fine. Yeah.
But let me ask you this. Do you not know what you look like with a jacket on? You know what you look like with a jacket on.

Speaker 10 Let me just say this. What are we doing here? That is my review of this.
What are we doing here?

Speaker 9 We are selling advertising technology.

Speaker 10 Okay. So now I just want to show one that made me laugh.
I call this one Woman on the Walmart Shelf, if we want to cue this one up.

Speaker 10 I saw this one on TikTok, although it does have the Sora watermark on it. And I think this speaks to the ability of AI slop to just kind of create, like, a classic pratfall physical comedy situation.

Speaker 10 This one involves what looks like store security cam footage of an older woman on a very high shelf inside a Walmart. And there's a police officer who's looking up at her as our story begins.

Speaker 21 Ma'am, please come down from there.

Speaker 9 You want me to come down?

Speaker 9 Yes, ma'am.

Speaker 10 And she kind of does a header off the shelf and crashes into the police officer. So Kevin, what did that one make you feel?

Speaker 9 There's a lot there.

Speaker 10 There's a lot of layers to this onion.

Speaker 9 Is this a one-off, or is there a larger genre of older people falling off the top shelf at the grocery store onto a police officer?

Speaker 10 It's a whole interconnected cinematic universe with sort of, you know, these very rich sort of characterizations. The vocal performances are really amazing.
So I encourage you to get into it.

Speaker 10 Beautiful stuff. No, this is a one-off, Kevin.
I've never seen anything else related to it.

Speaker 9 Yeah, I'm not worried that people are going to start throwing themselves off the shelves of grocery stores to sort of mimic the trend here. This one feels pretty harmless to me.

Speaker 9 And I appreciate inspiring older people to do things like climbing up to the top shelf of the grocery store.

Speaker 10 I mean, look, these days, anytime I see a Sora video that isn't like misappropriating the likeness of Martin Luther King Jr., I say that's a win for Slop.

Speaker 9 Yes, this one, I think, pretty harmless.

Speaker 10 All right, what else you got?

Speaker 9 Well, this next one, Casey, was not harmless because it involved America's queen, Dolly Parton.

Speaker 10 Oh, no.

Speaker 9 Leave her alone.

Speaker 9 Basically, some sicko out there has been generating AI images of Dolly Parton looking very sick, including at least one image of Reba McEntire visiting Dolly Parton on her deathbed, which went around on the internet and led to a bunch of rumors that Dolly Parton, God forbid, was dying.

Speaker 9 Oh no. See, I hate this.
Yeah, I don't like this either. Let's watch Reba's video summarizing the whole thing.

Speaker 10 You tell them, Dolly, that AI mess has got us doing all kind of crazy things. You're out there dying.
I'm out here having a baby.

Speaker 10 Well, both of us know you're too young and I'm too old for any of that kind of nonsense. But you better know I'm praying for you.
I love you with all my heart, and I can't wait to see you soon.

Speaker 10 Love you. Wait, just to be clear, what you showed me was real, not slop.
That is Reba McEntire's actual Instagram account.

Speaker 9 That is Reba McEntire's actual Instagram account. She does show some of the slop images inside it, of Reba at Dolly's deathbed.

Speaker 9 And Dolly responded with another real video from her real social media account saying, quote, I ain't dead yet. So, Casey, what do you make of this one?

Speaker 10 I mean, this is so bad, you know?

Speaker 10 So many of the fears around misinformation have been that there would just come a time when you can't tell what is true and what is false.

Speaker 10 And the better that image generation software gets, the more of these little viral hoaxes we're going to see going around. So this is super bad.

Speaker 10 I'm truly trying to, like, what kind of person do you have to be to be like, today is the day that I create a rumor that Dolly Parton has died and I'm going to like use Sora to prove it? Truly.

Speaker 9 It is like mind-boggling to me.

Speaker 9 If you wanted to turn the public against AI and against AI-generated content, the most effective thing you could do would be to go after Dolly Parton, who everyone, literally everyone loves.

Speaker 10 No, I hope Jolene finds whoever did this and does a number on him.

Speaker 9 Let's just say Dolly Parton's lawyers are going to be working more than nine to five.

Speaker 10 This next one is sort of a narrated journey.

Speaker 10 We are returning to Walmart for this one. And this creator is very interested in the use of AI to create

Speaker 10 like art on products. You know, so you're sort of like, you know, I've seen some that are like at a craft store and like there's like framed.
pictures of what has clearly been AI generated.

Speaker 10 In this case, she picks up some butter cookies at Walmart and makes a pretty convincing case that it is slop art. And I enjoyed this journey.
Let's see how it looks here.

Speaker 2 This is bad.

Speaker 9 I didn't think it could get worse, but you guys were right.

Speaker 2 The butter cookie tins at Walmart are way worse than the popcorn tins.

Speaker 9 Because why is Santa throwing ass?

Speaker 23 Why is he squatting on a table?

Speaker 5 Why does he look like he's about to twerk?

Speaker 21 What is his hand doing? What is

Speaker 10 Santa has the fattest ass in this?

Speaker 4 Look how wonky that is.

Speaker 23 And what is this wall full of random things?

Speaker 9 Like, can you make out what any of that is actually supposed to be? It looks like there's cobwebs on the roof. Whether that's intentional or not, I don't know. It's just like random shapes on the wall. Slop? No. All right, we can probably stop it there. I have to say, this video made me feel very naive, because I did not realize that there were mass-produced products in Walmart stores that are AI-generated. Oh yeah, yeah. And I also love that there are now slop detectives who are just going out there, vigilante style, and investigating the slop on the shelves of their local Walmart.

Speaker 9 That's beautiful to me. We need more citizen participation.

Speaker 10 Honestly, it could be a segment for our show, you know: Slopvestigations.

Speaker 9 Mmm.

Speaker 9 Let me ask. Wait, let me ask a question.

Speaker 10 Yeah. If you're shopping and you pick up an object and you see that, you know, there's slop art, does that affect the way that you want to buy it or not buy it one way or the other?

Speaker 9 No. Okay.
I mean, I think there's like a whole like category of art that basically doesn't matter, which is like the stuff on the cookie tin, right? The stuff at Walmart.

Speaker 9 No one is winning any prizes for that. No one is reaching any new heights of creativity.

Speaker 9 Basically, this is just a way for the butter cookie manufacturer to save a couple bucks and not have to hire an illustrator or use some stock art from the internet.

Speaker 10 And do you think they're passing the savings on to us, the customers?

Speaker 9 Probably not. Probably not.
That's probably going right to their bottom line.

Speaker 10 That's unfortunate.

Speaker 9 Yes.

Speaker 9 What about you? Would you be less likely to buy something if slop had been used in its advertising?

Speaker 10 I mean, maybe, you know, because I think it speaks to a kind of cheapness and a lack of care.

Speaker 10 And so if I were buying like a heart defibrillator and I saw that there was slop art on the box, I would say, I don't know if I could trust these people.

Speaker 9 What about butter cookies from Walmart? Are you going for quality when you're buying butter cookies from Walmart?

Speaker 10 I want only the, I want brown butter. If there was going to be butter cookies.

Speaker 10 Butter is a great flavor, but it needs something else. You know what I mean?

Speaker 9 Okay. So for Casey, only the artisanal images of Santa with a huge ass.

Speaker 10 Small batch, huge ass, Santa, butter cookies, please.

Speaker 9 Okay.

Speaker 9 One more example of slop that I want to tell you about today, Casey, and get your opinions on. This one is what I would consider a good slop.

Speaker 9 This is slop that is being made toward a noble cause, which is preventing the AI apocalypse. Now, Casey, you might think to yourself, how could this happen?

Speaker 9 How could AI slop be used to ward off the AI apocalypse? I was just about to ask you that. Well, this is a company called Hyperstition.
It was founded by Andrew Cote and Aaron Silverbook.

Speaker 9 And basically, this is a company that is trying to counteract all of the sci-fi stories and narratives out there about AI going rogue and killing people, which, you know, this, this hypothesis goes sort of makes its way into the training data for these AI systems and actually makes them more likely to sort of go rogue.

Speaker 10 It gives some ideas.

Speaker 9 It gives them some ideas. And so Andrew Cote said,

Speaker 9 what if we combated this by writing a bunch of AI-generated novels about AIs and humans getting along really well?

Speaker 9 And then we fed that into the training data for the AI systems to kind of give them some more good examples to follow.

Speaker 10 All right. Kind of a convoluted explanation, but sure, why not?

Speaker 9 So this company has just gotten a grant. I read about this on Astral Codex Ten.
They just got a grant to create 5,000 AI-generated novels.

Speaker 9 And they're trying to have these novels be sort of 80,000 words. And they're going to enlist the public's help to help generate these.
And you can buy credits, about $4 a book to generate this.

Speaker 9 And then they're going to try to feed these into the language models and get the models to think about maybe good scenarios and maybe be more likely to act on that.

Speaker 10 Wait, why does the public get involved if the works are all AI generated?

Speaker 9 I think they want it to reflect a diverse set of

Speaker 9 scenarios and characters. Basically, they want just people to sort of get involved in this and make it as diverse as possible.
All right.

Speaker 10 Well, do we have any examples we can see?

Speaker 9 No. Great.

Speaker 9 So what do you make of this attempt to use slop for the benefit and potentially the salvation of humanity?

Speaker 10 Here's what I'm going to say. If it turns out that the thing that is needed to prevent human extinction from AI is a massive infusion of slop into the training data, I'll be very surprised.
I'll be very surprised if that was the difference maker.

Speaker 9 I share your skepticism. I think the default outcome from this project is that it probably doesn't save us from the AI apocalypse.

Speaker 9 I think a funny secondary effect would be if one of these like 5,000 slop novels goes on to become a huge bestseller and like becomes the literary craze that takes over the country.

Speaker 9 Do I think that's likely? No, but it could happen.

Speaker 10 Well, as we mentioned earlier in the show, it doesn't seem like people are reading all that much these days.

Speaker 10 But, you know, maybe all of this will eventually be fed into a notebook LM video presentation that folks can watch.

Speaker 9 Yes. All right.
That is it for the Hard Fork review of slop, and we welcome your submissions for future installments. If you spot some slop that is worthy of cultural interrogation by some of our nation's foremost slop critics, please send it over to us at hardfork@nytimes.com, along with a brief explanation of the effect it had on you, how it moved you.

Speaker 10 Yeah, we want to like, it can't just be like, look at this weird thing. Like, I want to see slop that made you feel something.
Yeah.

Speaker 9 And the next time you see a Santa with a suspiciously large posterior.

Speaker 9 Call us. Call us.
Email us.

Speaker 10 We want to know about it. We want to see it.

Speaker 9 And we want to see it. He has a folder on his MacBook that's just photos and images of Santa with a very large.

Speaker 10 I love a thick Santa. And I salute them, sir.

Speaker 10 See you on Christmas, big guy.

Speaker 9 The hard fork review of slop.

Speaker 1 300 sensors. Over a million data points per second.

Speaker 3 How does F1 update their fans with every stat in real time?

Speaker 5 AWS is how.

Speaker 7 From fastest laps to strategy calls, AWS puts fans in the pit.

Speaker 2 It's not just racing, it's data-driven innovation at 200 miles per hour. AWS is how leading businesses power next-level innovation.

Speaker 13 This podcast is supported by Bank of America Private Bank.

Speaker 14 Your ambition leaves an impression. What you do next can leave a legacy.

Speaker 16 At Bank of America Private Bank, our wealth and business strategies can help take your ambition to the next level.

Speaker 17 Whatever your passion, unlock more powerful possibilities at privatebank.bankofamerica.com. What would you like the power to do?

Speaker 16 Bank of America, official bank of the FIFA World Cup 2026.

Speaker 19 Bank of America Private Bank is a division of Bank of America NA member FDIC and a wholly owned subsidiary of Bank of America Corporation.

Speaker 22 Know the feeling when AI turns from tool to teammate?

Speaker 13 If you're Rovo, you know.

Speaker 22 With Rovo, you can streamline your workflow and power up your team's productivity. Find what you need in a snap with RovoSearch.

Speaker 22 Connect Rovo to your favorite SaaS apps to get the personalized context you need. And Rovo is already built into Jira and Confluence.

Speaker 22 Discover Rovo by Atlassian and streamline your workflow with AI-powered search, chat, and agents. Get started with Rovo, your new AI teammate, at rovo.com.

Speaker 10 Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Jen Poyant.
This episode was fact-checked by Will Peischel. Today's show was engineered by Chris Wood.
This episode was fact-checked by Will Peischel. Today's show was engineered by Chris Wood.

Speaker 10 Original music by Elisheba Ittoop, Diane Wong, Rowan Niemisto, and Dan Powell. Video production by Sawyer Roquet, Pat Gunther, Jake Nicol, and Chris Schott.

Speaker 10 You can watch this whole episode on YouTube, along with all that slop, at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

Speaker 10 You can email us at hardfork@nytimes.com with a slop that made you stop.

Speaker 20 Spend more time interviewing candidates who check all your boxes. Less stress, less time, more results now with Indeed sponsored jobs.

Speaker 20 And listeners of this show will get a $75 sponsored job credit to help get your job the premium status it deserves at Indeed.com/slash NYT.

Speaker 20 Just go to Indeed.com/slash NYT right now and support our show by saying you heard about Indeed on this podcast. Indeed.com/slash NYT terms and conditions apply.

Speaker 20 Hiring, do it the right way with Indeed.