California Regulates A.I. Companions + OpenAI Investigates Its Critics + The Hard Fork Review of Slop

Could this be a blueprint for the nation?

Runtime: 1h 5m

Transcript

Speaker 1 The University of Michigan was made for moments like this.

Speaker 1 When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.

Speaker 5 From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible.

Speaker 9 Wherever we go, progress follows. For answers, for action, for all of us, look to Michigan.

Speaker 10 See more solutions at umich.edu/look.

Speaker 11 Casey, I heard the big news this week in tech is that Waymo is going to London.

Speaker 12 You know, I saw that and I thought British people are going to hate it.

Speaker 11 I had a different question, which was: how are they going to teach it to drive on the other side of the road?

Speaker 12 That's a very good question.

Speaker 11 Just switch the software? Go on the other side now. Everything you used to do, do it in reverse.

Speaker 12 That's like the autonomous vehicle equivalent of dark mode is when you have to drive on the other side of the road. You know, it's not available at launch, but eventually they bring it up.

Speaker 11 Do you think they have to put the steering wheels that don't do anything on the other side of the car too?

Speaker 12 Presumably.

Speaker 11 I'm Kevin Roose, a tech columnist for The New York Times.

Speaker 12 I'm Casey Newton from Platformer.

Speaker 11 And this is Hard Fork.

Speaker 12 This week, the first state law to regulate AI companions is here. Will it be enough?

Speaker 12 Then, OpenAI is waging legal battles against its critics, and ENCODE lawyer Nathan Calvin joins us to explain why the company served him with a subpoena.

Speaker 11 And finally, it's time for the first-ever Hard Fork Review of Slop.

Speaker 12 Grab your opera glasses, Kevin.

Speaker 11 Well, Casey, it's been a big week for tech regulation in the state of California.

Speaker 12 That's right, Kevin. Everywhere I look, it's bills, bills, bills.
I'm like, what is this? A Destiny's Child song?

Speaker 11 Very topical 90s reference.

Speaker 12 Listen, a lot of our listeners are seniors, and they're going to really appreciate that one.

Speaker 11 So on Monday of this week, Governor Gavin Newsom of California signed into law a bunch of new tech-related bills that had been making their way through the state legislature in California.

Speaker 11 And we're going to talk about them today.

Speaker 11 And if you're not a listener of ours who lives in the state of California, you may be asking, why are you devoting an entire segment to tech regulation in California?

Speaker 11 And Casey, what is our response to that?

Speaker 12 Well, Kevin, I think you and I both believe that while AI has the potential to do some good, it's also clearly causing some harm.

Speaker 12 And right now, the AI companies are operating with very minimal regulations on what they do. And that's just been a growing source of concern.

Speaker 12 We have talked over the past year on this show about teenagers who have died by suicide after having very tragic interactions with chatbots.

Speaker 12 And I think there has been a growing cry for some kind of guardrails to be placed around these companies.

Speaker 12 So that is what we are talking about today is a state that had some ideas that actually managed to pass the laws and is putting them into practice and will hopefully rein some of these companies in.

Speaker 11 Yeah, California is a uniquely important state in tech regulation for a couple reasons. One of them is a lot of the companies are based here.
They care a lot about how California regulates them.

Speaker 11 And the laws that are passed in California tend to sort of ripple out to the rest of the country and the rest of the world. They tend to become kind of de facto national standards.

Speaker 11 And especially at this moment, when our federal government is shut down and, even when it is operating, doesn't seem interested in passing any tech regulations.

Speaker 11 What we have is state-level regulation standing in for the federal regulation that doesn't exist.

Speaker 12 Yeah.

Speaker 12 So today let's talk about some of these bills that got passed and what we think they tell us about what some common sense approaches to regulating AI might look like.

Speaker 11 Okay, so let's start with what I think may be the most important bill that has come out of this flurry of legislation, which is SB 243. Casey, what is SB 243 and what does it do?

Speaker 12 What SB 243 does is it requires developers to identify and address situations where users are expressing thoughts of self-harm.

Speaker 12 So they have to have a protocol for what they're going to do if they see somebody express these thoughts. They have to share that protocol with California's Department of Public Health.

Speaker 12 And they have to share statistics about how often they are directing their users to resources. And then starting in 2027, Kevin, the Department of Public Health has to publish this data.

Speaker 12 So that's a little longer than I would like to wait to start getting this data.

Speaker 12 But my hope is that when that begins, we will have a very large and useful set of public health data about the actual effects of chatbots on the population of California.

Speaker 12 So if you're somebody like me who's really interested slash worried about what it is going to do to our society and our culture once so many people are chatting with these bots every day, this is a really big step toward understanding that.

Speaker 11 Yeah, I think this is a good one for us to drill down on because it is a place where I think there is sort of a lot of attention and momentum around regulating.

Speaker 11 You know, OpenAI has recently rolled out some parental controls.

Speaker 11 Character AI, which we've also talked about on the show, now has a disclaimer on its chatbots and some additional guardrails for minors.

Speaker 11 So I think the platforms were starting to kind of comply with these kinds of laws in advance of them actually becoming laws, but this will at least give them some formal requirements.

Speaker 12 Yeah, and we should mention a couple more of those requirements.

Speaker 12 In California, chatbots will now have to tell you that their output is AI generated.

Speaker 12 Of course, you know, our savvy listeners probably already know that, but there may be some people who are chatting with ChatGPT and aren't entirely sure what's going on.

Speaker 12 This bill does have a few additional protections for minors, including that chatbots cannot produce sexually explicit images for them.

Speaker 12 And it's going to remind minors to take breaks if they have been chatting with ChatGPT for a really long time.

Speaker 12 So interestingly, there was another bill that California legislators passed, which would have, I think, potentially banned ChatGPT use for minors. And Gavin Newsom vetoed that.

Speaker 12 He was like, that's going too far. But this is kind of one step back from that, and I do think it adds some meaningful protections.

Speaker 12 And no longer do we have to rely on the goodwill of an OpenAI or a Character AI to implement these things. Now the law says you actually have to do this.

Speaker 11 Now, does this law apply to all of the AI platforms or just the like really big ones with hundreds of millions of users?

Speaker 12 So according to a legislative analysis, it will apply to basically any chatbot that can be used as a companion. And initially I didn't know, like, would that include ChatGPT?

Speaker 12 Most people, I think, don't really think of ChatGPT as a companion.

Speaker 12 But according to this legislative analysis, yes. And, you know, look, if you're talking to it for three hours a day, it's some kind of companion to you.

Speaker 11 Yeah, I think this is a case where the industry kind of understood that something was going to be done about chatbot companions in the arena of state regulation.

Speaker 11 And they had this other proposal that they thought was too strict and stringent.

Speaker 11 And so they kind of accepted the lesser of two evils and sort of begrudgingly got behind this bill that actually did end up being signed into law.

Speaker 12 That's right.

Speaker 12 And can I talk about why I think this is important, guys?

Speaker 12 Okay. So just on Tuesday, we get this really interesting tweet from OpenAI CEO Sam Altman.

Speaker 11 Okay.

Speaker 12 This tweet gets a lot of attention because it says at the end that in December, they're going to allow what they call verified adults to start using ChatGPT to generate erotica.

Speaker 12 Let's set that aside for a second. Here's what Sam says at the beginning of this long tweet: He said, We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.

Speaker 12 We realized this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue, we wanted to get this right.

Speaker 12 Now that we've been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.

Speaker 12 And then he says that what people liked about GPT-4o, which was the model they got in trouble over because it was so sycophantic and encouraged people who were telling it things like, I'm not taking my medication anymore, or, I think I'm God.

Speaker 12 They said they're going to bring whatever people liked about that model back to ChatGPT.

Speaker 12 How does this connect to the California bill? I'm not sure that OpenAI has mitigated the serious mental health issues that came up with this.

Speaker 11 Yes.

Speaker 12 It has been two weeks since they rolled out parental controls in ChatGPT. Do we really have enough data to say this one is under control?

Speaker 12 I have to say, Kevin, I was actually pretty shocked by this tweet, not for the erotica stuff, but for the GPT-4o stuff and the suggestion that we have a handle on how to guide these chatbots so that they don't hurt people.

Speaker 12 What did you think when you saw this?

Speaker 11 I thought, they are really trying to drive up usage numbers.

Speaker 11 They must be seeing something that is suggesting to them that people were engaging more with ChatGPT when it was more like a companion, when it was telling people more flattering and sycophantic things.

Speaker 11 And I suspect that that is part of the reason that they are trying to sort of get that mode back. Now, I think there's actual logic there.

Speaker 11 I think, you know, a lot of users do want something that's going to tell them they're great. But we do not know how they have solved these safety challenges that they have supposedly solved, or what these mitigations for mental health issues actually are.

Speaker 11 It's also like, it's not clear to me that things are just as simple and binary as like some users have mental health problems and some don't.

Speaker 11 And like for the ones that don't, we're going to give them the kind of like unhinged, unfiltered chatbot experience. And for the ones who do, we'll like guide them onto a set of guardrails.

Speaker 11 Like these things exist on a spectrum.

Speaker 11 And it may not be obvious to OpenAI or even to the users themselves when people are starting to develop mental health issues as a result of talking to these chatbots.

Speaker 12 I just, I find it so confusing because on one hand, you know, this is a company that will put up a blog post that says, We do not want to optimize for engagement.

Speaker 12 We want you to use this thing, have it help you and let you get on with your life. And then they release Sora, the infinite slop feed.

Speaker 12 And then they say, We're bringing back the most sycophantic AI personality in the history of the company because we know that some of you out there might need a friend.

Speaker 12 So it really feels like there are two wolves inside of OpenAI right now.

Speaker 12 And I think it just makes necessary some common sense legislation that starts to put some guardrails around this and signals to these companies that they cannot just do whatever they want.

Speaker 11 Yeah, I mean, I guess we'll have to see how the AI companies comply with these.

Speaker 11 I do not think that this law is going to dissuade them from trying to build the chatbot companions because that's obviously a lucrative industry for them.

Speaker 11 But I hope it does make them pay more attention to things like safety and mental health, especially for younger and more vulnerable users.

Speaker 12 Yeah.

Speaker 12 Now, at this point, our listeners are thinking, Kevin, you told us that there were actually a lot of bills that California passed, and I'm desperate to know what they are.

Speaker 11 Well, Casey, buckle up. I'm going to run through a few of the other bills here.

Speaker 11 We won't talk about all of them in that much detail, but we have AB 621, which provides stronger protections against deepfake porn.

Speaker 11 This bill, this law, will make it possible for victims of non-consensual deepfake porn to sue platforms that facilitate the creation of that porn for up to $250,000 per violation.

Speaker 12 Yeah.

Speaker 12 And this is really important because a trend that we haven't talked much about on the show this year, Kevin, is that there are these really sketchy companies that make what are called Nudify apps.

Speaker 12 They've been advertising themselves all over Facebook and Instagram somehow, and people have been using them to generate these deepfakes.

Speaker 12 And now there is a law on the books that says, hey, we can actually come after the companies themselves. So I think that's just obviously a good thing.

Speaker 11 So, next bill I want to talk about is AB 853. This is the California AI Transparency Act.

Speaker 11 And this caught my attention because it essentially requires that AI companies build into their systems tools to detect whether a given piece of content, an image, video, or audio clip, is in fact AI-generated.

Speaker 11 Basically, they have to offer users a way to put in an image and ask, hey, did you generate this image, and get a reliable response back.

Speaker 12 Yeah, here's what this means. If you come to California and you see a video of dogs playing poker, no longer will you have to wonder, are those dogs really playing poker? There will be a watermark, and you will get the answer to your question.

Speaker 11 That's true. About damn time. Then there was AB 56, which was about warning labels for minor users of social media platforms about the potential mental health risks associated with using the apps. Casey, this one's pretty wild.

Speaker 12 Yeah, it's certainly very intrusive. The law dictates how much of the screen this warning has to cover and how often it has to appear, which is basically right when the person starts using the app.

Speaker 12 And then again, after three hours, which is like sort of funny to me, it's like you can have your three hours, but then we're going to, you know, remind you that you might be cooking.

Speaker 11 Well, we should say, like, it's not going to be a small little thing on the screen.

Speaker 11 The law says that after three hours of use, platforms must display a 30-second non-bypassable warning covering at least 75% of the screen.

Speaker 11 If you are a 16-year-old and you are scrolling TikTok or Instagram for more than three hours, you are going to get a giant cigarette-style warning, essentially, on your screen that you cannot skip or dismiss for 30 seconds. And that is going to happen again after every additional hour after that.

Speaker 11 So I predict there will be a lot of teens finding clever ways around this, because teens do not like to wait for their TikToks.

Speaker 12 I'm just curious, like, will teenagers see this and think, oh my God, I have to get off TikTok?

Speaker 12 Or will they think, damn, I am such a badass for using this crazy dangerous app? Because I can see it going that way too.

Speaker 11 Yeah, what a rebel. Meet me behind the school, we're watching TikToks.

Speaker 12 You know, semi-related, there was a study that came out this week, a pretty big study of 6,000 children who were under 13. Did you see this?

Speaker 11 Uh, no. What is it?

Speaker 12 So they tracked their use of social media, and they found that the more time per day kids spent on social media, the more that was associated with being bad at reading. And so my question is, should this warning say, Hey, you know, kids, be careful, this app could be bad for your mental health? Or should it say, you are actively becoming worse at reading than everyone in your class? I actually think that might be more effective.

Speaker 11 Yes. Actually, in order to bypass the warning label, you should have to do a reading comprehension quiz based on a short story by Ernest Hemingway. Just say, like, no more TikToks for you.

Speaker 12 Yes.

Speaker 11 Until you can tell me what The Old Man and the Sea was about.

Speaker 12 I'm still trying to find that one out.

Speaker 11 All right. Next bill.
This one is one I wanted to get your take on. This is AB 1043.
This is about age verification, a subject we have talked about on this show before.

Speaker 11 This bill would require that Apple and Google, which make the two most popular mobile operating systems, verify users' ages in their app stores.

Speaker 11 Casey, explain what this bill does and whether it's a big deal or not.

Speaker 12 Yeah, so there are a bunch of different approaches to what they call age assurance in the business.

Speaker 12 And the reason that this one is notable to me is that California actually took the approach that I favor, which is that when someone is setting up a device for their child, the parent inputs the age of the child, and that information is then passed along to the app store and to the developers.

Speaker 12 And the thing that is great about that is that it seems like the most privacy protecting of all of the age assurance protocols that we've seen, right?

Speaker 12 Other states, they want you to potentially like upload a driver's license, right? You're providing a lot of really personal data. Some of that is being held by third parties.

Speaker 12 All that stuff is subject to, you know, data breaches and who knows what else. In California, it's just like, hey, you're the parent, you're the guardian.

Speaker 12 You tell us how old your kid is and we will make sure that they don't download an app that they're not supposed to have. Right.

Speaker 11 So instead of what's happening today, which is that every app asks you to sort of say how old you are when you sign up and create an account and it just kind of works on the honor system, this would essentially force Apple and Google to, when you're getting a new iPhone or a new Android phone and your parents are helping you set it up, they kind of like, you know, put in your birthday.

Speaker 11 And as a 16-year-old, it shows, okay, I am a minor, I'm 16 years old. And then your phone will pass that information to every app that is installed on that phone.

Speaker 11 Is that more or less correct?

Speaker 12 Exactly.

Speaker 11 Now, you said you favored this solution. Are you taking credit for this bill?

Speaker 12 No, I'm not taking credit. It also wasn't my idea. Other smart people have been talking about this for a while. But I've written about it in the past, and this is what I said I thought we should see happen.

Speaker 12 And, you know, every once in a while in a democracy, you get to see something you actually want. And it's a lovely thing when that happens.

Speaker 11 Every once in a while, you have a good idea. Yeah, cherish it.

Speaker 12 Cherish the moment.

Speaker 11 All right. One more bill we should talk about because this is the one that has actually gotten most of the attention and a lot of the lobbying dollars.
This is SB 53.

Speaker 11 This was actually signed into law last month. This is the Transparency in Frontier Artificial Intelligence Act.

Speaker 11 This is the sort of successor bill to SB 1047, which we've talked about on the show before. That bill was vetoed by Governor Newsom last year.

Speaker 11 This new bill is essentially a watered-down version of that bill. It establishes some basic transparency requirements for the biggest AI companies, what they call large frontier developers.

Speaker 11 It requires them to publish information about their safety standards and creates a new mechanism to report potential critical safety incidents to the California state government.

Speaker 11 It also establishes some whistleblower protections for people inside the companies who may want to disclose some significant risks posed by their models.

Speaker 11 And this one did pass and was signed into law. Casey, do you think this is a big deal?

Speaker 12 I think it's great that we have some transparency requirements. I think it's great that we have some whistleblower protections.

Speaker 12 When I think about the things regarding AI development that concern me the most, this bill does not speak to them.

Speaker 12 But I feel like the main reaction that I've read to this bill is a bunch of people saying, yeah, this couldn't hurt.

Speaker 11 You know, that's kind of how this feels. It's like, yeah, it's fine. Right? It feels pretty toothless to me. And it also is basically codifying something that a lot of the companies are already doing anyway.

Speaker 11 I think, of the large frontier developers, all or nearly all of them already publish things that would put them into compliance with this law.

Speaker 11 So I like the idea of not just relying on voluntary self-regulation, but this seems like a pretty weak bill, weak enough that most of the AI industry didn't feel like it was worth opposing.

Speaker 11 There were industry groups that lobbied against it, but I think for the most part, they said, okay, well, this is better than the one that we tried to kill last time.

Speaker 12 Yeah.

Speaker 11 Okay, so that is a bunch of information about these California state AI laws and social media laws.

Speaker 11 When we kind of step back and zoom out here, does this give you any thoughts about how AI regulation and tech regulation in general is going?

Speaker 12 I think in some ways it's going better than I expected, Kevin. You know, we covered the past decade of lawmakers twiddling their thumbs, wondering how social media ought to be regulated.

Speaker 12 It took too long. Some of those efforts have finally gotten off the ground at the state level, but after a lot of harm was done.

Speaker 12 In the case of AI, we are earlier in that kind of epoch of tech, but already we've seen California and other states make some pretty decisive moves to build some guardrails and create some transparency requirements.

Speaker 12 And I think that's a really good thing. We're going to have to see how effective these things are.
But I just want to say we need something like this.

Speaker 12 It is significant that this week OpenAI came out and said that, despite everything that has happened this year with its chatbots and mental health, it is going to hit the accelerator on making them more personable, more sexual, and more powerful.

Speaker 12 That will continue to have reverberations, and we need state lawmakers paying attention to that.

Speaker 11 We need federal lawmakers paying attention to that.

Speaker 12 Be realistic.

Speaker 11 I can't talk to you when you're being hysterical.

Speaker 11 Like, what this makes me feel is... God, I wish we had a Congress that could do something about this. I really am sympathetic to the AI companies on this one point.
Like, I really, I am sympathetic to the AI companies on this one point.

Speaker 11 I do not think that state-level regulation is the best way to do this.

Speaker 11 I do not think it is good or efficient to have 50 individual states all kind of coming up with their own bills and trying to pass them and then have the AI companies have to like look at all the 50 states and decide how they're going to build systems that comply with all of those.

Speaker 11 Like that does not feel like a good solution to me.

Speaker 11 For that to not be the default path here, we are actually going to need Congress to step in and do something at the federal level.

Speaker 11 And right now, our government is shut down, so I don't have high hopes.

Speaker 11 But I think that in the absence of Congress getting its act together and deciding to do something federally, what we're going to end up with is a bunch of states doing what California has done here and just trying their best to get some rules on the books while they can.

Speaker 12 Yeah, I agree with that. I would add that Senator Josh Hawley is currently circulating a draft bill that would ban AI companions for minors.

Speaker 12 Who knows how far that will make it through?

Speaker 12 But I do think that there are a significant number of members of Congress who would like to see something like this happen.

Speaker 12 The question, of course, as ever, is whether they can get something across the finish line. Yeah.

Speaker 11 All right, Casey, that is what's happening in California. When we come back, we'll talk about how this legislative fight got personal for one AI lawyer.

Speaker 14 This podcast is supported by GiveDirectly, a nonprofit that lets you send cash directly to the world's poorest families so they can invest in what matters most to them.

Speaker 14 This year, more than 30 of your favorite podcasters are joining forces for Pods Fight Poverty to send cash to over 700 families in three Rwandan villages.

Speaker 14 And until December 31st, your first donation is matched. Join listeners everywhere fighting poverty at givedirectly.org/times.

Speaker 1 The University of Michigan was made for moments like this.

Speaker 1 When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.

Speaker 5 From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible.

Speaker 9 Wherever we go, progress follows. For answers, for action, for all of us, look to Michigan.

Speaker 10 See more solutions at umich.edu/look.

Speaker 1 This podcast is supported by AT&T. America's first network is also its fastest and most reliable.

Speaker 1 Based on RootMetrics United States RootScore Report, 1H 2025, tested with best commercially available smartphones on three national mobile networks across all available network types.

Speaker 1 Your experiences may vary. RootMetrics rankings are not an endorsement of AT&T.
When you compare, there's no comparison. AT&T.

Speaker 11 Well, Casey, there's another big story involving the law and AI this week that we wanted to chat about.

Speaker 11 And it involves this behind-the-scenes beef that has been going on between OpenAI and some of its biggest critics.

Speaker 12 Yeah, so there are really two big legal battles that this story is at the intersection of. One is the battle over OpenAI trying to convert itself into a for-profit enterprise.

Speaker 12 Right now, OpenAI is famously a nonprofit. This has created many issues for the company over the past several years.
They want to be sort of a more normal, money-making enterprise.

Speaker 12 And this is opposed by lots of people. Some of the people that oppose it are OpenAI's direct competitors, including Elon Musk and Mark Zuckerberg.

Speaker 12 And OpenAI has been pretty aggressive in going after groups that they believe might be connected to those two. The second battle is about...
SB 53, the bill that we just talked about.

Speaker 12 It was just signed into law by California Governor Gavin Newsom, and it establishes some basic transparency requirements and whistleblower protections for people who work at AI labs.

Speaker 12 There were a lot of groups that lobbied both for and against this one, and ENCODE was one of the groups that lobbied for it.

Speaker 12 And so those are the kind of two big legal battles that were happening next to each other. But today's story, Kevin, takes place right in between both of them.

Speaker 11 Yes. So our guest today, Nathan Calvin, is the vice president of state affairs and general counsel at ENCODE.

Speaker 11 They are a small AI policy nonprofit. They were started several years ago by a high school student. Fun fact.

Speaker 12 What were you doing in high school?

Speaker 11 I wasn't starting AI safety nonprofits. Debate team, crushing it. Anyway.

Speaker 11 Anyway, they have become one of these groups that is submitting briefs and lobbying lawmakers on a lot of these AI-related bills and efforts.

Speaker 11 They have also been very vocally opposed to the restructuring of OpenAI as a for-profit.

Speaker 11 And what seemed to happen here, in Nathan's telling, was that one night, as this legislative process was ongoing, a sheriff's deputy showed up at his house and delivered a subpoena from OpenAI demanding that he produce all kinds of personal communications, including anything related to not just the restructuring, but SB 53, this bill that they had been advocating for.

Speaker 12 Yeah, so this surprised folks because they do still identify as this kind of mission-driven company that's trying to create AI to benefit all of humanity.

Speaker 12 I think it's generally understood that during these legal battles, there are going to be people who lobby for and against, and that's just part of the process.

Speaker 12 But now, one of those people who was doing the lobbying on behalf of his nonprofit finds himself with a legal battle of his own.

Speaker 12 And that got a lot of folks talking, including some people who worked at OpenAI who criticized their own employer for its behavior.

Speaker 12 So that seemed like something that it'd be worth understanding more about, Kevin.

Speaker 11 Yes, this has been a subject of hot debate and conversation within OpenAI, as well as around the broader AI industry. And we wanted to talk to Nathan about his experience.

Speaker 11 But before we do that, since this is, after all, a story involving AI and legal battles, I should note that my employer, the New York Times, is engaged in its own legal battle.

Speaker 11 They are suing OpenAI and Microsoft over alleged copyright violations.

Speaker 12 And my boyfriend works at Anthropic, but so far we've managed to avoid any legal battles. So counting our blessings on that one.

Speaker 11 Well, check the mail when you get home.

Speaker 12 Oh, no.

Speaker 11 All right, let's bring in Nathan Calvin.

Speaker 11 Nathan Calvin, welcome to Hard Fork.

Speaker 13 Pleasure to be here. You know, many-time listener, first-time caller. I think I said that wrong, but anyway, very glad to be here.

Speaker 11 So, just to set the scene for our listeners, you're in Washington, D.C.

Speaker 11 It's a Tuesday night. I'm assuming it was, you know, a normal weekday.
You and your wife were sitting down to dinner, and then you got a knock on your door. Tell the story from there.

Speaker 13 So, when I opened the door, there was a sheriff's deputy who was there to serve me a subpoena from OpenAI, asking for different communications and documents related to a piece of AI safety legislation I was working on, as well as our criticism of OpenAI's restructuring to a for-profit.

Speaker 13 One thing I will say, just in terms of the timeline, because it's come up in some of the back and forth, is that on Saturday previously, I had gotten a call while I was visiting my mom and nephew saying that someone was trying to get into my apartment to serve me papers.

Speaker 13 And I said, I'm not there right now. Anyway, they finally did come on Tuesday.

Speaker 13 And so when Jason Kwan, the chief strategy officer at OpenAI, I think in his comment, said something about, you know, I should have known this was coming.

Speaker 13 I did know they were trying to serve me, but I didn't know about any of the details. And I didn't know they would be coming that exact night.

Speaker 11 Now, I want to get to all of that. But first, you know, I have not been served with a subpoena. Casey's been arrested many times, so he's familiar with how these...

Speaker 12 But never convicted.

Speaker 11 He's familiar with how these things go down. But, like, are they literally handing you a packet of paper like in the movies? Or what does it look like to be served with a subpoena from OpenAI?

Speaker 13 Yeah.

Speaker 13 It is just a stack of papers. I did not know, again, I am a lawyer, but I didn't know that sheriff's deputies are the ones who at least some of the time serve subpoenas in D.C.

Speaker 13 I later learned that that's not incredibly unusual, but it certainly was, you know, surprising from my perspective. I don't know.
To be clear, the guy was perfectly nice.

Speaker 13 I don't know. To some degree, after I had heard on Saturday that someone was trying to serve me papers, by the time it actually happened and they were at my door, there was a little bit of, okay, now I can figure out what is actually happening.

Speaker 13 Honestly, the days between hearing that it was coming and it actually happening were some of the most stressful. Once it happened, it was like, okay, now I can at least figure out what we're dealing with and how to respond.

Speaker 12 I mean, when you got into AI advocacy, was this on your radar as something that would likely happen, that people would be saying, okay, you've got to show us all the emails you've been sending about this?

Speaker 13 No, I mean, I don't know. My mom worked for the American Academy of Pediatrics for 25 years and was involved in litigation against tobacco companies.

Speaker 13 And they came and, you know, took all of her papers out of her office at some point. And she had told me, never write any emails you're not comfortable with having read back to you later, or something.

Speaker 11 Oh, wow. So you were actually, like, way better prepared for this than the average person.

Speaker 13 Yeah, I think that's fair. Yes, indeed.

Speaker 11 Did you understand immediately why OpenAI was subpoenaing you? What was your sort of initial response when you actually started reading these papers and understanding what they were after?

Speaker 13 Yeah, I mean, in some ways there had been a little bit of preceding escalation before I received the subpoena. We were doing lots of advocacy and public communications and writing things to the attorneys general about this issue, and I was getting some sense that this was getting on their nerves.

Speaker 13 You know, I will say that there's part of me that's still thinking, okay, maybe this is just a good-faith question, and they're trying to figure out whether maybe we are secretly funded and controlled by Musk or Meta or something. When I was reading through the subpoena, though, and I got to the part where it said, all of your communications about SB 53, a bill we were working on, then I started to think, this doesn't really feel like they are just asking good-faith questions. Again, I don't know for a fact what's in their heads, and I can't say, but my impression of it was not that.

Speaker 12 What you're saying is, it would sort of make sense to you if, for whatever reason, they were serving you a subpoena and saying, hey, are you funded by Elon Musk? Is that why you're trying to block our for-profit conversion? But when they came to you and said, give us all the emails you've been sending about this bill that you're working on, that just kind of felt out of scope.

Speaker 13 Yes, it did.

Speaker 13 And one other thing I will add: I was expecting maybe that a subpoena would come. But with the other orgs I was aware of that had been subpoenaed, it had gone to the organization, just to their Delaware registered agent. You just get an email that your Delaware registered agent got a subpoena.

Speaker 13 It wasn't people coming to their, you know, fifth-story apartment building at 7 p.m. or whatever. And so that was another aspect that did just feel kind of eyebrow-raising. It just really does leave a bad taste in my mouth.

Speaker 11 Right.

Speaker 11 I mean, there's one explanation that is like the sort of uncharitable explanation, which is that OpenAI is trying to sort of bully and intimidate any sort of nonprofits that are critical of its restructuring plan.

Speaker 11 There's another explanation, which I want to get your take on, which is that these are fair questions to ask. We don't have a lot of transparency.

Speaker 11 We have a lot of dark money sort of flooding into fights about tech regulation these days, and it's worth asking questions about who is behind those efforts.

Speaker 11 And I guess we should just sort of dispense with the central claim here.

Speaker 11 Nathan, let me just ask you straight up: are you or ENCODE working with or being funded by either Elon Musk or Mark Zuckerberg themselves, or people or entities associated with them or their companies?

Speaker 13 So we are not funded by Elon Musk or Mark Zuckerberg.

Speaker 13 If you go on our website, it says that we have received funding from the Future of Life Institute, which was one that was mentioned in their subpoena.

Speaker 13 Future of Life Institute several years ago got a donation from Musk, but they are not Musk. And we said this in our communications back with them.
Like, I have never... talked to Musk.

Speaker 13 Musk is not directing our activities. It's false.

Speaker 13 We submitted a complaint asking the FTC to open an investigation into XAI and spicy Grok.

Speaker 13 And I will happily say on air that I think XAI's safety practices are in many cases far, far worse than OpenAI's. So, again, that central claim is just false.

Speaker 11 What about Mark Zuckerberg? Any relationship with him or Meta?

Speaker 13 None. Zero. And again, I think our partners who work on the issue know this: we are an organization that focuses on AI safety and kid safety issues.

Speaker 13 We are just constantly at war with Meta.

Speaker 13 The idea that Meta is backing us, I realize not everyone has the context and knows who we are, but it's just completely laughable.

Speaker 11 You do have a list of donors or funders on ENCODE's website.

Speaker 11 You say, we're generously supported by, and then you list a bunch of organizations, including the Omidyar Network; the Archewell Foundation, which is Harry and Meghan's foundation; and the Survival and Flourishing Fund, which is a kind of effective altruism-linked philanthropy funded primarily by Jaan Tallinn. So you do provide some transparency about who your funders are.

Speaker 11 Why do you think that wasn't enough for OpenAI? Why do you think they still had questions about Elon Musk or Mark Zuckerberg?

Speaker 13 I think to some degree, you'll have to ask them.

Speaker 13 I'm not sure. I mean, there's one thing to say here: there is no general right for them to know about all of our funders.

Speaker 13 And again, the subpoena did not ask about the Omidyar Network, because the Omidyar Network is not relevant to their litigation in any way. The role of a subpoena is to get relevant information for the litigation you are engaged in, not to just ask whatever questions you would like the answers to of other private organizations. You know, we would love to send a subpoena to OpenAI and say, tell us all the details of what you're planning to do in the restructuring, and are you going to disempower the nonprofit in the ways people suspect, or whatever. But we don't have a right to do that.

Speaker 13 Like, that's not a question we can just ask them, even though we might like to. And so, what we did is, you know, we put out like a public letter asking them a bunch of questions.

Speaker 13 Like, OpenAI can go to the press and say, you know, we want transparency about these things. Again, they do have the right to ask us about Elon because they are in litigation about this.

Speaker 13 And again, I think if they had just reached out to us at our corporate address and said, Are you funded or directed by Elon?

Speaker 13 And, you know, we explained no and proved to them no, and then they moved on. Like, I would understand that.

Speaker 13 And I think that that is a fair thing, given that Elon is attacking them and trying to destroy them.

Speaker 13 And they want to make sure that there are efforts that are not covertly being supported and directed by him.

Speaker 13 But I just can't emphasize enough how far away what actually happened was from that narrow question that they were entitled to ask.

Speaker 12 So, I mean, as you reflect on this experience, do you feel like this was intimidation?

Speaker 12 Do you think that OpenAI is trying to penalize organizations for speaking up either against the for-profit conversion or for AI regulation?

Speaker 13 Yeah, I mean, to some extent it's a question of intent, and I don't know what's inside their heads. And so I want to be careful about that. But I believe that that is what they were doing. That is my best guess, and that was how I received it.

Speaker 13 And I would like there to be another explanation for this. I thought it was possible when I put this out that maybe they would say, hey, this was a misstep. Our lawyers went a bit far. We didn't actually mean to add the thing about SB 53. That's not what they said. They doubled down and said, we think we are entitled to this.
Like, that's not what they said. They like doubled down and said that, you know, we think we are entitled to this.

Speaker 13 And I think that that just is very important to note.

Speaker 13 And I will just say another thing that I don't think we've mentioned: even some folks within OpenAI, for instance Joshua Achiam, who was speaking in his personal capacity, put out a fairly long thread saying that what I was describing in my thread, you know, doesn't look great.

Speaker 11 Yeah, but that was the unofficial response from someone at the company who was sort of breaking from the company itself.

Speaker 11 We've also seen Jason Kwan, as you mentioned, the chief strategy officer at OpenAI. He wrote a lengthy thread arguing that you and ENCODE were only giving part of the picture, that ENCODE doesn't disclose its funding, and that this is not about SB 53. Jason said, quote, we did not oppose SB 53. And they said that basically this was sort of a tempest in a teapot.

Speaker 11 There was also a quote that a lawyer for OpenAI, Ann O'Leary, gave to the SF Standard saying, We welcome legitimate debate about AI policy, but it is essential to understand when nonprofit advocacy is simply a front for a competitive commercial interest.

Speaker 11 What do you make of the official OpenAI response to your claims?

Speaker 13 So one thing is, you know, I think Jason focuses on the fact that we became involved with the lawsuit between Elon and OpenAI by filing an amicus brief, arguing that it was in the public interest for OpenAI to remain a nonprofit.

Speaker 13 Geoffrey Hinton also, you know, made some positive comments about our amicus and showed support for our arguments.

Speaker 13 He's also someone who, by the way, has called for Elon Musk to lose his fellowship in the Royal Society and is really not a fan of Musk, if you want another example of how not everyone who is critical of OpenAI's restructuring is a Musk fan.

Speaker 13 Yeah, I mean, also on the point of the, we did not oppose SB 53: it is true that they never put out something saying that they formally opposed it. But their global affairs head, Chris Lehane, did send a letter to Governor Newsom, at a time when SB 53 was in pretty heated discussion, saying that he believes the correct path for California is to have an exemption from its AI frameworks for any company that signs on to an agreement with the federal government for testing, or that says it will be adhering to the EU AI Code of Practice, which in practice means a complete exemption from the California law.

Speaker 13 So, I mean, you can say that advocating for you and a bunch of your fellow companies to be completely exempted is not the same as opposing it.

Speaker 13 You know, you can ask a linguist whether that's fair. But I think it still is important context that he did not discuss.

Speaker 11 What now? Are you going to send OpenAI the information that they're asking for?

Speaker 11 Are you planning to do any more transparency around your funding or your advocacy efforts? What's the next shoe to drop here?

Speaker 13 So we sent them our objections and responses, where we laid out, in the four areas that were relevant, for instance, our communications with or funding received from Elon, that those didn't exist, and that the other pieces of information were not relevant.

Speaker 13 They never responded to that.

Speaker 13 They could have filed a motion to compel saying to the judge that we have to turn them over, but they didn't do that.

Speaker 13 My view, and again, I don't know this for sure, is that they didn't do that because they realized a judge would not grant that motion, because the materials were not, in fact, relevant.

Speaker 13 I think there are fair discussions about transparency. I mean, I think there's fair things of, you know, some of our donors want to be private.

Speaker 13 And when you're donating to 501(c)(4)s, you have the right to give money privately. We have listed a lot of our donors on our site.

Speaker 13 And I think we're, you know, I think you get a clear impression of the different types of motivations that people have who are funding us.

Speaker 13 But I think this larger discussion about what the appropriate transparency is for folks involved in the advocacy process is very different. I don't think that's what OpenAI cares about here, or why they're asking about this.

Speaker 13 And even in the subpoena, which was an overreach in many ways, they don't ask about, you know, the Omidyar Network, which, again, is listed on our website as a funder. We're not hiding that fact. It's not relevant to their litigation with Musk.

Speaker 11 But you said there are donors that you don't list on your website who want to remain private. Would you like to tell us who they are or how much they're giving you?

Speaker 13 Not here.

Speaker 11 Okay. Just checking. I have to ask.

Speaker 13 Fair, fair. I mean, they're not Musk or Zuckerberg. We don't take money from frontier AI companies. I will say that.

Speaker 11 Yeah, and I think it's a reasonable thing to advocate that all of these groups should be required to disclose much more about who funds them. But I think that should apply equally to organizations that are pushing for the other side of things here.

Speaker 13 I think that's fair. That's a fair discussion to have. I'm just not sure OpenAI is the one to make that argument.

Speaker 12 So as you look back on this episode, how has it changed the way that you think about OpenAI?

Speaker 13 I genuinely have a lot of positive feelings about OpenAI and think that they do many things genuinely better than their peers, for instance, Meta or XAI.

Speaker 13 And I think that, for instance, some of their safety research and system cards are things that they have even improved on in recent months and have done a genuinely good job of.

Speaker 13 And I think that there is some of a feeling among some people at OpenAI that they get disproportionate criticism relative to their peers. And I think that there is some truth in that.

Speaker 13 One thing I'll say is, I don't know, if one of their peers had been the one to show up at my house and serve me a subpoena, I would have said something about that too.

Speaker 13 But it was OpenAI that was the one that did it.

Speaker 13 And also, I think there's some aspect that OpenAI is a nonprofit, and they are a nonprofit that has a mission to ensure that AGI benefits all of humanity. And, you know, they are in the process of trying to weaken and get around that legal mission and be able to consider profit more in their decisions.

Speaker 13 And I think this episode, and also things like the discussion about whether to allow, you know, not-safe-for-work porn or whatever on ChatGPT, or releasing Sora 2 in the way they released it, and their kid safety practices, all sorts of these other things, show that they are not a normal for-profit company.

Speaker 13 They are at least for now a non-profit that is dedicated to this mission above profit. And I do think that means that they should be held to a higher standard.

Speaker 11 Yeah, I mean, I'll just say, it's not like Elon Musk is the only person who opposes this restructuring. Like, the whole AI safety community has been up in arms about this for years now.

Speaker 12 It's very unpopular.

Speaker 11 Yes. Yeah.

Speaker 13 I am just curious what you make of the difference between, you know, Joshua's statement and Jason's statement, and some of this continued evolution and pressure you have as OpenAI transitions from more of a research organization focused on some of these loftier ideals to trying to move to the next stage of what it wants to do.

Speaker 12 I mean, I think it just speaks to a very real tension within the company, which is that there are a lot of people there who believe in the stated mission, who want to create this very beneficial AI.

Speaker 12 And then you also have a lot of people who come from other giant tech companies who see this primarily as a competition about winning and being first and making the most money.

Speaker 12 And people who come from those kind of companies are not above, you know, waging lawfare to get what they want. So I'll be curious to see kind of how that shakes out in the coming months.

Speaker 12 It does seem like it's that second group, the kind of big company group that is currently steering the company.

Speaker 12 And I wonder if that's going to continue.

Speaker 11 But I will say, in addition to that, I think that's right.

Speaker 11 Your story, Nathan, has caused more consternation and soul-searching among people at OpenAI than I think anything since the Daniel Kokotajlo story, about the non-disparagement agreements that they were forcing people to sign or else they would claw back their vested equity in the company.

Speaker 11 That was a big deal to people at OpenAI. And this is a big deal to people at OpenAI.
I've been talking to people. It's not just Josh who is saying this stuff.

Speaker 11 I think there's a lot of soul searching going on inside the company about this question of, are we still the good guys? Are we transitioning to something we no longer support?

Speaker 11 And so I think there's going to be some internal qualms about this and probably other stories to come, but most of them probably won't break out into the open the way this has.

Speaker 11 Nathan, thank you so much for coming on and explaining all this to us.

Speaker 12 Thanks, Nathan.

Speaker 13 Thank you.

Speaker 11 I just wanted to note that we reached out to OpenAI after this interview asking about this question of intimidation, and they responded with a statement from Jason Kwan reiterating that, quote, Elon has opposed a restructure for obvious competitive reasons and ENCODE joined in. Organizations that suddenly emerge or shift priorities to join Elon raise legitimate questions about coordination and funding, which the subpoena seeks to clarify.

Speaker 11 Our questions have still not been answered, and we still don't transparently know who is funding these organizations.

Speaker 12 When we come back, an old woman falls off a very high shelf. Is it real or is it fake? No, it's the Hard Fork Review of Slop.

Speaker 1 The University of Michigan was made for moments like this.

Speaker 1 When facts are questioned, when division deepens, when the role of higher education is on trial, look to the leaders and best turning a public investment into the public good.

Speaker 5 From using AI to close digital divides to turning climate risk into resilience, from leading medical innovation to making mental health care more accessible.

Speaker 9 Wherever we go, progress follows. For answers, for action, for all of us, look to Michigan.

Speaker 10 See more solutions at umich.edu/look.

Speaker 14 This podcast is supported by GiveDirectly, a nonprofit that lets you send cash directly to the world's poorest families so they can invest in what matters most to them.

Speaker 14 This year, more than 30 of your favorite podcasters are joining forces for Pods Fight Poverty to send cash to over 700 families in three Rwandan villages.

Speaker 14 And until December 31st, your first donation is matched. Join listeners everywhere fighting poverty at givedirectly.org/times.

Speaker 15 Over the last two decades, the world has witnessed incredible progress.

Speaker 15 From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Speaker 15 Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment. Invesco QQQ, let's rethink possibility.

Speaker 15 There are risks when investing in ETFs, including possible loss of money. ETFs' risks are similar to those of stocks.

Speaker 15 Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Speaker 15 Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at invesco.com. Invesco Distributors, Incorporated.

Speaker 11 Well, Casey, over the last few weeks on our show, we've been talking a lot about slop.

Speaker 12 We have, and it seems like the more we talk about it, the more of it appears all over the internet.

Speaker 11 Yes, it is taking over the internet. And for that reason, we thought we should introduce a new segment that we are calling the Hard Fork Review of Slop.

Speaker 12 Oh my God, that's so perfect.

Speaker 11 That's beautiful.

Speaker 12 You know, this, I would say, is generally a STEM podcast. We care a lot about science and technology and engineering, not as much math, but we also care about the arts.

Speaker 12 And so we thought, why don't we carve out some time on the show to talk about some of the new achievements in AI art that we're seeing out there on the internet and also sort of bring our critical eye to them and, you know, put them in conversation with the culture.

Speaker 11 Yes, we have critics out there for books and movies and music and video games.

Speaker 11 And I think slop is like an emerging genre of cultural production, most of which is bad, but some of which may actually be good.

Speaker 11 And so we need to stand here amid the floodgates, sort of filtering out the bad slop and letting the good slop get through.

Speaker 12 Okay, I just want to signal up front. I didn't actually bring anything good.
I didn't know that that was part of the assignment.

Speaker 11 I have one good one, but we'll save it for the end.

Speaker 12 All right, fair enough.

Speaker 11 So, Casey, tell me about the slop that you have been looking at, and then I will tell you about some slop that I've found.

Speaker 12 That's great. Well, Kevin, maybe to just kind of warm us up, we can look at some of the slop that I think of as CoComelon for adults, just kind of pure visual stimulation, no ideas in it whatsoever.

Speaker 12 And this kind of slop, you can find on TikTok if you search for glass fruit cutting. Have you seen any of the glass fruit cutting? No.
Okay, let's see if we can cue one of these up.

Speaker 12 Some of these are in the sort of like the ASMR realm.

Speaker 11 Ooh.

Speaker 12 This man is cutting into a coconut with a knife, but the coconut is glass.

Speaker 13 Oh, the kiwi is glass.

Speaker 11 Oh, that's... Oh, I don't like that sound. That's like nails on a chalkboard.

Speaker 11 This is what's replacing reading in our schools. I mean, literally.

Speaker 12 Now, what I liked about this one, Kevin, is it's glass pancakes

Speaker 12 with a beautiful maple syrup. You really hate that sound.

Speaker 11 That's a jump scare sound for me.

Speaker 11 Oh, we got a doughnut, a glass donut.

Speaker 12 We'd love to see a glass donut.

Speaker 12 Oh, just cutting through the glass bowl of cereal there.

Speaker 11 The physics here are actually kind of impressive. Right?

Speaker 11 Like it is showing the reflections of the knife in the glass. It looks vaguely realistic.
Yeah.

Speaker 12 It's like, it's weird because it's like the food looks delicious, beautiful even, but then it's glass, so it's off-putting. Yeah, like it just sort of doesn't make any sense.

Speaker 12 And so it hypnotizes your brain into this sense of, I don't know what I'm watching. I don't want to look away.
Yes. And I'm going to stop thinking words.
Yes.

Speaker 11 It's sort of like the spiritual successor to those like crush videos where they'll just have like the hydraulic press and they'll just like press down on like seven objects. Yeah.

Speaker 12 And now instead of just wasting those objects, we can waste water and electricity.

Speaker 11 All right. What do you have?

Speaker 11 So I have an example from the news.

Speaker 11 Actually, this one comes to us from DirecTV, which has just struck a partnership with an AI company called Glance that will allow people with DirecTV Gemini devices to put themselves inside of 30-second AI generated videos.

Speaker 11 Basically, if you step away from your TV to go get a snack or go to the bathroom, you might come back and find that you are in the ad on the TV. And Casey, let's watch an example of this.

Speaker 11 So this kind of shows how it works. You connect it to your TV, you put in your photo, tap a couple buttons, and it generates your looks.

Speaker 12 This process is already so absurd.

Speaker 11 And then boom, there you are in a blazer. Now, Casey, I thought the point of advertisements was to show clothes on people who are more attractive than me

Speaker 11 to entice me to buy them. Why would I want to see clothes ads with me in them?

Speaker 12 I can't answer that question, honestly.

Speaker 12 This one is so funny to me because

Speaker 12 the process you have to go through to do this is so complicated.

Speaker 12 I basically cannot imagine a single person doing this. First of all, you already have a TV that is like working against you, right?

Speaker 12 Like the way that this works is that if you have one of these TVs and you leave it idle for 10 minutes, AI takes over, which is kind of like, you know, if the bus goes below 55 miles an hour, it explodes.

Speaker 12 Yes, this is that, but for like AI advertising, and then after it shows you these images, then it's up to you to go scan a QR code and take a photo of yourself.

Speaker 12 Like, no one who is watching TV wants to do any of this at all.

Speaker 12 So, um, it's a very silly process. And, you know, I mean, in the demonstration, the photos look fine.
They look fine. They look fine.
Yeah. But let me ask you this.

Speaker 12 Do you not know what you look like with a jacket on? You know what you look like with a jacket on?

Speaker 12 Let me just say this. What are we doing here? That is my review of this.
What are we doing here?

Speaker 11 We are selling advertising technology.

Speaker 12 Okay. So now I just want to show one that made me laugh.
I call this one Woman on the Walmart shelf, if we want to cue this one up.

Speaker 12 I saw this one on TikTok, although it does have the Sora watermark on it.

Speaker 12 And I think this speaks to the ability of AI slop to just kind of create like a classic pratfall physical comedy situation.

Speaker 12 This one involves what looks like store security cam footage of an older woman on a very high shelf inside of Walmart. And there's a police officer who's looking up at her as our story begins.

Speaker 13 Ma'am, please come down from there.

Speaker 13 You want me to come down?

Speaker 13 Yes, ma'am.

Speaker 12 And she kind of does a header off the shelf and crashes into the police officer. So, Kevin, what did that one make you feel?

Speaker 11 There's a lot there.

Speaker 12 There's a lot of layers to this onion.

Speaker 11 Is this a one-off or is there a larger genre of older people falling off the top shelf at the grocery store onto a police officer?

Speaker 12 It's a whole interconnected cinematic universe with sort of, you know, these very rich sort of characterizations. The vocal performances are really amazing.
So, I encourage you to get into it.

Speaker 12 Beautiful stuff. No, this is a one-off, Kevin.
I've never seen anything else related to it.

Speaker 11 Yeah, I'm not worried that people are going to start throwing themselves off the shelves of grocery stores to sort of mimic the trend here. This one feels pretty harmless to me.

Speaker 11 And I appreciate inspiring older people to do things like climbing up to the top shelf of the grocery store.

Speaker 12 I mean, look, these days, anytime I see a Sora video that isn't like misappropriating the likeness of Martin Luther King Jr., I say that's a win for Slop.

Speaker 11 Yes, this one, I think, pretty harmless.

Speaker 12 All right, what else you got?

Speaker 11 Well, this next one, Casey, was not harmless because it involved America's queen, Dolly Parton.

Speaker 12 Oh, no.

Speaker 11 Leave her alone.

Speaker 11 Basically, some sicko out there has been generating AI images of Dolly Parton looking very sick, including at least one image of Reba McEntire visiting Dolly Parton on her deathbed, which went around on the internet and led to a bunch of rumors that Dolly Parton, God forbid, was dying.

Speaker 11 Oh, no. See, I hate this.
Yeah, I don't like this either. Let's watch Reba's video summarizing the whole thing.

Speaker 13 You tell them, Dolly, that AI mess has got us doing all kinds of crazy things. You're out there dying. I'm out here having a baby.

Speaker 13 Well, both of us know you're too young and I'm too old for any of that kind of nonsense. But you better know I'm praying for you. I love you with all my heart. And I can't wait to see you soon.

Speaker 13 Love you.

Speaker 12 Wait, just to be clear. What you showed me was real, not slop? That is Reba McEntire's actual Instagram account?

Speaker 11 That is Reba McEntire's actual Instagram account. She does show some of the slop images of Reba at Dolly's deathbed inside the video.

Speaker 11 And Dolly responded with another real video from her real social media account saying, quote, I ain't dead yet. So, Casey, what do you make of this one?

Speaker 11 I mean, this is so bad.

Speaker 12 You know, like

Speaker 12 so many of the fears around misinformation have been that there will just come a time when you can't tell what is true and what is false.

Speaker 12 And the better that image generation software gets, the more of these little viral hoaxes we're going to see going around. So, this is super bad.

Speaker 12 I'm truly trying to, like, what kind of person do you have to be to be like, today is the day that I create a rumor that Dolly Parton has died and I'm going to like use Sora to prove it?

Speaker 11 Truly, it is like mind-boggling to me.

Speaker 11 If you wanted to turn the public against AI and against AI-generated content, the most effective thing you could do would be to go after Dolly Parton, who everyone, literally everyone loves.

Speaker 12 No, I hope Jolene finds whoever did this and does a number on him.

Speaker 11 Let's just say Dolly Parton's lawyers are going to be working more than nine to five.

Speaker 12 This next one is sort of a narrated journey.

Speaker 12 We are returning to Walmart for this one. And this creator is very interested in the use of AI to create

Speaker 12 like art on products. You know, I've seen some at, like, a craft store, where there are framed pictures of what has clearly been AI-generated.

Speaker 12 In this case, she picks up some butter cookies at Walmart and makes a pretty convincing case that it is slop art. And I enjoyed this journey.
Let's see how it looks here.

Speaker 16 This is bad. I didn't think it could get worse, but you guys were right.
The butter cookie tins at Walmart are way worse than the popcorn tins. Because why is Santa throwing ass?

Speaker 16 Why is he squatting on a table? Why does he look like he's about to twerk? What is his hand doing? What is

Speaker 12 Santa has the fattest ass in this?

Speaker 11 Look how wonky that is.

Speaker 16 And what is this wall full of random things? Like, can you make out what any of that is actually supposed to be? It looks like there's cobwebs on the roof, whether that's intentional or not.

Speaker 16 I don't know. It's just like random shapes on the wall.

Speaker 12 All right, we can probably stop it there.

Speaker 12 I have to say, this video made me feel very naive because I did not realize that there were, like, mass-produced products in, like, Walmart stores that are AI-generated.

Speaker 11 Oh, yeah. And I also love that there are now like slop detectives who are just going out there vigilante style and like investigating the slop on the shelves of their local Walmart.

Speaker 11 That's beautiful to me. We need more citizen participation.

Speaker 12 Honestly, it could be a segment for our show, you know, slop-vestigations.

Speaker 12 Let me ask you, wait, let me ask a question.

Speaker 11 Yeah.

Speaker 12 If you're shopping and you pick up an object and you see that, you know, there's slop art, does that affect the way that you want to buy it or not buy it, one way or the other?

Speaker 11 No. Okay.
I mean, I think there's like a whole like category of art that basically doesn't matter, which is like the stuff on the cookie tin, right? The stuff at Walmart.

Speaker 11 No one is winning any prizes for that. No one is reaching any new heights of creativity.

Speaker 11 Basically, this is just a way for the butter cookie manufacturer to save a couple bucks and not have to hire an illustrator or use some stock art from the internet.

Speaker 12 And do you think they're passing the savings on to us, the customers?

Speaker 11 Probably not. Probably not.

Speaker 12 That's probably going right to their bottom line. That's unfortunate.

Speaker 11 Yes.

Speaker 11 What about you? Would you be less likely to buy something if slop had been used in its advertising?

Speaker 12 I mean, maybe, you know, because I think it speaks to a kind of cheapness and a lack of care.

Speaker 12 And so if I were buying like a heart defibrillator and I saw that there was slop art on the box, I would say, I don't know if I could trust these people.

Speaker 11 What about butter cookies from Walmart? Are you going for quality when you're buying butter cookies from Walmart?

Speaker 12 I only want brown butter if there are going to be butter cookies.

Speaker 12 Butter is a great flavor, but it needs something else. You know what I mean?

Speaker 11 Okay, so for Casey, only the artisanal images of Santa with a huge ass.

Speaker 12 Small batch, huge ass, Santa butter cookies, please.

Speaker 11 Okay,

Speaker 11 one more example of slop that I want to tell you about today, Casey, and get your opinions on. This one is what I would consider good slop.

Speaker 11 This is slop that is being made in service of a noble cause, which is preventing the AI apocalypse. Now, Casey, you might think to yourself, how could this happen?

Speaker 11 How could AI slop be used to ward off the AI apocalypse?

Speaker 12 I was just about to ask you that.

Speaker 11 Well, this is a company called Hyperstition. It was founded by Andrew Cote and Aaron Silverbook.

Speaker 11 And basically, this is a company that is trying to counteract all of the sci-fi stories and narratives out there about AI going rogue and killing people, which, the hypothesis goes, make their way into the training data for these AI systems and actually make them more likely to go rogue.

Speaker 12 It gives them ideas.

Speaker 11 It gives them some ideas. And so Andrew Cote said, what if we combated this by writing a bunch of AI generated novels about AIs and humans getting along really well?

Speaker 11 And then we fed that into the training data for the AI systems to kind of give them some more good examples to follow.

Speaker 12 All right. Kind of a convoluted explanation, but sure, why not?

Speaker 11 So this company has just gotten a grant. I read about this on Astral Codex Ten.
They just got a grant to create 5,000 AI-generated novels.

Speaker 11 And they're trying to have these novels be around 80,000 words each. And they're going to enlist the public's help to generate these.
You can buy credits, about $4 a book, to generate them.

Speaker 11 And then they're going to try to feed these into the language models and get the models to think about maybe good scenarios and maybe be more likely to act on them.

Speaker 12 Wait, why does the public get involved if the works are all AI generated?

Speaker 11 I think they want it to reflect a diverse set of, you know, sort of scenarios and characters. Basically, they want just people to sort of get involved in this and make it as diverse as possible.

Speaker 11 All right.

Speaker 12 Well, do we have any examples we can see?

Speaker 11 No.

Speaker 12 Great.

Speaker 11 So what do you make of this attempt to use slop for the benefit and potentially the salvation of humanity?

Speaker 12 Here's what I'm going to say. If it turns out that the thing that is needed to prevent

Speaker 12 human extinction from AI is a massive infusion of slop into the training data, I'll be very surprised. I'll be very surprised if that was the difference maker.

Speaker 11 I share your skepticism. I think the default outcome from this project is that it probably doesn't save us from the AI apocalypse.

Speaker 11 I think a funny secondary effect would be if one of these like 5,000 slop novels goes on to become a huge bestseller and like becomes the literary craze that takes over the country.

Speaker 11 Do I think that's likely? No, but it could happen.

Speaker 12 Well, as we mentioned earlier in the show, it doesn't seem like people are reading all that much these days.

Speaker 12 But, you know, maybe all of this will eventually be fed into a NotebookLM video presentation that folks can watch.

Speaker 11 Yes. All right.
That is it for the Hard Fork Review of Slop, and we welcome your submissions for future installments.

Speaker 11 If you spot something, some slop that is worthy of cultural interrogation by some of our nation's foremost slop critics, please send it over to us at hardfork at nytimes.com along with a brief explanation of the effect it had on you, how it moved you.

Speaker 12 Yeah, we want to, like, it can't just be like, look at this weird thing. Like, I want to see slop that made you feel something.

Speaker 11 Yeah, and the next time you see a Santa with a suspiciously large posterior,

Speaker 11 call us, call us, email us.

Speaker 12 We want to know about it. We want to see it.

Speaker 11 And we want to see it. He has a folder on his MacBook that's just photos and images of Santa with a very large...

Speaker 12 I love a thick Santa. And I salute them, sir.

Speaker 12 See you on Christmas, big guy.

Speaker 11 The Hard Fork Review of Slop.


Speaker 12 Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Jen Poyant.
This episode was fact-checked by Will Peischel. Today's show was engineered by Chris Wood.

Speaker 12 Original music by Elisheba Ittoop, Diane Wong, Rowan Niemisto, and Dan Powell.

Speaker 12 Video production by Sawyer Roqué, Pat Gunther, Jake Nicol, and Chris Schott. You can watch this whole episode on YouTube along with all that slop at youtube.com/hardfork.

Speaker 12 Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork at nytimes.com with the slop that made you stop.

Speaker 17 AI is changing the game of business. Will you be on the winning team?

Speaker 18 I'm Jordan Wilson, host of the Everyday AI podcast, here to teach you the X's and O's of AI.

Speaker 17 I've helped countless Fortune 500 companies win with generative AI, and now I'm here to share their secrets to help you grow your company and career. Ready to pass your competition?

Speaker 17 Listen to Everyday AI on EverydayAIPodcast.com or wherever you get your podcasts.

Speaker 7 Join us daily, and we'll build the winning team together.