Character.AI’s Teen Chatbot Crackdown + Elon Musk Groks Wikipedia + 48 Hours Without A.I.

1h 0m
“We are living through a dramatic contraction in the access that teenagers have to technology online.”


Transcript

Speaker 1 This episode is supported by Blockstars, a podcast from Ripple.

Speaker 3 Join Ripple for blockchain conversations with some of the best in the business.

Speaker 5 Learn how traditional banking benefits from blockchain, or how you're probably already using blockchain technology without even realizing it.

Speaker 1 Join Ripple and host David Schwartz on Blockstars, the podcast.

Speaker 8 Crypto investments are risky and unpredictable.

Speaker 9 Please talk to a financial expert before you make any investment decisions. This is not a recommendation by NYT to buy or sell crypto.

Speaker 11 Well, I wondered if you saw this. You know, I keep very close tabs on celebrity news, Kevin.
I know you do.

Speaker 11 And in particular, I'm always interested, has any Hard Fork guest sort of entered the world of celebrity? Because that's always very exciting for me. It is.

Speaker 11 And this week, it was officially confirmed that Katy Perry, the pop star, is dating one-time Hard Fork guest and former Prime Minister of Canada, Justin Trudeau.

Speaker 12 I was wondering where you were going with that. I'm not sure Justin Trudeau would put Hard Fork guest at the top of his resume.

Speaker 11 To my parents, that is the main way that Justin Trudeau is known:

Speaker 11 as a Hard Fork guest. And now as Katy Perry's boyfriend.
Yes, he came in like a dark horse, and now they're dating.
Yes, he came in like a dark horse, and now they're dating.

Speaker 11 And for all I know, she's living a teenage dream right now with the former prime minister of Canada.

Speaker 12 Hey, she's not a teenager.

Speaker 11 No, have you listened to the song? Do you know what the song Teenage Dream is about? It's about falling in love with someone that makes you feel like it's a teenage dream.

Speaker 12 Oh my God. And let this be a lesson to other newsmakers, celebrities.
If you come on the Hard Fork podcast, 12 to 18 months later, you may find yourself dating a celebrity.

Speaker 11 For all we know, that is how Katy Perry became aware of Justin Trudeau.

Speaker 11 She was watching over at youtube.com/hardfork, and she saw this man talking about Canada. And she said, baby, you're a firework.

Speaker 11 I'm going to be honest, I've run out of Katy Perry song titles.

Speaker 12 I was impressed that you kept it going this long.

Speaker 11 I'm Kevin Roose, a tech columnist at the New York Times.

Speaker 12 I'm Casey Newton from Platformer. And this is Hard Fork.

Speaker 11 This week, the company that said chatbots aren't safe for kids: why Character.AI is taking AI companions away from teens.

Speaker 11 Then, Elon Musk built a Wikipedia clone.

Speaker 12 Let's see what it says on Kevin's page.

Speaker 11 And finally, journalist AJ Jacobs is here to talk about the two terrifying days he spent without any artificial intelligence at all. My God, I hope he's okay.

Speaker 12 Well, Casey, this week we got some really surprising news about a company that we have talked about on this show before.

Speaker 12 This is Character AI, the company that makes these sort of realistic chatbot companions.

Speaker 12 We talked about it about a year ago on the show in the context of this very tragic story of Sewell Setzer III, a 14-year-old boy who took his own life after becoming emotionally attached to a Game of Thrones chatbot on Character.AI.

Speaker 12 We got a big update on that story this week, which is that Character.AI is barring minors, people under 18, from having conversations with its chatbots.

Speaker 12 It is basically saying, we are not going to offer this service to minors anymore.

Speaker 11 Yeah.

Speaker 11 And there is some nuance here that we'll get into, but at a high level, Kevin, I think this is one of the most dramatic steps we have yet seen from a major AI company to try to address the very real harms that these technologies pose, particularly to young people.

Speaker 12 Yes. So let's get into the details.
But before we do, let's make our AI disclosures. Casey, what is yours?

Speaker 11 My boyfriend works at Anthropic.

Speaker 12 And I work at the New York Times, which is suing OpenAI and Microsoft over alleged copyright violations.
Yeah.

Speaker 12 So just to remind folks who may not remember the initial story, Character.AI is a company that was started several years ago by leading AI researchers who left Google.

Speaker 12 Noam Shazeer and Daniel De Freitas were its two co-founders. They were frustrated that Google was not releasing this chatbot they had worked on.

Speaker 12 And so they said, we're going to go off and build our own startup where we're going to release these chatbots based on large language models. You can make characters, you can talk with them.

Speaker 12 It's sort of a role-playing app experience. And it became enormously popular with young people.
This was one of the first generative AI chatbot-based apps that really took off.

Speaker 12 And many of the users were teenagers or even younger. And if you went on as I did, and I spent some time reporting on this, there were just a lot of chatbots that seemed really aimed at young people.

Speaker 12 Chatbots that would sort of take the persona of your friend at school, or a bully, or your crush. It was like a very young-seeming app.

Speaker 11 Yeah, or also characters from Game of Thrones or you name the franchise.

Speaker 11 You know, I think the original idea animating Character.AI was, hey, what if you could chat with a lot of copyrighted material that did not belong to Character.AI?

Speaker 11 And it turned out that that was hugely popular with a bunch of kids who wanted to talk to, you know, Pikachu or whoever.

Speaker 12 Yeah, and the company, when I was reporting on it a year ago, wouldn't tell me how many of its users were under 18, but said that it was like a significant number.

Speaker 12 And so when I hear of things like that, I just assume that this is an app that is predominantly used by young people.

Speaker 12 For that reason, it's a very big deal that they're making these changes to basically wall themselves off from young people, at least for their central use case.

Speaker 11 Yes. And I think that comes after really sustained public pressure.
In the wake of Sewell's death, other lawsuits have been filed against the company.

Speaker 11 And I have to imagine that at some point, the lawyers at this company said that the legal risk to us is simply too great.

Speaker 11 Character.AI has about 20 million monthly users, which is the figure that I have seen reported.

Speaker 11 And they think that there's a better opportunity for them, at least for the moment, in building for adults than in making this technology available to kids.

Speaker 12 Yeah, so let's talk about the specifics here about how this is going to work. Character AI put out a blog post this week spelling out the changes that they're making.

Speaker 12 They say that over the next month, they will identify users under 18 and begin giving them time limits on their ability to chat with characters.

Speaker 12 That limit initially, they say, will be two hours a day, and it will ramp down in the coming weeks.

Speaker 12 And by November 25th, so roughly a month from now, under-18 users will not be able to have open-ended conversations with any Character.AI chatbots.

Speaker 12 Basically, they are going to limit the length of the conversations, maybe the topics of the conversations, and they are going to try to give teen users other ways to, what they say, be creative, for example, by creating videos, stories, and streams with characters, but they will not allow this kind of open-ended role-playing experience.

Speaker 11 Yeah, so if you just want to create a little bit of synthetic media featuring these characters, that's okay.

Speaker 11 But what's not okay is essentially the thing that seemed like it was really problematic in Sewell's case, right?

Speaker 11 Sewell had developed this very intense relationship with a chatbot that was called Daenerys Targaryen, as in, you know, the Game of Thrones character.

Speaker 11 And I think there are a lot of concerns about kids getting into these very emotionally heavy relationships with these synthetic characters. It can kind of take them into a world of delusion.

Speaker 11 It can separate them from their friends and family. That's the sort of thing that's not going to be allowed anymore.

Speaker 12 Yes. So it's unclear exactly what Character.AI is going to do now that it's giving up on what is essentially its entire core use case for young people.

Speaker 12 They say that less than 10% of their current users are self-reporting as being under the age of 18. That's according to their CEO.
But obviously that is self-reporting.

Speaker 12 And I think a lot of teenagers are lying about their age.

Speaker 11 Do you know how many times I lied about my age on the internet?

Speaker 12 Yes. So I have had an experience over the past couple of months where I have just started to feel like this is the most important and least understood topic in technology right now.

Speaker 12 I was recently at a high school and I often like to sort of poll students about how they're using AI. And so I asked at this high school, like, raise your hand if you have an AI friend.

Speaker 12 And about a third of them put their hands up.

Speaker 12 This is something that was, I think, a year or two ago considered kind of fringe, kind of unusual for young people to have these intimate relationships with the chatbots.

Speaker 12 But the chatbots have gotten better and more compelling and more persuasive. And it is just starting to become this like mass social phenomenon.

Speaker 12 There's one study, a survey done by Common Sense Media recently that found that 52% of American teenagers are regular users of AI companions, which is a startling figure and represents just like how quickly this all is happening.

Speaker 12 And another stat that I found very alarming from this survey was that nearly one-third of teens find AI conversations as satisfying or more satisfying than human conversations.

Speaker 11 Absolutely. And why is that? We've talked about it so many times on the show.
These chatbots are designed to be agreeable, to tell you that you're correct and to support you.

Speaker 11 And that's not inherently a bad thing. But if it becomes your primary mode of socialization, it does seem like there is some real danger here.

Speaker 11 And Character.AI is the first company that has said, instead of trying to introduce these sort of, you know, mealy-mouthed incremental tweaks and guardrails, we're actually just going to shut the whole thing down until we can figure out what's going on.

Speaker 12 Yeah.

Speaker 12 And I think in the minds of a lot of parents or people who, you know, understand and are worried about this phenomenon of AI companions for young people, there is a sense of relief about these changes, a sense that maybe this one company at least has decided to put people's health and well-being above their own profits.

Speaker 12 And I was texting a little bit this morning with Megan Garcia, who is the mother of Sewell Setzer, just sort of seeing how she felt about this. She filed this lawsuit.

Speaker 12 She's been sort of becoming more of an advocate for these issues. And she gave me permission to share this.

Speaker 12 She said, I'm relieved for the children that will actually lose access to character AI because those are lives that can be saved, even if it's one child. But I can't help but feel cheated.

Speaker 12 Why did it take Sewell dying and me taking on this tech company to get them to do this?

Speaker 12 So I think for Megan and, I'm sure, the rest of her family, there is some relief, but also some frustration: why did it take a lawsuit, enormous public pressure, and pressure from regulators around the world to get this company to act?

Speaker 11 Yeah, so this announcement has not been universally praised. The Tech Justice Law Project, which is one of the organizations that brought the lawsuit, sent us a note this morning.

Speaker 11 They pointed out that Character.AI had not really said how they were going to do age assurance to make sure that all of the adult users who will continue to get access to these chatbots actually are adults.

Speaker 11 They also noted that the company had not addressed, quote, the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created.

Speaker 11 So I thought that is worth saying. We have seen when other companion bots have removed access how jarring and painful it can be to the people who use them.

Speaker 11 And so while I'm glad that Character.AI is going to be ramping these users down as opposed to simply pulling the plug, I do think it's worth saying this may be painful for some of their users.

Speaker 12 Yeah, I think that's a really good point.

Speaker 12 I think just because these are not human relationships doesn't mean that they can't produce pain and grief when people lose their connections to something that they have grown attached to.

Speaker 12 So yeah, I think we should be very sympathetic and empathetic toward people who may be having a hard time now that their chatbots aren't talking to them.

Speaker 11 So what kind of impact do you expect this move from Character.AI to have on the rest of the industry?

Speaker 12 Not much. I mean, I think Character.AI was a sort of special case.
They actually had lost most of their founding executives and leadership.

Speaker 12 Noam Shazeer and Daniel De Freitas went back to Google and basically left behind the kind of shell of this company. So I don't expect that Character.AI is going to recover from this.

Speaker 12 I think they were probably already in a state of losing users and just seeing the sort of decline of their platform. So this may be sort of a final nail in the coffin for them.

Speaker 12 But I think what has happened is that the rest of the industry is now doing what Character.AI used to want to do.

Speaker 12 I think the ones that we've talked about on the show, OpenAI and Meta, that are really pushing into this use case,

Speaker 12 I don't know that they're sort of learning the right lessons from what has been happening with Character.AI.

Speaker 11 Yeah, I mean, I have to say, I think it's going to have maybe a bigger impact than you do for this reason. Inevitably, there are going to be congressional hearings about this.

Speaker 11 And I think Character.AI will probably be there and they're going to say why they did this.

Speaker 11 And then they're going to go over to whoever is there from Meta or OpenAI and say, why do you guys think this is safer than they do? Right.

Speaker 11 Why do you, Meta, still have Nasty Nancy available on your platform?

Speaker 11 And I think that's just going to put a really interesting sort of pressure on them, if nothing else, to have a response as to how they are justifying what they are doing, right?

Speaker 11 And to try to put some sort of case behind it other than everyone else is doing it, which is an argument I have heard Meta make about why these chatbots are available.

Speaker 12 Yeah. And I think one way that the bigger AI companies may respond to that is by saying, well, look, we are just so big now.
We have so many users.

Speaker 12 And some small percentage of those people are going to experience mental health crises in their lives, maybe even while using our product.

Speaker 12 But that number is small relative to the number of users of our products in total.

Speaker 12 This is what I would call like the prevalence argument, which we heard a lot from social media companies a decade ago.

Speaker 12 They would say, oh, yeah, there is like, you know, hate speech and toxicity on our platforms. But like, if you just look at like the overall percentage, it's like quite small and meaningless.

Speaker 12 So I just think we'll start to see a lot more of that kind of argument.

Speaker 11 You know, this whole discussion, Kevin, ties into some really interesting research that OpenAI released this week, where they began to map out the scale of the mental health crisis as it can be seen on ChatGPT itself.

Speaker 11 There are now more than 800 million people a week using the platform. That's a pretty, you know, decent subset of the population.

Speaker 11 And while the numbers of people who are having these kind of disturbing or

Speaker 11 potentially dangerous conversations with ChatGPT are low on a percentage basis, by the company's own estimates, you have 560,000 people a week whose messages to ChatGPT indicate psychosis or mania, 1.2 million people a week who are potentially developing an unhealthy bond to a chatbot, and 1.2 million people who are having conversations that contain, quote, indicators of potential suicidal planning or intent.

Speaker 11 So if you just want to be very cynical about this and think about it only from a legal liability perspective, if you have more than a million people a week who are developing an unhealthy bond to your chatbot, who are expressing thoughts of self-harm, think about the lawsuits that are going to follow, right?

Speaker 11 I mean, that could be just hugely damaging. So I wonder if the other big labs will look at what Character.AI did this week and decide maybe we actually should build some of these safeguards faster.

Speaker 12 Yeah, I don't know. I'm still not that optimistic. I think that these companies are kind of trapped because they want the engagement and the depth of connection that people are having with their products.

Speaker 12 Like any company that makes technology wants people to, if not fall in love with it, at least develop like a bond with it and feel like very connected to it.

Speaker 12 So they want that, but they don't want the responsibility for the emotional relationships that people are going to develop with these systems, and in many cases already are developing.

Speaker 11 Yeah. I mean, the main conclusion that I have about this whole story, Kevin, is just that this story reminds us that nothing is inevitable when it comes to AI, right? You don't have to build it.

Speaker 11 You don't have to release it to everyone. You don't have to make it free.
You don't have to decline to build any meaningful guardrails.

Speaker 11 You can actually just say, based on what we've seen, we don't think that this is safe and we are going to take it off the market.

Speaker 11 And at a time when everyone has their foot on the gas pedal and everyone feels like they're in this all-out existential race to a finish line called AGI, I've been so worried that companies were going to cut corners when it comes to safety.

Speaker 11 We've seen it over and over again.

Speaker 11 And so while you hate to give too much credit to a company, particularly one like Character.AI, which did, after all, get itself into this mess in the first place, I do think there is something to be said for saying, we are going to stop the bleeding here.

Speaker 11 We are actually going to admit that we don't know how this affects people and we're going to take it off the market.

Speaker 12 Yeah. I mean, I think it's a really overdue, but responsible thing that they did.
I think they probably had their hand forced by the lawsuits and the regulators breathing down their necks.

Speaker 12 And so I hope that Mark Zuckerberg and Sam Altman and these folks who are building these very persuasive, compelling chatbot companions are looking at this as a cautionary tale for what can happen if you don't think about the consequences of what you're building.

Speaker 11 Kevin, maybe one more thing to say about this. We are living through a dramatic contraction in the access that teenagers have to technology online.

Speaker 11 In the same week as this announcement, YouTube, Snap, TikTok, and Meta have all said that they will abide by a law passed in Australia that will ban kids under 16 from having a social media account.

Speaker 11 So you can still look at YouTube, but you cannot have your own account. You're going to have to use someone else's.

Speaker 11 And you can't have an Instagram account. You can't have a TikTok account.

Speaker 11 That's something many American states are trying to bring about, with mixed success based on legal rulings.

Speaker 11 But I think you look at what Character.AI is doing, you look at what some of the states here are doing, you look at what Australia is doing.

Speaker 11 And I think the social media and AI companies have just lost this argument, right? There is no longer a consensus that unfettered access to these technologies is good or healthy or safe for teens.

Speaker 11 And people are finally starting to do something about that.

Speaker 12 Yeah, I think that's right.

Speaker 12 But I think the question of how to regulate these chatbot companions is only kind of part of what I'm thinking about these days, because I don't think this is actually something that we're going to be able to regulate our way out of.

Speaker 12 I think even if countries do what Australia has done and ban social media use for kids under 16, kids are going to find this stuff. They're going to get access to it.

Speaker 12 They're going to think it's compelling. Some number of them are going to grow emotionally attached to it.

Speaker 12 And so I think we do actually have to also address the real possibility that there's really nothing we can do at a regulatory level to prevent every teenager from forming an emotional attachment to a chatbot.

Speaker 11 I don't know. I think that sometimes passing rules like this can be the first step toward a society just changing its relationship with these technologies overall.

Speaker 11 You know, when I went to high school, there was still a smoking section indoors at my school because it was just sort of taken as a given that, yeah, well, you know, you can't stop the seniors from smoking.

Speaker 11 It's cigarettes. Of course they're going to smoke their cigarettes.
But, you know, you go back to my high school today, there's no smoking section.

Speaker 11 And banning it for teens, I think, was part of a larger movement of telling the adults, hey, this isn't actually very good for you either. Right.
And eventually that sort of shifted.

Speaker 11 So I think there may be some hope that if we can take the next generation and not have their primary relationships be with Character.AI, maybe this doesn't actually change society quite as radically as it otherwise might.

Speaker 12 There was a smoking section at your high school. Can you believe that? That's amazing.
What year was that?

Speaker 11 This was the late 1990s. I know, it was the 1900s, but it was like the very end of them.

Speaker 12 When we come back, a look inside Elon Musk's new Wikipedia clone and what it says about Casey's relationship.


Speaker 14 Picture this. You land the perfect name for your startup, only to find Peter from Delaware owns the dot-com.
Your options? Pay up or settle for a domain that looks like a Wi-Fi password.

Speaker 14 But thanks to .tech domains, there's another solution. With .tech, you get the domain name you want that instantly says you're building tech.

Speaker 14 Tech companies worldwide use .tech domains like CES.tech and 1x.tech. Don't settle.
Visit a trusted platform like GoDaddy and get your .tech domain today.

Speaker 9 Over the last two decades, the world has witnessed incredible progress.

Speaker 13 From dial-up modems to 5G connectivity, from massive PC towers to AI-enabled microchips, innovators are rethinking possibilities every day.

Speaker 15 Through it all, Invesco QQQ ETF has provided investors access to the world of innovation with a single investment.

Speaker 13 Invesco QQQ, let's rethink possibility.

Speaker 10 There are risks when investing in ETFs, including possible loss of money.

Speaker 7 ETF risks are similar to those of stocks.

Speaker 10 Investments in the tech sector are subject to greater risk and more volatility than more diversified investments.

Speaker 9 Before investing, carefully read and consider fund investment objectives, risks, charges, expenses, and more in the prospectus at Invesco.com.

Speaker 10 Invesco Distributors Incorporated.

Speaker 11 Well, Kevin, I was reading about you recently. Oh, yeah? Yes.

Speaker 11 Let me ask you if this is true, because I read someone say this about you, that when you were growing up, your upbringing fostered an outsider's perspective on religious and cultural fringes shaped by a family dynamic that prioritized open inquiry over doctrinal adherence.

Speaker 12 How do you respond to these charges? How'd you get into my therapist's notes?

Speaker 11 Believe it or not, Kevin, that's not your therapist. That's from a little something called Grokipedia.

Speaker 12 Oh, boy.

Speaker 11 Grokipedia, of course, is the Wikipedia competitor that has been developed by Elon Musk and xAI as part of a huge culture war that has gone on over the world's most popular encyclopedia.

Speaker 11 And I thought today we should take a look at this thing and talk about what we think.

Speaker 12 So I need you to kind of walk me slowly through this because I am coming to this.

Speaker 11 I'm going to walk you slowly through this.

Speaker 12 I am coming to this totally cold. And I want to get into Grokipedia and all the details.
But first, I must know, what else does Grokipedia say about me?

Speaker 11 Oh my goodness. Well, first of all, this article is long.
There are more than a dozen subsections about your life,

Speaker 11 including your books, your New York Times career, and my personal favorite, notable events and controversies.

Speaker 12 What are my notable events and controversies?

Speaker 11 Well, you'll be happy to know that there is an extended section about your interaction with a certain Bing Sydney chatbot back in 2023.

Speaker 12 Never heard of it. And then there are also criticisms of your AI reporting. Oh, boy.

Speaker 12 Which we can save for the Patreon subscribers. Yes, exactly. Check the Patreon if you want to hear those.

Speaker 12 So obviously, Grokipedia is part of Elon Musk's AI chatbot Grok.

Speaker 12 But what is this project and why did he decide to make his own Wikipedia?

Speaker 11 So for well over a year now, conservatives and the right wing have been fomenting this backlash against Wikipedia, which they say is biased against conservatives.

Speaker 11 That is, of course, a familiar conservative talking point about basically every popular tech platform on the internet.

Speaker 11 In the case of Wikipedia, they're particularly concerned that Wikipedia editors have labeled a bunch of conservative media as unreliable and therefore ineligible for inclusion as citations on articles about controversial subjects.

Speaker 12 It's a little too Wokeopedia, if you know what I'm saying.

Speaker 11 I do know what you're saying, and I don't like it.

Speaker 11 Because embedded in that critique is the idea that, for example, the Heritage Foundation's blog posts or Breitbart or Fox News's political coverage deserve to be seen with the same credibility and fact-checking journalistic rigor as, let's say, Platformer.news.

Speaker 12 Yeah, I feel like this happens kind of every few years where like a group of partisan activists like gets very mad at Wikipedia, like the crown jewel of the internet.

Speaker 12 Like every three years, people are just like, it's horrible. It's biased.
We have to destroy it.

Speaker 11 Yes. And this particular backlash seemed to get a lot of fuel after the famous Elon Musk was-it-a-Nazi-salute incident.

Speaker 12 What happened there?

Speaker 11 Well, you may have to.

Speaker 12 I remember the incident, but what was the controversy surrounding it?

Speaker 16 Well.

Speaker 11 Elon Musk says and has continued to argue across many, many, many posts on X that this was not a Nazi salute.

Speaker 11 And now whenever any Democratic politician raises their hand in a vaguely Nazi salute seeming way, he does a post about it.

Speaker 11 But he's very mad about how this controversy was handled on Wikipedia, where there is an entire page devoted to it.

Speaker 11 And that appears to be one of the main reasons why Elon Musk says we are going to build our own Wikipedia. It is not going to have the same biases baked into it.

Speaker 11 It is going to be maximally truth-seeking, to use one of his favorite phrases. And as of this week, it is now live.

Speaker 12 So can I just ask some technical details about Grokipedia? So is it all written by AI? Is that the premise here?

Speaker 11 Well, certainly Grok seems to have played a starring role in this thing. When you read it, it reads very much like Grok output.

Speaker 11 But as many writers have noted, including Jay Peters at The Verge, when you do side-by-side comparisons of Wikipedia and Grokipedia, there appears to be just some pure plagiarism.

Speaker 11 And Grokipedia does acknowledge that it has used large chunks of Wikipedia under Wikipedia's license.

Speaker 12 So it seems like maybe Elon Musk and his team have sort of ingested some or all of the sort of regular Wikipedia and just given Grok a prompt that's like, kind of rewrite this to be more Grok-like?

Speaker 12 Yes.

Speaker 11 And at launch, Grokipedia has more than 800,000 articles. That compares to around 7 million on English-language Wikipedia.
So it is a small subset, but presumably that will grow over time.

Speaker 12 Right. And can anyone edit the articles on Grokipedia as on Wikipedia?

Speaker 11 No, you cannot.

Speaker 11 What you can do, if you see something on Grokipedia that you think is wrong, is highlight it. A little button will pop up that lets you click it and say this is wrong, and you can sort of make your case.

Speaker 11 And that seems like a great way to waste a lot of time if you have nothing else to do in your life.

Speaker 12 Hang on, give me a second. I have some bones to pick with my Grokipedia entry.

Speaker 11 You really haven't looked at yours yet? No.
Oh, well, of course I had to look at mine. Yeah, what are yours?

Speaker 11 Well, first of all, I'm very proud of us that we made it into the first 800,000 articles in this encyclopedia, right? Not easy to do. I was not one of the first 800,000 entries on Wikipedia.

Speaker 11 I'll tell you that much. So I appreciate that.

Speaker 12 Wait, I have to go see this for myself.

Speaker 11 Go ahead and pull it up. I want to be intellectually honest here.
There are parts about my Grokipedia page that I like. It goes into way more detail than my Wikipedia page does.

Speaker 11 And I think, overall, presents like a pretty good picture of like who I am and what I have done. Oh my God, it's so long.
It's incredible. I'll say it.
It's too long.

Speaker 11 Like nobody actually wants that much information about me.

Speaker 12 Wait, can I read the family and relationship section to you? Please do. Because this is breaking some news here on this podcast.
It says Newton is married to a lawyer.

Speaker 12 Congratulations. I thought your boyfriend worked at Anthropic.

Speaker 11 Well, he does. And so we have found the first of many mistakes that you will find in Grokipedia.

Speaker 12 He maintains a low public profile regarding his personal relationships. Nature can't stop talking about it.

Speaker 12 With no further details on partnerships or children disclosed in available interviews or profiles.

Speaker 11 Yeah, so I guess I'll try to say more about my boyfriend to try to help Grokipedia.

Speaker 12 Wow, he's got your Goodreads profile here. Newton exhibits a keen interest in reading.
False, he hasn't read a book in years.

Speaker 12 Evidenced by his Goodreads profile cataloging 112 books with ongoing reads, including Cahokia Jazz by Francis Spufford, The Saint of Bright Doors by Vajra Chandrasekera, and others spanning fiction and non-fiction.

Speaker 11 Let me give a shout out to The Saint of Bright Doors, by the way. That is the best book I've read this year.
Super, super good.

Speaker 11 So, yeah, so there, I mean, there's like kind of something a little creepy about it, right?

Speaker 11 It's like, we're going to like go in and look at all of your public profiles and kind of see what we can scrape in.

Speaker 11 But I do think it like winds up putting together a kind of, you know, decent picture of my life.

Speaker 11 Now, for the most part, I've been able to stay out of a lot of culture wars and political controversies.

Speaker 11 And so, you know, I didn't see anything in there that made me, you know, really roll my eyes and feel bad.

Speaker 11 If, though, you are an Elon Musk or a Donald Trump, you may find that you're getting a much friendlier treatment on Grokipedia than you would on Wikipedia.

Speaker 12 Okay, so let's talk about the politics of Grokipedia. Where does it differ meaningfully from standard-issue Wikipedia?

Speaker 11 So because it is designed as a kind of right-wing alternative, when you pull up articles that have been the subject of a lot of culture warring, you will just find material that is much closer to the conservative or Republican view.

Speaker 11 So for example, if you pull up the article on Donald Trump, there is what I would say is a very friendly view of the events of January 6th, 2021 that sort of goes out of its way to talk about how,

Speaker 11 you know, Democrats sort of overstated the risk to democracy. So I will say that in some ways I expected Grackopedia to go further to the right.

Speaker 11 And like, you know, do not get it wrong, there is a lot of really racist stuff in Grokipedia. There's a lot of anti-trans stuff in Grokipedia.

Speaker 11 But like, as somebody who has spent more than my share of time reading 4chan and r/The_Donald back in the day, the stuff that I'm seeing in Grokipedia is not as bad as that.

Speaker 12 What is the strategy here? Like what is Elon Musk hoping to accomplish?

Speaker 12 Is he hoping that people will, instead of going to Wikipedia to learn about stuff, go to Grokipedia, and that we will sort of educate people differently in this country?

Speaker 11 Yeah, I think, you know, I read a quote in the New York Times article about the Grokipedia launch where they had some scholar who said, like, ever since people began to study things, people have wanted to control knowledge and how it is distributed, right?

Speaker 11 And I think Grokipedia is just a step in that direction.

Speaker 11 If you believe that Wikipedia has a chokehold on the public imagination, if you're concerned that Wikipedia data is being used as a pillar of most of the big large language models that we're now using every day, if you wanted to inject other views into the populace, you might want to create something like Grokipedia.

Speaker 12 Yeah, I'm also wondering how much of it has to do with the actual training of Grok, because one thing that we know is that Elon Musk has been frustrated in the past that despite his best efforts, it keeps sort of hoovering up all this data from the internet.

Speaker 12 And that makes it, he thinks, too liberal.

Speaker 12 And so I'm wondering if this is kind of like an effort to give Grok a new kind of substrate of knowledge that it can learn from so that it's not reliant on Wikipedia.

Speaker 11 I mean, it's an interesting idea, but I'm not sure how much original work Grok is really doing.

Speaker 11 Like, I think that it is almost certainly showing you a wider range of sources than you might find on Wikipedia, or at least a wider range of right-leaning sources.

Speaker 11 But it's not as if there are a bunch of, like, you know, right-leaning Grokipedia editors who are going out there, you know, doing original research or something.

Speaker 11 Like, this is very much akin to a deep research report that you might get ChatGPT or Gemini to do for you, except that this time it's Grok.

Speaker 12 Right. So how much of a big deal is this? Like, are you seeing reactions from people who are scared that this actually will replace Wikipedia?

Speaker 12 Is this just kind of like one of Elon's many passion projects?

Speaker 11 You know, so far, I don't know that Grokipedia has made much of a splash in the mainstream aside from just being a curiosity.

Speaker 11 If you follow Elon Musk on X and you visit X a lot, it is something that you've heard a lot about.

Speaker 11 And I've seen conservatives and the tech right talking up certain Grokipedia pages as like, aha, this is so much better than Wikipedia. So it is kind of having that moment right now.

Speaker 11 How long does that last? I don't know. You know, Wikipedia is one of the most popular sites on the internet, and it's going to take a lot to displace that, right?

Speaker 11 I think for a lot of people, going to Wikipedia is just muscle memory.

Speaker 11 So barring some sort of massive leverage that Elon Musk is able to get in distributing Grokipedia to more people, I think it's probably going to remain more of a curiosity.

Speaker 12 Yeah. And do you think this contributes to sort of the fears that people have about the decline of Wikipedia?

Speaker 12 Because we've been, you know, talking for years now about how generative AI chatbots are increasingly people's first step toward learning about a new subject, where maybe before, you know, if you wanted to learn about, I don't know, the Franco-Prussian War, you would have gone to the Wikipedia page for it.

Speaker 12 But now you might pop open ChatGPT and just ask for information, and it would sort of go out and look at Wikipedia and other sources for you and synthesize it.

Speaker 12 So is this like coming at a time where Wikipedia is already pretty vulnerable as a result of AI?

Speaker 11 I think to the extent that Wikipedia is vulnerable, Grokipedia doesn't really pose much additional threat. I think by far the larger threat to Wikipedia is what you just said.

Speaker 11 It is that more and more people are accessing the information via chatbots, via Google search results.

Speaker 11 And the main consequence of that for Wikipedia is that if Wikipedia can't get you to go to the site, it also can't get you to contribute. It can't get you to update articles.

Speaker 11 It can't get you to become an editor. And so, the fear is that as traffic to Wikipedia declines, the quality of the site will decline as well.

Speaker 11 And in fact, just this month, Wikipedia published a blog post in which they said they are starting to see traffic declines due to generative AI. So, that's a very real threat to the encyclopedia.

Speaker 11 Grokipedia, I think, isn't quite that.

Speaker 12 Yeah, I mean, whether or not Grokipedia takes off as a product, I think this larger effort to delegitimize Wikipedia quite possibly will be successful, because I think people on the right, especially, have identified that even though Wikipedia looks like this sort of canonical thing that just appears on the internet, it is made by people, and a relatively small percentage of people on Wikipedia are actually contributing to it.

Speaker 12 And so I think they have recognized that the Wikipedia editors and the moderators who control the rules are another set of refs that they can essentially work.

Speaker 12 And maybe that is the larger victory that they see as being possible here: maybe they can just kind of undermine the long-standing traditions and norms of the Wikipedia community and get it to behave more like Grokipedia.

Speaker 12 We'll see.

Speaker 11 I think Wikipedia has been really resilient so far.

Speaker 11 You know, there actually have been hearings in Congress about this alleged bias in Wikipedia, which I find outrageous because Wikipedia should be able to say whatever it wants about vaccines or January 6th or whatever else, right?

Speaker 11 It doesn't have any legal obligation to the federal government to provide one set of views over another. But that does get, Kevin, at why, on balance, I'm actually glad that Grokipedia exists.

Speaker 12 Why?

Speaker 11 Because I think if you see something online and you get really mad and you think that there is a better and smarter view out there, I think the best thing to do is to just put it up on the web, right?

Speaker 11 Not in every single case. There's some like horrible things that I wish you wouldn't post on the web.
But look, if you want to have a debate about January 6th, go ahead and create a webpage, right?

Speaker 11 That is my preferred, you know, resolution to political conversations, as opposed to we are going to start having hearings to try to pressure Wikipedia into having one particular political view.

Speaker 11 So I view Grokipedia, as silly and bad and offensive as it can sometimes be, as still a case of countering speech with more speech.
And I think that that is overall a better way to have a democracy.

Speaker 11 Yeah.

Speaker 12 So I understand that take,

Speaker 12 but I'm also wondering if the fact that Grokipedia is AI-generated makes it any different.

Speaker 12 Like, is the answer to like speech that you don't like really having a chatbot go out there and write a bunch of slop text for you?

Speaker 11 I think that is a good question. And maybe we should sort of value Grokipedia less than we do Wikipedia for that reason.

Speaker 11 At the same time, humans are involved in the creating and the shaping of Grokipedia, right?

Speaker 11 Like it seems very unlikely to me that what we're reading on some of these really high-profile entries has not been edited or tweaked by someone, right?

Speaker 11 So I think I still see a pretty strong human hand in here. And that's why on balance, it still does feel like counter speech to me.

Speaker 12 Wow, that's a very pro-First Amendment take from you. Yeah.
Your lawyer husband must be so proud.

Speaker 12 I just think like this inevitably ends with like Elon Musk forcing a state government to teach Grokipedia in schools.

Speaker 11 I mean, you joke, but that's probably only a half joke. And I wouldn't be surprised if it does become the curriculum in Texas in 2028.
So we should absolutely keep an eye on that.

Speaker 11 And there are places it can go bad. But like at the end of the day, one thing about me as an elder millennial is I love the web.
And so I'm generally in favor of people making websites.

Speaker 11 Cause look, at the end of the day, the truth is most websites just get ignored, but they wind up being useful for like some people, right?

Speaker 11 So, look, I'm not going to be visiting Grokipedia every day, but if the uncle you're going to have your worst conversation with at Thanksgiving this year wants to use it, that's fine.

Speaker 12 Yeah. I mean, my bigger question is whether this whole category of the online encyclopedia is just obsolete, right? Like, I love Wikipedia as an idea, as an expression of collective knowledge, as what I consider a true gem of the internet. It's a miracle.
It's a miracle.

Speaker 12 And I cannot tell you the last time I went to Wikipedia.

Speaker 11 Really? I go every day.

Speaker 12 So I feel like, you know, I go there now when I need to check something that a chatbot has told me, but I do not really go there as my first stop on any sort of given fact-finding mission because my consumption has shifted almost entirely to these chatbots and to search engines, we should say.

Speaker 12 Like I still do use Google on occasion.

Speaker 12 So I just wonder if like there is any future in which Grokipedia, Wikipedia, any of these sites have a realistic hope of making it, or if they just sort of end up being kind of crammed into the chatbots and that becomes people's primary way of finding things out.

Speaker 11 Well, I think you're onto something real here, because of all of the products that Elon Musk has launched in the past 10 years, Grokipedia does seem like by far the least forward-looking. Yes.
Right.

Speaker 11 Now, it is possible that you could take the contents of Grokipedia and find other ways to distribute them. And maybe that's what he will do.

Speaker 11 You know, notably, he is not relying on human contributors to bring in the knowledge, because he just stole most of what was on Wikipedia. So those humans already did the work for him.

Speaker 11 But he can just take the material that he already has and sort of find new things to do with it, you know, update it via conservative media of whichever flavor he likes.

Speaker 11 And so maybe that's the way that this thing winds up being a little bit more forward-looking than it seems today. Yeah.

Speaker 11 What's your Grokipedia take?

Speaker 12 Well,

Speaker 12 let me spend a little more time on it. Let me see what it has to say about the Hard Fork podcast.

Speaker 11 Oh, let's see.

Speaker 11 Do we have a page?

Speaker 12 We don't have a page. Okay, I hate this thing.
Oh, my God. We could do Tuning Fork, Clark Fork River, or Pastry Fork.
Let's see what it has to say about Pastry Fork.

Speaker 11 When we come back: why author A.J. Jacobs collected rainwater and foraged for food while spending two days with no AI.

Speaker 1 This episode is supported by Blockstars, a podcast from Ripple.

Speaker 3 Join Ripple for blockchain conversations with some of the best in the business.

Speaker 5 Learn how traditional banking benefits from blockchain, or how you're probably already using blockchain technology without even realizing it.

Speaker 1 Join Ripple and host David Schwartz on Blockstars, the podcast.

Speaker 8 Crypto investments are risky and unpredictable.

Speaker 7 Please talk to a financial expert before you make any investment decisions.

Speaker 10 This is not a recommendation by NYT to buy or sell crypto.

Speaker 14 As a small business owner, you don't have the luxury of clocking out early. Your business is on your mind 24-7.
So when you're hiring, you need a partner that works just as hard as you do.

Speaker 14 That hiring partner is LinkedIn Jobs. When you clock out, LinkedIn clocks in.

Speaker 14 LinkedIn makes it easy to post your job for free, share it with your network, and get qualified candidates that you can manage all in one place. Post your job.

Speaker 14 LinkedIn's new feature can help you write job descriptions and then quickly get your job in front of the right people with deep candidate insights. Either post your job for free or pay to promote.

Speaker 14 Promoted jobs get three times more qualified applicants. At the end of the day, the most important thing to your small business is the quality of candidates.

Speaker 14 And with LinkedIn, you can feel confident that you're getting the best. Find out why more than 2.5 million small businesses use LinkedIn for hiring today.
Find your next great hire on LinkedIn.

Speaker 14 Post your job for free at linkedin.com/hardfork. That's linkedin.com/hardfork to post your job for free.
Terms and conditions apply.

Speaker 2 AI is transforming the world, and it starts with the right compute.

Speaker 13 ARM is the AI compute platform trusted by global leaders. Proudly NASDAQ listed.

Speaker 1 Built for the future.

Speaker 9 Visit arm.com/discover.

Speaker 12 Well, Casey, you and I are AI maximalists. We use this stuff all the time.
But today we're going to talk with someone on the show who went 48 hours without using AI at all. An unthinkable idea to me.

Speaker 11 Suffice to say, an experiment that would not have occurred to the two of us to try.

Speaker 12 Yes. So today on the show, our guest is A.J.
Jacobs. AJ is a great writer and a former mentor of mine.
He was actually my first boss in journalism.

Speaker 12 I helped him with a book many years ago, about 20 years ago now, God,

Speaker 12 about the Bible. He is known for these kind of immersive experiments where he throws himself deeply into a topic.
He wrote a book about following the Constitution literally.

Speaker 12 He wrote a book about following the Bible literally. He's also the host of the Puzzler podcast.

Speaker 12 And this week, he published an article in the New York Times titled 48 Hours Without AI, in which he sets out to live his life with as little AI contact as possible.

Speaker 11 Yeah. And, you know, my expectation as I started to read this piece was that it would be pretty easy to go 48 hours without using AI.

Speaker 11 But for reasons that AJ gets into, it actually winds up being quite difficult.

Speaker 12 Yeah, you essentially have to time travel back to the 1800s to avoid contacting anything that has any form of AI in it.

Speaker 12 And I think it's a useful point in addition to being a very fun article because it does drive home just how intertwined all this stuff is with the way we live our lives today.

Speaker 12 And it's not inevitable that it's going to continue getting more intertwined, but I think it's a pretty good bet.

Speaker 11 Yeah, so this is a story that begins with AJ forswearing modern electricity and ends with him foraging for food in Central Park. And I think it's time to bring him in and talk about it.

Speaker 12 Yes, his life sort of resembled that show Naked and Afraid,

Speaker 12 where you have to just kind of find your way out of the forest. That's what living without AI in the year 2025 is like, according to AJ Jacobs.

Speaker 11 Let's bring him in.

Speaker 11 AJ Jacobs, welcome to Hard Fork.

Speaker 16 Delighted to be here. Thank you, Kevin.
Thank you, Casey.

Speaker 12 So you just did this experiment where you went 48 hours without using AI or machine learning. And I want to talk to you all about that.
But can we just start with the photos at the top of this story?

Speaker 12 You are wearing what I would describe as a sort of very loud outfit that has some like red checkered pants and like a paisley flowered print shirt and these like glasses that look kind of like, you know, Elton John or like the ones you get after you get your pupils dilated.

Speaker 16 Right. I brought them along.

Speaker 11 So

Speaker 12 why the fit? Explain this.

Speaker 16 Well, the premise of the article was, as you said, try not to interact with AI or machine learning for 48 hours. And one thing I realized quite early on was it's everywhere.

Speaker 16 It is everywhere, especially machine learning. So clothing designers are experimenting with it in terms of designing, but also anything on the supply chain is totally machine learning optimized.

Speaker 16 They figure out how to to route it, how to pack it using machine learning. So I'm like, well, anything in my closet that's 10 years old or less is probably off limits if I'm really being strict.

Speaker 16 But I did have deep in my closet my grandfather's 1970s Paisley shirt and red and white checkered pants. And he went through an Austin Powers phase.

Speaker 12 He was very much a dandy.

Speaker 16 And I was like, all right, I got to do it, even though it made me very uncomfortable.

Speaker 16 Although my wife said that it was the coolest I had looked since we've been married, which was insulting and also flattering.

Speaker 12 I think you got the timing right.

Speaker 11 I think it's been so long since those clothes were in fashion that they now actually do look fashionable again.

Speaker 12 Yeah. Yeah.

Speaker 11 AJ, let me ask you this. As you headed into this experiment, what is or was your relationship with AI? Are you the sort of person who was using generative AI tools like ChatGPT every day?

Speaker 11 Was it a more occasional thing? Or where were you on that spectrum?

Speaker 16 Yeah, I would say I'm in the middle. I'm not a Luddite, but I don't like have it controlling my life.
I did use it for research and it was actually not bad. I was impressed.

Speaker 12 Got it. So you were not trying to prove that like AI is bad.
I just remember some of your other experiments that you've done over the years have included following the rules of the Bible.

Speaker 12 And that was sort of at a time when people were talking about taking the Bible very literally.

Speaker 12 And I knew that part of why you were doing this was like a kind of attempt to like say, well, here's what would happen if you just went all the way toward your stated belief.

Speaker 12 So there was sort of a point in there about the dangers of literalism. Were you trying to make a similar point about the dangers of AI here?

Speaker 16 I did not go in with an axe to grind on this one. It was more the thesis is where is AI hiding? Because I don't believe AI is all good or all bad.

Speaker 16 I didn't believe that before and I don't believe it now. I think in some cases it's awesome.
Thank God that machine learning checks whether there's credit card fraud.

Speaker 16 But on the other hand, it has huge risks and has divided our country. So

Speaker 16 I was not coming in saying it's all good or all bad, just where is it? And also any lessons can I learn from spending two days without it?

Speaker 11 So let's get into the experiment. I feel like the biggest choice you had to make at the outset was, how am I going to define what counts as AI? And you decided to include machine learning.

Speaker 11 Talk to us a bit about kind of how you set the boundaries for how you were going to run this thing.

Speaker 16 Right. All the experts I talked to said AI is a big umbrella.
And you've got generative AI like ChatGPT and that's getting all the heat now.

Speaker 16 But AI has been around for decades because the umbrella also covers machine learning. It basically covers these machines that can evolve, that can look at new data and change.

Speaker 16 The way I explained it, in a paragraph that was cut, was, let's see...

Speaker 11 This is why I love freelance writers. They harbor so many grudges, and it's great to be able to have a podcast to air them out.

Speaker 12 I bet you had a better idea for the headline, too, didn't you, AJ?

Speaker 12 I know, what is this crap?

Speaker 16 No, he was likable. It was just a matter of space.
But I said, traditional programs are input A yields output B, whereas machine learning is more like a recipe that changes.

Speaker 16 So you have a recipe, but then there's data that comes in and the recipe says, oh, people really like sugar. I'm going to add sugar.
So it evolves.

Speaker 16 And the reason I thought it was important to put the both in is because I feel they both have this great potential and great risk. They both have these unintended consequences.

Speaker 16 When you have machines that can change and you can't predict what they're going to do, that is,

Speaker 16 as I said, sometimes wonderful. Sometimes you end up with YouTube algorithms that turn us all into flat earthers.

Speaker 12 Right. So let's talk about some of the things you did on this experiment.

Speaker 12 My sort of characterization of this up top would be that you had to basically become Amish for 48 hours. You were.

Speaker 16 That was a line again.

Speaker 11 Did that also get cut out of the story?

Speaker 16 That was in there. I said Amish cosplay was one, or, um, Laura Ingalls Wilder.
That was another comparison.

Speaker 16 Yes, because pretty much anything that's electric or electronic includes machine learning, including electricity itself, because Con Edison uses tons of machine learning to figure out where is the demand going to be.

Speaker 16 So yeah, I had to go Amish. I did have a solar-powered generator, so I could plug in a lamp for a while.
But yeah, it started the moment I woke up. My iPhone uses facial recognition.
That's AI.

Speaker 16 But it goes even further than that. You know, the iPhone camera uses AI.
Gmail uses machine learning, even without Google's new AI features.

Speaker 16 And water, that was a surprise, because the New York reservoir system uses machine learning to help.

Speaker 16 They want to make clear that humans make the final decisions because they don't want people to freak out.

Speaker 16 But the machine learning helps them figure out where is the demand and when should we make repairs.

Speaker 11 Well, so how did you stay hydrated for two days?

Speaker 16 Well, I did plan ahead, which maybe was a little bit of cheating, but I put a bowl out or several bowls on my windowsill in the weeks before to collect rainwater. And I didn't get Giardia or anything.

Speaker 16 So I feel lucky.

Speaker 12 How much rainwater were you able to collect before you started the experiment?

Speaker 16 Well, that's why I had several bowls and it was weeks. So yeah,

Speaker 16 it was not ideal. It was not ideal.

Speaker 12 So there's another piece of this experiment that I wanted to ask you about, which is that you had to forage for a meal in Central Park.

Speaker 12 Now, I was not under the impression that food itself was generated by AI, but what am I missing here?

Speaker 16 Well, of course, it all depends how you define it. I mean, food is really intertwined with AI.

Speaker 16 Industrial farms use AI and machine learning for figuring out whether to water the crops, when to plant them. Food is, of course, shipped along the supply chain, which is AI optimized.

Speaker 16 So if I'm being really strict, which of course I was, I was like, well, maybe I can't eat anything from the grocery. So to be super safe, I found a video where...

Speaker 16 A man named Wild Man Steve Brill is his name, and he teaches you how to find edible food in Central Park.

Speaker 16 So I took him up on that, and I went foraging, and I got some what are called plantain weeds in Central Park, which I ate. And

Speaker 16 not great, not great. They taste like dirt, but they didn't kill me.
They didn't kill me.

Speaker 11 That's that park-to-table cuisine that is so popular in New York these days.

Speaker 12 They're so ahead of the curve. I ate some plantain weeds from a guy in Central Park once and I saw the fifth dimension.

Speaker 12 That's right.

Speaker 11 I've seen all those plantain weed dispensaries that are popping up all over the city lately.

Speaker 11 Now, Kevin, as AJ is describing his experience of depriving himself of so much AI and so much technology, I'm wondering how that lands on you.

Speaker 11 Because I think if I were to come to you and to say, hey, all like generative AI services are going to be down for the next 10 minutes, I think, you know, your heart would seize up.

Speaker 11 You'd start having palpitations. I'd see a cold sweat running down your forehead.
So was this a challenging story for you to read? You know, it was.

Speaker 12 And I was thinking, how would I do at this? And then I was thinking,

Speaker 12 this would probably be good for me. I should probably do this like every couple months.
I should do 48 hours of no technology at all.

Speaker 11 I mean, we've talked on the show. You've gotten to the point where you'll put your phone in a prison, you know, overnight if you feel like you're developing too strong of an attachment to it.

Speaker 11 But, you know, at the same time, I have never seen you run an experiment quite like this.

Speaker 12 Yeah, this is true. I have taken breaks from technology.

Speaker 12 There was a period where I was doing this like digital Sabbath thing where like one day a week, I would try not to use my phone or send any emails, but that was too hard. So I quit that.

Speaker 12 But AJ, we should continue talking about this experiment, because I want to get to the takeaways here. So you have this 48-hour period where you're not using AI.

Speaker 12 And then like, what is your emotion upon reaching the end of this period and being able to use this stuff again?

Speaker 16 Well, I had a lot of mixed emotions. I mean, on the one hand, there was the relief of being cut off, like you've talked about with the digital detox.
There was also annoyance.

Speaker 16 I mean, it is super annoying to try to,

Speaker 16 you know, you can't Google. You can't Google.
My Encyclopædia Britannica is not up to date.

Speaker 16 On the other hand, it was terrifying. That was another feeling I had because I realized how omnipresent AI is.
And as I've said, it's not all bad, but it has these huge risks.

Speaker 16 So it was really a mixture of emotions, which maybe is the right way to react to AI. It's not a monolith.
It's not black and white. It's super confusing.

Speaker 11 How much easier would this experiment have been if you had limited it to generative AI? If you said, okay, machine learning is fine.

Speaker 11 That's like pretty well just kind of integrated into all services, as you reported out here.

Speaker 11 But what if you just sort of said, well, I'm not going to use ChatGPT and other sort of generative AI services?

Speaker 16 I think it would have been easier for now.

Speaker 16 In five years, I think that line will be erased. I also think it would have been hard to research because I, as I confess in the article, I used a ton of ChatGPT to research this article.

Speaker 16 And that was another takeaway was how to use ChatGPT because ChatGPT sensed the thesis of my article. It knew I wanted to find machine learning and AI everywhere.

Speaker 16 So it was like serving me up these half-truths. And I had to give it some tough love and say, ChatGPT, pretend I've got the opposite thesis.
I don't want AI and ML to be anywhere.

Speaker 16 Tell me now, what are the reliable sources? Because, yeah, as you know, it's just, you know, an obsequious machine.

Speaker 12 Yeah. I mean, I think if there's one takeaway from your piece for me, it's that like the line between sort of classical AI or machine learning and generative AI is like thin and getting thinner.

Speaker 12 Agreed.

Speaker 12 Right now, people are very angry at generative AI. They say, oh, it takes all this electricity.
It uses all this water. These companies are sort of foisting it onto us.

Speaker 12 So I think there will be people who read this article and say, well, he's making sort of this inevitabilist argument that like there's nothing we can do and we have to live in this world.

Speaker 12 And it's, you know, it's too late to sort of turn back the tide.

Speaker 12 And I think what I came away feeling from your article was that, yeah, in five years, the sort of difference between generative AI and classical AI may be so small as to be invisible.

Speaker 12 And we will just sort of think of this stuff as being on a continuum that starts with like Netflix recommendations and image recognition and self-driving cars and like goes through chatbots and all that other stuff.

Speaker 11 Well, I mean, there's like an old joke in the tech industry that AI is just what we call whatever the computer can't do yet.

Speaker 12 Right. Right.

Speaker 11 Like that we've just sort of been on this like advancing frontier forever. And yeah, it keeps being able to do more stuff.

Speaker 16 Right. Can I just add one thing about the inevitabilist part? Because I don't want that to be the takeaway.
I don't want people to give up. I feel we need more transparency.

Speaker 16 I love the law that you were talking about, the California law about watermarking AI images. We need more transparency about what is AI-generated.
I'm in favor of more regulation.

Speaker 16 I mean, it is such a powerful technology. And I also want more control over my algorithms.
I hate that Facebook has so much control.

Speaker 16 Maybe there's a way to go in and make it show me articles that I disagree with. Maybe, but it would take me days to figure it out.
So there are things we could do. It's not inevitable.

Speaker 16 We got to take action because we are somewhat in control of where AI and ML are going to take us.

Speaker 12 I mean, I guess I'm curious what you think, AJ, as someone who has spent a lot of time thinking and writing about religion as well as AI, how far that comparison holds.

Speaker 12 Because I often tell people that being in San Francisco in the AI world in 2025 feels a little like being in the Protestant Reformation.

Speaker 12 You know, you've got all these cults and these groups and these people, you know, handing out pamphlets, declaring that the end is near and trying to recruit you to their movement.

Speaker 12 And it just feels like kind of this great blossoming of these ideas about the future and the end of the world.

Speaker 12 I also know that like the last time we had a big industrial revolution, there were like a lot of weird cults and utopian communities and sort of people who opted out of the new technology.

Speaker 12 I'm curious, like I understand this was a sort of a joke or a stunt or a piece meant to illuminate some larger point, but I'm wondering if you think there will be people who actually choose to live like this because they will just see all of this AI.

Speaker 12 And instead of having the reaction that you had, which is like, I want to dive in and like investigate this, they will just be like, screw it. I'm out. I'm going to my haven in the woods and I'm going to turn off all my devices and I'm going to live like it's 1870.

Speaker 16 Yeah, I think you will have that. You will have some big Luddite movement.
And I can understand it because it is scary. As to the religion metaphor, I think it's a good one.

Speaker 16 I think there is a lot of overlap, and this sense of destiny, that AI is destined to create heaven on earth or even replace us. I would say one difference between religion and science is the idea that science can be falsified.

Speaker 16 So my hope is that people in the AI industry keep an open mind and look for falsification. Look for examples of where AI is actually not doing good and try to adjust to that so that it doesn't become a religion.

Speaker 12 Well, AJ, thanks so much for coming. Thanks for doing this experiment.
The piece is called 48 Hours Without AI, and you can read it at the New York Times.

Speaker 11 Very scary story to read on Halloween.

Speaker 12 Oh, good point.

Speaker 1 This episode is supported by Blockstars, a podcast from Ripple.

Speaker 3 Join Ripple for blockchain conversations with some of the best in the business.

Speaker 5 Learn how traditional banking benefits from blockchain or how you're probably already using blockchain technology without even realizing it.

Speaker 1 Join Ripple and host David Schwartz on Blockstars, the podcast.

Speaker 8 Crypto investments are risky and unpredictable.

Speaker 7 Please talk to a financial expert before you make any investment decisions.

Speaker 9 This is not a recommendation by NYT to buy or sell crypto.

Speaker 14 As a small business owner, you don't have the luxury of clocking out early. Your business is on your mind 24-7.
So when you're hiring, you need a partner that works just as hard as you do.

Speaker 14 That hiring partner is LinkedIn Jobs. When you clock out, LinkedIn clocks in.

Speaker 14 LinkedIn makes it easy to post your job for free, share it with your network, and get qualified candidates that you can manage, all in one place.

Speaker 14 LinkedIn's new feature can help you write job descriptions and then quickly get your job in front of the right people with deep candidate insights. Either post your job for free or pay to promote.

Speaker 14 Promoted jobs get three times more qualified applicants. At the end of the day, the most important thing to your small business is the quality of candidates.

Speaker 14 And with LinkedIn, you can feel confident that you're getting the best. Find out why more than two and a half million small businesses use LinkedIn for hiring today.

Speaker 14 Find your next great hire on LinkedIn. Post your job for free at linkedin.com/hardfork.
That's linkedin.com/hardfork to post your job for free. Terms and conditions apply.

Speaker 15 AI is transforming the world, and it starts with the right compute.

Speaker 13 ARM is the AI compute platform trusted by global leaders.

Speaker 7 Proudly NASDAQ-listed, built for the future.

Speaker 9 Visit arm.com/discover.

Speaker 11 Hard Fork is produced by Rachel Cohn and Whitney Jones.

Speaker 12 We're edited by Jen Poyant.

Speaker 11 This episode was fact-checked by Will Peischel and was engineered by Katie McMurdo.

Speaker 11 Original music by Marion Lozano, Rowan Niemisto, and Dan Powell. Video production by Sawyer Roquet, Pat Gunther, and Chris Schott.

Speaker 11 You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

Speaker 11 You can email us at hardfork@nytimes.com with the error you found on your Grokipedia page.

Speaker 17 Mass General Brigham in Boston is an integrated hospital system that's redefining patient care through groundbreaking research and medical innovation. Top researchers and clinicians like Dr. Pamela Jones are helping shape the future of healthcare.

Speaker 18 Mass General Brigham is pushing the frontier of what's possible. Scientists collaborating with clinicians, clinicians pushing forward research.
I think it raises the level of care completely.

Speaker 17 To learn more about Mass General Brigham's multidisciplinary approach to care, go to nytimes.com/mgb. That's nytimes.com/mgb.