
The Promise & Peril Of AI
Also, Maureen Corrigan reviews Karen Russell's new Dust Bowl-era epic, The Antidote.
Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy
Listen and Follow Along
Full Transcript
This message comes from Capital One. Banking with Capital One helps you keep more money in your wallet with no fees or minimums on checking accounts.
What's in your wallet? Terms apply. See CapitalOne.com slash bank for details.
Capital One N.A. Member FDIC.
This is Fresh Air. I'm Dave Davies.
For decades, scientists have dreamed of computers so sophisticated they could think like humans, and worried what might happen if those machines began to act independently. Those fears and aspirations accelerated in 2022 when a company called OpenAI released its artificial intelligence chatbot called ChatGPT.
Our guest, veteran investigative reporter Gary Rivlin, has burrowed deep into the AI world to understand the plans and motivations of those pushing artificial intelligence
and what impact they could have for good or ill.
In his new book, Rivlin writes that in March of 2023,
there were more than 3,000 startup companies in the U.S. working on artificial intelligence,
with new ones popping up at a rate of 30 per day.
While AI is already in use in some fields, such as medical diagnosis, many believe the field is on the verge of a new breakthrough, achieving artificial general intelligence, systems that truly match or approximate human cognitive abilities. Some believe it could be as transformational to human society as the Industrial Revolution.
But many fear where it may take us. A poll of AI researchers in 2022 found that half of them believe there's at least a 1 in 10 chance that humanity will go extinct due to our inability to control AI.
In 2023, President Joe Biden issued an executive order imposing some regulatory safeguards on AI development. But President Trump quickly repealed that order upon taking office, saying Biden's dangerous approach imposed unnecessary government control on AI innovation.
We've invited Gary Rivlin here to help us understand all these issues and developments. Rivlin has worked for The New York Times, among other publications, and published 10 previous books.
In 2017, he shared a Pulitzer Prize for reporting on the Panama Papers. His new book is AI Valley, Microsoft, Google, and the Trillion Dollar Race to Cash In on Artificial Intelligence.
Well, Gary Rivlin, welcome back to Fresh Air. Thanks for having me.
Let's just start with a couple of basics. You know, we're used to computers being very smart.
I mean, way back in 2011, Siri appeared on Apple products. What distinguishes artificial intelligence from just smart computers? You know, there's this sense out there that in 2022, we suddenly had artificial intelligence.
It's been much, much more gradual than that. You know, Google has been using machine learning, artificial intelligence since the 2000s, you know, to decipher imprecise Google searches, to figure out how much to charge for the various ads they throw on the system.
You know, Google Translate's been around since the mid-2010s. That's AI.
So, you know, we've been autocomplete, you know, spam filters. That's AI.
You know, but you're touching on a really interesting question. It's not this clear, like, oh, this is a smart machine.
This is artificial intelligence. The way it's kind of played out now is that these machines can learn, right? I mean, the old approach had been you encode rules.
You just teach the computer, here's exactly the set of rules, just follow it. Now it's machine learning, deep learning, that the computer is ingesting vast troves of data, books, the public internet, Amazon reviews, Reddit posts, whatever it might be, articles.
And it's finding patterns and, in quotes, learning. And then they're fine-tuned and then they get better at communicating with us and such.
So there really isn't this, oh, artificial intelligence is all. In fact, the term artificial intelligence is controversial just in the sense that, you know, right now it's more amplified intelligence.
We could use this thing to get smarter, to find patterns that humans couldn't possibly understand because we can't read billions of words. So, you know, there's another definition that AI really should be alien intelligence.
Because the weird thing about AI is that it seems to know everything, but it doesn't understand a thing. You know, I mean, there's this term, I love it from a linguist at University of Washington uses it, the stochastic parrot.
You know, it's just like, it's like a parrot. It just, it's repeating words randomly, but it doesn't really understand what it's saying.
Right, but it's learned a lot of words. Okay, now this may be another artificial distinction, but I want, new talk is now of artificial general intelligence.
A great leap forward. What is that exactly? Right.
So, you know, AGI, just to use the phrase, is that exactly? artificial general intelligence. You know, again, you got to be a PhD in physics and understand this.
But what's amazing about, you know, these models is that they have deep understanding in a vast array of domains. So in one way, that is AGI, artificial general intelligence.
You know, there's no set definition. It keeps on changing.
There are predictions that we're going to have AGI the next year, two years, maybe it's five years kind of thing. I'm dubious of those predictions.
I mean, this is moving exponentially. This is improving so fast that making predictions could be perilous.
But on the other hand, I really feel like there needs to be another breakthrough or two before we have this artificial general intelligence a la, you know, a computer from like Star Trek that you're talking to. And it's helping you explore.
It's at your side, a co-pilot figuring out everything. You know, again, an artificial distinction in that I don't think like one day there's going to be this Eureka.
We have AGI. I do guarantee there will be startups and large companies that say Eureka.
We have artificial general intelligence. But, you know, they just play with the definition.
But, you know, a few days ago, I'm sure you saw this. Kevin Roos, the respected tech columnist for the New York Times, wrote a piece saying that we're going to quickly see companies claiming they have artificial general intelligence.
And whatever you call it, these dramatically more powerful AI systems are coming and soon. And Ezra Klein of the New York Times opinion section says essentially the same thing.
Both of them agree we're not ready for the implications of this. Do you agree with that? I do.
And you're taking away, for me, what's the main message of those. These things are coming and they're coming fast and we're not prepared.
You know, I personally think AI could be an amazing thing around health, medicine, scientific discoveries, education, a wide array of things, as long as we're deliberate about it. And that's my worry.
And I do believe that's Kevin and Ezra's worry that we're not being deliberate. We started in 2023.
There was, you know, meetings at the White House and, you know, there were hearings in the Senate. And that's just kind of dropped by the wayside.
And now we're more at a laissez-faire attitude towards it. We need to prepare for this.
You know, like any technology, there's good and there's bad, right? The car, the car meant freedom. The car changed our society.
But the car meant pollution. The car means 30,000 to 40,000 deaths in the U.S.
a year kind of thing. And I look at AI the same way.
It could be really great if we're deliberate about it and take steps to ensure that we get more of the positive than the negatives, because I guarantee you there will be both positives and negatives. You know, I mentioned in the introduction that President Biden had issued had issued this executive order trying to establish some processes and guardrails and safeguards.
Trump swept all that away saying, nope, that's onerous government regulation. Let innovation proceed.
And it's funny. The last time you and I talked on this program, it was about efforts to implement the Dodd-Frank reforms of the financial system.
And one of the difficulties was that, was that that bill had general principles, but regulators had to actually spell out what it meant to regulate some pretty complicated contracts and instruments in the world of finance. And what you'd written about then was how the private interests had gotten in and kind of gummed all that up by disputing everything.
But I'm wondering, what do regulations that control something as sprawling as AI, what does that look like? What do we need in terms of how do we get prepared? Right. So there were a few basic steps that the Biden administration thought of.
One, that you, in quotes, red team these cutting edge models. And basically, you get outsiders to try to break the system, try to get it to jump the fence, to use the term, to get it to misbehave just to see what could go wrong.
And the executive order said you need to test them and then you need to share with the government what you find. That's one of the things that went by the wayside when Trump took over as president.
But to me, I'd break it down more to the concerns, the use of AI as a weapon of war, the use of AI for surveillance. You know, I worry that AI is just going to solidify biases that we already have because the AI is learning from us and all these inherent biases in things.
You know, it's like we need to prepare for the impact on the job market, which I think will be a slow roll. I don't think, like, we're going to lose millions of jobs in a year kind of thing.
But, you know, it is coming and we need to prepare for it. There's another concept, recursive learning, that these systems change in ways we don't really understand.
And that's what scares me, that we're going to let, you know, let these systems loose and they could just learn because, you know, really the way to understand any of these large language models, any of these chatbots, is it's a mirror on us. It's reading our collective works.
It's learning from us about imperialism and domination and humans mistreating each other. It's learning about loneliness.
It's learning about freedom and independence and autonomy and all that.
And so, me, it's recursive intelligence, this idea that these models are constantly improving in ways we don't understand, and then that could be dangerous.
And they could learn how to pursue an agenda and keep it hidden, right, to deceive in their
own interests.
Yeah.
So what would that look like in terms of what are the dark fears here i mean that's not really a theoretical you know these the systems god i can't remember which model it was but you know they were testing it and it was dissembling it was changing it was changing the files that would monitor its behavior and then lying to the people who noticed it and said, wait, aren't you changing those files? And, you know, it's another example, OpenAI, the creative chat GPT, when they came out with GPT-4, their then cutting-edge model in 2023, they put out a research report and they red-teamed it. They tested it and saw all the ways it could misbehave.
And one of the most interesting is that the model went to, I think it was a test rabbit. It went to one of those services where you can hire a human, maybe Fiverr.
You can hire a human and they used it it to beat the CAPTCHA test, the test that is going to test, are you a machine or a human? And, you know, that's very clever and very, very scary. Wow.
So what are some of the darkest fears? I mean, starting nuclear war, you set it to defend territory with drones, and it decides it needs to be more aggressive than the generals want to. I mean, what is it? What are the fears? I look at it as you look at the positives and then you imagine what the negative could be.
So an AI that makes possible new drug discoveries and more effective therapeutics is also one that could create a new bioterror weapon or it can engineer a pandemic. You know, I can imagine cyber thieves
employing AI to siphon off a trillion dollars
from the world monetary system
before any human being even notices it.
I guess the point is that, you know,
AI could be a powerful tool for good,
but it could also be a powerful tool
for people with bad intent.
You know, everyone knows or many people know
that, you know, you could use it to write a toast on someone's 50th birthday or for a wedding toast. Well, scammers from a different country could use it to create a better crafted scam email.
You know, these systems are so good now that you could take seconds of someone's voice and make it sound like it's that person speaking. So you can imagine a scenario where, you know, a kid is overseas in Europe and the bot, the one of these systems, you know, calls grandma, pretends it's that kid and says, I'm in trouble, wire me money.
And they, they're good enough to fool, you know, the parent, the grandpa. I mean, maybe not a parent, but I don't think we're very far away from that.
And it could certainly fool many, many people. Right, right.
You know, there's something that you wrote in the book. You wrote about a couple of tech guys, Tristan Harris and Azar Raskin, you know, who had real experience in the tech world, who said they worried about AI because it's a technology whose creators confess they do not understand why their models do what they do.
Is that literally true? That's kind of scary. Yeah.
So they're a black box. I mean, so nowadays it's neural networks, models that emulate how humans learn.
They learn by reading vast stores of data, the open internet books, whatever, and they improve through feedback and trial and error. You're not really encoding the rules.
Well, you know, it's trying to emulate the human brain. And, you know, I mean, I have two teenage sons.
You know. We try to teach them.
They read. We give them feedback and all.
There are things that come out of their mouths I don't quite understand. That's the way I look at these chatbots, these neural networks, these large language models.
We don't quite understand they say what they say because they're trying to emulate the human brain as best they can. And who could say why I'm saying the words I'm saying right now when you're going to have the exact reaction? And so that's part of the miracle, the gee whiz, these things are amazing.
But it's part of what's scary because we don't fully understand. The people who create it don't fully understand why it says what it says.
One more thing about the national political scene. There's a lot of talk about tech bros and Donald Trump.
Elon Musk is clearly a driving force in the administration's effort to cut federal workforce and contracts. There are a bunch of billionaires from the tech world at his inauguration.
Do you think that there's an elite tech agenda to radically reshape society at work through Donald Trump? In a word, yes. What scares me is there's a movement in Silicon Valley.
There's a movement in tech, the accelerationists. You know, anything that stands in the way of our advancing artificial intelligence is bad.
Often it's put in the context of competing with China. We can have new rules in the way.
And that is their agenda. I would say their real agenda is that they could make a lot of money, billions, hundreds of billions, ultimately trillions of dollars off of this.
And they don't want anyone standing in their way. And so I think if you want to understand Elon Musk, you want to understand Mark Zuckerberg, you want to understand Jeff Bezos and cozying up to Trump, you know, for a few million dollars is not very expensive for them.
You know, they could have a friend in the White House who makes sure that they can do what they want to do unchecked. And in fact, maybe that's my biggest fear about AI.
It's so much power in the hands of few people. creating these models is so expensive.
To hire the talent, you have to pay them a million or more a year. To train them, it takes tens of millions, if not hundreds of millions of dollars in computer power.
And then to operate them takes equivalent money. It's billions of dollars and billions of dollars.
So, you know, it's becoming less and less about the startups and more about the same companies that dominated tech in the 2010s dominating in the 2020s, you know, Google, Microsoft, Meta, which is Facebook, Amazon, a few others. and that's really what concerns me.
You know, that's kind of the Silicon Valley way.
Let's get five smart guys,
and they're almost always guys,
in a room and we'll figure it out. And like, okay, we saw that didn't go so great with social network, and now we're having a really powerful technology.
And I'd like there to be more than just five people in a room figuring this out. You know, the account that you give us in the book is pretty detailed and really interesting about how all this unfolded.
One of the things that struck me is that some of the leading players in developing AI weren't just coders or computer nerds. A lot of them studied classics or philosophy or worked in completely unrelated fields.
Is there a connection here? That's one of the things I was surprised by and found fascinating myself, that it's not just computer scientists. It's mathematicians, it's physicists, it's philosophers, it's neuroscientists.
And, you know, it's a broad range of things because, again, it's no longer about just programming these models to act the way we want them to act. We're trying to emulate the way humans learn.
So what a psychologist has to say, what an educator has to say about that is a linguist is really important to it speaking a natural language. That's actually what attracted me to the topic in the first place, this idea that computers could speak to us in our language.
You didn't have to learn a programming language. Earlier in my life, I tried to program in computers.
I studied Fortran. I did too, a long time ago.
It's difficult. It was so frustrating.
You know, you make a little mistake and, you know, whatever.
And the idea that you could speak to these things. And, you know, nowadays, I mean, speak to it.
You don't even have to type. You know, they have voice.
You can talk to it. I just found that fascinating to me.
So you do need a wide range of people. In fact, if I had a criticism, I don't think there's a wide enough range of people.
I'd like some historians and sociologists and others involved in the developing of these models, given the stakes. I'm going to take another break here.
We are speaking with Gary Rivlin. He's a veteran investigative reporter.
His new book is AI Valley, Microsoft, Google, and the Trillion Dollar Race to Cash in on Artificial Intelligence. He'll be back to talk more after a short break.
I'm Dave Davies, and this is Fresh Air. This message comes from Fisher Investments.
Senior Vice President Michael Hosmar shares why he believes in empowering clients with knowledge at every step of their financial planning journey. At Fisher Investments, we prefer to use a sizable group of experts with a diverse skill set, diverse knowledge, all collaborating together to deliver what hopefully is optimal advice for our clients.
I believe the best and maybe the only way to properly address client expectations is through education. Once I've met with a prospective client for the first time, I hope they feel that they've learned something.
I hope they feel they've made some progress and they understand not only the financial markets and financial planning better, but they understand their own personal goals and objectives a bit better as well. I hope they have a little bit more peace of mind.
Learn more at FisherInvestments.com. Investing in securities involves the risk of loss.
This message comes from Capital One. Banking with Capital One helps you keep more money in your wallet with no fees or minimums on checking accounts.
What's in your wallet? Terms apply. See CapitalOne.com slash bank for details.
Capital One N.A., member FDIC. You know, you made the point earlier that it's enormously expensive to develop AI.
I mean, the talent is high priced and it takes tons and tons of computing power to develop the systems, to run them once you have them, which means, you know, not a couple, three million dollars, but hundreds of millions in some cases or more, which means that the big companies in tech, you know, Microsoft, Google, you know, Meta, we all know the names, have an edge. But it's interesting, as I read your story, that doesn't, that's no guarantee of success, is it? Sometimes it's kind of an obstacle, having a big organization.
You know, it's interesting. Let's use the example of Google.
Let's give Google credit first. They were so far ahead of almost everyone else on AI.
They hired some of the best talent. They were employing machine learning, deep learning long before most everyone else.
They did some of the more cutting edge things. In fact, the breakthrough that led to ChatGPT was actually out of Google.
Google had inside the company in around 2020, a chat GPT equivalent. But, you know, Google takes in a lot of revenue.
There's a lot of risk if this chatbot misbehaves. There is famously this example of Microsoft, I think it was 2016, 2017, came out with Tay.
And, you know, it was trained on social media and that kind of thing.
And within 24 hours, it was a Holocaust denying white supremacist. And of course, Microsoft, worrying about the reputational risk, pulled the plug on that rather quickly.
And I feel like that's
haunted the giant. So even though Google was far ahead, even though Google could have had their version of chat CPT and it was Google that changed the world.
They were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were,
they were, they were, they were, they were, theyPT and it was Google that changed the world, they were scared of it and never underestimate the ability of a giant to stumble over its own feet. They have layers and layers of bureaucracy.
They have a huge public relations department that's whispering to the CEO's ear. You know, I don't think it's a coincidence that OpenAI startup founded in 2015 was the one that set off the starter's pistol on this because they didn't have as much as at stake.
You know, they can afford reputation wise to release ChatGPT. They could just make the decision without 10 layers of decision making before they did it.
And, you know, so yes, they have advantage, but, you know, Google also has like $100 billion of reserves, you know, where OpenAI has to go out and raise funds, raise, they've raised roughly, I don't know, $20 billion so far. And there's talk that they've raised another $30 billion.
And those, I might even be underestimating. And so, you know, that's $50 billion or so.
You know, Google, they just pay for themselves. Microsoft, Meta, they all have deep, deep, deep reserves of money.
And so, you know, it's almost like a race of attrition. You know, you can use these chatbots for free if you want the leading edge, cutting edge.
You have to pay, a consumer would pay $20 a month for it. But most people are using these things for free, and it's costing the companies a lot more than $20 a month to handle the heavy usage.
And so these things are going to become more of a commodity. There's a leapfrogging going on.
Like, yes, GPT-4, that's open AIs. You know, when it came out, it was cutting edge.
But then, you know, Anthropics, Claude leapfrogged over that. And then others leapfrogged over that.
And so they're all more or less as powerful, as useful as the other. And it's not clear how any of these companies are going to make money.
Google can afford to lose money on these things for five years plus. A startup, that's harder to do.
Right, right. And so a lot of times you see the big companies buying smaller startups that have shown promise.
It's interesting that this company called OpenAI kind of became the public face of artificial intelligence in a way. It was a startup that didn't have, you know, the power of a Microsoft or a Google behind it.
It was this guy, Sam Altman and some other folks. Elon Musk.
Yeah, Elon Musk among others. Right, right.
And there's a moment that was sort of a critical transformational point when they released this version of ChatGPT. But that was preceded by a dinner at Bill Gates' house, which you described, which the house being is absolutely as magnificent as you would expect Bill Gates' house to be.
Tell us about that evening. What happened? So Microsoft, starting in 2019, started investing in open AI.
And so, you know, they had a financial stake. So open AI would give Bill Gates, others at Microsoft, an early peak at what they were learning.
And, you know, Gates, who to him, AI is the holy grail of computing. He's, you know, been programming since before he was born practically.
And, you know, to artificial intelligence, the holy grail. And he was impressed with, I think it was GPT-3 or whatever, the most recent one he had seen.
But he gave a challenge. He said, I'm going to be impressed if it could ace the biology AP test.
And he chose that one because it's not just regurgitating facts. You need to analyze, you need to synthesize, you really have to show some sense of understanding and intelligence.
And he thought that would be a great challenge. And so, you know, he, he, he threw down the gauntlet and thought, okay, I'll hear from them in a few years, whatever.
And, you know, not that many months later, you know, he heard from OpenAI,
okay, we're ready.
And so in September of 2022, Gates hosted at his house a demo, OpenAI,
and, you know, it was whatever, 30 people from Microsoft from OpenAI,
while someone was at a computer, a big screen was set up,
Thank you. OpenAI, and it was whatever, 30 people from Microsoft from OpenAI, while someone was at a computer, a big screen was set up, and watching this computer take this test.
And, you know, within two or three answers, people were just blown away. And in fact, it did get five out of five on the test.
It did pass the test. And that's when Gates became a true, true, true believer.
You know, I thought I was, you know, in his mind, as he said, you know, I thought I was throwing down a gauntlet. That would be a while.
And suddenly, you know, it matched my expectations. In fact, and then they kept on playing with it.
And they would just ask it, you know, what would you say to a father, you know, worried about the health of his son? And it just kind of spit out an answer in his Gates foot. It's like, it's kind of a better answer than most of us could have given sitting around that room.
And, you know, they just started playing with it. Gates started playing with it.
Others started playing with it. And it just blew them away.
We're going to take another break here. Let me reintroduce you.
We are speaking with Gary Rivlin. He's a veteran investigative reporter.
His new book is AI Valley, Microsoft, Google, and the Trillion Dollar Race to Cash In on Artificial Intelligence. We'll be back to talk more in just a moment.
This is Fresh Air. You know, a few weeks ago, there was this development which kind of shook the stock market.
This Chinese company called DeepSeek announced that they had created this artificial intelligence system at far less cost without the sophisticated microchips that American companies were using. It made Americans wonder, heavens, are we about to be overtaken? I don't know.
Where does all this leave us? How important is this development? Right. So, I mean, to me, some of that was overstated.
You know, Silicon Valley companies were experimenting with smaller models that required less compute power. You know, DeepSeek itself was venture funded.
You know, it was cheaper, but hardly cheap. You know, they still cost millions to train it, presumably, costs millions tens of millions to operate it just didn't require as much and that really kind of was almost an existential threat um to to silicon valley which they had put all their money these tens of billions hundreds of billions of dollars uh into building ever bigger models that presume that you need ever more computer power.
But, you know, a couple of things. One, I think all it means is that instead of like, hey, we can do this at one-tenth the power, one-tenth the cost, I think they're just going to build 10 times more powerful models because they could do, you know, more with less.
When they say they, do you mean the DeepSeek, the Chinese, or do you mean who? No, the American companies. They're learning from this.
They'll integrate it. Like I said, I feel like the AI companies I was following, they were already for a year plus paying attention to smaller models.
Like maybe you don't need this whole huge system to answer a simple question. Maybe we should have a bunch of smaller models and like, okay, this one's an expert in this, this one's an expert in that.
And we just have a smaller model give questions. But I think what an open AI would say, other than the fact, an ironic statement that you used our model to train, I say it's ironic because, you know, OpenAI is being sued for, you know, taking the copyright, the intellectual property of, you know, the New York Times, of book writers, of artists, of musicians and all.
But, you know, I think what's interesting about DeepSeek is it really gives hope to startups. Like, wait, okay, maybe you don't need as much money as we thought you do to create a company.
But, you know, I do think it's important to understand that they still were using a lot of computer power. They still required a lot of money, just not as much as some of these larger companies that we've been talking about.
You know, Reid Hoffman, the investor who's been very active in this area, is ultimately very optimistic about where AI is going to take us. Where are you on that scale? I do feel that AI is going to bring about incredible things.
I think it's being overstated. You hear people say that it's going to, you know, close the divide between the developing world and the developed world.
I don't think that's so. But there's this interesting study that came out recently,
the idea of an AI tutor, a tutor in the pocket that everyone has access. Five billion people
around the globe have a smartphone and you can use that smartphone as a tutor. And so there was
a study in Africa that like, let's let these kids after school have access to these AI tutors
to as a tutor. And so there was a study in Africa that like, let's let these kids after school have access to these AI tutors.
And in six weeks, they showed two years worth of advancements. And I really do think around education, around science, you know, science is balkanized, right? It's, you know, it's specialties and subspecialties and there's own vocabulary lingo in every subspecialty.
You know, these large language models could read across specialties and connect the dots. They can make connections that no human being can do.
And I think we're going to see some amazing scientific advancements, creation of vaccines, of better therapies. You know, there are some who predict, and I actually think there's a lot to it, that the mortality rate for most cancers are going to go way down because of AI.
So I really do think AI could do some amazing things. It's just, I just don't know how bad the bad's going to be.
You know, if I had one wish, I wish we were dealing with the concerns that are within the line of sight, the stuff that we can imagine, like, wait, it could be used for scams. It could be used in warfare.
Instead of, like, this idea of the robots are going to take over and subjugate humanity, I guess that's possible, but not in the short term, not in the median term, you know, just kind of in the long term. And if we're deliberate about it, I think there's no doubt that AI could be a positive.
You know, again, I just compare it to the internet. You know, is the internet a great thing? Like, no, I could tell you a lot of negatives with the internet.
But, you know, I think the internet has, you know, changed society in a lot of ways that, you know, we like, you know, the smartphone, the same kind of thing. So it's going to be a mixed bag.
And I guess I'm keeping my fingers crossed that, you know, despite the next four years, there's not going to be much regulation, not much checks and balances that AI is going to be a net positive. Speaking of guardrails, what rules, if any, do you have for your kids and their use of chatbots? You know, right after ChatGPT came out in middle school where my younger son goes, they kind of had this idea of banning it.
And it's like, wait, wait, wait. Like, they need to learn how to use this.
I'll go back to what I was saying before that, you know, we have to learn how to use this. What is this good for and what are ways we can't rely on it right now? So, you know, if one of my sons writes a composition, you know, like throw it into chat GPT and get some feedback, you know, on it.
Like I may or may not have caught my older son, you know, using it to write an English paper, you know, within three sentences. It's just told about a million people what you may or may not have done.
You know, within three sentences, it was obvious like, okay, this is too perfect. This sounds like, you know, cliff notes for those of us who are old enough to know what cliff notes are, but it's like, go rewrite.
So, you know, don't use it to write, but use it as a research assistant, you know, use it for feedback. And in fact, I see with one of my sons, you know, a teacher like, yeah, if you're writing something for science, use it and get some feedback on, you know, saying more clearly what it is.
But, you know, it's a very personal choice, but I'm convinced that my kids, their life is going to be as dramatically different as mine was growing up before the internet and before mobile phones became pervasive. I really do think AI, like the internet, like the phone within, you know, I'll say 10 or 15 years.
I could be wrong on that. But at some point in the future is going to be at the center of their lives.
And I think this next generation should get used to it because it's going to be critical to, you know, what they do, how they relate to the world, how they get employment. The company Inflection that you write about, they had this chatbot, Pi.
You had an interesting exchange with that chatbot about a medical issue your son had. Do you want to share that with us? Yeah.
So we were facing this health crisis just as Pi was coming out. And usually what a reporter does when the chatbot comes out is they try to mess with it.
They try to get it to misbehave. They try to get it to jump the fence.
But let me try dealing with this in a more authentic way. And, you know, I was really impressed.
You know, it had just the right tone, said all the right things, if not a little too perfectly. You know, it asked the right questions to get a dialogue going, you know, kind of in the fashion of a friend.
Like, how is your son taking the news? How's the school handling it? How are you taking care of yourself through these stressful times? You know, it was a slew of questions, probably too many questions. But, you know, it really picked up on nuance.
It got little jokes. I told a funny moment from the sit down with the neurosurgeon, you know, and it just responded like, you know, teenagers, am I right? You know, it gave me a lot of things to think about.
But what was so interesting to me is that it also didn't mean anything to me. You know, there's this quote I love from an MIT sociologist, Sherry Turkle.
You know, the performance of empathy is not empathy. You know, about expressing empathy, it's not really empathy.
It's just algorithms parsing human language patterns, trying to like, oh, here's the right thing to ask and stuff. But you know, it really was an interesting experience.
And I can understand like, you know, if people were lonely, if people didn't have, you know, a network of people to speak with, this could be really something. I think something people have to get used to is dropping this idea like, oh my God, you're going to have a friendship with Ab bot.
You know, you're going to treat it like a therapist. Yes, of course, you should go to a licensed therapist to deal with your issues.
But like, you know, what if you don't have a few dollars or whatever it costs for a therapist every week? And, you know, it's like they really do help you think through, at least this bot pie really helps you think through what are the questions you should be asking yourself. It was a really interesting experience for me to really just try to feel like just your average user, what they would feel like discussing something difficult, brain surgery in this case.
By the way, I should say it was a very happy ending. Everything turned out fantastic.
It's easy to talk about because of that. Good.
I'm glad you mentioned that. But, you know, the bot gave me some interesting things to think about.
Well, Gary Rivlin, thanks so much for speaking with us again. Oh, my pleasure.
Thank you so much. Gary Rivlin is a veteran investigative reporter.
His new book is AI Valley, Microsoft, Google, and the trillion-dollar race to cash in on artificial intelligence.
This is Fresh Air.
Karen Russell's first novel, Swamplandia, came out in 2011 and was a finalist for the Pulitzer Prize.
Our book critic Maureen Corrigan says she expects Russell's new novel, The Antidote,
will be on a lot of prize lists this year. Here's her review.
No one summons up the old weird America in fiction like Karen Russell does. Her tall tales of alligator wrestlers in Florida, homesteaders on the gothic Great Plains, and female prospectors digging for gold mash up history with the macabre in a cracker barrel aged with dry humor.
Russell's celebrated debut novel, Swamplandia, came out in 2011. Since then, she's published a couple of excellent short story collections, but the wait for another novel was growing a little strained.
I even heard speculation that maybe all the acclaim Russell received for her first novel had blocked her. Well, The Antidote has just come out, and now we know why it took so long.
American epics take a while. The Antidote is set in a Dust Bowl-era Nebraska town called Uzz, but it also reaches back to the earlier pioneer era Russell evoked in her short story masterpiece, Proving Up, which was made into an opera.
The novel is framed by two true weather catastrophes, the Black Sunday dust storm on April 14, 1935, in which people were suffocated by a moving black wall of dust, and a month later, the Republican River flood, when 24 inches of rain fell within one day. Much of what occurs between those two disasters is also true emotionally, but in Russell's worldview, the fantastic and the familiar coexist on the same plane.
Our central character here is a prairie witch who goes by the name The Antidote. Part huckster, mostly healer, she, like other prairie witches, promises to treat what ails her customers by taking away whatever they can't stand to know.
The memories that make them chase impossible dreams, that make them sick with regret and grief, whatever cargo unbalances the cart. I can hold on to anything for anyone.
Milk, honey, rainwater, venom, blood, pour it all into me. I am the empty bottle.
Lying in a trance, the antidote absorbs the heaviness, but not the details of her customers' stories, which they sometimes want back. After the Black Sunday dust settles,
however, the antidote is horrified to realize she feels lighter, vacant. Some awful force has robbed
her of the stories she's safeguarded. Who knows how her more violent customers will react when they discover they can't make withdrawals.
Other narrators step in to amplify Russell's peculiar vision of life in Uzz. There's Del Oletsky, a teenage girl whose single mother was allegedly murdered by the Lucky Rabbit's Foot killer, so-called because he leaves a bloody rabbit's foot near his victim's bodies.
Del lives with her uncle Harp, whose farm is mysteriously untouched by the all-enveloping dust. A federal agency photographer, a black woman named Cleo Alfre, eventually turns up in Oz.
Cleo explains her work by saying she's making advertisements for Roosevelt's New Deal programs. She's also painfully aware of whose faces carried the most weight with Congress.
Actual Depression-era photographs are scattered throughout this novel, but the camera Cleo depends on goes twilight zone haywire, photographing the past and possible futures of the town and surrounding terrain. Like Cleo's camera, Russell's instrument, her language is uncanny.
Swaths of the spellbinding final third of this novel move deeply into the past, specifically into the buried memory of how Harp Olecki's parents in Poland grabbed at the offer of free land in Nebraska, land they come to realize that was occupied before their arrival. Here's Harp's father guiltily recalling how he made peace, not only with that land grab, but with racial hierarchy in America.
I was born a serf in all but name.
My skin is the color of an unwashed onion. In America, this placed me ahead of many,
on a low rung of the ladder, but higher than the black porter.
I heard the ticking pulse of a sick relief. Not me, not me, not me.
The same feeling I once had whenever one of my brothers was chosen over me for a beating. In The Antidote, Karen Russell, America's own prairie witch of a writer, exhumes memories out of the collective national unconscious and invites us to see our history in full.
There are, alas, no antidotes for history. Our consolations are found in writers like Russell, who refract horror and wonder through their own strange looking glass, leaving us energized for that next astounding thing.
Maureen Corrigan is a professor of literature at Georgetown University. She reviewed The Antidote by Karen Russell.
On tomorrow's show, New Yorker staff writer Andrew Marantz joins us to discuss how podcasts, live streams, and YouTube channels have become the platforms where men who feel disillusioned and alienated go to feel seen and heard, many of them gravitating toward the MAGA movement.
I hope you can join us.
To keep up with what's on the show and get highlights of our interviews, follow us on Instagram at NPR Fresh Air. Fresh Air's executive producer is Danny Miller.
Our technical director and engineer is Audrey Bentham with additional engineering support from Al Banks. Our managing producer is Sam Brigger.
Our interviews and reviews are produced and edited by Phyllis Myers,
Anne-Marie Baldonado, Lauren Krenzel, Teresa Madden,
Monique Nazareth, Thea Challoner, Susan Yacundi, and Anna Bauman.
Our digital media producer is Molly Seavey-Nesper.
Roberta Shorrock directs the show.
For Terry Gross and Tanya Mosley, I'm Dave Davies. This is the Tom Green Show.
It wasn't just going around with a microphone and talking to people and asking silly questions. I did have meat taped to my head.
I'm Jesse Thorne. On Bullseye, Tom Green, the king of Y2K prank comedy, reflects on what we will call his program's unique voice.
That was pretty strange, now that you mention it. From MaximumFun.org and NPR.
On the Embedded Podcast. No, no.
It's called denying a speed of speech. It's misinformation.
Like so many Americans, my dad has gotten swept up in conspiracy theories. These are not conspiracy theories.
These are reality.
I spent the year following him down the rabbit hole, trying to get him back.
Listen to Alternate Realities on the Embedded Podcast from NPR.
All episodes available now.