A Chatbot Reacts To A Book About Tech
Also, TV critic and historian David Bianculli reacts to the cancellation of The Late Show with Stephen Colbert.
Transcript
Support for this podcast and the following message come from Sierra Nevada Brewing Company, where pure ingredients and sustainable brewing meet a legacy of craft.
Share one with a friend today and taste for yourself.
Sierra Nevada, taste what matters.
Please drink responsibly.
This is Fresh Air.
I'm Terry Gross.
Here's the kind of conflicted relationship my guest has with big tech.
Tech journalist and award-winning novelist Vauhini Vara has ethical reasons why she shouldn't shop on Amazon, and at least as many reasons why she does.
Then there's Google.
She's opposed to how Google monetizes our personal information to sell ads geared to our interests, but she appreciates the archive of her own stored searches, many of which she lists in her book because of what they reveal about different periods of her life.
As a tech reporter, she got access to a predecessor of ChatGPT.
She loves playing with AI and has found ways it can be helpful.
But she's skeptical of its use as an aid for writers.
She's written twice about testing a chatbot in that capacity.
First in an essay that went viral called Ghosts, in which she asked AI to help her write about her sister's death.
And now again in Vara's new book, Searches: Selfhood in the Digital Age.
After she wrote chapters of the book, she fed the chapters to ChatGPT and asked for help with her writing.
Then she analyzed the advice and what it says about the abilities, shortcomings, and biases of the chatbot.
She added her interactions with ChatGPT to her book.
The theme of the book is how tech is helping and exploiting us.
Vara started as a reporter at the Stanford University campus paper, where she edited its first article about Facebook when Stanford became the third university to get access to it.
She covered tech for the Wall Street Journal, was a tech writer and editor for the business section of the New Yorker's website, and now contributes to Bloomberg Businessweek.
Her novel, The Immortal King Rao, was nominated for a Pulitzer Prize.
Her short story, I, Buffalo, won an O. Henry Award.
Vauhini Vara, welcome to Fresh Air, and thank you for getting here.
Your windshield got shattered by who knows what on the way over to the studio.
I'm so grateful to you for making it.
It was worth it.
I was like, I'm getting to that studio.
And you did.
Thank you.
And welcome.
And I enjoyed your book.
So you did this exercise of feeding chapters of your book to ChatGPT, asking for advice.
What did you tell the chatbot?
Why did you tell it you wanted its help?
I'm glad you asked the question that way because I'm really interested in the way in which we sort of perform different versions of ourselves when we communicate, whether it's with other human beings or with technologies.
And so I was definitely playing a role with the chatbot.
I told the chatbot that I needed help with my writing, and I was going to feed it a couple of chapters of what I was working on, and I wanted to hear its thoughts.
The reality was that I wanted to see how ChatGPT would respond.
And so the interplay between sort of my performance and its performance was super interesting to me.
I have an ethical question for you, Vauhini.
Is it ethical to mislead a chatbot and ask questions under a kind of false pretense?
A hundred percent.
I say that as a journalist with the full expertise and authority of my role as a journalist.
You know, I think so.
I think our relationship with these products is really different from our relationships with other human beings.
I feel really strongly about, obviously, things like accuracy and ethical standards in my daily life when I talk to other human beings, whether it's as a reporter or not.
What I think is really interesting about technologies, whether it's ChatGPT or something else, is the way in which we can sort of play with these ideas of
how people are supposed to communicate in ways that are, I think, pretty interesting and freeing.
After you got some feedback on the first couple of chapters, you asked the chatbot if it was okay to share a couple more chapters.
And ChatGPT answered, absolutely.
Feel free to share more chapters.
I'm looking forward to reading them and continuing our discussion.
And that gets to a very fundamental question that you asked ChatGPT about, which is when a chatbot uses the first person I,
what exactly does it mean?
Because it is not a person, it is not an I.
It is artificial intelligence, it's a computer program, it's, you know, it's basically a machine.
So, what is the I?
What does it mean that a chatbot uses I?
Yeah, I mean,
I would argue that that I is a fictional creation of the company OpenAI that created ChatGPT.
So we think about these technologies, I think, sometimes as
being very separate from human experience and human desires and goals.
But in fact, there's this company called OpenAI whose investors want it to be very financially successful.
And the way to be financially successful is to get a lot of people using a product.
The way to get a lot of people using a product is to make people feel comfortable with the product, to trust the product.
And one device that a company might use in order to do that is to have the product use language that makes you feel a bit like you're talking to another person.
So, in reading ChatGPT's responses to your chapters, one of the biases you noticed was that it suggested you add more about the positive side of AI and its creators.
I thought that was interesting.
Did that say to you that it was revealing the chatbot's bias or pointing out your negative bias or both?
It's such an interesting question.
And it gets to the heart, I think, of what is
problematic about these technologies because I can't claim to have any way of knowing why it said what it did.
So basically, I fed it these chapters about big technology companies, and I said, what feedback do you have for me?
And it said, you could be more positive.
And then later it goes on to provide these sample paragraphs it thinks I should include in the book about how Sam Altman, the CEO of OpenAI, is a visionary leader who's also a pragmatist, like this really glowing stuff.
It would be fun to be able, and it would support a strong critique to be able to say, oh, clearly OpenAI has built this product in such a way that it's deliberately having the product spout this propaganda about its CEO that's positive.
It's certainly possible that that's the case, but there are all kinds of other explanations for it too.
It's possible that the language that the technology behind the chatbot absorbed in order to learn quote unquote how to produce language happens to be somewhat biased toward people like Sam Altman.
There are all kinds of possible reasons and we just don't know.
But I think that not knowing is a problem.
Did you use any of the chatbot's advice about balancing your reactions to AI and including more positive aspects of it?
I didn't.
So one thing that I wanted to be really thoughtful about was actually
not writing a book that was influenced in any way by the rhetoric of the chatbot that I was then conversing with about the book.
And so I wrote the entire book.
And after doing that, I fed it to the chatbot.
And certainly later, there were these edits that I made in the editing process with my editors at my publishing company.
But those were not in response to integrating anything that the chatbot said, because on a philosophical level, I did not want to integrate anything the chatbot suggested.
So you asked the chatbot about how it seemed programmed to flatter and to sound empathetic and kind.
Because before it gives you any kind of criticism ever, it tells you, like, this is very well written, and this brings out a combination of tech history with your personal history. And, oh, I have to say, let me just interject here that, just to see what happened, I asked ChatGPT what questions Terry Gross would ask Vauhini Vara in a Fresh Air interview, and it was very flattering to me. It praised me as a good interviewer with, like, sensitivity and deep questions.
So, you know, just another example that it can be very flattering.
So you asked it about how it seemed programmed to flatter and to sound empathetic and kind.
And it responded, the way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.
And I thought, that's so true.
It can be helpful and misleading.
And I thought, maybe the chatbot is actually very transparent.
So OpenAI has said that ChatGPT and its other products are designed to foster trust, to appear helpful.
So it says those things explicitly.
And right, by saying, this is how I function, yeah, I guess ChatGPT is being as transparent as it can be.
I don't know that I would trust it to always be as transparent as that or even to know how to be transparent, right?
Because it's just a machine that's generating words.
It has no way of definitively always generating material that's even accurate, right?
And so I have fairly low confidence that that transparency is always going to be there.
I am curious, though, about what you thought of the questions that it produced, whether any of them were like at all interesting to you.
They were kind of pretty broad, general questions.
They all touched on themes in your book.
Yeah.
They were all subjects I wanted to explore, but they were so like broad in nature that
there was no personality.
I'm not saying like, well, look at my personality, but there was no point of view and there was no follow-up.
It was just like a list of questions,
but
there was no follow-up to try to go deeper after the answer.
And I know it wouldn't have heard your answer, but still
when I prepare an interview, like one question leads to the next question to go deeper into that answer.
And there was nothing like that.
Yeah.
Well, what's interesting, I think, about that, and what I experienced, too, using ChatGPT in the context that I did, is that there's something fundamental about human communication, about like two people talking to each other, or the fact that right now, for example, you and I are talking to each other.
But I imagine you always also have an awareness of the fact that eventually millions of other people will hear the same conversation.
And so both you and I are keeping in mind like this kind of complex idea about who we're communicating to, what the communication is for, and like our own backgrounds and experiences and ways of communicating come into it.
And a chatbot is like not doing any of that.
It seems like it is because the words it produces sound kind of like the language humans use, but it's not using language in any way that's remotely like how we do.
So in 2021, when you wrote your essay, Ghosts, which went viral, it was published initially in The Believer.
It was adapted into a This American Life story.
And the premise of this was you had wanted to write about the death of your sister.
She was diagnosed with cancer, Ewing sarcoma, when you were a freshman in high school and she was a junior and she died when you were both in college.
And you felt like you just didn't have the words to describe how bereft you were, what a life-changing event this was in every way.
And so in 2021, while you were playing around with a predecessor of ChatGPT,
you asked it to help you write the essay.
And I want you to describe what your process was.
This was before ChatGPT was out.
This was before AI was a big part of our lives.
But I got early access to this AI model.
And the way it worked was that there was this web page and it had a big white box and you could type in that box and then press a button and it would seem to complete the thought for you.
It completed the text for you.
And so I'd been playing around with it for a while.
I would just like, I put in the beginning of Moby Dick, which is my favorite novel, and hit the button just to see what would happen.
I did all kinds of stuff like that.
And then I started to think about
what
the promise was that a technology like this was making.
And it seemed to me that the promise was that it could produce words for you when you were at a loss for words.
And because I'm a writer,
I tend to want to make the effort to come up with my own words to describe my experiences or things I've observed.
But there was this thing and continues to be this thing that I have a really hard time finding words for, which is the death of my sister and my grief over it.
I mean, I think anybody who has experienced death or any other kind of loss will be familiar with that feeling of, of not knowing how to come up with the words to describe this experience that was so profound.
And so I kind of took this technology somewhat at face value.
I thought to myself, well, if this is the promise this technology is making, let me try to get it to communicate for me about my sister's death.
And so I sat down
and I wrote this sentence, which was, when I was in my freshman year of high school and my sister was in her junior year, she was diagnosed with Ewing sarcoma.
And then I hit the button and it produced this story that really has had very little to do with my actual experience and my actual sister.
And the last line of that little, you know, three or four paragraph story it produced was, she's doing great now.
And which was like
devastating, it was like the opposite of what I wanted this chatbot to do, right?
Like it was producing a lie, a falsehood that was sort of like the worst possible falsehood that it could produce, right?
Because my sister,
my challenge was trying to communicate the reality of what had happened.
And this was like the opposite of that reality.
And so I thought, okay, well, I know enough about these technologies to know that if I give it more words and hit the button, then it'll have more to go off of and it might match more closely my experience.
And so I erased everything it wrote, kept my first sentence.
I wrote a little more, hit the button again.
And in some ways it did get a little closer.
And I did that over and over, sort of deleting what the chatbot wrote every time.
And the strange thing that happened was that as I did that, the technology
did get closer to describing something that resembled grief or my experience of it in a way that was like weirdly moving to me and impressive to me.
But ultimately, this machine was not me.
And so it couldn't say anything authentic about my actual experience.
And so I realized eventually toward the end of writing this thing that there was nothing that the technology could come up with that was actually going to fulfill my desire to be able to communicate because it wasn't it wasn't me.
It wasn't doing the thing that I wanted to do, which was to communicate myself on my own.
And so ultimately, I published that experiment as an essay, and I thought it was so interesting how it showed both the ways in which a product like this can legitimately produce language that somebody can find moving and intellectually stimulating and interesting
and yet be doing something very different from what a human is doing when we're trying to communicate.
Well, a couple of things.
Early on when you were giving it very little information, it twice had you as like very athletic.
I don't know if you are or not, but like in one, you're like a lacrosse player.
And in the other, it's like you run for like miles and miles and miles.
And it also seemed to have a bias toward a happy ending.
You know, she's fine now.
It's like you watched too many mediocre movies.
Yeah, I mean, notice how I personalize it.
Like he walked
and I genderized it.
Totally.
And it's interesting that you genderize it too, because in that second one, the one where it thought that I was a runner, it seemed to think that I, the writer, was male.
So I think
there are all kinds of things going on.
But then later it realizes that I'm female and then ends up generating this meet-cute between me and this like handsome professor who helps me deal with my grief.
And so there are all these tropes and biases that are embedded in what it's producing.
So I want to give an example of some writing that is like very dramatic, but also like very puzzling,
very odd.
So this is how hard it is to describe your sister and what you felt for her.
This is the technology's.
Right.
This is AI speaking here.
This is AI writing on your behalf.
Exactly.
So I can't describe her to you, but I can describe what it felt like to have her die.
It felt like my life was an accident or worse, a mistake.
I'd made a mistake in being born and now to correct it, I would have to die.
I'd have to die and someone else, a stranger, would have to live in my place.
I was that stranger.
I still am.
What?
It sounds very dark and interesting, but I'm not sure it makes any sense.
What do you think?
It's funny because
I think so much of the experience of reading
is about making your own meaning as a reader.
And so for me, I think there's something that like in my reading of it is kind of poignant.
I read it as saying when somebody who's very close to you, whose existence is a big part of your identity dies.
you have to then rebuild a new version of yourself, right?
Like a kind of new identity.
So I read this as talking about the period after my sister died and
I had to become a new version of myself and I was learning who that new version of myself was.
And so that person was kind of like a stranger.
And in a way, there is a sense of estrangement that continues.
What's interesting is, like, I read it that way, but I read it that way because I'm a reader making meaning from language that a technology generated with no particular intent, no knowledge of what my experience of grief was.
I'm thinking of how weird it is that you and I are doing literary criticism of AI.
It must sound a little strange, don't you think?
Yeah, I agree.
And I think the funny thing about it is that we're two human beings trying to make meaning out of something that is fundamentally, one could argue, meaningless in that the entity that created the language wasn't doing it with any consciousness, right?
Any intent behind it.
Well, let's take another short break here.
If you're just joining us, my guest is Vauhini Vara.
Her new book is called Searches: Selfhood in the Digital Age.
There's a lot more to talk about, so stick around.
We'll be right back.
I'm Terry Gross, and this is Fresh Air.
This message comes from Sony Pictures Classics with East of Wall, written and directed by Kate Beecroft, an authentic portrait of female cowgirls and their resilience in the New West.
Set in South Dakota's Badlands, it follows a rebellious young rancher who rescues horses and shelters wayward teens while navigating grief, family tensions, and the looming loss of her land, now playing only in theaters.
Level up and invest smarter with Schwab.
Get market insights, education, and human help when you need it.
This message comes from FX's Alien Earth.
From creator Noah Hawley and executive producer Ridley Scott comes the first television series inspired by the legendary alien film franchise.
A spaceship crash lands on Earth, bringing five unique and deadly species, more terrifying than anyone could have ever imagined.
And a technological advancement marks a new dawn in the race for immortality.
FX's Alien Earth.
All new Tuesdays on FX and Hulu.
This message comes from Wise, the app for using money around the globe.
When you manage your money with Wise, you'll always get the mid-market exchange rate with no hidden fees.
Join millions of customers and visit wise.com.
T's and C's apply.
So based on your interactions with AI, what are your thoughts about chatbots and the use of AI for writing or editing?
It wasn't that useful for you.
It was very instructive about AI.
But do you think there are other people that it would be very useful for?
There was a study out of Cornell a couple of years ago that I found really interesting where they had
some people write an essay about social media just on their own, and then they gave these two other groups special
AI models.
For one group, they gave them an AI model that was predisposed to produce positive opinions about social media.
And then they gave this other group an AI model that was predisposed to produce negative opinions about social media.
What they found was that when people wrote essays with the help, quote unquote, of these AI models, they were twice as likely to produce essays that reflected the quote-unquote opinion of the AI model.
It seems from that research and other research that's emerged since then that even if we are
using these AI companies' products to edit our work or ask for feedback on our work, there's a real danger that the responses that we're going to get are going to change our writing in fundamental ways that we might not even be aware of.
Your father uses AI tools.
So, how does he use it and what do you think of that?
Oh my gosh, we could do a whole interview about my dad's use of AI.
My dad has recently started sending me messages on WhatsApp where the whole text of the message is something he asked ChatGPT to write.
So, for example, he recently sent me one that was,
it's hard to decide whether to retire in Canada, the United States, or India.
Here are some pros and cons for each option.
So he never said to me, I'm wondering whether I should move to India or Canada for my retirement.
He just sent me that response.
And so the subtext, like what he's communicating through ChatGPT is the thing that's actually unsaid.
And so there are a lot of people out there.
I think my dad is one of many people
who want to communicate something.
My dad was explaining to me on the phone the other day that he's not a writer.
He can't communicate these things himself.
But if he gives ChatGPT enough of a prompt, it can communicate the thing he wants it to communicate.
And there are things that I find problematic about that for sure.
You know, AI in some ways is being used like a personal Cyrano de Bergerac.
Like you want to express your love for someone, you don't have the words, so you have this other guy write it as if you're saying it and signing your name to it.
Yeah.
So how do you use AI for real in your life?
So the truth is that I use AI in very limited ways.
The fact that I fed large portions of this book to ChatGPT might give people the impression that I'm some huge AI super user, which I'm not.
I'm a journalist who writes about AI.
So to the extent that that's part of my work, I think it's really important for me to engage with the products.
At the same time, I'm really concerned about all the things we don't know about how these products function and how the companies behind these products might ultimately use everything we're putting into their products to exploit us,
to expand their own wealth and power.
I sometimes use ChatGPT.
A use that comes to mind is like if there's a word on the tip of my tongue, I'll go to ChatGPT and write a sentence with a blank in it and kind of explain the gist of what I'm looking for.
And one thing it's pretty good at is coming up with what that word was that was on the tip of my tongue.
So that's a small example of how I use it.
When I do use it, I tend not to log into it.
I tend to just go to ChatGPT,
use the interface without logging in so that my use of it is not associated with my account.
I do still have an account because, again, as a journalist, I want to be able to have access to these products.
So I'm unclear, since I've only used it twice, each time to ask questions
to help me understand how it worked for the interview I was about to do, as I did in your case, because, you know, AI is at the center of the interview, so I wanted to ask it some meta questions.
But, you know, I used it for free.
I don't have an account.
I just put my question in and it came up with stuff.
What are some of the ways you expect it to be monetized in the future that it's not monetized for yet?
Because I feel like being able to use it at all for free is kind of like a teaser until
no one can use it for free.
I'm just speculating.
I have no knowledge.
Yeah, so I'm speculating to an extent too, but these products are really, really expensive to build.
And so investors are putting a lot of money, companies themselves are putting a lot of money into building these products.
And some small number, some small percentage of users are paying for premium versions of the product.
But that's just not enough to turn these companies into the enormous businesses that the investors are betting that they are going to be.
And so that leaves us in this really interesting situation in 2025 where the companies are starting to say, okay, we're going to need to figure out how to, as they put it, monetize our free users.
And the CFO of OpenAI said to the Financial Times last year that the company is looking into advertising as an option.
Other AI companies, and here I'm talking about big companies like Google and Microsoft, also seem to be thinking about this.
So this is speculation, but here's one way in which it would be obvious for AI companies to monetize our use of the products.
When people trust these products a lot, they end up going to these products with all kinds of personal information, their marital struggles, their conflict with their boss at work.
And while we focus a lot on the question of like how accurate or unbiased or useful the information is that these products are giving us, I think something we kind of forget about is everything we're providing to the makers of these products in asking them these questions about really intimate details of our lives.
And so eventually, these companies are going to know a lot about who we are, about what kind of language can be used with each specific user to persuade them of something, to influence them in a particular way.
And that puts these companies in a position to, for example, recommend products to us using language that's geared toward us specifically and our circumstances and our vulnerabilities, and ultimately collect this huge database of all of us who are using these products, who we are, and what makes us tick.
Yeah, and it sounds like, you know, as you're saying, that AI and the companies that own the AI products are going to know a lot more than, say,
knowledge based on what I search for on Google or the books I bought on Amazon or the TV shows I'm watching on Netflix or the algorithms are going to recommend what I want to buy or watch next.
Exactly.
Yeah.
There's parts of your book where you describe your life through your searches because you don't like the fact that Google has a lot of information on you based on your searches, but you do like the fact that your searches have been archived.
You can access that archive and learn about where you were at different periods of your life based on what you searched for.
How did you start thinking of searches as
a record of your life?
So the first thing that I ever wrote that ended up in this book was this chapter made up entirely of my Google searches.
I wrote it in 2019.
I had been covering tech for a long time by that point.
And so I knew that Google kept records of our searches unless we turned off its ability to do that.
And I could have sworn I'd turn it off, but I hadn't.
And so for the past, you know, 15 years, off and on, but mostly on, Google had been collecting all my searches.
Realizing that, it freaked me out on one level.
But then also, I found it fascinating because as a writer, as a journalist, I'm always interested in archives.
Right.
And I used to keep a diary when I was a kid, but I haven't in a long time.
And it occurred to me that probably the best possible archive of my life was the archive of everything I'd searched for over the years.
And it made me think about the way in which, like, it's sort of too simplistic to say
these companies exploit us and we have no say in the matter, or to say what the companies say in turn, which is, you're only using these products because they're useful to you.
You could stop using them tomorrow if you really wanted to.
I think there are these like very binary positions.
And I think the reality is that the exploitation and the usefulness totally go hand in hand with all of these products.
And I think what makes that really uncomfortable for us as users is that then we have to contend with our own complicity, like our own role in the exploitation that's taking place when these companies
collect our personal information and use it to become more wealthy, to become more powerful, to influence political systems.
We have to admit, like, well, that's partly our fault because here we are using these products and giving them permission to keep archives of our lives.
Well, let's take another break here.
Let me reintroduce you.
If you're just joining us, my guest is Vauhini Vara.
Her new book is called Searches: Selfhood in the Digital Age.
We'll be right back.
This is Fresh Air.
This message comes from Jerry.
Many people are overpaying on car insurance.
Why?
Switching providers can be a pain.
Jerry helps make the process painless.
Jerry is the only app that compares rates from over 50 insurers in minutes and helps you switch fast with no spam calls or hidden fees.
Drivers who save with Jerry could save over $1,300 a year.
Before you renew your car insurance policy, download the Jerry app or head to jerry.ai/npr.
Support for NPR and the following message come from IXL Learning.
IXL Learning uses advanced algorithms to give the right help to each kid no matter the age or personality.
Get an exclusive 20% off IXL membership when you sign up today at ixl.com/npr.
This message comes from REI Co-op: a summit view, climbing El Cap, a faster mile, or that first 5K.
It all starts here with gear, clothing, classes, and advice to get you there.
So you can wave to your limits as you pass them by.
Visit REI.com or your local REI co-op.
Opt outside.
Are the internet and social media making you feel obsolete as a novelist?
And also worried that all your writing, your essays, your journalism will be appropriated by AI?
Yeah, I mean,
what I would like to think is that we have choices here.
And part of the reason that in this book and when I think about my own personal use of these products, I'm so interested in like
our
choice and agency in the matter, is that if it's the case that
big technology companies are just going to continue to amass more wealth and power and AI is here, and so AI is going to be even bigger in the future and take everything over,
like that suggests that we don't have a choice in the matter, right?
However, if we say we have a choice in the matter and we can actually decide to choose a different future because we are unhappy with the one we're currently in, then we can potentially build a future that's different from the one that we're in now.
But I think like in 2025, we're in this really interesting, crucial period where
not as many people are using AI as I think we think.
So in the U.S., for example, most people have never tried ChatGPT still in 2025.
And so we're in this interesting position where we can actually decide as individuals, as communities, as societies,
the extent to which we want AI to be a part of the future, the extent to which we want AI
generating novels or generating something that is going to substitute a newspaper or magazine article or a radio show.
I have to ask you about the spelling bee.
A moment of semi-fame in your younger years was when you were third in the National Spelling Bee.
I always wonder, what is the point of asking young people to spell obscure words that no one uses and no one even can define?
Can you explain that?
Because it makes no sense to me.
Yes, I have so many thoughts on this, Terry.
I continue to love spelling.
I love language.
I think,
you know, to get philosophical about it, I think we could ask that question about anything, honestly.
I think we could say, what's the point of trying to run a three-minute mile, right?
Or like,
I've taken up rock climbing.
What's the point of like trying to climb to the top of a rock wall when elevators exist, right?
And I think it speaks to this thing about AI too.
What's the point of trying to write an essay if AI can write it for you?
And I think the point is that we as humans are like idiosyncratic, curious creatures.
And we created this thing called language that's really important to us.
And for me personally, who knows why?
I don't know why.
I love words.
I love language.
I love knowing how sounds fit together and produce meaning.
Like that feels, that's always been fascinating and meaningful to me.
And so, like, I love it because I love it.
And what I kind of love about spelling is that, you're right, there is no point, especially in the age of spell check and AI.
Like, there's probably even less of a point than there was in the mid-90s, but strong supporter of spelling.
I think everybody should be in spelling bees.
Yes, but the question, though, was about asking young people to spell words that nobody ever uses, that no one can even define.
Really obscure words.
Do you take pleasure in spelling words that you didn't even know existed?
Yes, I love words that I didn't even know existed.
I take pleasure in spelling them.
I take pleasure in like knowing that they exist when previously I didn't know that they exist.
I know, I love it.
And I think they do too.
I think there's this misconception about spelling bee kids that, like, they're all doing it because their parents are making them so that they can get into college 10 years later or something.
I wrote an article about spelling bee kids in 2018, I think it was, for the magazine Harper's.
And so I got to spend time with like a more recent generation of spelling kids.
And the thing that I think they had in common with the spelling kids I knew in the mid-90s is just like this genuine, idiosyncratic, strange love for words and how they're put together.
So you place third in the national spelling bee.
What was your losing word?
Oh, Terry, the losing word was para plus.
Para plus?
Can you spell it?
Yeah, but again, I've never heard the word before.
I have no idea what it means.
I would spell it para-plus or para-plous?
Because one sounds like an animal and the other sounds like an amount.
Okay, so the word is para-plus.
There is an alternate pronunciation, which is para-plous.
Okay.
So I would spell it P-A-R-A-P-L-U-S
or
P-L-U-S-S.
Or if it's para-plous, P-L-O-U-S-E.
Oh, I know, but this isn't a spelling bee, so I'm allowed to do this.
You're out.
You're out.
I'm out.
After three or four tries, I'm out.
Okay.
You're still out.
Yeah.
So it's P-E-R-I.
Oh, you said Paraplus.
I know.
Paraplus.
That's fair.
It might be my pronunciation.
But it is P-E-R-I-P-L-U-S.
And I spelled it P-E-R-I-P-L-U-S-S-E.
Ah, okay.
You got a little fancy there.
You over fancified it.
Yeah, exactly.
Exactly.
And what does it mean?
I don't remember the exact definition, but it has to do with the log that is kept on ships when they circumnavigate, when they're trying to see where the borders of islands or continents are.
Well, I think it's only fitting that we started the conversation doing a literary critique of AI's writing, and we're ending it with spelling.
It's a whole sample.
Perfect.
Yes.
Thank you.
It was really great to talk with you.
I really enjoyed it.
And thank you again for coming.
In spite of the fact that your windshield shattered on the way over, I should let you go and get it repaired before it rains or whatever.
I appreciate it.
This was so fun.
It was a real honor to get to talk to you, Terry.
Wahini Vara's new book is called Searches: Selfhood in the Digital Age.
Our TV critic David Bianculli will talk about the significance of CBS's cancellation of The Late Show with Stephen Colbert after a break.
This is fresh air.
Support for this podcast and the following message come from Sutter Health.
From life-changing transplants to high blood pressure care, Sutter's team of doctors, surgeons, and nurses never miss a beat.
And with cardiac specialty centers located in the community, patients can find personalized heart care that's close to home.
Learn more at Sutterhealth.org.
Support for NPR and the following message come from USPS.
With USPS Ground Advantage Service, it's like your shipment has a direct line to you.
It leaves the dock, you know about it.
It's on the road, you know.
And when it reaches your customer, you guessed it, you're still in the know.
Here's the real game changer.
It's one journey, one partner, total peace of mind.
Check out USPS Ground Advantage Service at usps.com slash in the know, because if you know, you know.
This message comes from Bombas.
Nearly 30% of marathoners end their race blistered.
Bombas running socks are strategically cushioned to help say bye to blisters.
Run to bombas.com/npr and use code NPR for 20% off your first purchase.
Last week, CBS canceled The Late Show with Stephen Colbert, although it will remain on the air until next May.
Our TV critic David Bianculli says that even in an era when broadcast late-night talk shows are viewed less than ever before, this amounts to a significant moment in television history.
Most of the time, the landscape of late-night TV seems almost exactly like that.
A landscape, ever familiar, never changing.
But once in a great while, the tectonic plates shift suddenly and what we see becomes notably different.
Johnny Carson ruled late night on NBC for an amazing 30 years.
And when he stepped down, the network should have given The Tonight Show to David Letterman.
Instead, Letterman defected to CBS and launched that network's talk show franchise, The Late Show, which Stephen Colbert eventually inherited.
Conan O'Brien had the Tonight Show briefly, but walked away to protest NBC's plan to present the Tonight Show in a later time slot so that a show by Jay Leno could air first.
Jay Leno ended up with the Tonight Show and the time slot, then turned it over to Jimmy Fallon, while another Jimmy, Jimmy Kimmel, established his own mini-empire at ABC.
And for a long time now, that's been it: Colbert, Fallon, and Kimmel in late night, with Seth Meyers checking in on NBC even later.
But a late-night TV earthquake, the first big one since the 1990s, set off tremors last week that are bound to have repercussions for years.
Returning from a two-week vacation, Colbert opened his late show on Monday, July 14th by noting what Paramount, the parent company of CBS, had been up to.
While I was on vacation, my parent corporation, Paramount, paid Donald Trump a $16 million settlement over his 60 Minutes lawsuit.
As someone who has always been a proud employee of this network, I am offended.
And I don't know if anything will ever repair my trust in this company.
But just taking a stab at it, I'd say $16 million would help.
That was a soft enough jab.
Kind of like when Letterman would poke fun at his corporate bosses at General Electric once that company acquired NBC.
But then, getting deeper into the weeds and the controversy, Colbert said this.
Now, I believe that this kind of complicated financial settlement with a sitting government official has a technical name in legal circles.
It's Big Fat Bribe.
Because
this all comes as Paramount's owners are trying to get the Trump administration to approve the sale of our network to a new owner, Skydance.
Three days later, Colbert opened his program seated at his desk, informing his studio audience and the viewers at home of a stunning piece of news.
Before we start the show, I want to let you know something that I found out just last night.
Next year will be our last season.
The network will be ending The Late Show in May.
And
yeah, I share your feelings.
It's not just the end of our show, but it's the end of The Late Show on CBS.
I'm not being replaced.
This is all just going away.
There are several issues here, and some are less clear-cut than others.
Whether the decision to drop Colbert and his program indeed was a direct reaction to Colbert's jokes and observations about his bosses and Donald Trump is debatable.
CBS said the cancellation was a purely financial decision, and the show's annual losses have been reported as an estimated $40 million.
By comparison, back in the glory days of The Tonight Show, Johnny Carson's late-night show was responsible for almost a quarter of NBC's profits.
It also is arguable whether the timing, with Paramount needing federal approval for that proposed merger, is a major factor.
When you consider the recent cases of network news organizations bowing to lawsuits and other pressures at both ABC and CBS, it's not an unreasonable conjecture.
And in terms of presidential administrations putting pressure on comedians critical of their policies, there's plenty of precedent.
Most famously with CBS back in the 1960s, with the firing of the Smothers Brothers.
Stephen Colbert isn't being fired, of course.
And, like David Letterman and Conan O'Brien before him, Colbert is likely to be embraced and rewarded for whatever he does next.
But the big loss here, from my view as a TV historian, is that CBS also is throwing out The Late Show franchise, which Letterman built from scratch and which, under Colbert's auspices for the next 10 months, will continue to emanate proudly from the Ed Sullivan Theater on Broadway.
I'm certain Colbert's final months on CBS, especially his last week, will be very vibrant and quote-worthy.
Almost no one I know still watches a late-night show on broadcast TV from start to finish.
Instead, we all wait for the highlights to start circulating on the Internet or the morning shows the next day.
But the secondary reach of those monologues and other clips is significant.
They pull many more millions of viewers on average than the late-night shows themselves.
And CBS and Paramount, by planning to take The Late Show out of circulation, are silencing one of CBS's few remaining meaningful broadcast platforms.
By not appreciating, defending, and nurturing The Late Show,
or 60 Minutes for that matter, the parent company is making a paramount error.
It's muzzling its best voices and diluting its own future.
David Bianculli is a professor of television studies at Rowan University and the author of Dangerously Funny: The Uncensored Story of the Smothers Brothers Comedy Hour.
Tomorrow on Fresh Air, the Trump administration has been pressuring elite universities like Harvard and Columbia with widespread demands and threats of federal funding cuts.
So why are they now investigating George Mason University?
Education reporter Katherine Mangan tells us about her investigation and why GMU's president thinks it's driven by a backlash to DEI efforts.
I hope you'll join us.
To keep up with what's on the show and get highlights of our interviews, follow us on Instagram at NPR Fresh Air.
Fresh Air's executive producer is Danny Miller.
Our technical director and engineer is Audrey Bentham.
Our managing producer is Sam Brigger.
Our interviews and reviews are produced and edited by Phyllis Myers, Ann Marie Baldonado, Lauren Krenzel, Therese Madden, Monique Nazareth, Thea Chaloner, Susan Nyakundi, Anna Bauman, and John Sheehan.
Our digital media producer is Molly Seavy-Nesper.
Our consulting visual producer is Hope Wilson.
Roberta Shorrock directs the show.
Our co-host is Tanya Mosley.
I'm Terry Gross.
This message comes from NPR sponsor, Thrive Market.
It's back-to-school season, aka snack-packing, lunch-making, schedule-juggling season.
Thrive Market's back-to-school sale is a great way to stock up this month with 25% off family favorites.
Easily filter by allergy or lifestyle to find kid-approved snack packs, organic dinner staples, and more all delivered to your door.
Go to thrivemarket.com/podcast for 30% off your first order and a free $60 gift.
This message comes from NPR sponsor Capella University.
Sometimes it takes a different approach to pursue your goals.
Capella is an online university accredited by the Higher Learning Commission.
That means you can earn your degree from wherever you are and be confident your education is relevant, recognized, and respected.
A different future is closer than you think with Capella University.
Learn more about earning a relevant degree at capella.edu.
This message comes from Squarespace.
Squarespace allows you to inspire people to support your cause by fundraising directly on your website.
Go to squarespace.com/npr for 10% off your first purchase of a website or domain.