What should we do about teens using AI to do their homework?
More Than Words - John Warner
Center for Digital Thriving
This episode is brought to you in part by Vuori.
A new perspective on performance apparel.
Perfect if you're sick and tired of traditional old workout gear.
Vuori clothes are incredibly versatile and comfortable, perfect for whatever your day brings.
They're designed to look great beyond the gym, whether you're running errands, heading to the office, or meeting up with your friends.
One specific Vuori item I would recommend is the Core Short.
This is the short that started it all for Vuori.
It is one short for every sport, for whatever sports you play.
It's ideal for fitness, running, and training, but also genuinely stylish and comfortable enough to just wear all day.
Vuori is an investment in your happiness.
For search engine listeners, they are offering 20% off your first purchase.
Get yourself some of the most comfortable and versatile clothing on the planet at vuori.com slash pjsearch.
That's vuori.com slash pjsearch.
Exclusions apply.
Visit the website for full terms and conditions.
Not only will you receive 20% off your first purchase, but enjoy free shipping on any U.S.
orders over $75 and free returns.
Go to vuori.com slash pjsearch and discover the versatility of Vuori clothing.
Exclusions apply; visit the website for full terms and conditions.
Do you have a name that you want me to use?
Playboy Farty.
I'm not going to call you Playboy Farty.
Give me a different name.
Ken Fartson.
It's not going to be a rapper's name, but you changed part of their name to Fart.
Are you going to be Carl Fart?
Do you want to be Ken Fartson?
Do you want to be Ken Fartson?
I want to be Playboy Farty.
Okay, you can be Playboy Farty.
Playboy Farty is a teenager in the early part of his high school career.
Mr.
Farty is a very bright kid who, like many teenage boys, has recently discovered the joys of defiance and mischief.
I was hoping Playboy could tell me about this story I'd been tracking, the rise of AI, as it had been viewed from his teenage vantage point.
Do you remember the first time you heard about ChatGPT?
I heard about it over social media and I didn't really know what it was.
So then I kind of looked it up and it was kind of the first AI thing I'd ever used.
So I found it really interesting.
This then 13-year-old boy started talking to this then one-month-old AI agent.
And like most of us, at first, Playboy Farty was just trying stuff, asking questions like, how do I get to the moon?
How would I say this English phrase in another language?
Overall, I found it really cool and I was like, how can I use this?
And what, you just wrinkled your eyebrows.
When you thought, how can I use this, what were you thinking?
At the time, I was doing a coding class and I actually used it to create some code and run some code.
But after that, I started realizing, oh, I could probably use this for something more interesting.
And so then I started asking for ideas on different homework assignments.
So walk me through what this would have looked like.
Like you had a homework assignment and you were supposed to do what?
Well, it was a slideshow presentation on some country just for the beginning of class.
And it was just like facts, population, and stuff like that.
Playboy Farty asked ChatGPT to provide facts about this country he'd been assigned.
And then he rewrote those facts and stuck them in his presentation.
Honestly, not so different from how teens typically use reference materials.
And then the second time I used it, I think it was for not an essay, but a small like two-paragraph writing project on a book.
That time, it just gave me like a thesis and summarized the chapter, which I had already read, but it was a pretty in-depth summarization.
I may or may not have gotten caught, but apart from that, it actually wrote it pretty well, and I just changed it a bit.
Despite Playboy's breeziness, here in the adult world, the behavior he's describing, asking the chatbot to provide his paper's thesis and summarize the plot, and then slightly modifying that language into his own words, that is considered cheating, something that would have gotten him in serious trouble at school.
But Playboy Farty never got a chance to hand in the assignment.
Instead, the adults in the house noticed that the homework that night had been completed a little too quickly, and he'd been hoisted on the damning evidence of his own browser history.
But the conversation afterwards between the adults and the teenager was unusual.
because both sides just disagreed on the meaning of what had happened.
Was Playboy Farty actually guilty of an academic crime as the adults saw it?
Or was he just an early adopter of a new labor-saving technology, which is more how he saw it?
Playboy was threatened with consequences, and he says he has not used AI for homework since.
But there's a difference between following a rule because you understand it versus following a rule because you know that otherwise you'll get in trouble.
Did you feel guilty?
Um, maybe afterwards, but not really.
You didn't feel guilty?
Afterwards, yes.
But not while you were doing it?
No, because I was kind of like,
the process going on in my brain at the moment was like, I'm looking at this, but I'm writing it.
So technically speaking, it's still my writing.
Now I realize that's not really how it works.
I mean, I'll tell you my fear.
So when I was a kid in math class, we had to learn long division.
And I felt like it was hard and stupid.
And like I could just use a calculator, which is like the best technology we had for doing homework at the time.
And the adults in my life were like, no, it's really important that you know how to use long division.
I was like, I really think you're wrong.
And they made me learn long division.
And I was right.
I don't, I've never, as an adult, used long division.
And I can't.
Not even for taxes.
No, you don't use long division for taxes.
You're using a calculator.
Exactly.
That's my feeling about ChatGPT.
Why would I ever not use it as an adult?
Well, this is what confuses me.
I think writing is really important.
Like, I think when I write down my thoughts, it actually helps me think, and it helps me think more clearly.
It's not just a skill for communication.
For me, it's actually a tool for thinking.
And the slight worry I have, or the real worry I have, is I don't think people should have to do long division if calculators exist, but I worry that if your generation is not really learning how to write, they're just learning how to edit a machine, you might be losing at least one tool people use to think clearly.
Do you worry about that?
I feel like as an adult, I would use ChatGPT.
Right.
Like, why wouldn't I?
I'm an accountant.
I could just
get it to do all the values for me.
What job would I get in trouble for using ChatGPT for?
Being a podcaster,
writing.
I feel like if I wrote a book using ChatGPT, it would be a world-renowned book.
It'd be like, oh my God, the first book written by an AI, and then I still wouldn't get in trouble.
Well, I mean, it's kind of neither here nor there, but some adults are upset when adults use ChatGPT, not because they care about cheating on homework, but because they care about people losing their jobs.
Like, they're like, oh, if I wrote a book with AI, people would get mad at me because they'd say, well, the AI uses other people's work, and, like, you would have hired a research assistant, but instead, like, a computer's using it.
So that's the adult conversation around it.
So they'd get mad at me for writing the book by myself?
Yes.
Some adults would get mad at you for using AI instead of research assistants.
So we're getting rid of the work smarter, not harder mentality just so we can hire more people?
That is how some adults feel, yes.
Okay.
Anyway, we're not even going to get into that.
There's like more trouble there than you want.
It's funny.
In the adult world, one factor I have seen slowing down widespread AI adoption is actually a fear of getting in trouble.
A lot of people feel nervous about being seen using a tool that could automate their colleagues' jobs away or their own.
But for the next generation who don't yet have jobs to lose, these concerns seem much less present.
Playboy Farty said that by his estimation, nearly all the kids in his class were playing around with AI, and about a third of them were covertly using it for homework.
The more I talked to him, the more I came to believe that the adults in his life were just not going to be able to keep him off of AI forever.
The digital walls adults have been throwing up to stop teens like him from using these tools seem exactly as ineffective as the digital limits my parents tried to put on me 25 years ago.
Are there kids whose parents are trying to stop them from using it who are finding ways around it?
All I know is my parents, and yes, in every way possible.
There's always a way around it.
Yes, there's a way around every barrier.
There's a way around every barrier, the motto of Teen Nation.
This week, can we stop teenagers from feeding their homework assignments into AI?
And if not, then what?
Let's find out, after this break.
This episode of Search Engine is brought to you in part by Bombas.
Fall's here, kids are back in school, vacations are over.
It is officially the start of cozy season, which means it's time to slide into some bombas.
You know bombas, the most comfortable socks, slippers, tees, and underwear out there, made from premium materials that actually make sense for this time of year.
The season's softest materials, think merino wool that keeps you warm when it's chilly, but cool when it's hot.
Sapima cotton that's softer, stronger, and more breathable than regular cotton.
And even rag wool, the thick, durable, classic cozy sock you'll want all fall.
The best part: for every item you buy, Bombas donates one to someone experiencing homelessness.
Over 150 million items have been donated thanks to customers.
I mostly just wear Bombas socks.
I've been only wearing red socks because it's harder for people to steal them from me.
But I've decided this year, I'm switching to purple.
You can head over to bombas.com slash engine and use code ENGINE for 20% off your first purchase.
That's bombas.com slash engine.
Code ENGINE at checkout.
This episode is brought to you in part by Groons.
If you love the taste of fresh green apples, you're going to want to try Gruns' limited-edition Grunny Smith apple gummies, available only through October.
They taste like sweet tart green apple candy, but with all the full-body benefits you get from Gruns.
It's basically fall, wrapped up in a gummy.
Gruns aren't your typical vitamins.
They're not just a multivitamin, a greens gummy, or a prebiotic.
They're all three and more, all packed into a convenient daily snack pack.
Each pack has six grams of prebiotic fiber.
That's three times the fiber of leading greens powders and the equivalent of more than two cups of broccoli.
Plus, Gruns' ingredients are backed by over 35,000 research publications.
They're vegan, gluten-free, nut-free, dairy-free, and come low-sugar or in sugar-free options.
And they taste amazing.
Grab your limited-edition Grunny Smith Apple Gruns, available only through October.
Stock up because they will sell out.
Get up to 52% off.
Use the code SEARCH.
That's at gruns.co, G-R-U-N-S dot co.
Welcome back to the show.
I went on a real journey this month with this question about teens and AI and homework.
And when it was done, I ended up thinking very differently about all of this.
Like my mind had fundamentally changed in some ways.
Just to give it away, I've somehow landed in a place where I'm actually both much more worried about AI and its effect on society, particularly teenagers.
But also, weirdly, these conversations forced me to confront some of my assumptions about how education works, to actually examine how writing is taught in American schools right now, and to ask, is there something here that needed to be fundamentally changed anyway?
But it took me a while to see all this the way I see it now.
So I want to retrace my steps for you, and you can make up your own mind.
So to start, let's just refresh ourselves on how we got to this moment.
Tonight, we're taking a deep dive into the world of AI with a special focus on ChatGPT, the revolutionary new language model developed by OpenAI with the ability to generate human-like text and hold natural conversations.
Chat.
November 30th, 2022: ChatGPT, version 3.5, arrived.
This was the first version that actually got widespread attention.
You'd be kind of amazed at how much it can do.
In the adult world, people were impressed, but also very concerned about the implications of this new technology.
What do we mean for jobs, for disinformation, for the environment?
And
who was going to regulate all this?
Assuming someone even could.
Those are the sorts of questions I was preoccupied with in the last year.
So much so that I missed a more immediate one.
A new artificial intelligence tool is going viral for cranking out entire essays in a matter of seconds.
How will this affect English homework?
The idea is to give you a simple prompt like write a summary of the American Revolution and the bot will generate several paragraphs in under a minute.
I was not thinking about this use case, but students obviously were, as were their teachers.
Both sides seem to understand the implications of the new technology from the jump.
Just 10 days after GPT 3.5 hits the internet, The Atlantic magazine publishes a piece from a teacher explaining why he thinks this marks the end of traditional high school English.
But other educators see this, at least in the beginning, as a still winnable fight.
The New York City Department of Education is cracking down on a particular tool.
Students and teachers can no longer access an artificial intelligence chat bot that generates writing.
It's called chat.
The New York City public school system, the largest in the nation, makes an announcement that first January.
New York City Department of Education has restricted the tool on all city public school networks and devices, citing negative impacts on student learning and concerns regarding the safety and accuracy of content.
In the months that follow, many other school districts follow suit.
That March, OpenAI releases an even more powerful version, GPT-4.
It can do harder homework more efficiently.
Remember, this is all still just the first school year.
I spoke to two academics who were tracking this as it happened.
I'm Beck Tench, and I'm a researcher and designer at the Center for Digital Thriving.
And I'm Emily Weinstein.
I'm a psychologist and longtime researcher studying teens in technology, and I co-founded the Center for Digital Thriving.
Harvard's Center for Digital Thriving tries to open up conversations between teens and adults about the technological changes we are constantly inflicting on them.
For Beck and Emily, ChatGPT may have been new, but the anxieties, the conversations around it, reminded them of earlier skirmishes in the teens versus adults alarming new technology wars.
Here's Emily.
There was something that felt very familiar about the way that schools and parents were asking questions about ChatGPT and kind of rushing to figure out what policies would help ban it.
And I think this is a really imperfect comparison for a lot of reasons, but it reminded me a lot of the early days of Wikipedia,
where I don't know if you were in school when we, I don't know if Wikipedia was part of your trajectory.
Yeah, yeah, it was.
Yes.
Anyway, it felt like the early conversation around Gen AI is sort of like that early Wikipedia conversation of like, don't go there.
That is bad.
We need to figure out how to make sure that no one even looks at that.
And we're hearing teens say things like, you know, my parents have blocked ChatGPT from our Wi-Fi in our house because they're so scared of me looking at it.
I'm smiling.
Did you try and do that, PJ?
I blocked ChatGPT.
I blocked Copilot.
I blocked the new Chinese one, DeepSeek, that came out last week.
I've blocked weird knockoffs.
There's one called like GPT chat, I think.
Yeah, and it's very much been, it's funny.
I used Wikipedia.
I didn't use it for plagiarism.
I like learned a literacy around it and like checked, like knew to check sources and knew to work backwards and knew the difference between a really poorly written, like,
I used that tool and turned out fine.
And now in the other position, I find myself like a reactive, strange adult who's like, shut it off, unplug it while we figure this out.
I think for good reason, because obviously ChatGPT is completely different than Wikipedia in terms of the, like, implications for our thinking, but I do think this impulse of, like, shut this thing off while we figure it out is so widespread right now.
Parents in 2022, 23, and still now feel like they are still trying to wrap their heads around cell phones and social media.
And those issues feel so front of mind that I think that a lot of people, a lot of adults feel like, what?
Like, I don't want them to cheat on Gen AI, but also like, I still don't understand TikTok.
I haven't figured out Snapchat.
If you imagine like the pie chart of parents' concerns and then the slice of it that is tech, and then the amount of that slice that is currently being still taken up by social media
and mobile phones, I think there's just frankly not a lot of time left and people don't even really know what questions to ask.
So this is the mental state parents and educators were starting from.
AI had arrived at a moment where nobody had really figured out yet what to do about cell phones in schools or social media.
Those are unsettled debates.
So AI felt to some of these adults like having a pandemic during your pandemic.
Many of the school districts would reverse their initial bans, but everyone is still trying to figure out with AI what is okay and what is not okay.
And in that vacuum, students have just been using it more and more.
According to a study from just a couple months ago, 70% of students now report having used AI for schoolwork.
And now we have this new category of young influencers.
I'm going to call them homework influencers.
Students themselves, usually with massive TikTok followings, educating their peers about how to cheat with AI at a level of sophistication that I cannot help but marvel at.
If you want an easy way to cheat on your homework, then stop scrolling.
This AI tool allows you to upload any PDF file and it will generate you answers.
I'm going to show you how easy it is.
If you're in school right now and planning on using ChatGPT or AI to do your work for you this year, you're gonna wanna watch this video.
So here it's just written me an essay in literally seconds.
And it gets even better.
It can even edit and expand the writing you already have.
Now this will take more time than just copying and pasting from GPT, but remember, good cheating takes time.
The best of these videos we saw comes from a young influencer named Carter PC Smith.
Carter is 19.
He has 5.6 million followers, nearly four times as many TikTok followers as the New York Times.
Carter comes across like a smart, jockish kid who you might meet on the first day of school, happily offering to show you the ropes.
There's a couple tools you'll need for a good AI-written essay: you need the instructions for the essay, Copilot, GPT, and GPTZero.
GPT is going to do the bulk of the writing, Copilot is good for factual information, and GPTZero helps you not get caught.
I can't overstate this: Carter has set up a classic, LED-adorned gamer cave.
From within, he points to his ultra-wide monitor, one window with the essay instructions, the other three with different AI agents, each overseeing one part of his signature automated homework process.
I start by putting in the whole rubric and copying and pasting in another essay that I actually wrote and say,
write this essay in the style of this essay so it gets the tone right.
Once you get your output here...
It is, I'm sad to admit, a well-made piece of service journalism.
Carter signs off with the verbal equivalent of a mischievous wink.
The message, we all know we're not supposed to do this, but catch us if you can.
And second of all, I don't actually do this.
This is all theoretical.
This is what the educators are up against.
This is the status quo in February 2025 during the fifth semester of the new war on homework.
So to recap, the grown-ups so far have tried regulating to the point of banning and then unregulating to the point where this is basically a free-for-all.
Obviously, the question to answer is, how should teenagers be using these tools?
And those Harvard researchers, Emily Weinstein and Beck Tench, they are part of the search for an answer here.
For the past year, they've been studying how teens currently use AI.
Their approach is to almost look at teens with the curiosity and respect with which any good anthropologist approaches a different culture.
Here's Beck.
The way that we orient to young people using tech is we listen, we look at them as experts of their experience.
We're kind of first generation to tech in a way, and they're actually living it.
And we knew that if we weren't listening, policies would be created that actually would take away agency from them.
So the goal of the project was how do we use participatory design?
How do we include young people in creating the policies that will govern their use of AI in the classroom?
So that was the larger question.
And so the idea is like, if adults don't understand this stuff correctly, they'll create rules that don't make any sense.
The kids are going to do what the kids are going to do.
And you're not going to get to the outcome you want, which is, like, figuring out how the kids are using this anyway, to work with them and create rules around them so that they're using these as tools and not in a way that destroys their ability to think or learn.
Yeah.
And we wanted to
change the shape of the way that educators were responding to this.
Instead of limiting access to a technology because people were cheating, we wanted to help educators create assignments that you would feel you were missing out if you used AI to help you with them.
So what you're saying is like, if the problem is that, and I had not isolated the problem this cleanly in my head, but like, if the problem is that AI can do drudge work, and if homework historically and famously has been like drudge work that trains kids on the idea that life is drudgy.
You're like, rather than trying to put handcuffs around their ability to use AI, one way you could think about solving this problem is you have to make homework interesting.
Yeah.
And school interesting.
And not only interesting, but interesting and meaningful and joyful.
Yeah.
That seems very
ambitious.
Yeah.
Perhaps you can hear my skepticism here.
This is the first time I encountered this idea that the problems of AI were highlighting an older, existing problem.
Homework and how boring it is.
And this notion that in the future, homework technology must advance.
Homework simply must be not just fun, but interesting, meaningful, and joyful.
Obviously, right now, it's not.
Anyone raising kids has confronted this.
Making them do homework is almost worse than when you had to do it yourself.
But you tell yourself, you tell them, that tolerating this painful boredom is important training for adult life, which contains multitudes of painful boredoms.
I did not believe this as a kid.
I don't think I believed it until I found myself having to say it myself.
I had to say something to these kids who didn't want to do their homework.
But was it true?
Was I actually sure homework was good?
Or was this just a required belief for members in good standing of the grown-up community?
I spoke to a person who really challenged my thinking on this.
His name is John Warner.
He's a writer.
He edited the website McSweeney's.
He's taught writing to tons of students, and he's just written a book called More Than Words, How to Think About Writing in the Age of AI.
Before we even begin, I just like,
the first LLM chatbot AI agent I remember seeing was ChatGPT.
I'm just curious, like,
Do you remember when you first saw that it existed, tried it, and do you remember what your reaction was?
Yeah, for sure.
I think I saw it the day it released.
I had actually played around with GPT 3.0, right?
And ChatGPT was 3.5 and had written a little piece about it and said, this stuff is not really a threat to student writing or
education because while it could do something that seemed sort of like writing, it was still essentially gibberish, right?
It was not doing the work.
So I saw ChatGPT and like a lot of people, I was like, whoa, hold the phone.
This thing can do something that I didn't think machines could do, which is string together sensible syntax in response to a plain language prompt.
And so I was like, this is new.
We have to deal with this.
But at the same time, because of my background as a writing teacher, who had been frustrated for well over a decade about the kinds of things students are asked to do when they write in school, I was excited because I was excited to have a conversation about writing as a human activity of thinking, feeling, and communicating.
So let's have that conversation.
The rest of this episode, we're going to talk about the act of writing, what writing is for, and what American high schools seem to think it's for, because those are two very different things.
John Warner, our guide for this section: the thing you need to know about him is that he loves writing, and he's very skeptical of AI.
He says what chatbots really generate is, quote, bullshit.
But even though he's concerned about the future, he thinks the status quo for English class, for English homework, deserves to be demolished.
John says, if you want to understand how broken English class is, look at the writing assignment we ask kids to complete over and over again.
An assignment that Playboy Farty wanted a machine to do for him.
The five-paragraph essay.
One of your, sort of, villains, an intellectual villain for you, and the thing that keeps coming up with the students I'm talking to, is the five-paragraph essay.
For Americans, this is very, very, very understood.
For some non-American listeners, like they're not as familiar.
Like, what is the five-paragraph essay?
Well, the five-paragraph essay is essentially a template that can be used to make a simulation of an argument where you establish your thesis and topic at the beginning.
You use three body paragraphs to support the thesis that you've established at the beginning, and then you summarize those three paragraphs in your conclusion, which starts with in conclusion.
And in our school system, that also has come along with all kinds of other prescriptions that students are familiar with.
Things like: no contractions, never use the first-person pronoun I, always have transition phrases.
In fact, I still recall a student I had at my last job at College of Charleston, where she had brought in her high school notebook that had three handwritten pages of the allowable transition words and phrases that she had checked with her teachers over the years.
And she wanted to see if I was good with them also, right?
That's great because that's like computer.
Like, you know what I mean?
Like, they're following rules.
Yes, yes, yes.
The five-paragraph essay itself is more an avatar of the problem than the problem itself, right?
Like, the problem itself is that we decided we're going to try to measure students' progress as writers, and we need this kind of common scheme to do it.
John's thesis is that the five-paragraph essay sucks.
First, it sucks because it's an artificial form, the simple thesis always followed by exactly three supporting points.
It further sucks because of how teachers grade it, enforcing these made-up rules, like never make an I statement.
And finally, it sucks because it trains students not to write to explain or to think, but instead to just obey the rules of this rigid form.
In conclusion, John Warner believes we need to understand the reasons for this assignment's popularity because the story of how it came to be reveals larger dysfunctions in American education.
One place you might start that story is with one particular American, a Hollywood actor turned president.
Today, our children need good schools more than ever.
We stand on the verge of a new age, a computer age when medical breakthroughs will add years to our lives.
This is Ronald Reagan in 1983, his first term, talking about a new priority for his administration, education.
But from 1963 to 1980, scholastic aptitude test scores showed a virtually unbroken decline.
Science achievement scores of 17-year-olds have shown a steady fall.
And most shocking, today, more than one-tenth of our 17-year-olds are functionally illiterate.
I look at a report that came out during the Reagan administration called A Nation at Risk, which said our kids are dum-dums.
And because they're so dumb, we're going to fall behind.
Our worry at that time was Japan, who was eating our lunch in terms of electronics and this kind of stuff.
So Japan, with a population only about half the size of ours, graduates from college more engineers than we do.
We need kids to be smarter.
We need to start measuring these things.
And this sort of begat what has essentially been a bipartisan approach to school reform.
Reagan had actually tried and failed to eliminate the federal Department of Education.
He'd settled instead for giving its budget a haircut.
But his argument was we could still have better outcomes in our high schools if we just held those schools more accountable.
The plan didn't really work.
Test scores did not rise in Reagan's era.
But at the state level, some politicians listened to his pitch that schools themselves needed to be more rigorously tested.
And later, another Republican president really ran with this idea.
Today begins a new era, a new time in public education in our country.
As of this hour,
America's schools will be on a new path of reform
and a new path of results.
This is George W.
Bush in 2002 announcing the passage of No Child Left Behind.
20 years after Reagan, the federal government would be able to withhold funding from states whose schools had bad enough test scores.
And so what this bill says, it says every child can learn.
And we want to know early before it's too late whether or not a child has a problem in learning.
I understand taking tests aren't fun.
Too bad.
We need to know in America, we need to know whether or not children have got the basic education.
School funding now relied in part on the strength of students' measured reading skills.
The way you measure reading is through comprehension essays.
And the drill teachers relied on to prep teens for those tests was a five-paragraph essay.
Easy to teach, close to the formulaic writing the test demanded.
John Warner has been in education long enough that he's watched this whole transformation happen.
I'm 54, almost 55.
I graduated high school in 1988.
I happen to still have my fifth grade writing portfolio that my mom, for whatever reason, kept over the years.
And there is not a single five-paragraph essay in the entire thing, right?
Fifth graders today are already either writing five-paragraph essays or there's a couple of forms that are essentially precursors to it.
And what, like, just to spell it out, like, as an educator, as a person who loves writing, what's the downside of it?
Like, why does the five-paragraph essay break your heart as a person who's trying to teach kids to write and to love writing?
Well, it prevents them from having the kinds of experiences that help you learn to write.
Writing is fundamentally making choices inside of a specific rhetorical situation: message, audience, purpose, right?
In the book, I talk about my third grade teacher, Mrs.
Goldman, who taught me the rhetorical situation by requiring our class to write instructions for making a peanut butter and jelly sandwich, and then
requiring us to make the peanut butter and jelly sandwiches strictly according to our own instructions.
So if you had done something like I did, which is forget to say, spread the peanut butter on the bread with a knife,
I was dipping my hand in the jar of peanut butter to spread it on the bread.
And in fact, I have a picture of this moment.
It's the avatar I use on my newsletter, of me in third grade smiling, doing a writing assignment.
where I had learned that audience fundamentally matters, that on the other side of this piece of writing is somebody who's going to read it and use it.
And so you have to take care with what you write in order to make sure it connects with the audience.
Now, I was in third grade, right?
Yeah.
So we're still mostly just messing around, but that's a great way to learn how to write when you're young is to mess around, right?
Again, I was working with 18-year-old college freshmen who would report that they had never had the experience in school of writing to an audience, which is what writers do, right?
But if you've never practiced this, you don't have that frame of reference when you go about writing.
The story that John is telling here, it actually exactly fits my memory.
I arrived at high school in 2000, where America's greatest English teacher, Ann Gerbner, taught me to love the thing I'm doing for you right now.
But when I arrived at college in 2004, I remember struggling with writing for the first time, smacking my head against the brick wall of the five-paragraph essay.
In high school, I'd learned to love writing.
In college, I learned to slightly loathe it.
I wouldn't have used these words then, but I was being asked to write less like a person and more like a machine.
Me and everybody else.
And maybe it would have gone on like this for students forever.
But now...
An actual machine has arrived that can itself write like a machine, which functionally kills the five paragraph essay as a test of anything.
All it measures now is that the student is capable enough to ask ChatGPT to write an essay.
But John says you have to be really careful here with what you do next.
There's this temptation from a lot of teenagers and some adults to see chatbots as analogous to calculators, to think it's okay to no longer learn a skill that a machine can approximate.
But John says that that is faulty reasoning.
You know, when ChatGPT arrived, there were a lot of people analogizing it to calculators.
And this is true in some senses, but it's not true in some very important ways.
The most important way is that the labor of you struggling to do long division at your kitchen table
and what a calculator does when it calculates is identical, right?
The exact same thing is happening between a calculator or a human doing calculation.
ChatGPT is not using a process that is the same as what humans do when they write.
ChatGPT is generating syntax on the basis of weighted probabilities.
And that is not what humans do when they write.
When humans write, we are using syntax in order to try to capture an idea or notion or image or whatever we're trying to get on the page.
Writing is thinking, writing is feeling, writing is communicating.
All of these things happen as humans write.
None of them happen when ChatGPT writes.
This is meaningful.
It must be meaningful.
If it isn't, then we can just pack it in and give up on writing.
One of the parts of the social media era that I only understood in retrospect was how big a deal it turned out to be.
Just that we replaced reading sentences in books with mainly reading sentences on screens, on social media.
It took me too long to notice how, for me, that meant I'd lost important capacities.
For deep attention, for reflection, because I didn't notice how different the thing being replaced, one kind of reading, was from the thing replacing it.
A totally different kind of reading.
The idea that the next generation will need to understand what they're losing when they ask machines to write,
to generate their arguments, to generate the emails where they explain themselves or apologize, or just think through a problem, that does not seem to me like a small deal.
That's the alarming part.
After the break, we'll return to the research.
We'll get some actual information about how this next generation views AI and things like authenticity.
Their ideas very much challenged and surprised me.
That research, after some ads.
This episode of Search Engine is brought to you in part by Square, the easy way for business owners to take payments, book appointments, manage staff, and keep everything running in one place.
Whether you're selling lattes, cutting hair, detailing cars, or running a design studio, Square helps you run your business without running yourself into the ground.
Square works wherever your customers are, at your counter, your pop-up, online, or even on your phone, all synced in real time.
You can track sales, manage inventory, and get reports from anywhere.
And with instant access to your earnings through Square checking, you're never waiting on your money.
Square even helps you keep your regulars coming back with built-in loyalty and marketing tools.
The hardware looks polished, the software is intuitive, and you can start running smoothly right away.
With Square, you get all the tools to run your business with none of the contracts or complexity.
And why wait?
Right now, you can get up to $200 off Square hardware at square.com slash go slash engine.
That's square.com slash go slash engine.
Run your business smarter with Square.
Get started today.
This episode of Search Engine is brought to you in part by Mint Mobile.
You know, it's not on my summer bucket list?
Paying a sky-high wireless bill.
If you, like me, do not want to fork over way too much money every month for the same service that you're already getting, you can pay way less thanks to Mint Mobile.
And right now, you can get three months of unlimited premium wireless for just 15 bucks a month.
Switching could not be easier.
You can keep your phone, your number, all your contacts, no contracts, no hidden fees, and no surprise overages.
The best part, you'll save enough money each month to put toward actual fun stuff.
Trips, concert tickets, you name it, instead of wasting it on your phone bill.
This year, skip breaking a sweat and breaking the bank.
Get this new customer offer and your three-month unlimited wireless plan for just $15 a month at mintmobile.com slash search.
That's mintmobile.com slash search.
Upfront payment of $45 required, equivalent to $15 a month.
Limited time, new customer offer for first three months only.
Speeds may slow above 35 gigs on the unlimited plan.
Taxes and fees extra.
See Mint Mobile for details.
This episode of Search Engine is brought to you in part by Chili Pad.
Will my kids sleep tonight?
Will I wake up at 3 a.m.
again?
Am I going to wake up hot and sweaty because my partner leaves the heat on?
Those are the thoughts that bounce around my head when I can't sleep too.
And let's face it, sleep slips away when you're too hot, uncomfortable, or caught in a loop of racing thoughts.
But cool sleep helps reset the body and calm the mind.
That's where Chili Pad by SleepMe comes in.
It's a bed cooling system that personalizes your sleep environment.
So you'll fall asleep faster, stay asleep longer, and actually wake up refreshed.
I struggle with sleep constantly, and I have found that having a bed that is cool and temperature controlled actually really does make a huge difference.
ChiliPad works with your current mattress and uses water to regulate the temperature.
Visit www.sleep.me slash search to get your ChiliPad and save 20% with code SEARCH.
This limited offer is available for search engine listeners and only for a limited time.
Order it today with free shipping and try it out for 30 days.
Even return it for free if you don't like it, with their sleep trial.
Visit www.sleep.me, that's S-L-E-E-P dot M-E, slash search, and see why cold sleep is your ultimate ally in performance and recovery.
Welcome back.
Everyone I've talked to for this story who's been thinking deeply about AI and education, their common understanding is that this new thing is just very different from what preceded it.
Metaphors from the past mislead us here.
Chatbots are not Wikipedia.
They're not calculators.
They're something new.
And while the chatbots are powerful, they're not substitutions for what is valuable about human thought.
Which means the job then for the adults is some kind of conversation with the teens who will someday soon use these chatbots to take our jobs away from us.
Ha ha ha, but also maybe really.
A conversation about what these tools offer and what they might take away.
Over at the Center for Digital Thriving with Emily and Beck, those conversations have actually already begun.
They shared with me some results from their most recent study, one that's still ongoing.
Okay, so here's what we did this week.
Okay, so what are we looking at?
Okay, so this is a
Miro board.
It's basically a, Beck, how do I describe this?
It's a digital whiteboard.
Okay, a digital whiteboard.
And what we did was we, so Beck mapped out these five different kinds of school assignments.
Oh, wow.
Sorry, just to say that.
On the whiteboard, you see the cover of the book The Great Gatsby (some things never change) and a theoretical homework assignment.
Write an analytical essay about The Great Gatsby.
And then a bunch of yellow post-it notes with ideas about how someone might use AI to get through that assignment.
Ideas like checking grammar all the way through, using AI to write the essay, and then paraphrase it.
Hello, Playboy Farty.
Emily and Beck would show this whiteboard to a group of real high schoolers over Zoom and ask them, how do you feel about each of these ideas?
From crosses a line through feels kind of sketchy, all the way to totally fine.
Every one of these graphs is one teen's plotting.
Interesting.
So you're like, you've plotted the moral judgment of these dangers.
And the thing that's interesting is like, I can just see visually, they're all over the place.
Yeah.
They're all over the place.
But one of the really interesting implications of this is you can actually see that kids just truly doing their best to make a judgment call end up with really different judgments about what is okay and what is not okay in terms of the use of AI.
Obviously, it's not unusual for teens to see things differently from each other or from adults.
But what's interesting here is that when the researchers talk through the logic undergirding these views, the teens are just operating under totally different logical assumptions from my generation.
We were just doing this listening session, that was actually what sparked the direction that we've been iterating in.
And we were talking to teens about how AI is coming up in their lives.
And one of the teens was like, you know, I am not using it that much right now, but I'll tell you a really cool story.
My dad just told me about how his friend was writing a birthday card for his wife, and he used AI to come up with what to write in the birthday card.
And then he gave the birthday card to his wife and it was so nice that she cried.
She said it was the best card he'd ever written her.
And he's like, and so isn't that amazing?
And what do you say?
I was like, wait, tell me more.
And I was, and so I'm like, tell me more about your reaction.
And he's like, well, like, he made his wife so happy.
It was like the best card he had ever written.
And I'm in a conversation with a group of teens.
And a few of the others start agreeing.
Like, that's really cool.
And so
I said,
wait, I want to talk about this because I actually have a really different reaction to that experience.
And I want to understand more about how you think about it.
My reaction is not, oh, that's so great.
I feel sort of like if my husband gave me a card that was written by AI and I didn't know it was written by AI and I cried, I would feel kind of betrayed or lied to.
And we start unpacking that.
And interestingly, one of the teens says to me,
Well, why is that?
Like, if your husband wrote a card with AI, he's still trying to do something nice for you.
And isn't the intention what matters?
Like, if he brings you flowers from a flower store,
he didn't grow the flowers or pick them,
and you still think it's really nice.
So, isn't that kind of like what's happening here?
And
I was like, huh,
interesting.
If you were born before the year 2000, your body is convulsing right now.
You want to grab the nearest teenager, maybe they're in the car with you, and explain to them that their generation is completely wrong about all of this, which is not how it works.
Trust me, I've tried.
Their generation just seems to hold some strange, to us, new views.
Like they seem to be really focused in that conversation on the idea that if the outcome was that the person's feelings were good, then the use was good.
And if the outcome was that the person's feelings were hurt, then the use was bad.
Maybe that was anecdotal.
Maybe it was just the teens who happened to be in that group.
Maybe it was developmental.
Maybe it was generational.
But like, part of what you're saying that's so interesting at this moment is there's one version of what could be going on here, which is just like, I think the average adult feels differently about that story, where they think, hey, a letter says to me, you sat down and thought about how you felt and you communicated something true about how you felt, and you're taking the time you spent thinking about me and making it visible, and that's what's meaningful.
And that if an AI wrote it, it's not meaningful.
And like, you could think that these kids are going to feel that way when they're 20 or 30.
Or another possibility is that you could be watching a norm change.
Like, the same way, exactly, my parents would not have taken a thousand photos of their faces, but in my generation, that's not considered, like, a huge mark of vanity.
A norm might be changing, and we just don't know.
Right.
In real time, in these rooms, we are watching values begin to shift.
Sure, we can all say that we agree some things are just human, but then we start trying to decide which things and realize, oh, this is complicated.
The researchers report that some teens told them that talking to a chatbot, not just for homework, but for advice or companionship, feels safer than talking to a human.
It doesn't judge, it never leaves, it's always available.
To me, that can sound a little tragic.
Like, what is the world we've created where the chatbot's window is a safer place for teens than the rooms with humans in them?
At the same time, people find what they need.
They make do with what's available.
We escaped into novels and TV shows and songs.
We got by on imaginary relationships while we waited for the real ones to arrive.
I can almost get myself there.
But then a birthday card to your wife written by AI,
surely we're losing something here.
I want to just say, on the letter writing: my cousins lost their house in the fire in the Palisades.
And so I've been thinking a lot about items.
And partly as a result of that, I spent part of my weekend digitizing letters that I had gotten across my life from my grandfather.
And I had, like, over 35 handwritten pages, pages and pages of these letters he had written.
And this was, like, the number one thing: when I thought about losing things, this was the thing where I was like, oh, I have to figure out a way to preserve this.
And so I was thinking about the power of letters a lot.
And I was also reflecting on the reality that even though a lot of us, I think, can totally write letters now, I don't have that many letters from people in my life.
And I actually have a lot of friends who are writers.
And I don't think it's just that they don't know how to write a letter.
I think it's partly that people don't really have time.
They haven't prioritized it.
They haven't decided that's something they want to do.
And so I say that to say, like, yes, the skill is really important.
And I really care about that.
And I want us to think about that.
But part of what our research, I think, keeps telling us is that we also have to be paying attention to sort of the roots of what's going on.
Like, why is it that kids are cheating on all their homework assignments?
Or why is it that they're feeling all this pressure to have 5 million group chats?
Like, what is going on here?
And how do we get to the roots of loneliness or boredom or pressure or whatever it is so that we're actually having conversations about what we want to preserve and not just protect, but also cultivate with a fuller spectrum?
Yeah.
I think someone could tell me this many times before it would ever sink in, that when a new technology comes along and seems to have an uncomfortably firm grip on people, it's worth not just looking at the technology, but about the underlying need it's addressing.
You can try to turn off the machine or regulate it towards being less addictive, but you might also try to understand why the void it's promising to attend to is there in the first place.
John Warner, the writer and teacher, he reminded me, not every young person has this same need that the new technologies seem to burrow into.
Last fall, I went to Harvey Mudd College in California, which is a small liberal arts engineering college, right?
Part of the Claremont Colleges.
And the students there who are brilliant, right?
They're like 1,600 SAT type students.
They're all in computer science, engineering, all this kind of stuff, physics, astrophysics.
They have no interest in using this technology
because they are fascinated with the byproducts of their own minds.
And to outsource their crazy thoughts
to
something else.
It doesn't make any sense to them.
Now I'll go to other places and students will sheepishly admit,
yeah, when it doesn't matter, when it's like a discussion board post that my teacher makes me do, or it's not going to be graded, or I'm not worried about it.
Yeah, I'll just do ChatGPT.
And then there's the students who,
in a way, they're a little bit in between and that they get it.
Like school is not a game.
We should learn stuff.
I'm going to have to know something when I go off and be an adult in the working world.
But also,
man, I don't have time.
I've got a job.
I'm taking 18 hours of credit.
My student loans come due when I'm finished with this.
And so if I can like shave off a little that helps me cope, this is a good technique.
I recognize there's a compromise there, but this is just the thing I have to do.
And what we're looking at really is that the underlying conditions of where the work is happening, I think, are far more dispositive towards how people feel about this technology than sort of the technology in isolation.
What do you mean?
If you look at the data around student disengagement, student stress, anxiety, a lot of it is based in school.
And so, to ask students to sort of forgo a hack that will help them relieve stress, relieve anxiety, get this credential that they think is necessary for the life they want to lead, it's very tough to ask them to give that up, right?
Sort of voluntarily, unless you can show a superior route towards something that they want and they need.
Right.
So to return to our original question, what is the fix for kids using AI to do their homework?
Before these conversations, here's what I was doing.
Using the powers I have as an adult in the house to set up firewalls and filters to just try to shut it all down.
But I will tell you, it feels like playing whack-a-mole and losing.
I'm professionally good at computers.
I'm not good enough at computers to stop a medium-strength teenager.
And not only did it not work in the narrow sense, it also created a lot of anxiety for the adults in the house.
This constant feeling of losing, being overwhelmed.
So maybe the solution we've landed on for now, at least in our house, is to sit with the kids, to talk about where AI use might be okay,
but also to try to persuade them, when it comes to actually writing a first draft, to please go offline and struggle with the blank page like so many teenagers before them have.
I don't know that the process of writing is ever going to be entirely joyful, but it has joys and it's very meaningful.
And however the world shapes up, it's hard to imagine the ability to think and reflect won't be skills a future adult would want.
John Warner says when he looks at the adults currently using AI, what he notices is that they're people who benefit from having already developed critical thinking skills and some actual knowledge of whatever their job is.
If you have a functioning practice as a lawyer and you know how to write a brief, you may be able to use this technology to help you write a brief.
But if you don't have that underlying knowledge and ability, you have no idea if this technology is going to write you a good legal brief.
I've seen people use image generators in amazing ways, but the reason they're able to use them in amazing ways is because they can talk to the image generator like an art director, not like a prompt engineer, but like an art director.
And these are the things that we need to be teaching people to use technology that can generate images or syntax or video or music or what have you.
Not how do you interact with this thing on a moment-to-moment basis to get an output?
Because that just puts the whole sort of everything's behind the veil.
It's like the great Oz is back there doing stuff.
We should know exactly what Oz is up to when we are asking it to do things.
Yeah, it's funny.
I remember years ago, I think Tyler Cowan had this observation where he was saying there's this debate about, you know, when a computer is going to beat a human in chess.
And the real thing isn't computers versus humans.
It's human-assisted computers.
Like that is going to be the best chess player.
And similarly, to your point,
setting aside, like, the IP considerations, like, if you can create a world where all the people whose work has been hoovered into these AIs actually get compensated, which isn't a crazy thing to imagine.
Like they don't have to be theft machines.
Like you could work out a deal.
But in that world where people could use image generation without feeling morally squeaky about it, you want people who know what they're doing to be checking the work of a machine.
And the reason it makes educators' jobs so important right now is that our generation, as you're saying, has native, earned knowledge of all these things.
And the next generation, we just need to make sure that continues to be true.
Yeah, we can't be passive consumers of this sort of output.
You know, I go give talks on this stuff.
I sometimes will joke around.
I say, like, which AI future are you interested in?
Do you want the Terminator, right, where the super intelligence kills us all because it thinks we're a threat?
Do you want a WALL-E, where robots do everything for us and we sit around on our floating Barcaloungers and eat and watch media?
Or do we want an Idiocracy, where we actually fall apart because we don't know how to do anything?
And
I just firmly believe we have to keep knowing how to do stuff.
Yeah.
It's not only important for practical reasons.
It's part of like having the experience of being human, right?
When I look at the increases in anxiety and depression among student age kids, I see a direct correlation between the kinds of things they're asked to do in school and those emotions and the pressure of doing those things well.
And if instead they could just kind of exist and do this work in a way that's meaningful to them, that still helps them build these capacities that are going to serve them well, I think it could be a catalyst towards increased human thriving.
But this is not outsourcing this work to this technology.
This is using this technology to allow ourselves to be more human.
John Warner, his book, which I found extremely clarifying and you should definitely check out, is called More Than Words, How to Think About Writing in the Age of AI.
And if you're curious to learn more about the fascinating, ongoing work being conducted by Emily Weinstein and Beck Tench, go check out the website for the Center for Digital Thriving at Harvard University.
Search Engine is a presentation of Odyssey and Jigsaw Productions.
It was created by me, PJ Vogt, and Sruthi Pinnamaneni, and is produced by Garrett Graham and Noah John.
Fact-checking by Claire Hyman.
Theme, original composition, and mixing by Armin Bazarian.
Additional production support by Sean Merchant.
If you'd like to support our show and get ad-free episodes, zero reruns, and the occasional bonus audio, please consider signing up for Incognito Mode.
You can learn more at searchengine.show.
Our executive producers are Jenna Weiss-Berman and Leah Reese Dennis.
Thank you to the team at Jigsaw.
Alex Gibney, Rich Perello, and John Schmidt.
And to the team at Odyssey, J.D.
Crowley, Rob Morandi, Craig Cox, Eric Donnelly, Colin Gaynor, Matt Casey, Maura Curran, Josephina Francis, Kurt Courtney, and Hilary Schott.
Our agent is Oren Rosenbaum at UTA.
Follow and listen to Search Engine for free on the Odyssey app or wherever you get your podcasts.
Thanks for listening.
We'll see you next week.
This episode is brought to you in part by Odoo.
Running a business is hard enough.
So why make it harder with a dozen different apps that don't talk to each other?
One for sales, another for inventory, a separate one for accounting.
Before you know it, you're drowning in software instead of growing your business.
That's where Odoo comes in.
Odoo is the only business software you'll ever need.
It's an all-in-one, fully integrated platform that handles everything.
CRM, accounting, inventory, e-commerce, HR, and more.
No more app overload, no more juggling logins, just one seamless system that makes work easier.
And the best part?
Odoo replaces multiple expensive platforms for a fraction of the cost.
It's built to grow with your business, whether you're just starting out or already scaling up.
Plus, it's easy to use, customizable, and designed to streamline every process.
So you can focus on what really matters: running your business.
Thousands of businesses have already made the switch.
Why not you?
Try Odoo for free at odoo.com.
That's odoo.com.