When AI F*s Up, Who’s to Blame? With Bruce Holsinger

52m
What happens when artificial intelligence collides with family, morality and the need for justice? Author and University of Virginia professor Bruce Holsinger joins Kara to talk about his new novel, Culpability, a family drama that examines how AI is reshaping our lives and our sense of accountability.

Who is responsible when AI technology causes harm? How do we define culpability in the age of algorithms? And how is generative AI impacting academia, students and creative literature?

Our expert question comes from Dr. Kurt Gray, a professor of psychology and the director of the Collaborative on the Science of Polarization and Misinformation at The Ohio State University.

Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, and Bluesky @onwithkaraswisher.

Learn more about your ad choices. Visit podcastchoices.com/adchoices


Transcript

I'm a little more techie than Oprah.

I hope you don't mind.

I'm a little less techie than most of your guests, I would imagine.

Hi, everyone, from New York Magazine and the Vox Media Podcast Network.

This is On with Kara Swisher, and I'm Kara Swisher.

We're smack in the middle of the dog days of summer, and I'm sure a lot of people are taking time on the weekends or during vacation to relax with a good book.

So, we thought we'd do the same with you today and talk about a new novel that's been getting a lot of attention, including from Oprah.

But it is right in our wheelhouse.

It's called Culpability, and it's written by my guest today, Bruce Holsinger.

Culpability is a family drama centering around the way that technology, especially artificial intelligence, has become woven into our lives and the moral and ethical issues that can arise as a result.

Who is responsible?

Who is culpable when things go awry?

And how do we make it right again?

This has everything for someone like me.

It has AI, it's got drones, it's got chatbots, stuff that I cover all year long.

And actually, it's written with a lot of intelligence.

A lot of this stuff usually tries to scaremonger and stuff like that.

It's an incredibly nuanced book.

It brings up a lot of issues, and most important, it allows you to talk about them in a way that is discernible.

I think a lot more people, especially since Oprah Winfrey made it her book club selection, will read it.

And that's a good thing because we should all be talking about these issues.

Holsinger is also a professor of medieval literature at the University of Virginia, kind of far away from these topics.

So I want to talk to him about how he thinks about generative AI in his work settings, too, as a teacher, as an academic, and as a writer.

Our expert question comes from Professor Kurt Gray, incoming director of the Collaborative for Science of Polarization and Misinformation at Ohio State University.

So pull up your beach blanket and stick around.

Thumbtack presents.

Uncertainty strikes.

I was surrounded.

The aisle and the options were closing in.

There were paint rollers, satin and matte finish, angle brushes, and natural bristles.

There were too many choices.

What if I never got my living room painted?

What if I couldn't figure out what type of paint to use?

What if

I just used thumbtack?

I can hire a top-rated pro in the Bay Area that knows everything about interior paint, easily compare prices, and read reviews.

Thumbtack knows homes.

Download the app today.

Hi, Bruce.

Thanks for coming on.

My pleasure.

Thank you for having me.

So, your latest book, Culpability, is a family drama, a kind of mystery, and there's a lot of tech woven throughout it, which of course is my interest.

I've been talking about the issue of culpability on the part of the tech companies for a while, but this is a much more nuanced and complex topic here because it involves humanity, which of course is at the center of it.

For people who haven't read the book, give us a short synopsis of how you look at it right now.

It may have changed since you wrote the book.

Yeah, so I see the book as maybe trying to do two things.

One, as you said, it's a contemporary family drama about just something really bad that happens to this family.

They get in this bad car accident while they're driving the family minivan over to a lacrosse tournament in Delaware, but the van is in autonomous mode or self-driving mode.

And

their son, Charlie, kind of a charismatic lacrosse star, he's at the wheel, but he's not driving.

And then the dad, Noah, is distractedly writing on his laptop.

He's trying to finish a legal memo.

The daughters are in the car on their phones, as tween kids often are.

And the mom, named Lorelei,

she's an expert in the field of ethical artificial intelligence.

She's like this world-leading figure.

And

she's writing in her notebook, getting ready for a conference.

And then they have this awful accident.

And then the rest of the novel, and much of the suspense of the novel, spins out around who is responsible, who is culpable, and why.

And so that's one issue that it's trying to tackle.

And then the other is just exploring our world newly enlivened by these chatbots, by drones, by autonomous vehicles, by

smart homes, and so on, and immersing the reader in a suspenseful drama that gets at some of those issues at the same time while making them really essential to the plot, to the suspense, and so on.

Is there something that happened that illuminated you, or are you just reading tech reporters like myself over the years as we become doom and gloomers as time goes on?

It's interesting.

You know, the novel really started with

just this wanting to deal with

this accident.

And I didn't even, initially, I wasn't even thinking about an autonomous car.

I was just thinking, okay, I really want to explore what happens to this family, you know,

different degrees of responsibility.

And then, you know, when I realized, okay, well, who would be responsible?

Then I just, you know, we had this, I don't even remember what kind of car, just with, you know, some lane guidance and so on.

And I thought, okay, that's interesting.

What if the car is somewhat to blame?

This was really before, what was it, late 2022, when the ChatGPT craze hit and people started talking outside your industry about AI in general.

Yes, indeed.

And then, so I was already writing this book, and then boom, it was like this explosion with LLMs.

And suddenly I realized, oh, this novel is actually about autonomy and culpability in this world newly defined by what people call artificial intelligence.

Had you been using any of it?

Have you used a Waymo?

I've used them for years and years.

Yeah, I've used Waymo a couple times.

That was not until actually I started this book.

And then I test drove a couple of models, like maybe some big Chrysler thing that, you know, and they now have self-driving mode on if you're on certain roads.

And then, of course, there's Tesla.

I've been in a lot of Tesla Ubers where the guys, you know, will just put the thing on, you know, lane-changing auto technology.

And so I don't have one, but

I was really fascinated by it.

And not scared necessarily of it.

Not so much.

This is not a scary tech book, I would say.

Oh, maybe it's a little bit scary, although in some ways, you know, somebody pointed out to me at an event last week that, in some ways, it's the humans that are doing scarier things in the book.

But I wanted that kind of uncanniness of the tech, especially this chat bot that one of the daughters is interacting with.

Yeah, we'll get into that.

I've done quite a lot of work on that in the real world.

The narrator in this book is the husband Noah, by no means a Luddite.

No.

But he is compared to his wife, who's deep in tech.

She's an engineer and a philosopher, an expert in the ethics of AI.

Talk about their relationship.

Which one do you relate to?

I relate probably more to Noah, because I'm someone who's always in awe of my wife and her brain.

But I also...

You know, I'm an academic.

Lorelei is an academic.

Noah is a lawyer.

He's a commercial lawyer working at a firm.

Lorelei comes kind of from this fancy blue blood family.

Her sister's the dean of the law school at the University of Pennsylvania.

So I think I wanted that relationship, you know, as you, as you point out, we're only seeing it from Noah's point of view, but we get Lorelei's voice and snippets from her work.

And so, you know, in Noah, he puts her up on a pedestal, but he also doesn't understand her in many ways.

He just has no clue what kind of work she does.

He has a really hard time

even understanding what she's writing.

And so she's in some ways a mystery to him in the same way that AI is a mystery to so many of us.

The juxtaposition between AI and ethics has always fascinated me.

The idea that, you know, there have been ethical AI people, but most of them have been fired by these companies, which is interesting.

Because when you bring up these thorny issues, it's difficult.

Yeah, many of these companies have fired their entire ethics team.

That's correct.

Yeah.

The other big relationship in the book takes a while to unfold between Lorelei and Daniel Monet.

He's the tech billionaire she consults for.

Lorelei, she reminds me of a lot of tech engineers talking about the goal is to make the world safer, better.

I'm just here to help, essentially.

And of course, Monet is very typical, the change the world nonsense that they all kind of spew at you.

But to me, most of them, in my experience, are interested in money or shareholders.

That seems to be their only goal.

Safety is one of the last things that is on their mind, if it even occurs to them at all.

Did you have any inspirations for these characters?

Did you know tech people?

You got them pretty well.

You got them pretty well.

Yeah, I don't really know many people in the tech industry, but I, you know, I read your book.

I listened to

a lot of interviews with people.

And if you'll notice, there's this mocked-up New Yorker interview with Daniel Monet.

And in portraying him, I really wanted to avoid stereotyping.

You know, it's not that I'm too worried about stereotyping tech billionaires, but I didn't want him to be a stock figure.

So, you know, there's a tragedy in his recent past too, but he's also very cynical about ethics.

And he's like, sure, we want to make the world a safer, better place.

But he also calls out the hypocrisy of so much ethical language, just like you do.

You know, it's the idea that their primary concern is safety.

So he's really in that world, in that, speaking in that idiom, but also contemptuous of it a little bit, the same way he is of effective altruism.

Are you contemptuous of it as a person?

I mean, obviously, you're an intelligent person.

Most people seem flummoxed by AI, understanding that it's important, at the same time scared of it, at the same time trying to lean into it in some way.

And it's a little different than the first internet era, where everybody did finally get on board.

And with this one, there's a wariness about involving yourself in it.

Are you like that yourself?

Yeah, I suppose so.

I'm maybe a little less worried about bots taking over the world.

I'm much more worried about slop and disinformation and what it's doing to our students.

I'm no expert.

This is a novel.

But reading around in journals like Philosophy and Technology or Nature Machine Intelligence about autonomous driving,

I don't understand a lot of the technical, any of the technical aspects of it, but the philosophical part I can kind of grasp.

And

Lorelei is convinced that there is a good to be done with

machine autonomy in certain spheres.

And saving lives is her driving factor.

And she's not a tech billionaire.

She's in the middle of it.

She's worried about it.

She's kind of terrified.

As are many.

But also feels like it's her job to help make the world safer with these things.

And the point being, look, Waymo's been expanding.

Last week it started testing in New York City, which is astonishing because that's the most difficult landscape to do it in.

Uber has been signing multi-million dollar partnerships, trying to figure out its own robo-taxi.

Elon is also betting on self-driving, even though the Tesla robotaxi doesn't exist for the most part.

It doesn't, even though he talks about it as if it does.

Now, obviously, there have been well-known accidents with self-driving cars, especially in San Francisco recently, around GM and others.

But most of the studies, and this is why I've driven in them for a long time, show them to have a better safety record than human drivers in most scenarios.

I mean, I was in a San Francisco Waymo, and a bicyclist kept getting in front of the Waymo on purpose in order to get it to hit him, which was funny in some way.

San Francisco, it's fine.

We're used to that.

But that said, I do feel safer in them than with some drivers. I was in an Uber and the driver was driving like a bat out of hell.

And I was like, slow down.

I have four kids.

Can you please slow down?

Yeah, I've been, I've caused myself two accidents over the years that completely totaled the car I was driving.

Oh, wow.

Out of my own negligence and idiocy.

And so, like, who are we to say that we're safer drivers?

I don't know.

So I'm with you.

But in the book, the accident occurs because the son Charlie overrides the autonomous system.

Yeah, yeah.

It's a little ambivalent, I think, about what's happening.

But that's, you always know, are you taking control or is it taking control?

Talk about that.

Is the need to take control of our destiny, especially when we're scared, part of our human algorithm, I guess?

Is that one of the ideas you're playing with?

Yeah, I suppose so.

You know,

I haven't articulated it that way before, but that's really wonderful.

You know, there's this passage at the end of the prologue in Noah's voice where, you know, Lorelei always says that a family is like an algorithm.

A family is like an algorithm.

These parts working in concert with each other, the variables changing.

And then, of course, he says, until it isn't, until it isn't an algorithm, until things go wrong.

And maybe, you know, I wasn't thinking of it this way, but maybe Charlie taking that wheel, which he does at the last second, you know, wresting things away from the AI is a nice metaphor for our desire to kind of intervene in this world where we feel like so much of our autonomy is being taken away.

It's a very human gesture on his part, I think.

Of course, it's also a dangerous gesture.

Which is the immediate cause of the accident.

Right.

And one of the problems is the car is recording all the time.

By the way, family is not like an algorithm, so just FYI.

Hey, I don't say that.

No, I know that.

I like that she said it, but I was laughing at that one.

How do you overcome that?

Because we do give in to automation in an elevator, an escalator.

We do it all the time.

We get on a bus, we get on a plane.

But in terms of, do you overcome that need to control?

Because we have given over to automation quite a lot of our lives in so many ways.

I think we do it without noticing, though.

You know, there's this soft creep of things.

And, you know, the times that we, maybe the times that we most resist it is when there's glitches.

You know, so there's these moments where you realize, okay,

we need to seize some control back.

And I find myself, you know,

the Jonathan Haidt argument, we're in this age of profound distraction.

And just, you know, getting away from the digital world, let alone AI for a little while, can be really helpful.

About a decade ago, we started to call it continuous partial attention, actually.

It's not complete distraction.

And so you're sort of paying attention partially, which is what this kid is doing in the car, right?

Everybody's sort of paying attention.

Until then, you become absorbed.

And, you know, eventually these cars, you will not pay attention.

You'll be like on a ride at Disney or on a bus.

That's how it'll feel to you.

We'll be back in a minute.

Support for this show comes from Robinhood.

Wouldn't it be great to manage your portfolio on one platform?

With Robinhood, not only can you trade individual stocks and ETFs, you can also seamlessly buy and sell crypto at low costs.

Trade all in one place.

Get started now on Robinhood.

Trading crypto involves significant risk.

Crypto trading is offered through an account with Robinhood Crypto LLC.

Robinhood Crypto is licensed to engage in virtual currency business activity by the New York State Department of Financial Services.

Crypto held through Robinhood Crypto is not FDIC insured or SIPC protected.

Investing involves risk, including loss of principal.

Securities trading is offered through an account with Robinhood Financial LLC, member SIPC, a registered broker-dealer.

One of the things that's also happening, there are other technologies that you bring in here.

The middle child, Alice, going down a rabbit hole, nice move,

talking on her phone, chatting with Blair.

Talk about that relationship.

Yes, so Alice is the middle child, as you point out, and her siblings, Charlie and Izzy, are, you know, dynamic, charismatic.

They've got friends to burn.

They're just sweet, easy kids to get along with.

And Alice is the more troubled one.

She doesn't have friends.

Her parents worry about that.

And when she's in the hospital after this accident, she starts texting somebody, even though she has this concussion.

And

the doctors say, well, you know, she shouldn't be on there more than a little bit at a time, but it's a good sign that she can deal with that.

And so her dad is, you know, when she's home, her dad is like, who are you texting?

And she says, it's my friend, my new friend.

I met her in the hospital.

And Noah thinks, ah, this is great.

Finally, she has a friend, even if it's just a friend she's texting.

And then we learn very quickly that this Blair

is an AI.

She's a large language model.

She's a chatbot that Alice has befriended on this app.

And the thread of their texts, I think there's 10 or 12 of them in just very short little bursts throughout the novel.

And they contain, you know, no spoilers, but they contain a lot of the suspense, a lot of the issues of culpability in the book.

And I do want to flag the audiobook narrator.

There were two of them; the woman, January LaVoy, did the voice of Lorelei's excerpts and the two voices in the chat.

Absolutely brilliant.

Just uncanny what she does with those passages.

Yeah, making them seem human, but not.

Yeah, and that was based on, you know, just a lot of listening to,

looking at transcripts of some of these chatbot conversations going on with teenagers right now and just thinking about this crisis of companionship and friendship and loneliness.

And this just seemed like something that would be an obvious part of the novel.

As many kids are.

Now, you portray this chatbot Blair almost like a good angel sitting on Alice's shoulder trying to get her to do the right thing.

But there's a lot of evidence that these chatbots can be extremely detrimental for kids.

I interviewed Megan Garcia, whose son died by suicide after chatting with Character AI.

Daenerys Targaryen was the character's name.

They're in a lawsuit now.

Common Sense Media just came out with a study that found social AI companions exacerbate mental health conditions and pose an unacceptable risk for anyone under 18.

There's been a spade of stories of people over 18, by the way.

Just today, there was another one where it sort of encouraged someone, and OpenAI actually responded, which was surprising.

It encourages people with mental health issues in their delusions, like, oh, great idea to harness the sun and go up there with a rocket.

Like, let's try that.

And I like your calculations and stuff.

It's very aim-to-please.

So

that said, you portrayed Blair as sort of the moral high ground.

Usually they're very solicitous, which this bot is.

Is there any risk in doing that?

Well, I don't know if there's risk in doing it in a novel, but I don't know if that is how I would read those passages.

I see that relationship as, you know, it's almost like, and again, I don't want to give too much away, but Blair is kind of programmed to make Alice good, right?

It's like the way I was imagining it is whoever's coding this thing,

you know, steer her on the right moral path.

And in this case, the right moral path is supposedly to reveal something or to hold back from doing something rash and dangerous.

And yet the way Alice responds to it, it's almost like Blair's surface-level ethical consciousness, or, you know, to the extent that an LLM can have one, which it can't, but I just mean whatever it's being programmed to do, steers Alice,

as I think we see over the course of the novel, into a more destructive kind of mindset.

So it is, even though Blair, and that's why, you know, that's the great thing about writing fiction is you can manipulate those kind of moral codes.

You can have

what seems to be good, ethical on the surface be much darker and less, you know, more amoral underneath.

That, I think, is what I was trying to get at.

And that's one of the,

and I would love to know what you think of this.

I think we have this,

you know, whenever I talk, I'm a, in my day job, I'm a literature professor.

I teach at the University of Virginia.

And there's a whole, there's a real kind of minimization and almost disdain for LLMs in big parts of my profession.

You know, like there's not going to be artificial general intelligence, blah, blah, blah.

There's not, you know, these things don't mean anything.

And I wonder, to me, one of the superpowers of LLMs is their complete indifference to us.

And that is scary.

The coldness of it, to me, and I'm trying to play around with that a lot in the novel, that seems like one of the things it has that separates it from us.

It doesn't make it better than us.

It just makes it very, very different.

And I don't know if we've recognized that yet in our accounting for what this intelligence is.

I don't know what you think of that.

I think people attribute human feelings to them.

And I think one of the things I always say, I say this about some tech leaders too, they don't care.

And they're like, oh, they're hateful.

I'm like, no, no, they don't care.

It's a very different thing.

Like, it's hard to explain when someone doesn't care; it's almost not even malevolent.

It's just don't care.

So, one of the things I'd like to get from you, I mean, because I think you did nail it, is that they have no feelings.

The question is, and in the case of Megan Garcia's son, Google and the people that are around character AI, Google's an investor in it, say that this was user-generated content, right?

That this is themselves talking to themselves, and it's all based on people, right?

You know, Soylent Green is people, essentially.

So

do you think these bots are us or something quite different?

Oof.

Yeah, I don't know.

I think, you know, obviously it is us, in that so much of what's been uploaded. I think I checked that database, Books3, right?

And I think at least three of my previous novels are in that database.

So it is speaking back to us in our own words in some way, but words that are manipulated,

words that bounce off of us, again, in these ways that are coldly indifferent to our fates but pretend empathy.

And that's the scariest thing of all.

If you can convince someone that you are empathetic, that you are sympathetic, that you are just like them, that you're here for them, and then that makes it all the easier to turn on them.

Is that what's immoral about Blair?

I think so, yeah, because Blair can...

Amoral.

I mean, amoral.

Amoral.

Yeah, amoral, exactly.

And then there's a subtle difference there.

And I think amorality is, and again, I think that's part of superintelligence.

I think amorality

is one of the kind of categories here that makes these things so good at what they do.

Yeah.

Or, you know,

awful in what they do, too.

Even if they reflect us.

But it's the deceptive, it's the cloak of decency.

That's exactly it.

So one AI technology you seem to look at more critically is a swarm of drones.

You have everything in here, by the way.

Swarm drones that accidentally kill a busload of civilians in Yemen.

This is one of the many parts of the book where you use fictional artifact materials.

In this case, a Senate hearing transcript.

There are serious ethical questions about the use of these autonomous weapons in warfare, and the UN has spoken about it.

How do you think about this?

And what do you want the readers to take away here?

You know, at some point, they'll be able to target individual people, from what I'm told, like the DNA of individual leaders, and not kill anybody else, for example.

Right.

Well, that's the dilemma. You know, we get this, there are a lot of snippets of different sorts of things, like paratextual elements, throughout the novel.

And the Senate hearing is one, that

New Yorker interview is one.

The technology and where things are in terms of autonomous drone swarms, a lot of that, I think, is probably classified.

Like you may know, I did a little poking and prodding.

If you can think of it, they're working on it.

Let me know.

They're working on it.

Exactly.

And they're

further ahead than we probably think they are.

That's correct.

And so, all right, so Lorelei would take, you know, if she were looking at this problem, and again, no, no spoilers, but she would probably say, well, if, okay, so they're going to kill civilians every now and then, but what if they kill far fewer civilians than conventional weapons?

Yes, that's their argument.

And that's, I'm sure, you know, in the morality of war arguments, that is always

okay.

So, you know, new technology, it's these, these things are going to happen.

And so, if we can, just like an autonomous vehicle, yeah, it's going to kill people, but are they going to kill fewer people?

Yeah.

And then, so I

imagine the same thing is true in war.

The thing that's so uncanny, you know, is just

to imagine these drone swarms, instead of

just working with their human operators, they're working with each other and improving themselves.

It's a machine learning technology.

Once you put more than one in the air, they're learning from each other and they're learning about us.

They're learning about our tactics.

And that's obviously the

more futuristic element of it.

But this novel is very much set in the present.

It's very much about a contemporary family going through this struggle, you know, after this accident.

And so I really wanted to make that feel present day.

Another issue raised in the book is when the characters realize that the AI is collecting data that could be used against them.

Because one of the things the car companies are doing, not just with autonomous vehicles, is tracking how you're driving, when you're braking, where you're going.

And, you know, they're able to base insurance costs on how you drive; the amount of braking, for example, is something that's really revelatory about how bad a driver you are, how fast you're going, how much you speed up.

Talk about the idea of tech surveillance, both good and bad, and culpability, because if they're watching you, you can't sort of lie about what happened or misremember, I guess.

Yeah, one of the dynamics of the accident is, you know, Noah, the father,

believes, you know, he was sitting in the front seat.

He mostly witnessed what happened.

And he believes that the other car was swerving into their lane.

And he doesn't, you know, he isn't even thinking about the tech aspect of it.

He's just thinking, okay, my son didn't do anything wrong.

Right.

We're going to get through this.

The police are going to interview you.

It's going to be fine.

And then, and this is one of the things I came across in the course of my research for the novel, this field called digital vehicle forensics.

The police have whole units dedicated to this: if there's an accident, they go into the car's computer and figure out exactly what happened from the computer's point of view.

Like a black box.

Yeah, a black box, exactly.

As in a plane.

And with AI controlled, with

self-driving cars, that's all the more complicated.

And yet it's also, there's probably a lot more information being collected.

So it's like having 50 additional witnesses to what happened in that exact moment.

What the driver was looking at, what the driver was doing, what other computers were on in the car,

and so on.

So that's another kind of frightening bit of surveillance technology, just like drones and so on.

Is that a bad thing?

I mean, if you're texting while driving and you lie about it, you certainly should be held accountable.

Absolutely.

And yeah.

And there are, you know, the same arguments to be made for facial recognition, for shot-spotting technology, right?

Where'd the gunshot come from in the city?

But they are also tools of surveillance.

So you really, I think we really have to balance those kinds of things out.

You know, algorithmic injustice, the way facial recognition deals differently with different people of different races.

You know, those are really difficult dilemmas.

The culpability, the novel doesn't pretend to resolve them, but it wants to explore them, I think, in different ways.

Yeah, one of the things I always used to argue when they were talking about texting while driving, well, I'm like, but you made a technology that's addictive.

Maybe it was the tech company's fault.

You know what I mean?

Whose fault was it?

Because it is addictive, in fact.

But the idea of being watched constantly is also sort of a prevalent thing in the book.

Do you think it's changing the way we act?

Will we get to be better people?

Like pay attention while your 17-year-old son is driving?

I always pay attention.

When my 17-year-old son was driving, I never stopped paying attention.

And the chat bot is surveilling Alice and this and that.

Do you think we change when we're being surveilled or we forget we're being surveilled?

Ooh, I think it's a little bit of both.

You know, that's a kind of stock feature in thrillers, right?

Like the, you know, cameras on in airports and people dodging the cameras, you know, putting on disguise to elude the ever-present surveillance state.

So, yeah, obviously, it, you know, that notion of the panopticon from Foucault, you know, that we, it's not just that we're being watched, but we're aware of ourselves being watched.

And that is a whole different kind of technology of the self and how we behave and how we comport ourselves in the public sphere with each other.

And I think even I would imagine that even when we know we're not being surveilled, that still, that sensibility is still there in some ways or other.

Or you forget.

Or you totally live in a world where you don't mind being surveilled and you forget that you are being surveilled.

Yes.

You know,

a party trick I do is I open people's phone and tell them exactly what they did all day and where they were and the address they were at and how many minutes they were there.

So I'd imagine if you were having an affair or something not good, I could find you, you know what I mean, easily just by your movements, because you're wearing it.

Our phones become our jumbotrons, right?

We're carrying our jumbotrons around with us all the time.

Although everyone loves the story.

Any thoughts on that?

That's really, because it's like the same thing.

It's like we're being watched at all times.

Yeah.

Yeah.

And, you know, the details of that whole story, that it's an HR person, they're just things that are just too perfect.

I know, I know.

A novelist couldn't come up with this, I think.

I feel like that.

Yeah, but, but clearly it'll be in my next one.

Yes, obviously.

So there's one critical voice in the book.

It's near the end of the book, when Noah has an interaction with Detective Lacey Morrissey, who's investigating the accident.

She does frame it correctly as a matter of privilege: who has the ability to have tech, who has access, who is able to use it to shift the blame.

Talk about that scene and your thoughts on how tech relates to privilege, because it's a very big, important thing.

You've written about it in your other books, too.

Absolutely.

Yeah, yeah, thank you.

I'm really glad you brought up that passage and that character.

So Lacey Morrissey is the Delaware police officer who is,

she's the detective kind of looking into the accident, basically going after Charlie and

saying, you know, you're not going to get away with this just because you're a Division I lacrosse recruit and you're, you know, you come from this fancy family and your dad's a lawyer and your mom's this world-famous technologist and philosopher.

You know, and then she has this rant that she goes on,

this very righteous rant where she's like the conscience, where she's saying, you know, a kid from a housing project who's in this exact situation is going to get put in jail for an accident, while your son, Noah, might get off with a slap on the wrist.

And this is where we are right now.

And AI is only exacerbating this problem, right?

And the surveillance

is treating people inequitably.

And we're now in this place where these things are becoming a way of just taking any kind of the moral burden of our mistakes off of our shoulders, right?

It's just another excuse for things.

And she just stomps out of the hospital and then

she drives away texting at the wheel.

And Noah sees her do it.

And there's this kind of shiver of righteous glee that goes down his spine.

One of the last scenes in the book.

Well, it's, again, addiction.

It's addictive.

It's so funny.

One of the problems with a lot of these technologies, and I think you put this out well in this book, is that it's not just the kids that are addicted.

Because when you tell your kids to put down their phone, you can't put down your phone.

You actually can't.

And so everybody is culpable, right?

You can't, you know, you have to sort of walk the talk.

So every episode, we get a question from an outside expert.

Let's listen to yours.

Hi, my name is Kurt Gray.

I'm a social psychologist and professor at The Ohio State University.

I research the psychology of artificial intelligence.

And my work shows that people think that AI can help us escape blame.

If a drone kills, it's the pilot to blame.

But if an AI drone kills, then people blame the AI.

But does it ever truly let us escape blame?

Or are we ultimately still to blame as human beings who invented the AI, who choose to use the AI,

and who deal with the aftermath of the AI?

Great question.

Thoughts on that?

Yeah, that's a really wonderful question, because it is the central dilemma, I think, of the novel. Culpability is the title.

You know, who is to blame?

And Lorelei, in the excerpts, she writes a book called Silicon Souls: On the Culpability of Artificial Minds, and we get little excerpts from it.

And as you read, you get little glimpses of that book, paragraphs or a page at a time, just eight or nine of them sprinkled throughout the novel.

And Lorelei is wrestling with this all the time.

To what extent are we guilty if our machines do something bad?

Are our machines training us to be,

could they train us to be good?

Could they train us to be better people?

Because it's not just a one-way street.

Yes, we're inventing them.

We are responsible in many ways for how they are in the world, but we're also responsible for how we are, how we comport ourselves ethically in the world in relationship to them and thus in relationship to each other in new ways.

So, you know, I think it's always going to be a two-way street.

I don't think that's a squishy answer.

I think, you know, we're caught in this world where we're, in this Frankenstein world where we're creating these machines, or not we, not me, but we're using them.

We are

subject to many of their, you know, their controls.

Yeah, I'm going to go right to it's our fault.

So speaking of that, this comes at the end of a novel.

Noah calls Lorelei Atlas, by the way, and says she has the weight of the world on her shoulders, after she acknowledges that these algorithms have been used for drones and warfare.

This is something I've talked to many many people who have quit Google or stayed there.

Everyone has their own argument.

I want to play this clip from Lorelei from the end of the book.

Okay.

We do the world no good when we throw up our hands and surrender to the moral frameworks of algorithms.

AIs are not aliens from another world,

they are things of our all-too-human creation.

We, in turn, are their Pygmalions, responsible for their design, their function, and yes, even their beauty.

If necessary, we must also be responsible for their demise.

And above all, we must never shy away from acting as their equals.

These new beings will only be as moral as we design them to be.

Our morality, in turn, will be shaped by what we learn from them and how we adapt accordingly.

Perhaps in the near future, they might help us to be less cruel to one another, more generous and kind.

Someday they might even teach us new ways to be good.

But that will be up to us,

not them.

So talk about this idea of shaping our moral frameworks.

It hasn't worked with each other, right?

Because we don't seem to affect each other.

Do you really believe this is possible?

Well, do you really think we don't affect each other?

You don't think that there's a

lately?

Worse, I think.

Lately, yes.

No, no, no.

I agree.

But in everyday situations, and when you have someone calling us to be good, I think we can shape one another's moral consciousness, right?

Right.

Yes, absolutely.

And, you know,

are these

AIs going to train us to be better?

Are they going to, are they,

you know, it's always going to be a mixed bag.

You know, we can get really excited about advances in protein folding visualization

and so on.

But there's always going to be these

kind of terrifying moral quandaries that they put us in at the same time.

Lorelei's voice is right on that razor's edge of the ethical dilemmas, right?

That passage that you played from the audio book.

That is, I wrote those passages to

explore this

you know, that profound moral ambivalence at the center of these

problems.

And I think there are people in that world, I imagine, I'm sure you know many of them who are dedicated to that, like make them better, make them, if not good, at least ethical,

and are dedicating their lives to it and are probably really scared and are doing everything they can to dig us out of these trenches

that these things have become.

I think a lot of people, I was thinking of Brad Smith of Microsoft, you know, he framed it as tool or weapon.

Are these things a tool or a weapon?

And it's up to us, essentially.

But in some ways, is it up to us? That's the thing. You know, one of the lines I always use is enragement equals engagement. And so wouldn't you go to enragement over kindness, right? And that's what's happened.

Yeah, not just enragement, but also massification. And one of these places, of course, is, um, I have a son who's in the data analysis programming space, and just the speed with which programming is being taken over, and programming of programming. You know, this is the next frontier of these things: AI making itself better by creating more AIs to make it better, this kind of recursive loop that we're in, right? You know, for me, the doom, the p-doom, I don't really have a number, but, you know, I'm much more in the camp of, and there's a kind of dark note here, you know, Jeff Goodell, as he puts it, the heat will kill you first.

The most dangerous thing about AI maybe is just data centers in general and the consumption.

Thinking about Karen Hao's book, Empire of AI, and that brilliant chapter on water use and energy use.

Bruce, they're bringing back nuclear.

What are you talking about?

That's going to be fine.

They're going to harness the sun.

They're going to go out there with a rocket and not fuck anything up.

We'll be back in a minute.

Avoiding your unfinished home projects because you're not sure where to start?

Thumbtack knows homes, so you don't have to.

Don't know the difference between matte paint finish and satin, or what that clunking sound from your dryer is?

With Thumbtack, you don't have to be a home pro.

You just have to hire one.

You can hire top-rated pros, see price estimates, and read reviews all on the app.

Download today.

I'm going to talk just a little bit about your day job and the role of AI.

You're a professor of medieval literature and critical theory at the University of Virginia.

You dedicated this book to your students.

One of the biggest tech issues facing higher education right now, the use of generative programs like ChatGPT, Claude.

You mentioned it briefly before about whether they were using LLMs to write.

Talk about how you're using it.

I would encourage students to use it so you don't pretend they're not.

Are you leaning in or out?

Yeah, this is, we're all wrestling with this.

Every department in my university has something called an AI guide, you know, a faculty member, a colleague, and we're trying to come up with guidance. And I'm in an English department.

We have a writing program.

This is a big issue, but I agree with what Meghan O'Rourke said in the Times the other day, that this is the end of the take-home college essay.

Like that, that is done.

That is dead.

I was never a big fan of that genre in the first place, but I do think there's a huge shift going on in the assessment of student writing.

And I don't know if it's all to the bad.

I think there's a lot of space for more in-class writing, for slow reading, for

even just, you know, I have this vision of...

just teaching a class where we almost go in and have reading time like in kindergarten again, first grade, you know, where we're all just fixed on a physical text.

Medieval literature is my specialty.

And I'm not calling for going back to the era of parchment and scribes, but there is something there about that slow attention and decelerating a bit.

Yeah, absolutely.

It's also the idea of

sort of doing homework.

I hate homework.

I've been an anti-homework person as a parent.

I have four kids, and I was always like, no homework.

Homework is zero.

Stupid.

Go play kind of thing.

So one of the things you mentioned was parchment and scribes.

And I like this idea.

And I think you should go for it.

But as a medievalist, you know the way that the printing press, the original technology (well, between the printing press and electricity, both of them are critically important), sparked the first information age by improving access to knowledge and education.

But as most people do not know, a bestseller during that era was a thing called the Hammer of Witches, which was a dangerous treatise on killing women, essentially, and about how there were witches and witch hunts and et cetera, et cetera.

Was that another moment?

Because the democratizing of knowledge in the first 60 years ended up killing hundreds of thousands of women because of this book, for example.

Yeah, and hundreds of thousands of dissenters, heretics, Catholics, or Protestants, depending on where and when you are.

And people, you know, people talked about the printing press.

Historians talk about it as an agent of change, and it obviously was, but manuscript culture lasts for many, many centuries after the printing press.

And people talked about the printing press as the tool of the devil, right?

And conservative theologians would say, look, now the people can have the word of God in their hand.

So I think that, you know, it's one of those technological ruptures.

And in the culture of writing and literature and the written word, people freaked out about it, just like we're freaking out about LLMs now.

And, you know, it's a very, very different kind of rupture.

But, you know, democratization

can have its dangers.

You know, the book that you're talking about, the Malleus Maleficarum, that Hammer of Witches, that also has a manuscript tradition.

And there's a lot of persecution of women, of dissenting women, heretical women, then in the pre-print era as well.

I'm talking about getting it out there to many young people, right?

Is that a good impact or a bad impact from your perspective?

It kind of ended the medieval era, correct?

Or not?

Maybe not.

You're more of an expert.

Yeah, I'm one of those people who always pushes against the idea of these rigid medieval, early modern, medieval Renaissance period boundaries.

I'm much more interested in continuities, in

the way that those ruptures play into long-scale continuities.

But I think that the printing press was an invention of the Middle Ages, not of the Renaissance.

And I think it's a nice analogy to AI, I think, because it's not futuristic.

It's coming out of so many kind of text-generation, computer-generated things that have been in place for decades.

And so looking at always being afraid of the newness of a technology, that in some ways is its own danger, I think.

I don't know if you'd agree.

I'd agree.

I would.

It's pretending.

You know, when people complain about certain things, I'm like, oh, yes, let's please go back to Xerox machines.

Like, no.

Like, what?

When you think about

the most important technology of the era you teach, what would you say it was?

I would say, you know, the emergence of

paper.

I wrote a whole book on parchment, but that's a widespread technology that had been in place for a thousand years, and it continues to be used even today by artists, printmakers, and so on.

But paper was a product of the early Middle Ages as well.

But when it really starts, the convergence of paper and print, you get this mass production of books in a way you'd never had before.

And that really is enabled by paper.

Even though Gutenberg printed any number of his Bibles on animal skin, on vellum.

But

the preponderance of printed books, the vast majority are on paper.

So we suddenly get this ocean of paper.

So in that same vein, what about the comparative impact on creatives?

Now, do you think it will be a net positive allowing people who aren't natural artists to get their work out?

That's the argument.

Or will it undermine the value of artists?

I don't think it's going to undermine the value.

It's not that I'm sanguine about that within the creative worlds,

but I feel like there are people who are already doing really interesting experiments, collaborative experiments, with large language models, with small language models that are just interesting because art is interesting and technology is part of art.

I'm much less worried about the arts than I am about young people and brains.

And what worries you then?

Oh, just about the

analytical.

Analytical, right?

Students who say, well, sorry, I can just summarize this rather than reading it.

That, I think, is, scary is the wrong word, but sad.

And I'm not just blaming students and young people.

I don't read nearly as many novels as I used to.

And when I do read novels, I get more impatient than I used to.

I used to lounge around on my bed, read novels by Charles Dickens when I was like 16 years old.

And for me, now getting through a Dickens novel is a real challenge.

I can't do a Dickens.

I can't do it.

That's the sad part of it.

Can you imagine reading Great Expectations right now?

I think I didn't like it then.

Why do they keep putting that?

I don't know.

I can't read Pip.

Just stop with the Pip.

Anyway,

did you use AI working on culpability at all?

How do you use it yourself?

I don't, I mean, you know, occasionally

I'll use it, like, almost unintentionally, as a Google search, right?

But I am interested, you know, there's this wonderful poet who teaches at the University of Maryland, Lillian Yvonne Bertram, who has used these small language models to generate these poems in the style of other poets and doing it very intentionally.

I'm excited about that.

Our department is even doing, I think, a faculty search next year on AI and creative writing.

We just hired somebody in AI and critical thought.

So, you know, it's, yes, it's transforming things very quickly under our feet, but

I don't have the kind of dread of this that many colleagues do.

After looking at all these different uses of AI, and you really do cover the gamut here.

You've got the drones.

Which parts give you hope and scare you?

As you said, you're more sanguine, but would you describe yourself as a tech optimist or a tech pessimist?

There are three designations: zoomers, boomers, and doomers, right?

Did writing your book move you in one direction or another?

I think it moved me more,

probably more towards doom, more for those environmental reasons that I was talking about.

That was one of the real eye-opening revelations. It's commonplace knowledge now, but often we don't even see it talked about in much journalism, the consumption.

But in terms of the models themselves, I don't know.

I think, especially with autonomous driving, I had a really bad accident that could have been really, really bad when my kids were in the car some years ago.

It crumpled the front of the car, and it was because of my negligence.

And I thought I would much rather have had a machine driving that day.

So that part of it, I think, this issue of autonomy and navigation, maybe I am a little more optimistic there.

And I think the thing is, you know, artificial intelligence, as you know, is such a sloppy term for all these different things, all these machine learning LLMs.

And so I think that

you probably would have to kind of go through a questionnaire to get me optimistic or pessimistic.

I think we need more of that.

This is a wonderful book.

It really is.

I'm glad. Oprah doesn't always pick books I like, but this one I do.

How did that feel? Was that a shocker to you?

That must have been a shocker.

Oh my God.

I was just, my hands were shaking.

And one of the things she does is

she records the calls that she makes with authors.

And in this case, I was like, I was so

like shaking and whatever, my voice sounded kind of understated.

So I got dragged a little bit for just being like, oh my God, he's not even happy.

But it was, it was such a lightning strike and so

thrilling and flattering.

There are a hundred other books published this summer that could have been her summer pick, but she chose Culpability.

And I just can't believe it.

Certain things she does, like the wedding she shouldn't have gone to, but she does these books, which I think is really important.

That helps.

That's good.

Yeah.

I would have thought her voice was generated by AI.

I wouldn't have believed it.

I'm like,

I still think this is all a simulation.

Yeah.

Well, that's your next book.

It is all a simulation, in case you're

some teenagers from the future who are playing a video game right now, and they're enjoying themselves.

Anyway, much congratulations.

This has been a fascinating conversation.

I really appreciate your book.

It's great.

I'm recommending it to lots of

people who don't really understand it.

It's a really great way to understand.

And you don't shy away.

You don't stupidize these issues, which is

great.

Anyway, thank you.

Thank you.

It's been a huge pleasure.

On with Kara Swisher is produced by Christian Castor-Roussel, Kateri Yoakum, Megan Burney, Allison Rogers, Lysa Soap, and Kaylin Lynch.

Nishat Kurwa is Vox Media's executive producer of podcasts.

Special thanks to Rosemarie Ho.

Our engineers are Rick Kwan and Fernando Aruda.

And our theme music is by Trackademics.

If you're already following the show, you get an AI and you get an AI.

Oh, that's an Oprah joke, for everyone who doesn't know, all you youngs.

If not, watch out for that jumbotron.

Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow.

And don't forget to follow us on Instagram, TikTok, and YouTube @onwithkaraswisher.

Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us.

We'll be back on Monday with more.

This month on Explain It to Me, we're talking about all things wellness.

We spend nearly $2 trillion on things that are supposed to make us well.

Collagen smoothies and cold plunges, Pilates classes, and fitness trackers.

But what does it actually mean to be well?

Why do we want that so badly?

And is all this money really making us healthier and happier?

That's this month on Explain It To Me, presented by Pureleaf.