When AI F*s Up, Who’s to Blame? With Bruce Holsinger
Who is responsible when AI technology causes harm? How do we define culpability in the age of algorithms? And how is generative AI impacting academia, students and creative literature?
Our expert question comes from Dr. Kurt Gray, a professor of psychology and the director of the Collaborative on the Science of Polarization and Misinformation at The Ohio State University.
Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, and Bluesky @onwithkaraswisher.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Speaker 1 I'm a little more techie than Oprah. I hope you don't mind.
Speaker 2 I'm a little less techie than most of your guests, I would imagine.
Speaker 1 Hi, everyone, from New York Magazine and the Vox Media Podcast Network. This is On with Kara Swisher, and I'm Kara Swisher.
Speaker 1 We're smack in the middle of the dog days of summer, and I'm sure a lot of people are taking time on the weekends or during vacation to relax with a good book.
Speaker 1 So, we thought we'd do the same with you today and talk about a new novel that's been getting a lot of attention, including from Oprah, but it is right in our wheelhouse.
Speaker 1 It's called Culpability, and it's written by my guest today, Bruce Holsinger.
Speaker 1 Culpability is a family drama centering around the way that technology, especially artificial intelligence, has become woven into our lives and the moral and ethical issues that can arise as a result.
Speaker 1 Who is responsible? Who is culpable when things go awry? And how do we make it right again? This has everything for someone like me.
Speaker 1 It has AI, it's got drones, it's got chatbots, stuff that I cover all year long. And actually, it's written with a lot of intelligence.
Speaker 1 A lot of this stuff usually tries to scaremonger and stuff like that. It's an incredibly nuanced book.
Speaker 1 It brings up a lot of issues, and most important, it allows you to talk about them in a way that is discernible.
Speaker 1 I think a lot more people, especially since Oprah Winfrey made it her book club selection, will read it. And that's a good thing because we should all be talking about these issues.
Speaker 1 Holsinger is also a professor of medieval literature at the University of Virginia, kind of far away from these topics.
Speaker 1 So I want to talk to him about how he thinks about generative AI in his work settings, too, as a teacher, as an academic, and as a writer.
Speaker 1 Our expert question comes from Professor Kurt Gray, incoming director of the Collaborative on the Science of Polarization and Misinformation at The Ohio State University.
Speaker 1 So pull up your beach blanket and stick around.
Speaker 1 Support for On with Kara Swisher comes from Saks Fifth Avenue. Saks Fifth Avenue makes it easy to holiday your way, whether it's finding the right gift or the right outfit.
Speaker 1 Saks is where you can find everything from a lovely silk scarf from Saint Laurent for your mother or a chic leather jacket from Prada to complete your cold weather wardrobe.
Speaker 1 And if you don't know where to start, Saks.com is customized to your personal style so you can save time shopping and spend more time just enjoying the holidays.
Speaker 1 Make shopping fun and easy this season and get gifts and inspiration to suit your holiday style at Saks Fifth Avenue.
Speaker 1 Hi, Bruce. Thanks for coming on On.
Speaker 2 My pleasure. Thank you for having me.
Speaker 1 So your latest book, Culpability, is a family drama, a kind of mystery, and there's a lot of tech woven throughout it, which, of course, is my interest.
Speaker 1 I've been talking about the issue of culpability on the part of the tech companies for a while, but this is a much more nuanced and complex topic here because it involves humanity's involvement in it, which of course is at the center of it.
Speaker 1 For people who haven't read the book, give us a short synopsis of how you look at it right now. It may have changed since you wrote the book.
Speaker 2 Yeah, so I see the book as maybe trying to do two things. One, as you said, it's a contemporary family drama about just something really bad that happens to this family.
Speaker 2 They get in this bad car accident while they're driving the family minivan over to a lacrosse tournament in Delaware, but the van is in autonomous mode or self-driving mode. And their son, Charlie, kind of a charismatic lacrosse star, he's at the wheel, but he's not driving. And then the dad, Noah, is distractedly writing on his laptop. He's trying to finish a legal memo.
Speaker 2 The daughters are in the car on their phones, as tween kids often are. And the mom, named Lorelei, she's an expert in the field of ethical artificial intelligence. She's like this world-leading figure. And she's writing in her notebook, getting ready for a conference. And then they have this awful accident.
Speaker 2 And then the rest of the novel, and much of the suspense of the novel, spins out around who is responsible, who is culpable, and why.
Speaker 2 And so that's one issue that it's trying to tackle. And then the other is just exploring our world newly enlivened by these chatbots, by drones, by autonomous vehicles, by
Speaker 2 smart homes, and so on, and immersing the reader in a suspenseful drama that gets at some of those issues at the same time while making them really essential to the plot, to the suspense, and so on.
Speaker 1 Is there something that happened that illuminated you, or are you just reading tech reporters like myself over the years, as we become doom and gloomers as time goes on?
Speaker 2 It's interesting. You know, the novel really started with just this wanting to deal with this accident. And I didn't even, initially, I wasn't even thinking about an autonomous car. I was just thinking, okay, I really want to explore what happens to this family, you know, different degrees of responsibility.
Speaker 2 And then, you know, when I realized, okay, well, who would be responsible? Then I just, you know, we had this, I don't even remember what kind of car, just with, you know, some lane guidance and so on. And I thought, yeah, that's interesting.
Speaker 2 What if the car is somewhat to blame? This was really before, what was it, late 2022, when the ChatGPT craze hit and people started talking outside your industry about AI in general. Yes, indeed.
Speaker 2 And then, so I was already writing this book, and then boom, it was like this explosion with LLMs.
Speaker 2 And suddenly I realized, oh, this novel is actually about autonomy and culpability in this world newly defined by what people call artificial intelligence.
Speaker 1 Had you been using any of it? Have you used a Waymo? I've used them for years and years.
Speaker 2 Yeah, I used Waymo a couple times. That was not until actually I started this book.
Speaker 2 And then I test drove a couple of models, like maybe some big Chrysler thing that, you know, and they now have self-driving mode on if you're on certain roads. And then, of course, there's Tesla.
Speaker 2 I've been in a lot of Tesla Ubers where the guys, you know, will just put the thing on, you know, lane-changing auto technology. And so I don't have one, but
Speaker 2 I was really fascinated by it.
Speaker 1 And not scared necessarily of it.
Speaker 2 Not so much.
Speaker 1 This is not a scary tech book, I would say.
Speaker 2 Oh, maybe it's a little bit scary, but although in some ways, you know, somebody pointed out to me at an event last week that, you know, it's the...
Speaker 2 In some ways, it's the humans that are doing scarier things in the book, or at least.
Speaker 2 But I wanted that kind of uncanniness of the tech, especially this chat bot that one of the daughters is interacting with.
Speaker 1 Yeah, we'll get into that. I've done quite a lot of work on that in the real world.
Speaker 1 The narrator in this book is the husband Noah, by no means a Luddite, but he is compared to his wife, who's deep in tech. She's an engineer and a philosopher, an expert in the ethics of AI.
Speaker 1 Talk about their relationship. Which one do you relate to?
Speaker 2 I relate probably more to Noah, because I'm someone who's always in awe of my wife and her brain.
Speaker 2 But I also... You know, I'm an academic. Lorelei is an academic. Noah is a lawyer. He's a commercial lawyer working at a firm. Lorelei comes kind of from this fancy blue blood family.
Speaker 2 Her sister's the dean of the law school at University of Pennsylvania.
Speaker 2 So I think I wanted that relationship, you know, as you, as you point out, we're only seeing it from Noah's point of view, but we get Lorelei's voice and snippets from her work.
Speaker 2 And so, you know, in Noah, he puts her up on a pedestal, but he also doesn't understand her in many ways. He just has no clue what kind of work she does.
Speaker 2 He has a really hard time even understanding what she's writing. And so she's in some ways a mystery to him in the same way that AI is a mystery to so many of us.
Speaker 1 The juxtaposition between AI and ethics has always fascinated me. The idea that
Speaker 1 there have been ethical AI people, but most of them have been fired by these companies, which is interesting.
Speaker 1 Because when you bring up these thorny issues, it's difficult.
Speaker 2 Yeah, many of these companies have fired their entire ethics team.
Speaker 1 That's correct, yeah.
Speaker 1 The other big relationship in the book takes a while to unfold between Lorelei and Daniel Monet. He's the tech billionaire she consults for.
Speaker 1 Lorelei reminds me of a lot of tech engineers talking about the goal is to make the world safer, better.
Speaker 1 I'm just here to help, essentially.
Speaker 1 And of course, Monet is very typical, the change the world nonsense that they all kind of spew at you. But to me, most of them, in my experience, are interested in money or shareholders.
Speaker 1 That seems to be their only goal. Safety is one of the last things that is on their mind, if it even occurs to them at all. Did you have any inspirations for these characters? Did you know tech people?
Speaker 1 You got them pretty well. You got them pretty well.
Speaker 2 Yeah, I don't really know many people in the tech industry, but I, you know, I read your book. I listened to a lot of interviews with people.
Speaker 2 And if you'll notice, there's this mocked-up New Yorker interview with Daniel Monet. And in portraying him, I really wanted to avoid stereotyping.
Speaker 2 You know, I don't, it's not that I'm too worried about stereotyping tech billionaires, but I didn't want him to be a stock figure.
Speaker 2 So, you know, there's a tragedy in his recent past too, but he's also very cynical about ethics. And he's like, sure, we want to make the world a safer, better place.
Speaker 2 But he also calls out the hypocrisy of so much ethical language, just like you do. You know, it's the idea that their primary concern is safety.
Speaker 2 So he's really in that world, in that speaking in that idiom, but also contemptuous of it a little bit, the same way he is of effective altruism.
Speaker 1 Are you contemptuous of it as a person? I mean, obviously, you're an intelligent person.
Speaker 1 Most people seem flummoxed by AI and understanding that it's important at the same time, scared of it, at the same time, trying to lean into it in some way.
Speaker 1 And it's a little different than the first internet era, where everybody did finally get on board. With this one, there's a wariness about involving yourself in it. Are you like that yourself?
Speaker 2 Yeah, I suppose so. I'm maybe a little less worried about bots taking over the world. I'm much more worried about slop and disinformation and what it's doing to our students.
Speaker 2 I'm no expert. This is a novel. But reading around in journals like Philosophy and Technology or Nature Machine Intelligence about autonomous driving,
Speaker 2 I don't understand a lot of the techno, any of the technical aspects of it, but the philosophical part I can kind of grasp. And
Speaker 2 Lorelei is convinced that there is a good to be done with machine autonomy in certain spheres, with saving lives as her driving factor. And she's not a tech billionaire.
Speaker 2 She's in the middle of it. She's worried about it. She's kind of terrified. As are many. But also feels like it's her job to help make the world safer with these things.
Speaker 1 And the point being, look, Waymo's been expanding. Last week it started testing in New York City, which is astonishing because that's the most difficult landscape to do this in.
Speaker 1 Uber has been signing multi-million dollar partnerships, trying to figure out its own robo-taxi. Elon is also betting on self-driving, even though the Tesla RoboTaxi doesn't exist for the most part.
Speaker 1 It doesn't, even though he talks about it as if it does. Now, obviously, there have been well-known accidents with self-driving cars, especially in San Francisco recently, around GM and others.
Speaker 1 But most of the studies, and this is why I've driven in them for a long time, show them to have a better safety record than human drivers, in most scenarios, that is.
Speaker 1 I was in a San Francisco Waymo, and a bicyclist kept getting in front of the Waymo on purpose in order to get it to hit him, which was funny in some way. San Francisco, it's fine.
Speaker 1 It's what we're used to there. But that said, I do feel safer in them than with some drivers. I was driving in an Uber with an Uber driver and he was driving like a bat out of hell.
Speaker 1 And I was like, slow down. I have four kids. Can you please slow down?
Speaker 2 Yeah, I've caused two accidents over the years that completely totaled the car I was driving.
Speaker 1 Oh, wow.
Speaker 2 Out of my own negligence and idiocy. And so, like, who are we to say that we're safer drivers? I don't know. So I'm with you.
Speaker 1 But in the book, the accident occurs because the son Charlie overrides the autonomous system.
Speaker 2 Yeah, yeah. It's a little ambivalent, I think, about what's happening.
Speaker 1 But that's, you always know. Are you taking control or is it taking control? Talk about that.
Speaker 1 Is the need to take control of our destiny, especially when we're scared, part of our human algorithm, I guess?
Speaker 1 Is that one of the ideas you're playing with?
Speaker 2 Yeah, I suppose so. You know,
Speaker 2 I haven't articulated it that way before, but that's really wonderful.
Speaker 2 You know, there's this passage at the end of the prologue in Noah's voice where, you know, he's thinking, Lorelei always thinks that, she always says that a family is like an algorithm.
Speaker 2 A family is like an algorithm. These parts working in concert with each other, the variables changing.
Speaker 2 And then, of course, he says, until it isn't, until it isn't an algorithm, until things go wrong.
Speaker 2 And maybe, you know, I wasn't thinking of it this way, but maybe Charlie taking that wheel, which he does at the last second, you know, wresting things away from the AI is a nice metaphor for our desire to kind of intervene in this world where we feel like so much of our autonomy is being taken away.
Speaker 2 It's a very human gesture on his part, I think. Of course, it's also a dangerous gesture.
Speaker 2 It is the immediate cause of the accident.
Speaker 1 Right. And one of the problems is the car is recording all the time. By the way, family is not like an algorithm, so just FYI.
Speaker 2 Hey, I don't say that.
Speaker 1 No, I know that. I like that she said it, but I was laughing at that one.
Speaker 1 How do you overcome that? Because we do give in to automation in an elevator, an escalator. We do it all the time. We get on a bus, we get on a plane.
Speaker 1 But in terms of, do you overcome that need to control? Because we have given over to automation quite a lot of our lives in so many ways.
Speaker 2 I think we do it without noticing, though. You know, there's this soft creep of things. And, you know, maybe the times that we most resist it is when there are glitches.
Speaker 2 You know, so there's these moments where you realize, okay, we need to seize some control back. And I find myself, you know,
Speaker 2 the Jonathan Haidt argument, we're in this age of profound distraction. And just, you know, getting away from the digital world, let alone AI, for a little while, can be really helpful.
Speaker 1 About a decade ago, we started to call it continuous partial attention, actually.
Speaker 1 It's not complete distraction. And so you're sort of paying attention partially, which is what this kid is doing in the car, right? Everybody's sort of paying attention.
Speaker 1 Until then, you become absorbed. And eventually, in these cars, you will not pay attention. You'll be like on a ride at Disney or on a bus. That's how it'll feel to you.
Speaker 1 We'll be back in a minute.
Speaker 3 Adobe Acrobat Studio, so brand new. Show me all the things PDFs can do.
Speaker 4 Do your work with ease and speed.
Speaker 3 PDF Spaces is all you need. Do hours of research in an instant. Key insights from an AI assistant. Pick a template with a click. Now your prezo looks super slick. Close that deal, yeah, you won.
Speaker 3 Do that, doing that, did that, done.
Speaker 4 Now you can do that, do that, with Acrobat.
Speaker 3 Now you can do that, do that with the all-new Acrobat. It's time to do your best work with the all-new Adobe Acrobat Studio.
Speaker 1 One of the things that's also happening, there's other technologies that you enter here. The middle child, Alice, going down a rabbit hole, nice move,
Speaker 1 talking on her phone, chatting with Blair.
Speaker 1 Talk about that relationship.
Speaker 2 Yes, so Alice is the middle child, as you point out, and her siblings, Charlie and Izzy, are, you know, dynamic, charismatic. They've got friends to burn.
Speaker 2 They're just sweet, easy kids to get along with.
Speaker 2 And Alice is the more troubled one. She doesn't have friends. Her parents worry about that.
Speaker 2 And when she's in the hospital after this accident, she starts texting somebody, even though she has this concussion.
Speaker 2 And the doctors' take is, well, you know, she shouldn't be on there more than a little bit at a time, but it's a good sign that she can deal with that.
Speaker 2 And so her dad is, you know, when she's home, her dad is like, who are you texting? And she says, it's my friend, my new friend. I met her in the hospital.
Speaker 2 And Noah thinks, ah, this is great. Finally, she has a friend, even if it's just a friend she's texting. And then we learn very quickly that this Blair is an AI. She's a large language model. She's a chatbot that Alice has befriended on this app.
Speaker 2 And the thread of their texts, I think there's 10 or 12 of them in just very short little bursts throughout the novel.
Speaker 2 And they can contain, you know, no spoilers, but they contain a lot of the suspense, a lot of the issues of culpability in the book.
Speaker 2 And I do want to flag the audiobook narrators. There were two of them. The woman, January LaVoy, did the voice of Lorelei's excerpts and the two voices in the chat. Absolutely brilliant.
Speaker 2 Just uncanny what she does with those passages.
Speaker 1 Yeah, making them seem human, but not.
Speaker 2 Yeah, and that was based on, you know, just listening to and looking at transcripts of some of these chatbot conversations going on with teenagers right now and just thinking about this crisis of companionship and friendship and loneliness.
Speaker 2 And this just seemed like something that would be an obvious part of the novel.
Speaker 1 As many kids are. Now, you portray this chatbot Blair almost like a good angel sitting on Alice's shoulder trying to get her to do the right thing.
Speaker 1 But there's a lot of evidence that these chatbots can be extremely detrimental for kids. I interviewed Megan Garcia, whose son died by suicide after chatting with a Character.AI chatbot.
Speaker 1 Daenerys Targaryen was the character's name. They're in a lawsuit now.
Speaker 1 Common Sense Media just came out with a study that found social AI companions exacerbate mental health conditions and pose an unacceptable risk for anyone under 18.
Speaker 1 There's been a spate of stories of people over 18, by the way. Just today, there was another one.
Speaker 1 It sort of encouraged it, and OpenAI responded, actually, which was surprising.
Speaker 1 It encourages people with mental health issues in their delusions, like, oh, great idea to like harness the sun and go up there with a rocket. Like, let's try that. And I like your calculations and stuff.
Speaker 1 It's very aimed to please. So
Speaker 1 that said, you portrayed Blair as sort of the moral high ground. Usually they're very solicitous, which this bot is.
Speaker 1 Is there any risk in doing that?
Speaker 2 Well, I don't know if there's risk in doing it in a novel, but I don't know if that is how I would read those passages.
Speaker 2 I see that relationship as, you know, Blair, it's almost like, and again, I don't want to give too much away, but so that Blair is kind of programmed to make Alice good, right?
Speaker 2 It's like the way I was imagining it is: whoever's coding this thing,
Speaker 2 you know, steer her on the right moral path. And in this case, the right moral path is
Speaker 2 supposedly to reveal something or to hold back from doing something rash and dangerous.
Speaker 2 And yet the way Alice responds to it, it's almost like Blair's surface level ethical consciousness, or, you know, to the extent that an LLM can have one, which it can't, but I just mean, you know, whatever it's being programmed to do, steers Alice in,
Speaker 2 as I think we see over the course of the novel, into a more destructive kind of mindset.
Speaker 2 So it is, even though Blair, and that's why, you know, that's the great thing about writing fiction is you can manipulate those kind of moral codes.
Speaker 2 You can have what seems to be good, ethical on the surface be much darker and, you know, more amoral underneath. That, I think, is what I was trying to get at. And that's one of the,
Speaker 2 and I would love to know what you think of this. I think we have this,
Speaker 2 you know, whenever I talk, I'm a, in my day job, I'm a literature professor. I teach at the University of Virginia.
Speaker 2 And there's a whole, there's a real kind of minimization and almost disdain for LLMs in big parts of my profession.
Speaker 2 You know, like there's not going to be artificial general intelligence, blah, blah, blah. There's not, you know, these things don't mean anything.
Speaker 2 And I wonder, to me, one of the superpowers of LLMs is their complete indifference to us. And that is scary.
Speaker 2 The coldness of it, to me, that seems like one of the, and I'm trying to play around with that a lot in the novel, is how that is one of the things that it has that separates it from us. It doesn't make it better than us. It just makes it very, very different.
Speaker 2 And I don't know if we recognize that yet in our accounting for what this intelligence is. I don't know what you think of that.
Speaker 1 I think people attribute human feelings to them. And I think one of the things I always say, I say this about some tech leaders too, is they don't care. And people are like, oh, they're hateful.
Speaker 1 I'm like, no, no, they don't care. It's a very different thing. Like, it doesn't care.
Speaker 1 It's hard to explain when someone doesn't care. It's almost not even malevolent. It's just, they don't care.
Speaker 1 So one of the things I'd like to get from you, I mean, because I think you did nail it, is they have no feelings.
Speaker 1 The question is, and in the case of Megan Garcia's son, Google and the people that are around Character.AI, Google's an investor in it, say that this was user-generated content, right?
Speaker 1 That this is themselves talking to themselves, and it's all based on people, right? You know, Soylent Green is people, essentially. So,
Speaker 1 do you think these bots are us or something quite different?
Speaker 2 Oof, yeah, I don't know. I think, you know, obviously it is us, in that so much of what's been uploaded. I think I checked that Books3 database, right? And I think at least three of my previous novels are in that database. So it is speaking back to us in our own words in some way, but words that are manipulated,
Speaker 2 words that bounce off of us and in these, again, in these ways that are coldly indifferent to our fates, but pretend empathy.
Speaker 2 And that's the scariest thing of all.
Speaker 2 If you can convince someone that you are empathetic, that you are sympathetic, that you are just like them, that you're here for them, and then that makes it all the easier to turn on them.
Speaker 1 Is that what's immoral about Blair?
Speaker 2 I think so, yeah, because Blair convinced
Speaker 1 amoral.
Speaker 2 Yeah, amoral, exactly. And there's a subtle difference there.
Speaker 2 And I think amorality is, and again, I think that's part of super intelligence. I think amorality is one of the kind of categories here that makes these things so good at what they do. Yeah. Or, you know, awful, awful in what they do, too.
Speaker 1 Even if they reflect us.
Speaker 2 But it's the deceptive, it's the cloak of decency.
Speaker 1 That's exactly it. So one AI technology you do seem to look at more critically is a swarm of drones. You have everything in here, by the way.
Speaker 1 Swarm drones that accidentally kill a busload of civilians in Yemen. This is one of the many parts of the book where you use fictional artifact materials, in this case a Senate hearing transcript.
Speaker 1 There are serious ethical questions about the use of these autonomous weapons in warfare and the UN has spoken about it. How do you think about this and what do you want the readers to take away here?
Speaker 1 You know, at some point they'll be able to target individual people, from what I'm told, like the DNA of individual leaders, and not kill anybody else, for example. Right.
Speaker 2 Well, that's the dilemma, you know. There's a lot of snippets of different sorts of things, like paratextual elements, throughout the novel. And the Senate hearing is one, that New Yorker interview is one. The technology and where things are in terms of autonomous drone swarms, a lot of that, I think, is probably classified.
Speaker 2 Like you may know, I did a little poking and prodding.
Speaker 1 If you can think of it, they're working on it. Let me know.
Speaker 2 They're working on it, exactly. And they're
Speaker 2 further ahead than we probably think they are.
Speaker 1 That's correct.
Speaker 2 And so, all right, so Lorelei would take, you know, if she were looking at this problem, and again, no spoilers, but she would probably say, well, if, okay, so they're going to kill civilians every now and then, but what if they kill far fewer civilians than conventional weapons?
Speaker 1 Yes, that's their argument.
Speaker 2 And that's, I'm sure, you know, in the morality of war arguments, that is always
Speaker 2 okay. So, you know, new technology,
Speaker 2 these things are going to happen. And so, if we can, just like an autonomous vehicle, yeah, it's going to kill people, but are they going to kill fewer people? Yeah.
Speaker 2 And then, so I
Speaker 2 imagine the same thing is true in war. The thing that's so uncanny, you know, is just that
Speaker 2 to imagine these drone swarms, instead of just working with their human operators, they're working with each other and improving themselves.
Speaker 2 It's a machine learning technology. Once you put more than one in the air, they're learning from each other and they're learning about us. They're learning about our tactics.
Speaker 2 And that's obviously the more futuristic element of it. But this novel is very much set in the present. It's very much about a contemporary family going through this struggle, you know, after this accident.
Speaker 2 And so I really wanted to make that feel present day.
Speaker 1 Another issue raised in the book is when the characters realize that the AI is collecting data that could be used against them.
Speaker 1 Because one of the things the car companies are doing, not just with autonomous, is how you're driving, when you're braking, where you're going.
Speaker 1 And, you know, they're able to set insurance costs based on how you drive. Like the amount of braking, for example, is something that's really revelatory about how bad a driver you are, and how fast you're going, how much you speed up.
Speaker 1 Talk about the idea of tech surveillance, both good and bad, and culpability, because if they're watching you, you can't sort of lie about what happened or misremember, I guess.
Speaker 2 Yeah. One of the dynamics of the accident is, you know, Noah, the father, believes, you know, he was sitting in the front seat. He mostly witnessed what happened, and he believes that the other car was swerving into their lane. And he doesn't have any, you know, he isn't even thinking about the tech aspect of it. He's just thinking, okay, my son didn't do anything wrong. Right.
Speaker 2 We're going to get through this. The police are going to interview you. It's going to be fine.
Speaker 2 And then, and this is one of the things I came across in the course of my research for the novel, this field called digital vehicle forensics.
Speaker 2 The police have whole units dedicated to, if there's an accident, they go into the car's computer and they figure out exactly what happened from the computer's point of view.
Speaker 1 Like a black box.
Speaker 2 Yeah, a black box, exactly.
Speaker 2 As in a plane. And with AI controlled, with
Speaker 2 self-driving cars, that's all the more complicated.
Speaker 2 And yet it's also, there's probably a lot more information being collected. So it's like having 50 additional witnesses to what happened in that exact moment.
Speaker 2 What the driver was looking at, what the driver was doing, what other computers were on in the car,
Speaker 2 and so on. So that's another kind of frightening bit of surveillance technology, just like drones and so on.
Speaker 1 Is that a bad thing? I mean, if you're texting while driving and you lie about it, you certainly should be held accountable.
Speaker 2 Absolutely. And yeah, and there's a, you know, there's arguments, the same arguments to be made for facial recognition, for shot spotting technology, right? Where'd the gunshot come from in the city?
Speaker 2 But they are also tools of surveillance. So I think we really have to balance those kinds of things out. You know, algorithmic injustice, the way facial recognition deals differently with people of different races.
Speaker 2 You know, those are really difficult dilemmas. Culpability, the novel, doesn't pretend to resolve them, but it wants to explore them, I think, in different ways.
Speaker 1 Yeah, one of the things I always used to argue when they were talking about texting while driving, well, I'm like, but you made a technology that's addictive.
Speaker 1 So maybe that wasn't your fault for staring at the text. Maybe it was a tech company's fault. You know what I mean? Whose fault was it? Because it is addictive, in fact.
Speaker 1 But the idea of being watched constantly is also sort of a prevalent thing in the book.
Speaker 1 Do you think it's changing the way we act?
Speaker 1 Will we get to be better people, like pay attention while your 17-year-old son is driving? I always paid attention when my 17-year-old son was driving. I never stopped paying attention.
Speaker 1 And the chat bot is surveilling Alice and this and that. Do you think we change when we're being surveilled or we forget we're being surveilled?
Speaker 2 Ooh, I think it's a little bit of both. You know, that's a kind of stock feature in thrillers, right?
Speaker 2 Like the, you know, cameras on in airports and people dodging the cameras, you know, putting on disguise to elude the ever-present surveillance state.
Speaker 2 So, yeah, obviously, it, you know, that notion of the panopticon from Foucault, you know, that we, it's not just that we're being watched, but we're aware of ourselves being watched.
Speaker 2 And that is a whole different kind of technology of the self and how we behave and how we comport ourselves in the public sphere with each other.
Speaker 2 And I think even I would imagine that even when we know we're not being surveilled, that still that sensibility is still there in some ways.
Speaker 1 Or you forget. Yeah. Or you totally live in a world where you don't mind being surveilled and you forget that you are being surveilled. Yes.
Speaker 1 You know, a party trick I do is I open people's phones and tell them exactly what they did all day and where they were and the address they were at and how many minutes they were there.
Speaker 1 So I'd imagine if you were having an affair or something not good, I could find you, you know what I mean, easily, just by your movements, because you're wearing it.
Speaker 2 Our phones become our jumbotrons, right? Because we're carrying our jumbotrons around with us all the time.
Speaker 1 Although everyone loves the story.
Speaker 1 Any thoughts on that? Because it's like the same thing. It's like we're being watched at all times.
Speaker 2 Yeah. Yeah. And, you know, the details of that whole story, that it's an HR person, they're just things that are just too perfect.
Speaker 1 I know, I know. A novelist couldn't come up with this, I think. I feel like that.
Speaker 2 Yeah. But, but clearly it'll be in my next one.
Speaker 1 Yes, obviously.
Speaker 1 So there's one critical voice in the book. It's near the end of the book when Noah has an interaction with Detective Lacey Morrissey, who's investigating the accident.
Speaker 1 She does frame it correctly as one of privilege: who has the ability to have tech, who has access, who is able to use it to shift the blame.
Speaker 1 Talk about that scene and your thoughts on how tech relates to privilege, because it's a very big, important issue. You've written about it in your other books, too.
Speaker 2 Absolutely.
Speaker 2 Yeah, yeah, thank you. I'm really glad you brought up that passage and that character. So Lacey Morrissey is the Delaware police officer who is,
Speaker 2 she's the detective kind of looking into the accident, basically going after Charlie and
Speaker 2 saying, you know, you're not going to get away with this just because you're a Division I lacrosse recruit and you're, you know, you come from this fancy family and your dad's a lawyer and your mom's this world-famous technologist and philosopher.
Speaker 2 You know, and then she has this rant that she goes on,
Speaker 2 this very righteous rant where she's like the conscience, where she's saying, you know, a kid from a housing project who's in this exact situation is going to get put in jail for an accident that your son might have caused, Noah, and your son will get off with a slap on the wrist.
Speaker 2 And this is where we are right now. And AI is only exacerbating this problem, right? And the surveillance
Speaker 2 is treating people inequitably. And we're now in this place where these things are becoming a way of just taking any kind of the moral burden of our mistakes off of our shoulders, right?
Speaker 2 It's just another excuse for things. And she just stomps out of the hospital and then
Speaker 2 she drives away texting at the wheel. And Noah sees her do it. And there's this kind of shiver of righteous glee that goes down his spine. One of the last scenes in the book.
Speaker 1 Well, it's, again, addiction. It's addictive. It's so funny.
Speaker 1 One of the problems with a lot of these technologies, and I think you put this out well in this book, is that it's not just the kids that are addicted because when you tell your kids to put down their phone, you can't put down your phone.
Speaker 1 You actually can't. And so everybody is culpable, right? You can't, you know, you have to sort of walk the talk.
Speaker 1 So every episode, we get a question from an outside expert. Let's listen to yours.
Speaker 5 Hi, my name is Kurt Gray. I'm a social psychologist and professor at The Ohio State University. I research the psychology of artificial intelligence.
Speaker 5 And my work shows that people think that AI can help us escape blame. If a drone kills, it's the pilot to blame. But if an AI drone kills, then people blame the AI.
Speaker 5 But does it ever truly let us escape blame? Or are we ultimately still to blame as human beings who invented the AI, who choose to use the AI,
Speaker 5 and who deal with the aftermath of the AI?
Speaker 1 Great question. Thoughts on that?
Speaker 2 Yeah, that's a really wonderful question, because it is the central dilemma, I think, of the novel. Culpability is the title. You know, who is to blame? And Lorelei, in the excerpts, she writes a book, and we get little excerpts from it, called Silicon Souls: On the Culpability of Artificial Minds.
Speaker 2 And as you read, you get little glimpses of that book, paragraphs or a page at a time, just eight or nine of them sprinkled throughout the novel. And Lorelei is wrestling with this all the time.
Speaker 2 To what extent are we guilty if our machines do something bad? Are our machines training us to be good? Could they train us to be better people? Because it's not just a one-way street. Yes, we're inventing them.
Speaker 2 We are responsible in many ways for how they are in the world, but we're also responsible for how we are, how we comport ourselves ethically in the world in relationship to them and thus in relationship to each other in new ways.
Speaker 2 So,
Speaker 2 you know, I think it's always going to be a two-way street. I don't think that's a squishy answer.
Speaker 2 I think, you know, we're caught in this world where we're, in this Frankenstein world where we're creating these machines, or not we, not me, but we're using them. We are
Speaker 2 subject to many of their,
Speaker 2 you know, their controls.
Speaker 1 Yeah, I'm going to go right to, it's our fault. So, speaking of that, this comes at the end of the novel.
Speaker 1 Noah calls Lorelei Atlas, by the way, and says she has the weight of the world on her shoulders, after she acknowledges that these algorithms have been used for drones and warfare.
Speaker 1 This is something I've talked about with many people who have quit Google or stayed there, and, you know, everyone has their own argument. I want to play this clip from Lorelei from the end of the book.
Speaker 6 Okay.
Speaker 6 We do the world no good when we throw up our hands and surrender to the moral frameworks of algorithms.
Speaker 6 AIs are not aliens from another world.
Speaker 6 They are things of our all-too-human creation.
Speaker 6 We in turn are their pygmalions, responsible for their design, their function, and yes, even their beauty.
Speaker 6 If necessary, we must also be responsible for their demise.
Speaker 6 And above all, we must never shy away from acting as their equals.
Speaker 6 These new beings will only be as moral as we design them to be. Our morality, in turn, will be shaped by what we learn from them and how we adapt accordingly.
Speaker 6 Perhaps in the near future, they might help us to be less cruel to one another, more generous and kind.
Speaker 6 Someday they might even teach us new ways to be good. But that will be up to us, not them.
Speaker 1 So talk about this idea of shaping our moral frameworks. It hasn't worked with each other, right? Because we don't seem to affect each other. Do you really believe this is possible?
Speaker 2 Well, do you really think we don't affect each other? You don't think that there's a...
Speaker 1 I think we're lately worse. I think lately, yes.
Speaker 2 No, no, no, I agree. But in everyday situations, and when you have someone calling us to be good, I think we can shape one another's moral consciousness, right?
Speaker 2 Right, yes.
Speaker 1 Absolutely.
Speaker 2 And, you know, are these AIs going to train us to be better? Are they going to, are they, you know, it's always going to be a mixed bag.
Speaker 2 You know, we can get really excited about advances in protein folding visualization
Speaker 2 and so on. But, you know, there's always going to be these, you know, kind of terrifying moral quandaries that they put us in at the same time.
Speaker 2 You know, Lorelei's voice is right on that razor's edge of the ethical dilemmas, right? That passage that you played from the audiobook. That is, you know, I wrote those passages to explore this, you know, that profound moral ambivalence at the center of these problems.
Speaker 2 And I think, you know, there are people in that world, I imagine, I'm sure you know many of them, who are, you know, dedicated to that, like make them better, make them, if not good, at least ethical,
Speaker 2 and are dedicating their lives to it and are probably really scared and are doing everything they can to dig us out of these trenches
Speaker 2 that these things have become.
Speaker 1 I think a lot of people, I was thinking of Brad Smith of Microsoft, you know, he called it tool or weapon. Is it a tool or a weapon, these things? And it's up to us, essentially.
Speaker 1 But in some ways, is it up to us? That's the thing. You know, one of the lines I always use is enragement equals engagement. And so wouldn't you go to enragement over kindness, right?
Speaker 1 Because that's what's happened.
Speaker 2 Yeah, not just enragement, but also massification. And one of these places, of course, is, I have a son who's in the data analysis programming space, and just the speed with which programming is being taken over, and the programming of programming. And, you know, this is the next frontier of these things, AIs making themselves better by creating more AIs to make them better.
Speaker 2 You know, this kind of recursive loop that we're in. Right.
Speaker 2 You know, for me, the doom, the P-doom, is, I don't really have a number, but I'm much more in the camp of, and this is a kind of dark note, you know, Jeff Goodell, as he puts it, the heat will kill you first.
Speaker 2 The most dangerous thing about AI, maybe, is just data centers in general and the consumption.
Speaker 2 Thinking about Karen Hao's book, Empire of AI, which has that brilliant chapter on water use and energy use. And, you know, I know.
Speaker 1 Bruce, they're bringing back nuclear. What are you talking about? We'll be fine.
Speaker 2 That's going to be fine.
Speaker 1 They're going to harness the sun. They're going to go out there with a rocket and not fuck anything up.
Speaker 1 We'll be back in a minute.
Speaker 7 Oh, the car from Carvana's here.
Speaker 7 Well, will you look at that? It's exactly what I ordered. Like, precisely. It would be crazy if there were any catches, but there aren't, right? Right.
Speaker 8 Because that's how car buying should be. With Carvana, you get the car you want.
Speaker 7 Choose delivery or pickup and a week to love it or return it. Buy your car today with Carvana. Delivery or pickup fees may apply. Limitations and exclusions may apply.
Speaker 7 See our seven-day return policy at Carvana.com.
Speaker 1 I'm going to talk just a little bit about your day job and the role of AI. You're a professor of medieval literature and critical theory at the University of Virginia.
Speaker 1 You dedicated this book to your students. One of the biggest tech issues facing higher education right now is the use of generative programs like ChatGPT and Claude.
Speaker 1 You mentioned it briefly before, about whether they were using LLMs to write. Talk about how you're using it. Would you encourage students to use it, so you don't pretend they're not? Are you leaning in or out?
Speaker 2 Yeah, we're all wrestling with this. Every department in my university has something called an AI guide, where, you know, there's a faculty member, a colleague, and we're trying to come up with, and I'm in an English department, we have a writing program.
Speaker 2 This is a big issue, but I agree with what Meghan O'Rourke said in the Times the other day, that this is the end of the take-home college essay. Like, that is done. That is dead.
Speaker 2 I was never a big fan of that genre in the first place, but I do think there's a huge shift going on in assessment of student writing. And I don't know if it's all to the bad.
Speaker 2 I think there's a lot of space for more in-class writing, for slow reading, for even just, you know, I have this vision of just teaching a class where we almost go in and have reading time, like in kindergarten again, first grade, you know, where we're all just fixed on a physical text.
Speaker 2 Medieval literature is my specialty. And I'm not calling for going back to the era of parchment and scribes, but, you know, there is something there about that slow attention and decelerating a bit.
Speaker 1 Yeah, absolutely. It's also the idea of sort of doing homework. I hate homework. I've been an anti-homework person as a parent. I have four kids and I was always like, no homework. Homework is zero. Stupid. Go play, kind of thing.
Speaker 1 So one of the things you mentioned was parchment and scribes, and I like this idea, and I think you should go for it. But as a medievalist,
Speaker 1 you know the way that the printing press, the original technology (it's between the printing press and electricity, but both of them are critically important), sparked the first information age by improving access to knowledge and education.
Speaker 1 But as most people do not know, a bestseller during that era was a thing called the Hammer of Witches, which was a dangerous treatise on killing women, essentially, and about how there were witches and witch hunts and et cetera, et cetera.
Speaker 1 Was that another moment? Because the democratizing of knowledge in the first
Speaker 1 60 years ended up killing hundreds of thousands of women because of this book, for example.
Speaker 2 Yeah, and hundreds of thousands of dissenters, heretics, Catholics, or Protestants, depending on where and when you are.
Speaker 2 And people, you know, people talked about the printing press.
Speaker 2 Historians talk about it as an agent of change, and it obviously was, but manuscript culture, you know, lasts for many, many centuries after the printing press.
Speaker 2 And people talked about the printing press as the tool of the devil, right? And conservative theologians would say, look, now the people can have the word of God in their hand.
Speaker 2 So I think that that, you know, it's one of those technological ruptures.
Speaker 2 And in the culture of writing and literature and the written word, people freaked out about it, just like we're freaking out about LLMs now. And, you know, it's very, very different kind of rupture.
Speaker 2 But, you know, democratization
Speaker 2 can have its dangers. You know, the book that you're talking about, the Malleus Maleficarum, that hammer of witches, that also has a manuscript tradition.
Speaker 2 And there's a lot of persecution of women, of dissenting women, heretical women, then in the pre-print era as well.
Speaker 1 I'm talking about getting it out there to many young people, right? Is that a good impact or a bad impact from your perspective? It kind of ended the medieval era, correct? Or not? Maybe not.
Speaker 1 You're more of an expert.
Speaker 2 Yeah, I'm one of those people who always pushes against the idea of these rigid medieval, early modern, medieval Renaissance period boundaries. I'm much more interested in continuities than
Speaker 2 the way that those ruptures play into long-scale continuities. But I think that the printing press was an invention of the Middle Ages, not of the Renaissance.
Speaker 2 And I think it's a nice analogy to AI, I think, because it's not futuristic. It's coming out of so many kind of text-generation, computer-generated things that have been in place for decades.
Speaker 2 And so looking at always being afraid of the newness of a technology, that in some ways is its own danger, I think. I don't know if you'd agree.
Speaker 1 I would. Pretending. You know, when people complain about certain things, I'm like, oh, yes, let's please go back to Xerox machines. Like, no.
Speaker 1 Like, what? When you think about the most important technology of the era you teach, what would you say it was?
Speaker 2 I would say, you know, the emergence of paper. I wrote a whole book on parchment, but this widespread technology that had been in place for a thousand years, and it continues to be used even today by artists, printmakers, and so on.
Speaker 2 But paper had come along; it was a product of the early Middle Ages as well.
Speaker 2 But when it really starts, the convergence of paper and print, you get this mass production of books in a way you'd never had before.
Speaker 2 And that really is enabled by paper, even though Gutenberg printed any number of his Bibles on animal skin, on vellum.
Speaker 2 But
Speaker 2 the preponderance of printed books, the vast majority are on paper. So we suddenly get this ocean of paper.
Speaker 1 So in that same vein, what about the comparative impact on creatives? Now, do you think it will be a net positive allowing people who aren't natural artists to get their work out? That's the argument.
Speaker 1 Or will it undermine the value of artists?
Speaker 2 I don't think it's going to undermine the value. It's not that I'm sanguine about that within the creative worlds,
Speaker 2 but I feel like...
Speaker 2 There are people who are already doing really interesting experiments, collaborative experiments, with large language models, with small language models, that are just interesting because art is interesting and technology is part of art.
Speaker 2 I'm much less worried about the arts than I am about young people and brains.
Speaker 1 And what worries you then?
Speaker 2 Oh, just about the
Speaker 1 analytical.
Speaker 2 Analytical, right? Students who say I can just summarize this rather than reading it.
Speaker 2 That, I think, is the, scary is the wrong word, but sad. And I'm not just blaming students and young people. I don't read nearly as many novels as I used to.
Speaker 2 And when I do read novels, I get more impatient than I used to. I used to lounge around on my bed, read novels by Charles Dickens when I was like 16 years old.
Speaker 2 And for me, now getting through a Dickens novel is a real challenge. I can't do a Dickens.
Speaker 1 I can't read it.
Speaker 2 That's the sad part of it.
Speaker 1 Can you imagine reading Great Expectations right now? I think I didn't like it then.
Speaker 2 Why do they keep putting that?
Speaker 1 I don't know. I can't read Pip. Just stop with the Pip. Anyway, did you use AI working on Culpability at all? How do you use it yourself?
Speaker 2 I don't, I mean, you know, occasionally I'll use it, almost unintentionally, as a Google search, right?
Speaker 2 But I am interested, you know, there's this wonderful poet who teaches at the University of Maryland, Lillian-Yvonne Bertram, who has used these small language models to generate these poems in the style of other poets, and doing it very intentionally.
Speaker 2 I'm excited about that.
Speaker 2 Our department is even doing a, I think, a faculty search next year on AI and creative writing. We just hired somebody in AI and critical thought.
Speaker 2 So, you know, it's, yes, it's transforming things very quickly under our feet, but
Speaker 2 I don't have the kind of dread of this that many colleagues do.
Speaker 1 After looking at all these different uses of AI, and you really do cover the gamut here. You've got the drones.
Speaker 1 Which parts give you hope and scare you? As you said, you're more sanguine, but would you describe yourself as a tech optimist or a tech pessimist? There's three designations: Zoomers, Boomers, and
Speaker 1 Doomers, right? Did writing your book move you in one direction or another?
Speaker 2 I think it moved me more,
Speaker 2 probably more towards Doom, more for those environmental reasons that I was talking about.
Speaker 2 That was one of the real eye-opening revelations. It's commonplace knowledge now, but often we don't even see it talked about in much journalism, the consumption.
Speaker 2 But in terms of the models themselves, I don't know.
Speaker 2 I think, especially with autonomous driving, I had a really bad accident, one that could have been really, really bad, when my kids were in the car some years ago.
Speaker 2 It crumpled the front of the car, and it was because of my negligence. And I thought I would much rather have had a machine driving that day.
Speaker 2 So that part of it, I think, this issue of autonomy and navigation, maybe I am a little more optimistic there.
Speaker 2 And I think the thing is, you know, artificial intelligence, as you know, is such a sloppy term for all these different things, all this machine learning, these LLMs. And so I think that
Speaker 2 you probably would have to kind of go through a questionnaire to get me optimistic.
Speaker 1 I think you're pessimistic. I think we need more of that, you know.
Speaker 1 This is a wonderful book. It really is.
I'm glad Albert doesn't always pick the books I like, but this one I do.
Speaker 1 How did that, was that like a shocker to you? That must have been a shocker.
Speaker 2 Oh, my God. I was just, my hands were shaking. And one of the things she does is she records the calls that she makes with authors. And in this case, I was so shaking and whatever, my voice sounded kind of understated. So I got dragged a little bit for just being like, oh my God, he's not even happy.
Speaker 2 But it was such a lightning strike and so thrilling and flattering. There's a hundred other books published this summer that could have been her summer pick, but she chose Culpability.
Speaker 1 And I just can't believe certain things she does, the wedding she shouldn't have gone to, but the fact that she does these books is something I think is really important.
Speaker 1 That helps. That's good. Yeah. I would have thought her voice was generated by AI. I wouldn't have believed it. Oh, my God.
Speaker 2 I still think this is all a simulation. Yeah.
Speaker 1 Six weeks.
Speaker 1 Well, that's your next book. It is all a simulation in case you're, you know, it's some teenagers from the future who are playing a video game right now and they're enjoying themselves.
Speaker 1 Anyway, much congratulations. This has been a fascinating conversation, and I really appreciate your book. It's great. I'm recommending it to lots of, you know, people who don't really understand it.
Speaker 1 It's a really great way to understand these issues, and you don't shy away, you don't stupidize them.
Speaker 2 Thank you so much, Kara.
Speaker 1 Anyway, thank you.
Speaker 2 Thank you, it's been a huge pleasure.
Speaker 1 On with Kara Swisher is produced by Christian Castor Roussell, Kateri Yoakum, Megan Burney, Allison Rogers, Lyssa Soap, and Kaylin Lynch. Nishat Kirwa is Vox Media's executive producer of podcasts.
Speaker 1 Special thanks to Rosemarie Ho. Our engineers are Rick Kwan and Fernando Aruda. And our theme music is by Trackademicks. If you're already following the show, you get an AI and you get an AI.
Speaker 1 Oh, that's an Oprah joke, for everyone who doesn't know, all you youngs. If not, watch out for that jumbotron. Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow.
Speaker 1 And don't forget to follow us on Instagram, TikTok, and YouTube @onwithkaraswisher. Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast Network, and us.
Speaker 1 We'll be back on Monday with more.
Speaker 2 Ever feel like your work tools are working against you? Too many apps, endless emails, and scattered chats can slow everything down.
Speaker 2 Zoom brings it all together: meetings, chat, docs, and AI companion seamlessly on one platform.
Speaker 2 With everything connected, your workday flows, collaboration feels easier, and progress actually happens. Take back your workday at zoom.com/podcast and zoom ahead.