Paging Dr. ChatBot

30m
Patients and doctors both are turning to AI for help with diagnosing ailments and managing chronic issues. Should we trust it?

This episode was produced by Hady Mawajdeh, edited by Jenny Lawton, fact-checked by Melissa Hirsch, engineered by Adriene Lilly and Brandon McFarland, and hosted by Jonquilyn Hill. Image credit Vithun Khamsong/Getty Images.

If you have a question, give us a call on 1-800-618-8545 or send us a note here. Listen to Explain It to Me ad-free by becoming a Vox Member: vox.com/members.

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript

Support for Today Explained comes from Crucible Moments.

What is that?

It's a podcast from Sequoia Capital.

Every company's story is defined by those high-stakes moments that risk the business but can lead to greatness.

That's what Crucible Moments is all about.

Hosted by Sequoia Capital's managing partner, Roelof Botha.

Crucible Moments is returning for a brand new season.

They're kicking things off with episodes on Zipline and Bolt, two companies that are still around with surprising paths to success.

Crucible Moments is out now and available everywhere you get your podcasts and at cruciblemoments.com.

Listen to Crucible Moments today.

Support for the show comes from Charles Schwab.

At Schwab, how you invest is your choice, not theirs.

That's why when it comes to managing your wealth, Schwab gives you more choices.

You can invest and trade on your own.

Plus, get advice and more comprehensive wealth solutions to help meet your unique needs.

With award-winning service, low costs, and transparent advice, you can manage your wealth your way at Schwab.

Visit schwab.com to learn more.

I didn't know what to do, so I turned to ChatGPT.

How do you integrate this in a way that retains what is best about medicine?

They all say that healthcare is a sweet spot for AI.

This is Explain It to Me from Vox.

I'm Jonquilyn Hill.

A couple weeks ago, I went to the doctor, and there was a moment during the appointment that really surprised me.

She turned her computer monitor towards me, and there on the screen was this colorful dashboard with all kinds of numbers and percentages.

She explained that she'd entered my information into a database with millions of other patients, and that database used AI to predict my most likely outcome.

There it was, a snapshot of my future.

Or at least, maybe my future.

Usually I'm skeptical when it comes to AI, but I do trust my doctors.

So if I trust them, I should trust this technology too, right?

It turns out a lot of you already do.

I have used ChatGPT to diagnose myself.

ChatGPT cured my acne.

ChatGPT has actually helped me navigate the disease better than most of the doctors.

I found out the gender of my second baby by using ChatGPT.

ChatGPT is honestly the most calming, reassuring voice of like, hey, great question.

Today on Explain It to Me.

Paging Dr. Chatbot.

How AI is shaping the way we get medical care.

We'll cover the do's and don'ts of self-diagnosis, how medical professionals are using these tools, and hear a doctor make the case for why AI is the key to a more human experience at the doctor's office.

Full disclosure: Vox Media has a partnership with OpenAI.

To start, I had to make an appointment with a doctor for an interview.

My name is Dhruv Khullar.

I'm a physician at Weill Cornell Medicine in New York.

I'm also a health services researcher here, as well as a writer at the New Yorker magazine.

One of the things he's written about for the New Yorker: medical care and AI.

Okay, so as a doctor, what do you think about folks who are self-diagnosing with AI chatbots?

Part of me feels like this is a natural thing that's going to happen, particularly in a system as difficult to access as ours is and as difficult to navigate as ours is.

And AI is so fluent and so persuasive that it makes a lot of sense that people are starting to enter their symptoms into these chatbots and try to get diagnoses.

But there are also real risks if you over-rely on AI.

I mean, these things are not infallible.

They can give you misleading or incorrect medical information.

You can give it a prompt and it will give you something back that's extremely convincing and it's completely wrong.

The GPT's job is to convince you that it's right.

You should be careful.

My worst fears are that we, the field, the technology, the industry, cause significant harm.

You know, one of the things that's so interesting about these chatbots is that they're not like a COVID test or an MRI where you get the answer that you get.

I mean, how accurate these chatbots are really depends on how you're prompting them.

And so, in the piece, I talked to this particular chatbot that's called Cabot, which is a chatbot that was developed at Harvard.

That's not in clinical use, it's more of a research tool.

But it can perform exceptionally well, kind of almost in a superhuman way, on these specific, very challenging, complex clinical cases that are curated in a perfect way.

But the way that these chatbots perform depends on how the information that you give it is organized.

So if you give certain broad strokes or you don't emphasize the right details, you could get a very different and possibly incorrect diagnosis.

You know, I will mention there was a recent survey that was done that found that something like one in five, around 20% of Americans said that they had turned to a chatbot for advice that later turned out to be incorrect.

And so certainly there's a lot of incorrect information that's coming out of that.

I'm going to be honest, if I'm sick or worried about how I'm feeling, I have gone to Dr. Google.

You know, I have been in those WebMD trenches.

For those that are using AI, are there things that they should do or shouldn't do to get the most accurate information?

Sure.

I mean, first of all, you're not alone.

I mean, a lot of people for years have been using Google and now AI is kind of the latest iteration.

And I think it can be potentially revolutionary and transformative for people if they use it in the right way.

I don't think the right way is just to put in your symptoms and ask for a diagnosis.

At least I don't think that's the right thing to do right now.

But there are really important ways that people can use it for benefit.

You know, if you have symptoms, asking the AI to rate the urgency of those symptoms, listing possible conditions that could explain them and some sense of which conditions might be most likely.

I think it might be helpful for people to ask about red flag symptoms.

Those are warning signs that suggest that you might have a more serious condition.

If you've gone to the doctor and you have lab results or clinic visits, an AI might be able to walk you through those lab results in greater detail and it might be able to help you prepare questions for your next visit.

And so, in all those ways, I think it can be a really helpful adjunct to the way that people are currently receiving care.

Okay, so you're saying it may not be a 100% bad idea to use ChatGPT to interpret what your doctor's telling you?

No, I don't think so at all.

And, you know, part of the challenge here is that healthcare is so resource-constrained in a lot of ways.

There's such enormous time pressures.

Doctors, nurses don't always have the time and attention to explain every diagnosis and treatment in the level of detail that we might want.

And AI does have unlimited time and attention in a way.

It can explain things at whatever level of sophistication you need.

It can help people navigate through the medical system.

It can help, you know, patients with limited access to care.

Some people are already starting to use it in an interesting way.

You know, I've spoken to patients who are now trying to record, or asking if they can record, their conversations with physicians and then uploading those transcripts into ChatGPT to try to have it explain what happened in that visit in greater detail and then kind of continually ask questions, probe for more information.

And that has been really helpful for a number of patients that I've spoken with.

But there are also challenges.

I mean, these AIs still hallucinate.

They still make things up.

They may mix up one patient for another.

I spoke to a woman whose own medical conditions were being confused with those of her mother, and it became a really confusing situation when she was speaking with a chatbot.

Wow.

And so because these chatbots are so fluent and they're so persuasive in a way, it can make it challenging to figure out when they're actually being inaccurate.

And so that's kind of the note of caution that I want to sound as well.

We got a call about that, not from a patient, but from a doctor.

They're worried about how everyday people are using chatbots to diagnose themselves.

I work as an ER doctor, and I have noticed a lot more patients coming in after having talked to ChatGPT,

trying to figure out what's going on with them.

And on the one hand, I find people are asking really great questions.

They're often self-advocating tremendously.

On the other hand, I sometimes feel like the things they're bringing up are kind of random, and it's hard to reassure people that they're going to be okay or to convince them that I think that's really unlikely.

There's a lot of anxiety in the air, and I think ChatGPT sometimes makes that worse.

So is this a problem you've dealt with?

You know, it's a real challenge.

In a way, it's a more sophisticated version of what Dr. Google has put out there for the past few decades.

And, you know, when you look up your symptoms online, there's often a range of potential diagnoses that are listed.

And it's only natural for the human mind to gravitate towards the most concerning or the most dangerous ones.

Those are the ones that represent the greatest threat.

And the challenge with using these things is that they don't come with a lot of context.

They don't have the context that you might have if you came to those medical diagnoses in a clinical setting with a physician or another clinician.

And so there's this challenge of helping people actually understand the context around the words and the diagnoses that they're learning about online.

But there's also this challenge of: you know, is AI going to steer people away from medical attention?

So, in the piece, I note this poison control center in Arizona that reported a drop in the overall call volume that they were getting, but a rise in severely poisoned patients.

And the suggestion here was that the AI tools could have steered people away from needed medical attention.

And so, this is another part of the challenge that people are starting to encounter.

What can't Dr. Chatbot tell us?

You know, like, what is it that doctors can do for patients that chatbots can't?

Right now, there's a lot that doctors do that chatbots can't.

I mean, they're not reasoning clinically in the way that a doctor is reasoning.

They're not able to come to the same judgments and integrate patients' values and preferences and circumstances in the way that a physician is,

you know, managing pain or talking to families, helping people understand their options, guiding them through the trade-offs that occur in any medical setting.

So, as helpful as these AI technologies can be, they're only going to be part of the solution, at least for the foreseeable future.

When we get back, Dhruv is going to stick around and we'll ask him about how AI is changing the way doctors are practicing medicine.

Support for Today Explained comes from Nuremberg, a film from Sony Pictures Classics.

In the aftermath of World War II, as the world confronts the horrors of the Holocaust, a U.S. Army psychiatrist is tasked with evaluating Hermann Göring.

Oh, God, Hitler's second in command.

Meanwhile, the chief prosecutor leads the Allies in forming an unprecedented international tribunal for the trial of the century.

As Dr. Kelley delves deeper into Göring's psyche, a tense psychological duel unfolds.

Nuremberg, starring Russell Crowe, Rami Malek, Leo Woodall, and Michael Shannon.

Only in theaters November 7th.

Support for Today Explained comes from Wondery and their new podcast, Lawless Planet.

It unfolds almost like a true crime podcast, I've been asked to tell you, but it is about the global climate crisis.

Complex, wide-ranging stories happening in every corner of the planet.

On Lawless Planet, the new podcast from Wondery, you will hear stories from the depths of the Amazon to small-town America.

Host Zach Goldbaum takes you around the world as he investigates stories of conflict, corruption, resistance, and highlights activists risking their lives for their beliefs, corporations shaping the planet's future, and the everyday people affected along the way.

Each episode takes you inside the global struggle for our planet's future, mysterious crimes, those high-stakes operations, those billion-dollar controversies that you do know so well.

To reveal what's truly at stake, you can follow Lawless Planet on the Wondery app or wherever you get your podcasts.

You can listen to new episodes early and ad-free right now by joining Wondery Plus in the Wondery App, Apple Podcast, or Spotify.

We're back.

This is Explain It To Me.

I'm Jonquilyn Hill.

Before the break, we heard from Dr. Dhruv Khullar about how folks are using AI to help them understand their symptoms, come up with treatments, and even talk to their doctors. And those doctors are consulting AI too. They've got their own chatbots, which are trained on medical research and patient data and even suggest their own diagnoses. And some physicians are listening.

I work in a hospital in an emergency department, and one of the cool things about being an ED doc is that you never know what you're going to see, and

patients coming with a question for you.

And you got to kind of be the person that gives them an answer and gives them next steps in terms of a solution.

And so there's been a couple really helpful times where I've typed in a patient's symptoms.

For example,

patient coming in with abnormal lab values and a little bit of their history.

And then helping me

be more confident in what I think the diagnosis is.

Okay, Dhruv, how common is that?

Does that sound like something you hear a lot?

You know, I think this is one of the fastest uptakes of any technology that I've seen in medicine, certainly since I've started practicing.

So many of my colleagues now turn to generative AI models, other forms of predictive analytics to make decisions about the patients that they're caring for.

And I think these things are going to be incredibly powerful.

I think best used as a really good second opinion to try to get a consultant's advice basically in any specialty, at any time.

You know, you can put in a patient's symptoms.

It might remind you of certain diagnoses, raise rare diagnoses that you haven't seen in months or years, and give you expertise and support that wouldn't otherwise be possible.

And I think this really needs to be balanced with something else that we're starting to see, which is this idea of cognitive de-skilling.

Not only does AI make it so that you're not learning those skills, new research suggests that it's also making you unlearn those skills that you previously knew.

You know, if you're not doing the critical thinking of going through a patient's case, understanding their problems, using kind of your own judgment to arrive at a diagnosis, what happens to the skills that doctors have?

You know, there's evidence already that doctors can get de-skilled pretty quickly.

The doctors' baseline performance got worse after they got used to using AI, which creates a risk if the AI fails, if it's unavailable, or it just misses something.

And so the question then becomes, you know, in a future in which AI basically pervades medicine and it's extremely effective and useful, is it a big deal that we've lost some of the skills that we used to have?

You know, in the past, doctors were probably better at listening to heart murmurs and doing certain physical exams.

And now we have technology like echocardiograms or CT scans that can replace that.

And I don't think people feel like we've had a huge loss there, but I do think there's something distinct about the critical thinking that goes into diagnostic work.

So I want to be very careful that, you know, we really use this more of a second opinion rather than generating the initial kind of set of thinking using AI.

Yeah, I

wonder how common this use is because, you know, we have shows like House.

Differential diagnosis, people.

Or ER.

It could be hyperaldosteronism or Bartter's syndrome.

Carter, put that damn book down.

Or the pit.

So he's not

coming back.

No.

What happens now?

He's hooked up to all those machines.

Take some time.

Try to process this news.

Personally, I'm a pit head.

I love that show.

And you know, a quote-unquote good doctor flexes their brain and, you know, maybe uses some books.

But I don't know.

Is using AI, I guess, for lack of a better word, cheating?

No, I don't think it's cheating.

I think

the challenge, again, is how do you maintain critical thinking skills while offloading cognitive work that can be done by machines.

And one way I've started to think about this is a concept that came from a physician I interviewed for the piece, Dr. Gurpreet Dhaliwal at UCSF.

And he told me, you know, we shouldn't be thinking necessarily about AI as solving the medical diagnosis.

We're better off thinking of AI as a partner in what he called the wayfinding,

assisting doctors and patients along the diagnostic journey.

And that might involve alerting doctors to a recent study, proposing a helpful blood test that, you know, could be used to aid in diagnosis, looking up a lab result that happened to be in the medical record from decades ago.

You know, there's a real difference between getting the right answer and actually competently caring for people along their medical journey.

Okay, we've talked a lot about the doctor-patient relationship, but healthcare is a lot more than that.

Where else is AI showing up?

There's a lot of ways that people are trying to use AI in medicine.

I think the first area that's going to have a big impact is on the administration of healthcare.

That has to do with, you know, entering things in the medical record, capturing diagnoses of patients who are coming in, writing orders, helping people navigate the medical system.

So, all these administrative tasks are, in some ways, the low-hanging fruit of medicine that rack up a lot of costs.

Another area is kind of prediction or personalization.

What does this new guideline mean?

How likely is this medication to be effective?

Should you use this treatment or not, you know, this procedure?

Does that make sense for you?

So I think AI can do a lot in terms of personalization and prediction of both risk and benefit from particular medications.

And then there's this whole area that we haven't talked about yet, which is around drug discovery and development.

I think there's a tremendous amount of potential for AI to supercharge drug discovery so that in a handful of years, we have a lot more options and potentially options for conditions that thus far are incurable or very difficult to treat.

And so, at least right now, I think the way in which AI can be most helpful is in helping people prepare for their interactions with the medical system and hopefully making those more seamless.

So the machines are here, and there's an argument that they could actually make our relationships with our doctors more human.

We'll hear that next.

Adobe Acrobat Studio, so brand new.

Show me all the things PDFs can do.

Do your work with ease and speed.

PDF spaces is all you need.

Do hours of research in an instant.

With key insights from an AI assistant.

Pick a template with a click.

Now your Prezo looks super slick.

Close that deal, yeah, you won.

Do that, doing that, did that, done.

Now you can do that, do that with Acrobat.

Now you can do that, do that with the all-new Acrobat.

It's time to do your best work with the all-new Adobe Acrobat Studio.

All right, remember, the machine knows if you're lying.

First statement: Carvana will give you a real offer on your car all online.

False.

True, actually.

You can sell your car in minutes.

False?

That's gotta be.

True again.

Carvana will pick up your car from your door, or you can drop it off at one of their car vending machines.

Sounds too good to be true, so true.

Finally, caught on.

Nice job.

Honesty isn't just their policy.

It's their entire model.

Sell your car today to Carvana.

Pickup fees may apply.

You're listening to Explain It to Me.

Well, I love seeing patients.

I really like to listen and help them as much as I can.

And that's what medicine's all about.

That's what drew me in 40 years ago.

Dr. Eric Topol is a physician-scientist at Scripps Research.

He also founded the Scripps Research Translational Institute, which means he thinks a lot about the ways technology can advance medicine.

And he's worried that the personal aspect of medicine is slipping away.

Well, I think most people are familiar with the tremendous erosion of the patient-doctor relationship, because we're talking about seven minutes for a routine follow-up visit or 12 minutes for a new patient. Very limited time.

That time is often lost, as far as face-to-face contact goes, to typing into a keyboard and looking at screens rather than being face-to-face, eye-to-eye with patients. And then, of course, there's the data clerk function of doing all the records and ordering of tests and prescriptions and pre-authorizations that each doctor is saddled with after the visit.

So it's a horrible situation because the reason we went into medicine was to care for patients.

And you can't care for patients if you can't even have enough time with them, listen to them, you know, really be present,

have a trust, and basically have this, what used to be back in the 70s and 80s, a precious, intimate relationship.

So we don't have that now by and large, and we've got to get that back.

Yeah, what caused that change?

Why did that shift happen in that relationship between patient and doctor?

If I were to simplify it into three words, it would be the business of medicine.

And basically, the squeeze was on to see more patients in less time to make the medical practice money.

You've literally written a book about how AI can transform healthcare and make healthcare human again.

Can you explain that idea?

Because my first thought when I hear AI in medicine is not, oh, this will fix it and make it more intimate and personable.

Who would have the audacity to say technology can make us more human?

Well, that was me.

And I think we are seeing it now.

So the gift of time will be given to us through technology.

Now, I'll walk through a few examples.

One is that we can capture the conversation with AI ambient natural language processing, and we can make a better note from that whole conversation than has ever been made by doctors. Now we're seeing some really good products that do that. They don't just capture the note, with audio links for the patient in case there was any confusion or something was forgotten during the discussion; they also do all these things to get rid of data clerk work, so that when the two get together, they really are getting together.

And I think we can, even with the physician shortage that we have today, we can leverage this technology to make it much more efficient, but also much more human-to-human bonding.

Do you worry at all that, you know, if that time gets freed up, if it's like, okay, we have less administrative tasks and more time to spend on patients, what's going to keep administrators from saying, all right, well, then you've got to see more patients in the same amount of time, or you've got to go even faster, you know?

Well, yeah, no, I have been worried about that.

That's exactly what could happen.

AI could make you more efficient and productive.

So, oh, yeah, see more patients, read more scans and slides and whatnot.

So, no, we have to stand up for patients and for this relationship.

And this is our best shot to get us back to where we were or even exceed that.

Yeah,

I also wonder, you know, because there are so many issues that come up in medicine.

And I think about bias and healthcare, I wonder how you think of that factoring into AI.

Because on one hand, I can see like, okay, it's taking that out, but AI learns from human models and humans have bias.

Like, how do you see that?

Yeah, so step number one is to acknowledge that there's deep-seated bias.

You know, it's a mirror of our culture and society.

However, we've seen so many great examples around the world where AI is being used in the hinterlands, for people with low socioeconomic status and low access to care, to give access and help promote better health outcomes, whether it be in Kenya with Penda Health, or diabetic retinopathy screening for people who never had the ability to be screened, or mental health in the UK for underrepresented minorities.

And so you can use AI if you deliberately want to help reduce inequities and try to do everything possible to interrogate a model about potential biases.

You talked about the disparities that exist.

And in our country,

if you have a high income, you can get some of the best medical care in the world here.

And if you do not have that high income, there's a good chance that you're not getting very good health care.

Are you worried at all that AI could deepen that divide?

You know, the people with money will have access to almost a kind of super doctor, and those without may have to rely on chatbots instead or, you know, something like that.

I am worried about that.

And we have a long history of not using technology to help people who need it the most.

So many things we could have done with technology, we haven't done.

It's just going to be the time when we finally wake up and say it's much better to give everyone these capabilities, to reduce the burden that we have on the medical system, if you call it a system, to help care for patients. Otherwise, other countries will get ahead of us on that, JQ. I mean, I think that's the issue: that's where we should be, making a level playing field for all people. To me, that's the only way we should be using AI, making sure that the people who would benefit the most are getting it the most, right?

But we're not in a very good structure framework for that.

I hope we'll,

you know, finally see the light.

What makes you so hopeful?

I mean, I consider myself an optimistic person, but sometimes it's very hard to be optimistic about healthcare in America.

It is.

I would be the first to acknowledge that.

But remember, we have 12 million errors a year, diagnostic errors that are serious with 800,000 people dying or getting disabled.

That's a real problem.

We need to fix that.

And we have lots of ways to get to much higher levels of accuracy.

So for those who are concerned about AI making mistakes, well, guess what?

We got a lot of mistakes right now that can be improved.

I have tremendous optimism.

I recognize the challenges.

But, you know, if I had a better way to fix medicine, I don't know of it.

So it's going to take time.

We're still in the early stages of all this, but I am confident we'll get there.

We won't even talk about AI and medicine.

It'll be all embedded.

It'll just be part of the practice of medicine, and someday we'll all be appreciative of it.

That was Dr. Eric Topol of Scripps Research.

Speaking of healthcare, open enrollment is coming up.

Insurance can be really confusing, especially right now.

Call in with your questions about insurance and FSAs and HRAs and PPOs and vision and why it all works the way it does.

We'll decode it for you.

Or if you feel like you can't afford insurance with all these upcoming increases, we want to hear about that too.

1-800-618-8545.

You can also email us at askvox at vox.com.

If you like this and other Vox podcasts, you can help make this work happen by becoming a Vox member.

When you become a member, you get to listen to the show ad-free and you also get a ton of other perks.

Right now, we're having a sale on membership, which means you can get 30% off.

Just go to vox.com/members, and the deal is all yours.

This episode was produced by Hady Mawajdeh.

It was edited by Jenny Lawton, and our executive producer is Miranda Kennedy.

Fact-checking was by Melissa Hirsch, with engineering by Adriene Lilly and Brandon McFarland.

Special thanks to Lauren Mapp.

I'm your host, Jonquilyn Hill.

Thanks so much for listening.

Talk to you soon.

Bye!

When will AI finally make work easier?

How about today?

Say hello to Gemini Enterprise from Google Cloud, a simple, easy-to-use platform letting any business tap the best of Google AI.

Retailers are already using AI agents to help customers reschedule deliveries all on their own.

Bankers are automating millions of customer requests so they can focus on more personal service.

And nurses are getting automated reports, freeing them up for patient care.

It's a new way to work.

Learn more about Gemini Enterprise at cloud.google.com.

Mercury knows that to an entrepreneur, every financial move means more.

An international wire means working with the best contractors on any continent.

A credit card on day one means creating an ad campaign on day two.

And a business loan means loading up on inventory for Black Friday.

That's why Mercury offers banking that does more, all in one place.

So that doing just about anything with your money feels effortless.

Visit mercury.com to learn more.

Mercury is a financial technology company, not a bank.

Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust, Members FDIC.