[00:00:00]

AI is transforming our lives, from travel to education.

[00:00:10]

To art and music. But could it also reshape the end of our lives?

[00:00:16]

I got to be honest, my first reaction was like, Oh, no.

[00:00:21]

But a group of scientists is going there, trying to help families and doctors make hard end-of-life decisions using an AI that could predict whether incapacitated patients would want to live or die.

[00:00:32]

You can create this psychological twin of them that would, in some sense, be able to speak on their behalf.

[00:00:37]

Specifically, to make a decision on things like a do-not-resuscitate order, or DNR, which tells medical workers not to provide life-saving care if it could lead to too much pain. It's a more common dilemma than you may think if a person has not planned for such difficult decisions. Anthony Penal says he found himself faced with signing a DNR for his grandmother after she was diagnosed with dementia.

[00:00:57]

How involved was your grandmother in these conversations?

[00:01:00]

Not at all, to be frank. She was rapidly declining, not remembering my mom or me, acting very erratically and having hallucinations.

[00:01:12]

Penal says he and his mother agonized over trying to figure out what his grandmother would have wanted. They eventually signed the order.

[00:01:18]

Trying to put yourself in her shoes is, yeah, it is very hard. And even when her physical health was declining in those last months and weeks, it's trying to imagine the pain that she's in. It's a very hard thing to do.

[00:01:31]

Researchers think the AI can help. The idea? To feed a bunch of digital information about a patient into a large language model, like ChatGPT.

[00:01:40]

It could be trained on their medical records, past treatment decisions that they made.

[00:01:44]

Or even text, emails, social media posts, demographic info, and more. And it would require patient or family consent.

[00:01:51]

We run them through a bunch of treatment scenarios. They give you their preferences. Then you apply AI to those answers, and then you use that program to predict their preferences when the patient isn't able to make decisions for themselves.
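To make that description concrete, here is a minimal, hypothetical sketch in Python of the kind of preference predictor being described: a patient answers hypothetical treatment scenarios while still able to decide, and a simple text classifier is fit to those answers. This is not the researchers' actual system; the scenarios, labels, and use of scikit-learn are illustrative assumptions only.

```python
# Hypothetical illustration only -- not the researchers' actual model.
# Assumes the patient answered a set of treatment scenarios in advance
# (1 = would want the intervention, 0 = would decline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    "CPR after cardiac arrest with a good chance of full recovery",
    "Mechanical ventilation after severe, irreversible brain injury",
    "Feeding tube placement during advanced dementia",
    "Short-term dialysis for a treatable kidney infection",
]
preferences = [1, 0, 0, 1]  # the patient's own stated answers

# Fit a simple text classifier to the patient's answers.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, preferences)

# Later, when the patient cannot speak for themselves, surrogates could
# see an estimated preference (as a probability) for a new situation.
new_case = ["Resuscitation after cardiac arrest with advanced dementia"]
print(model.predict_proba(new_case))  # [P(decline), P(would want)]
```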

[00:02:04]

Researchers say it could help improve the accuracy of decisions made by medical surrogates, which one 2006 JAMA study placed at just 68%. But other bioethicists worry that the data isn't enough.

[00:02:16]

It's still so speculative. And then the other piece of my skepticism is whether even if we had that data, an algorithm could accurately predict someone's future preferences.

[00:02:25]

And they point out, as painful as these decisions are, they are part of human experience.

[00:02:31]

Is there a need for something like this?

[00:02:32]

Sometimes surrogates do struggle. In my experience, it's the minority. What they need is really more the emotional support. What they're struggling with is the weight of the situation they're in.

[00:02:44]

I think it's more worrying that humans should want to run away from those situations, decisions, feelings, and just resort to something like AI. It can help us grow, I think, as humans, having to wrestle with those emotions.

[00:03:05]

Christine, for those worried that this may get into the hands of bad actors, one of the biggest focuses of David Wendler, the researcher you saw in the piece there, is making sure that for-profit systems like hospitals or other businesses don't use this to ultimately advance their bottom line and that patients and families are respected throughout the process. Christine.

[00:03:22]

Thanks for watching. Stay updated about breaking news and top stories on the NBC News app or follow us on social media.