AI and Clinical Practice—the Potential for AI to Augment Humanity in Medicine



(logo chiming) - The practice of medicine is about the human interaction between the clinician and the patient. So what does it mean when a technology like AI enters the room with so many of those human-like attributes? How does the dynamic between a clinician and patient change with AI in the room? I'm Dr. Kirsten Bibbins-Domingo, and I'm the Editor in Chief of JAMA and the JAMA Network.

This conversation is part of a series of videos and podcasts hosted by JAMA in which we explore the issues surrounding the rapidly evolving intersection of AI and medicine. Today I'm joined by Dr. Ida Sim, who is the co-director of a joint program between the University of California Berkeley and the University of California San Francisco in Computational Precision Health. She's also a general internist and primary care physician at UCSF, making her an ideal person to speak to regarding AI's role in the patient and clinician experience.

Dr. Sim is an elected member of the National Academy of Medicine and the American College of Medical Informatics. Welcome, Dr. Sim. - Thank you, very happy to be here. - Yes, and you and I know each other.

I hope we can speak on a first-name basis today for this discussion. - Absolutely. - Terrific, terrific. So, you know, I think of you as a physician, as somebody who's been in this informatics research space for a long time, and as somebody who's thought about how we create structures for sharing data and making data more accessible. You have been doing this for a long time, Ida, and all of a sudden now all of us are talking about AI, and it's transforming how we're gonna practice, how we're gonna do science, everything.

Why are we talking with a different urgency right now? - It is an urgency, absolutely. I think November 30th, 2022 is gonna go down in history. That was the day ChatGPT came out. I had been talking to other people here in Silicon Valley about GPT-2, about DALL-E, and it was just mind-blowing what was going on in the computer science world that we were not seeing publicly.

But November 30th changed that; that was an inflection point. And this is why I think it's so transformative, why it's such an inflection. Much of what we think about is AI and machine learning, and those are two different terms, but we won't dissect those here.

A lot of the work before that, the work the public sees, has been machine learning. You take a bunch of data, stick it in a black box, and out comes something. And what the something is, is usually a prediction, right? - Yes. - With machine learning, it's called predictive analytics. For example, this patient has a 68% chance of needing ICU transfer in the next 24 hours, this patient has a 72% chance of being readmitted in 30 days, right? Now that we can wrap our head around.

We do it with logistic regression. - Yeah. - We have decades of JAMA papers about prediction, right? We know how to do prediction.
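The kind of prediction being described here can be sketched in a few lines of code. This is a minimal illustration only: the coefficients, feature names, and patient values below are hypothetical, standing in for what a real model fitted to EHR data would produce.

```python
import math

# Hypothetical coefficients for an illustrative logistic regression model.
# A real model would be fitted to electronic health record data.
INTERCEPT = -4.0
COEFFS = {"age": 0.03, "prior_admissions": 0.8, "num_chronic_conditions": 0.5}

def readmission_probability(patient):
    """Logistic regression: p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = INTERCEPT + sum(coef * patient[name] for name, coef in COEFFS.items())
    return 1.0 / (1.0 + math.exp(-z))

patient = {"age": 70, "prior_admissions": 2, "num_chronic_conditions": 3}
p = readmission_probability(patient)
print(f"Predicted 30-day readmission risk: {p:.0%}")
```

The output is exactly what the conversation describes: a single number, a probability a clinician can weigh.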

We know how to think about it as clinicians: it's a number, it's a number. Is ChatGPT a number? It is not a number. What it generates is natural language, right? Natural language is what you and I are doing. We don't do this often enough.

We should do more, but it's how we interact with each other. It's where humanity comes in. It's where we connect with other humans.

And for the first time, a machine can interpose itself into that exchange in a way that, in our millennia of evolution, we see language exchange as coming from another being, right? That's how we got wired. And so now we have this machine there that can be kind of human-like, and that is qualitatively different from all the machine learning that we've seen before in AI. And you see the amazing interest, and also fear, honestly, that this technology has engendered. It really is transformative. It can do all sorts of things. Sure, it can predict and all that kind of thing. But because it's got this quality of natural language, and it can almost act human-like in its interactions, I think it's fundamentally different.

And I think because of the patient-doctor relationship. As a primary care doc, I think doctoring is about human relationships, and language is the mediation of that relationship. And now we have a machine that can do that. - You've explained it so beautifully. Is it too simplistic for me to use a term like generative AI as the label for this set of things that is different from simple predictive analytics? - Yes. Yes, I think generative AI is a really good umbrella term, because it is generating, and it's generating things almost as a stream, right? It can generate language, it can generate music, it can generate images. DALL-E is a model that can generate images. So it's generative AI.

Probably the other thing to know here is something called large language models. You've probably heard of that, right? So that's the model: GPT-3.5, GPT-4, Llama 2, and it goes on and on. There are many, many of these large language models.

It isn't just GPT. ChatGPT is sort of the web-based interface to the model behind it, right? And so there are different kinds of large language models. All the companies have them; there are open-source large language models. And what a large language model is, you can think about it as something that just ingests everything. And I mean everything, like the entire web from 2021 and before. Some people may have heard the term stochastic parrot, stochastic meaning random. The parrot just talks because it's listened to you. So in a sense, the large language model has listened to what we have said on the internet and everywhere else.

And it's parroting it; it's just generating the next word, okay? It generates the next word, or generates the next pixel that makes an image. So it's generative AI, with large language models being the ones that generate language, and ones like DALL-E generating images, and this class of technologies is what's really transformative. But of course, a parrot can sound very intelligent, and you wonder, does a parrot really understand what it's saying? So if we think about it as a parrot, we can ask, "Well, geez, does it really understand a disease or a diagnosis?" - Right. It reminds me that so much is often made of, well, ChatGPT got this answer wrong.
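The "stochastic parrot" idea of just generating the next word can be illustrated with a toy bigram model: it has no understanding, only counts of which word it has seen follow which. The tiny corpus below is invented for illustration; a real large language model trains on web-scale text and predicts tokens with a neural network, not a lookup table.

```python
import random
from collections import defaultdict

# Toy corpus the "parrot" has listened to (illustrative only).
corpus = ("the patient has diabetes and the patient has hypertension "
          "and the doctor sees the patient").split()

# Count which word follows which: the model's entire "knowledge".
follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)

def parrot(start, n_words=8, seed=0):
    """Generate text by repeatedly sampling a word seen to follow the last one."""
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:
            break  # never saw anything follow this word
        out.append(random.choice(options))
    return " ".join(out)

print(parrot("the"))
```

The output is fluent-looking word sequences, yet nothing in the program represents what diabetes *is*, which is the point of the parrot metaphor.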

It's telling me the wrong things. Well, as you're describing it, it's basically predicting what the next logical thing is that would come after this set of things, right? So how do you think of this? Do I think of it as having generated something wrong? Do I think of these models as understanding the question and understanding the answer? How should we be thinking about this? - You know what? There's this idea of an AI that is truly intelligent, right? That's a philosophical question, a science fiction question. I think we should be thinking about these technologies in terms of how they can augment what we do in our work. It's really an augmentation, not a replacement. And if we think about augmentation, it forces us to ask: what do we wanna augment? What's important to augment? Who is being augmented? What does that mean? 'Cause you can close inequities by augmenting some people, or you can increase inequities by augmenting different people. It should be a choice.

We should be thinking very carefully about it. And in what cases, if you augment something, are you also replacing it so that it goes away? And what is the loss? So, for example, there's an article in JAMA Internal Medicine, right? In April, by Ayers and colleagues, showing that GPT can give answers that are as accurate and as empathetic as physicians answering questions posed by the public on the internet.

And so if it's as accurate and as empathetic, is that good? Should we just use GPT, or do we lose something? I think we do need to think about whether we are losing something, and you need to know what you want to have and value before you can figure out whether you've lost it. So in a way, this GPT, generative AI thing is asking us to think about, certainly making me think about, what is doctoring, what is human about doctoring, and what is it that we should preserve and enhance and use technology around? I mean, that's always been the promise, but can we really do it, as opposed to just having these technologies come in and say, "Well, they can do it, so why don't we just go ahead and do it?" And I'm worried about that. I'm worried about that.

- Well, I love that, because a theme that I think is gonna come up more and more is to not just be passive in the way these technologies are developed, but to think actively, particularly as a medical community, about where we actually achieve something better for patients and something better for clinicians. You and I are both primary care providers. These are fields that are notorious: we have a lot of interaction with patients, and a lot of time spent doing things that are probably not where we should be focused to do the best for our patients. What makes you excited when you think about what's on the horizon in terms of AI for making the life of a primary care physician or of patients better? - At UCSF, the way we were thinking about it was in terms of horizons one, two, and three.

So horizon one, and this comes from a McKinsey framework, is your business today, and what you can do to improve your business today. If you're in the transportation industry, that would be your gas car, okay? How do you make your gas car more efficient? Horizon two is looking a little bit ahead, right? Maybe it's your EV, your hybrid: what is your charging network around the country, how can we have solar chargers, whatever, right? And then horizon three is to think, well, why cars? Why not autonomous flying individualized jet packs, right? So I think what's really challenging is this: in a normal world, horizon one is now, years one and two; horizon two is years three to five; and horizon three is ten years out. With generative AI, we need to think about horizons one, two, and three at the same time.

So operationally, horizon one would be things like helping us chart, helping us code. Why am I telling you why I'm ordering an A1C on this patient who has diabetes? Why do I have to click that? Those are obvious, and I'm sure our audience has all kinds of pain points that we could use large language models or other generative AI for.

Thinking through horizon two, I would say that's bringing in digital devices and digital data in a way that supports patients more directly. 60% of Americans have at least one chronic disease, and 40% have two or more. And if you think about chronic disease management, it's not you and I, Kirsten, who are managing those; it's the patients who are managing their disease. - Yeah. Yeah. - Right? And we don't do very much to help them. We say, "Well, you go do this, this, and this, and you come back in three months.

Well, how did you do?" I think we need to upskill everyone, and I mean everyone: whether or not they speak English, whether or not they're a high school grad. So I'm really excited about technologies that augment our patients and our families and our communities. I think that's a huge area. And then we get to horizon three, which would be these crazy diagnostic and other things these technologies can do that are stunning. Maybe we can get to that at some point, but that's horizon three.

So I think we should think explicitly. What is our focus on horizon one? Increased operational efficiency, right? Horizon two: now we're getting to, are we really thinking about what values we're inculcating, and what kind of doctoring we're supporting? And then horizon three: we better think about what it is we want. Otherwise we're gonna end up with something we might not like, or that our patients aren't gonna benefit from. - Yeah, it's so interesting. What I love about what you've said is that I have heard about the examples in horizon one and horizon three. I've heard fewer people talking about everything in between, where I can already start to see that the ability to think across multiple lines of information coming in, for patients with multiple chronic conditions, is an area where the need is so great.

The tools to assist patients and clinicians in making better decisions together, in more real time, really just don't exist in the way they need to for this large population. So I can see that we might have more automated ways to generate notes, which helps our operational efficiency today. Lots of people are thinking about improving diagnoses, and hopefully that will help in the future. Everyone tells me that doctors are still gonna have their jobs in the future, but that they'll probably be aided by AI.

But that middle strikes me as the thing I've heard less about: thinking through what this means for our older population, a population with multiple conditions, a population with increasing means to generate information themselves on their wearable devices, their phones, their other things. We are already all using that individually, but we don't have as many great examples of its integration into clinical care. - Yeah. And I think that goes back to how we think about these machines, and what we think is the most brilliant thing they can do. Sometimes there's the sense that the machine needs to be the smartest doctor in the room, right? If you read science fiction or watch a TV show, it's like, oh wow, the machine is there in the middle as the smartest doctor in the room.

That is not the way it should be. It really isn't. But if we think about it being the smartest doctor in the room, then we think, oh, it needs to be the master diagnostician, right? We hold that up as the master clinician, the diagnostician. We do that even amongst humans. We expect predictions, right? 68.2%. But as primary care docs, we know that I do diagnosis, but the bulk of what I do is not diagnosis. - Yes. - It's treatment, it's management.

- Yes. It's managing complexity to keep things going. And frankly, again, it's not you and me who are doing a lot of the management; it's the patient, and often the patient's family, who are doing the managing.

And there's a whole stream of AI that's around planning, right? If you use AI to plan a war, for example, the opposite of medicine, you're not just sitting there going, oh, I predicted this missile is gonna hit with this accuracy; you're actually planning the whole thing. It's the logistics, right? We don't do that in medicine. We're not using some of these technologies to think through treatment and management, with diagnosis being just the first step. And part of the reason is that medicine is extremely complex. So we need people who are expert in computation but are also embedded in the way we do medicine, so that they can understand what that process is and can start to think about how we could use these technologies to help. An example I would love to have: I'm suspecting somebody has a cancer; there's, I don't know, a mass on the abdominal CT scan.

What is the most efficient way to work that up? And it's not just diagnostically, how do I work that up? It's also, what is the capacity of the radiology suite? How do I optimize it so that the patient can come in, get this set of bloods, and get the right scan at the right time, in a way that minimizes the workup time and the time off from work, and that draws as little blood as possible, as efficient as I can be in my diagnostics? I could use help with that. There are many, many examples like that which I think can make not just our lives easier, but our patients' lives easier. And that, I think, is in horizon two.

'Cause it's still within the frame of the way we do doctoring now. - Yeah. So I'd love you to talk a little more about that. It seems that, as is often the case, we sit in our own sectors, and certainly the AI innovation is gonna come from those with a computer science background, not necessarily from those who are trained in medicine or who think primarily in that domain. So as somebody who has a foot in a variety of different disciplines, what are the models that can bring these disciplines together, so that we are not just advancing this really transformative technology, but doing so in a way that is ultimately useful for the domain we're trying to have an impact on, in this case medicine?

- Yeah. I think the first and most important thing is that those of us in clinical medicine can't sit on the sidelines. We can't just watch it go by and be consumers of this technology. We need to actively engage, and engage at all levels, especially in leadership: leadership in clinical medicine, in education, and in research. And for those of us in academic medical centers, I think it's incumbent upon us to be actively putting forth the vision, and testing that vision over time as to whether it's what we need and what we want. I think we need to be very engaged and very creative.

There are lots of people looking at this; health is 20% of our GDP. There's a lot of money to be made here. If you think about all the billions of dollars going into generative AI, and you look at the economy and ask what sector has a lot of inefficiency, a lot of money, and is a huge part of the economy: it's health. People are coming into health because it's such a rich domain, and it is ready for disruption. People with the money and the technology are coming. They're gonna do things, and that's great.

But how do we have people who really understand clinical medicine, and our values and our vision? We can partner; we absolutely need to do that. But some people think even bringing two people together is not as good as having one person who understands both, and I think that is true.

That's partly why we have our Computational Precision Health graduate program. I think we need to seed more places where people are growing up, in a sense, with both. So maybe more of this belongs in the medical school curriculum; this is pretty critical. We need to start, and we need to start integrating that sensibility into our current practitioners. There's a lot of work to be done here.

Maybe you'll have another interview with a medical education person. There are so many wide-open issues about how we do this, how we generate clinicians who can really participate in a fundamental way in where this is going. - I do think that a program like the one you're describing, where we take people at the formative stages of their career and have them learn the skills and expertise of their discipline, but in the context of medicine from the outset, so that more people, and they don't have to be physicians, grow up honing their skills in an environment of medicine, seems to be one of those things that can help bridge some of the barriers that have historically existed. - I think what's really important is that we bring those kinds of technologies into the real world of medicine, not into a constrained diagnostic problem. That we actually tackle the fact that this is the best drug, but it's not covered by insurance, or it's not available in the pharmacy, which just happened to me yesterday, right? Or that people have different values. We don't often bring patient values into our computation, because the natural way of a computer scientist might be to look at it as an almost mechanical problem.

And it's not; it's a human problem. And with generative AI, which is really far more human in its appearance, I think that's where people will really see: wow, we can't think about this as just a machine spitting out a number, a prediction. This really has to fit into the human process and the human experience of clinical care. People are gonna see that more and more. It's a more difficult task to be trained in the real world and be embedded in the real world.

But the good news is that we have more technologies now: machine learning, causal inference, large language models, generative AI. I think we can start to get a much better computational handle on the messiness of the real world than we could before. So it's a really, really exciting time. - Well, it strikes me that when you listen to what is leading to burnout and decreased wellbeing for clinicians, it is not about the misdiagnosis or the test they couldn't get. That's really important, but it is about the way we practice, and the systems we have to interact with that are not designed for efficiency.

And similarly, when you talk to patients, their experience of a health encounter is sometimes about the thing that didn't go right, a diagnosis that wasn't made. But most often it's: it's too hard to get this; I don't have my medicines because I couldn't do that. And it strikes me, the way I hear you describing what excites you about the promise of these technologies, that some of those things that make healthcare hard for both doctors and patients could actually be a perfect role for generative machines to step into. - Yeah, if we wanna conceive of it that way.

And I think we should. And as we do that, the issue of ethics always comes up, right? Is it ethical? Is it fair? Is it increasing inequities or reducing them? So I've been thinking a bit about that. In our traditional bioethics frame, we have these principles, right? Beneficence, autonomy; we're all familiar with those. And I was thinking that those are all principles for protecting the human rights of patients, and that's as it should be. But think about our primary care clinic: we walk into the clinic room, and there's the generative AI, and there's the patient. The patient is not the only human in that room. There are two humans, the clinician and the patient, and the machine.

If we reduce it down to that, I wonder if there are bioethical principles that relate to the clinician as a human, and what our rights are when this machine is there as well. And that goes to the burnout issue.

I think we have traditionally focused on the rights and the experience of the patient, and the doctor is just there. At one point, and I don't think this is gonna happen, but at one point I thought, this is what's gonna happen: the machine is gonna do all the work. It's gonna generate everything.

It's gonna write the note, and we're gonna go, wow, it's gonna write the note. No, we're gonna be spending all our time at night reviewing the notes the computer has generated and fixing them.

Now you could say, well, that's really great, but that's not the kind of doctoring I wanna do, right? So what is it that gives me the reason I went into medicine? The drive that makes us value ourselves as a profession: what is that, and how does the machine fit in there? I think we have to explicitly take the clinician as a human and elevate them, maybe even to the same level as the patient. I mean, we're all human, right? It's a relationship.

So I hope this makes us think about that too, especially in this time of moral injury and burnout: we should be thinking of clinicians as humans, and asking how the machine augments our humanity. - I love the way you said that. It sounds like you're describing both the types of research studies we'd want in place when we're evaluating new technologies, and a framework for how we should think about whether these technologies bring value. The value comes in evaluating clinical outcomes, which is always about the patient and how we protect the patient in these environments. But the promise and the challenge of these technologies run to both the patient and the clinician, and we have to have frameworks for thinking about that as well, is what I'm hearing you say.

- Indeed. Indeed. And you might ask, well, who's gonna look out for that? Who is making the investments in these technologies? If it's a hospital, do the hospitals have a framework for thinking about physician wellbeing as a real factor in what we're doing, not just as an afterthought? We're not set up, in terms of organization or governance, to confront an opportunity and a threat like generative AI. The pandemic was a pressure test, right? Generative AI is not in any way like a pandemic, but I think it does make us think institutionally, and then more broadly across our whole medical system.

How are we gonna respond, and how should we configure ourselves to respond, so that the right signals and the right values get embedded in what we do? - Wow. That's such a perfect way both to challenge us and to invite you to come back and talk with us again. 'Cause you're describing a technology so transformative that we're thinking about horizons one, two, and three all at the same time, but a system, in terms of organized medicine and academic medicine, that isn't quite set up to think about all of the governance and ways of evaluating this. And so that's the challenge to all of us, 'cause I think we can all hear from what you're saying that this is an important thing for us to address. How we're gonna do that is what's gonna play out over the next few months and years. - And it's up to us. It really is up to us.

And I hope we take that charge on. It would be easy enough for us to go back to our inboxes. I need to do that this afternoon, right? I have a lot of insurance paperwork to fill out.

But we do need to engage in a really deep way with what's going on, and we need to do that now. There is an urgency. You're right in calling that out.

- Wonderful. Well, thank you so much, Ida. It's been a real pleasure to talk with you, and I hope you'll come back and tell us more about what makes you excited and where we should be looking in the future. But thank you so much for joining us today for this discussion.

- Thank you. Thank you for having me. - And to our audience, thank you for watching and listening. We welcome comments on this series. We also welcome submissions in response to JAMA's AI in medicine call for papers. Until next time, stay informed and stay inspired.

We hope you'll join us for future episodes of the AI and Clinical Practice series, where we will continue to discuss the opportunities and challenges posed by AI. Subscribe to the JAMA Network YouTube channel, and follow JAMA Network Podcasts available wherever you get your podcasts.


