AI and Clinical Practice—the Potential to Reduce Clinician Burden and Streamline Health Systems

(bright music) - When someone experienced in AI tells you that their biggest hope is that AI addresses clinician burnout, you tend to listen. But how do we ensure that AI in the back office lives up to its potential to improve the experience for clinicians? And how do we avoid what AI experts call another AI winter? I'm Dr. Kirsten Bibbins-Domingo, and I'm the Editor-in-Chief of JAMA and the JAMA Network.

This conversation is part of a series of videos and podcasts hosted by JAMA that explores the rapidly evolving area of artificial intelligence in medicine. I'm joined today by Dr. Kevin Johnson to discuss how AI can enhance electronic health information exchange. Dr. Johnson is a pediatrician,

and the David L. Cohen University Professor of Pediatrics, Biomedical Informatics and Science Communication at the University of Pennsylvania. He is also a senior fellow in the Leonard Davis Institute of Health Economics at Penn. Thank you for joining us today, Dr. Johnson. - Thank you.

And please call me Kevin. - Okay. Let's be on a first-name basis. I like that. So I wanna start by putting together your very interesting background and set of expertise. So you're a board-certified pediatrician, but you're also recognized for your work in informatics and clinical information technology.

You knew early on that you wanted to combine both, and then you chose your specialty to give you the flexibility to do both, it sounds like. - That's right. That's right. - Wonderful. So you've been thinking about informatics and health information technologies for a long time, and we're gonna talk a little bit about the Electronic Health Record and your work there, but tell us why this time that we're living in is so different. As somebody who's spent a long time thinking about medicine and informatics, what's different about this time now? It feels different. - Yeah, it should feel different.

Throughout my life, there have been one or two previous moments where it was very clear something had changed that was going to impact healthcare, either for the better or for the worse. Clearly, the first of those, as I was getting into the field, was the internet. And just recognizing that the access to information, and potentially the access to people globally, could completely change the way we think about how to do a diagnosis. Because we could send information to a world expert within seconds and get information back.

And we obviously have seen many examples of what the internet's done. It was really a sea change. Mobile technology was another one. And I was very excited about what was happening even before the iPhone, with the Palm Pilot, and did some early research looking at ways we could use Palm Pilots, because I've always been focused on getting clinicians to do direct data capture so that we could use computational means to improve guideline-based care and clinical decision support. AI's been around a really long time.

AI's been around since the '60s, possibly earlier, depending on how you define it. But we've had a number of what we call AI winters. And so most of us have said AI's kind of smoldering along in the image analysis space. It was starting to gain a little bit of traction in prediction. But this year, with the unveiling of what's now being called generative AI, large language models, ChatGPT, everyone is seeing that Pandora's box has been opened, or if you will, the genie's lamp has been opened and we're having our wishes.

So no question. We are now in a place where AI is going to be a part of healthcare in a very visible, meaningful, and pervasive way. We're all very excited. - That's good. So I've heard you use this term AI in the back office. What do you mean by that term? - Well, thank you.

You know, a lot of people are beginning to see, and I'm sure we'll talk a lot about, burnout and some of the ways in which AI can mitigate it. But what I think people have failed to think about is the entire ecosystem of healthcare. And when one looks at the literature now, we're just starting to see people getting beyond some of the more commonly talked about areas. Things like, how can we take clinical decision support and critique it before we actually release it? Making sure that the messages are clear, making sure that they're succinct, making sure that the references we've chosen match the concern we might want to address in terms of education, is now something that we can do. There are many, many people, John Halamka at the Mayo Clinic Platform and others, who've been thinking about the rest of the back office, from scheduling to prior auth. One of my favorite topics, which you could say is sort of front office, but maybe not, is helping patients to understand the content of messages, especially if those patients have limited English proficiency or might otherwise have challenges with the way the message has been written.

Perhaps it was written by a doctor who loves acronyms, for example. And so I really think that as the literature evolves over the next few years, we're gonna find many of the back office functions actually get tested before the front office functions, because there's less at stake. So there's a lot more that we can learn and critique without it actually impacting a very stressed workforce. - So you're describing that there's less at stake.

So that might be why we use AI in those settings first. But you're also describing a set of activities that have the potential to sort of eat away at the time that clinicians have, or things that they work on in the background but that really can be overwhelmingly burdensome for the clinician workforce. - That's right.

And as you may know, this is one of the areas that's been a real concern of mine. I published a paper in JAMA with Bill Stead about going from safer to smarter approaches to thinking about electronic health records. And what we meant by that was that there are already guides in place to help us understand cognitive burden. But right now, there's really no metric for cognitive support.

So there's no way to say that this particular tool provides more benefit to a particular community than another tool. We can say, as I've said before, and forgive my vernacular here, that this tool sucks less than the other tools. Right? But what we really want to be able to say is that this tool does what patients and providers need it to do without fatiguing them and causing issues that relate to staffing, in other words, without burdening the healthcare system. So a lot of what's happening right now is in the literature that you're producing, and quite frankly, JAMA has done a terrific job of this.

Jeffrey Harris and a lot of other people have published on some of this potential in the last year alone. And obviously the call that you have for the way in which JAMA is gonna evolve its AI in medicine portfolio is terrific, and it speaks to the evidence base that we need. That evidence base is likely to be focused on very interesting ways that we actually could reduce cognitive burden as well as improve cognitive support.

And so I do have some excitement about that but I also have some skepticism. - Well, I'd love to press you just a little bit on that because certainly the electronic health record was an example of that technology that was supposed to make all of our lives better. You write very eloquently with Bill Stead, that you know, it certainly has failed at that.

And you put out these areas where we want electronic health records to be safer and smarter. But convince me that AI is not just one more new technology that, integrated into the health record, just makes my life worse instead of delivering on that promise to actually smooth things out for me. - Hard to do. But I'll give it a shot.

If you look at my record of work in clinical informatics, as I said, I've always been an evangelist for understanding the ways in which we can do direct data capture. So arguably, as part of the work that I've done with the National Academy of Medicine, then the Institute of Medicine, as well as the American Board of Pediatrics and other groups, I've been one of the people who helped construct the electronic health record that you know today. And so for me to say honestly that I can predict nothing but good would of course be false. We know that there's a series of two-tailed hypotheses here. We know, for example, even with areas like patient portal messages being replied to by clinicians, which is becoming a very, very hot topic, that there's a double-edged sword there. And so I can't convince you, but what I can do is say that properly done studies will, I hope, help us create some guardrails.

And of course, remembering our history with electronic health records should hopefully help us to get out of some of the potential areas of concern. But it's clearly a possibility, and I'm sure we can talk about that for quite some time. - So tell me a little bit, you said properly designed studies. What types of studies would you like to see? We also have the good fortune of having you on our editorial board at JAMA Health Forum. What kinds of studies, either in real-world settings or in the policy and regulatory domain, would you like to see, or what types of work would you like to see published, since we put out this call for papers? - You know, clinical trials are still, if you will, the most effective way to demonstrate both intended effects and unintended side effects, especially if we make sure that we're focusing on clinical effectiveness. So I would love to see us think about clinical trials for as many of these topics as possible.

Some of the data that we can get from secondary use will give us a really good sense of what might be possible. In other words, it's great for efficacy evaluation. But I think at the end of the day, we need to do clinical effectiveness studies, and we need to focus on every aspect of the system. And that includes early usability studies; that includes formative assessments and qualitative assessments, using mixed methods, inviting many stakeholders, patients, nursing staff, providers, into the study design so that we actually get a holistic perspective. You and I, I know, think a lot about equity and making sure that we are equity first in this conversation. So for example, many people now know about ambient scribing.

The idea that there may be technologies coming right down the pike that can take a conversation that I'm having with a patient and summarize it into an arbitrarily long or short set of messages or into a document. Those are gonna be very expensive. So to do an efficacy study will likely demonstrate one set of outcomes. To do an effectiveness study means we need to get into other communities and really think about some of the barriers that might be there related to cost, education, time, staff turnover, patient understanding of this technology, and trustworthiness of healthcare systems and healthcare employees. And those are the kinds of studies I'd like to see done. I'd like to see studies that are fairly holistic. I'm very ambitious about where I'd like to see this field go, but as you said, we're skeptics, and we should be; we need to understand some of those things going in.

- Yeah, it's been really exciting to talk to some people as a part of this series about the potential for advancing our goals of achieving health equity, because of the ability to enhance access and potentially to scale to other populations. But you're reminding us of this important fact: unless we test new technologies in the settings where we actually hope to apply them, we hardly ever achieve our goals in all of those settings. - Absolutely.

Early in my career, I was one of the people looking at the use of text messages for behavior change. And I can still remember, after a very successful project that was funded by the Robert Wood Johnson Foundation and the Agency for Healthcare Research and Quality, presenting the work at Meharry Medical College. So I presented this work, I was very excited. I brought, you know, the technology to the room.

And then at the end, after we talked about and discussed some of the next steps that needed to be done, one student raised his hand and said, why did you build this on the iOS platform when most of our patients use Android? And that was one of those wake-up calls where I had no answer. I'd really not thought about it. I gave the typical sort of academic response, which is, well, I was only funded this amount of money to do this particular project, but you're absolutely right. You know the drill. But the reality was, it was a wake-up call that, from the very beginning, these types of initiatives need to be approached through an equity lens first, especially while we have the funding to study them. Because typically you get one shot at that, one bite at that apple, and from that point on, it's gonna be technology transfer or other methods to study it.

- Right. It's about building in a focus on equity from the outset. So you're clearly enthusiastic about the possibility for AI to continue to transform even these terrible electronic health records and a lot of what we're doing in the practice of medicine. But I also sense that there are things we should be cautious about. I know that you have played an important role in the National Academy of Medicine convening a multidisciplinary group of stakeholders to think about a new code of conduct for AI.

Tell me a little bit, what are the things you are worried about? - Well, leading up to this National Academy of Medicine Code of Conduct work, a number of us who are leaders in informatics have had a sort of weekly Friday session, if you will, an unmeeting, where we think about various topics. And we spent a lot of time talking about COVID, et cetera. When the news broke about ChatGPT, we all went through what Gartner calls the hype cycle. We all had amazing thoughts about what could happen. And as we've discussed, it took very little time for us to get to the trough of disillusionment.

Probably the first example of that that we talked about was Ziad Obermeyer's work from Berkeley, where he talked about the issue of patients who are Black being given lower scores by an AI algorithm and therefore being less eligible for care coordination, even though the data that were used to generate those scores were biased, and these patients, had they been manually reviewed, which is what he did, should have received more care coordination than they otherwise did. So the issue of algorithmic fairness became very important to us very early. We recognized that there are biases in us, and that's been shown many, many times. There are biases in the data, reflecting the society in which those data are generated. There are also gonna be biases in access to this information, and there are going to be disparities in the kinds of questions that might be addressed.

Again, because of funding, as you and I know, one of the challenges for Brown and Black people is that some of the topics that are typically brought up are much more difficult to get funded. You know, there was a great piece, sort of orchestrated by the National Institutes of Health, reviewing that. So we can expect that some of the topics that should be of most relevance to researchers will be left behind. And therefore, one of the things that we all thought about from the very beginning was how do we continue to move this forward without leaving groups behind, without introducing systemic biases into an entire, you know, global setting. The other thing we worried about is health policy, because we can already see that whenever we talk about things that scare us, people's next question is, how should we regulate this? And of course, I'm of two minds about that. If we were to regulate industries, like, for example, clinical decision support, and the FDA has now come out, I think, with reasonable rules about that,

and ONC is working on a new proposed rulemaking, we risk ossifying innovation. So there's a point at which it makes a lot of sense to watch and be careful about how we set up those policy guardrails.

We don't know that we're there yet, because there's so much to learn. The technology itself has so much potential, and importantly, a lot of the bad actors in the world, and this is something that Peter Lee from Microsoft and Sam Altman and others have said, aren't going to stop. And so it's almost more important at this point that we understand what's possible. And out of the conversation that we had, about all of what concerns us in medicine as well as all of the opportunity, was born this idea of how we should create these guardrails, which, although we called them a code of conduct, really are all about how we align and learn iteratively about what should be allowed and what the technology is capable of.

And so what the National Academy of Medicine has now done is assemble a group of really talented people from Google, from Microsoft, and from various other companies and groups to think critically about this. And Michael McGinnis from the National Academy of Medicine has been really committed to this being a learning environment, where we don't just write a report, put it out there, and leave it on the shelf. The technology's moving fast enough that we generate material, we distribute it, but then we also respond to how things are changing over time.

- Yeah, thanks for saying that. I've heard a number of people say that it really will challenge the regulatory systems. The very nature of generative AI is that it's not necessarily producing the same product each time. That's sort of inherent in that technology, in the potential of that technology. But we don't quite have the structures set up for how we regulate that. And it's interesting to think also of the speed with which these technologies are developing over time.

And to think of what regulation means in that regard. When the National Academy puts together a code of conduct, who should I think of that code of conduct as applying to? Are you speaking to the computer scientists? Are you speaking to the health systems adopting a new technology? Who's the code of conduct meant to apply to? - Yeah, such a great question. And we're still in the formative stages, so anything that I say about what this very large group of really talented people is gonna produce is also gonna be premature.

But I can tell you that what we know for sure is that we are gonna leave no one out of the healthcare ecosystem. So we have representation from patients, we have innovators, we have physicians. I suspect that we're gonna wanna make sure it applies very, very well to the development process, because of the work that Obermeyer and others have done to show these biases.

So I think we're gonna start from the very beginning of the pipeline and look at what problems are being chosen, and go through the process of what data are valid to help solve that problem. What models? One of the topics that many people here might not have heard about is this idea of what's called XAI, or explainable AI. And so the question of what models we choose is directly related to how much we believe that this decision should be explainable. So I'm certain we're gonna spend some time on the role of explainable AI, when is it necessary, when is it not, all the way to how comfortable should we feel putting this kind of technology in the hands of patients without providers? That's a very big question, and I know a very active question. And of course, how do we integrate it into clinical decision support? How do we potentially begin to think about augmenting a lot of our care with AI? I can tell you a challenging question that John Halamka raised that I'm still sort of thinking about quite a bit, which is, with the work that we've done not in generative AI but in predictive AI, we will soon be able to re-review a chest CT and possibly take a CT that was reported as normal and now generate new findings.

It's very similar to what's been happening in genomic medicine. So now we have another ethical question, and this will certainly be something we bring up in the code of conduct, which is, what should we be recommending? Is it a recontact situation, where for every CT that gets reevaluated, if we identify that the patient may have had a nodule that was more concerning to the AI than it might have been to the original reader, there should be a recontact of the patient and the original provider? A completely unanswered question. And one that, when he brought it up, and I was very active in the earliest days of precision medicine, sort of gave me hives, because, you know, this topic of recontact when there's a variant of unknown significance is still largely unappreciated. We know that we should recontact, but we don't know what it means.

Patients' phone numbers change, patients' addresses change, data change. So we might recontact the patient today and then find out that new data have come out suggesting it's not significant. And so in this rapidly moving, technologically advancing society, I think the code of conduct team is gonna have an opportunity to think about a lot of, you know, quite vexing problems.

- Right. Well, it sounds like there will be no shortage. Certainly imaging is an area of medicine that has been thinking about, and is used to, advances in AI assisting with imaging and how we integrate that into clinical practice. And there are even more challenges in the many areas where we've not really been as used to thinking about the possibility of AI helping, or perhaps prompting new challenges for us. - That's right. That's right. - Well, it's been really a pleasure speaking with you, Kevin.

I hope you'll come back as this report is released. I know you can't say anything yet about it or about the work of the committee, but this is a rapidly evolving space. And as somebody who's been in this space for such a long time and has really taken a leadership role as part of this National Academy work, we'd really love to hear from you again as things continue to evolve. - Of course, it'll be a pleasure.

- And thank you to our audience for watching and listening. We welcome comments on this series. We also welcome submissions in response to JAMA's AI in medicine call for papers.

Until next time, stay informed and stay inspired. We hope you'll join us for future episodes of AI in Clinical Practice, where we will continue to discuss the opportunities and challenges posed by AI. Subscribe to the JAMA Network YouTube channel and follow JAMA Network Podcasts, available wherever you get your podcasts.
