How ChatGPT and Other Digital Tools are Transforming Medical Diagnosis

[I. GLENN COHEN] All right, I think we will get started now, because we want to make sure we have enough time to hear from everybody. Welcome, everyone. I'm Glenn Cohen. I'm a professor at Harvard Law School, I'm a Deputy Dean here, and I'm also the faculty director for the Petrie-Flom Center. My pleasure to welcome you, and thank you for attending. It's also my pleasure to acknowledge the Gordon and Betty Moore Foundation and thank them for funding the work we've been doing on diagnosis in the home, of which this webcast is a part. I also want to mention that, as part of the same project, we have a new podcast series called Petrie Dishes, featuring some of the people you see on this panel; you can find it on the Apple Podcast Network, and please review and rate us, especially if it's a good rating. We've got a really exciting panel for us today, but I want to cover a few housekeeping points first. Firstly, if you want to submit a question, you can do so through the Q&A function on Zoom or via Twitter, now X, @PetrieFlom.

You do not need to raise your hand or use the chat; those are channels we will not be watching. The Q&A function and Twitter are the ways to do it, and Chloe has helpfully put a link to the podcast in the chat. We'll share the fully captioned event video with all registrants within about one to two weeks. So if you've missed part of this, or you want to watch it again because it was so good, it will be available and captioned in one to two weeks. And lastly, if you have any technical issues or problems, please email us at petrie-flom@law.harvard.edu. Okay, with that housekeeping done, it's my pleasure to turn it over to David Simon, now Professor Simon, who was a postdoc on this project and is now an assistant professor at Northeastern. David, I'm going to turn it over to you.

[DAVID SIMON] Great, thanks, Glenn. As Glenn said, my name is David Simon. I'm an associate professor at Northeastern University School of Law, formerly a postdoctoral fellow at the Petrie-Flom Center, and it's my pleasure to introduce our three panelists today. First we have Dr. Adam Landman. He's an emergency physician and Chief Information Officer at Brigham and Women's Hospital. He's interested in innovative applications of information technology to improve healthcare delivery, and, among other things, he led an emergency department information systems modernization project, a three-year, seven-million-dollar custom software development effort to move clinicians from paper-based to electronic documentation. He's received a variety of grants and is quite interested in artificial intelligence and its application to health care. Our second panelist is Dr. Michael Abramoff. Dr. Abramoff is the Robert Watzke Professor of Ophthalmology and

Visual Sciences at the University of Iowa, with a joint appointment in the College of Engineering. He is an IEEE Fellow (did I get that right, the "triple E"?) and an ARVO Gold Fellow. He's also the founder and executive chairman of Digital Diagnostics, an autonomous AI diagnostics company, which was the first in any field of medicine to get FDA clearance for an autonomous AI. Our final panelist is Professor Leah Fowler. She's a research assistant professor at the Health Law and Policy Institute at the University of Houston Law Center. Her work explores the intersection of consumer technology and health, with a focus on smartphone applications and platforms. She's published in a variety of outlets and has done some terrific work. So thank you to all of the panelists for joining us. We're going to start with Professor Landman, and I will hand it over to him and begin the slideshow.

[ADAM LANDMAN] Great, thanks so much, David, for the opportunity to join everyone today, and thanks for the nice introduction. I'm going to run through these slides quickly and start with a holistic overview of healthcare AI. Next

slide, please. I just want to start by saying, you know, I'm a technologist, I love technology, but what we're really talking about here is using technology to improve the quintuple aim. Our whole goal is to find AI solutions that help improve population health, improve the patient experience, improve the efficiency of healthcare delivery, advance health equity, and, very importantly, help with healthcare worker burnout. Ideally we are looking for solutions that actually improve all five of these aims, or at least many of them. Next slide. I really like this figure by Justin Norden that gives a quick glimpse of what's going on in healthcare AI. And the big takeaway is that there

are AI solutions popping up in many areas of healthcare, ranging from life sciences and clinical trials research, to administrative challenges like prior authorization and medical coding, to analytics for population health, and even to patient-facing solutions like sensors and care navigation. Finally, we're seeing a lot of AI solutions to help clinicians with clinical decision support or with documentation. Next slide. Not all of these use cases are equal, though. We want to think about the use cases in terms of what the value add will be, but I also like to think of these use cases in terms of risk. If you're applying AI to these areas, what is the risk, for instance the risk of patient harm? On the left-hand side are use cases that I think are lower risk, and as we move to the right, the risk gets higher. And so I

think where many health systems are starting with their AI strategies tends to be toward the left, where the risk is lower, and so we're seeing things like contact center automation; we just did an AI video on this. In the middle are medium-risk uses, where we're using AI in clinical workflows but often have an expert such as a clinician in the loop, so human in the loop. And then finally, where I think we're going to focus today's discussion, is diagnostics, where we're having AI actually diagnose or triage a patient, and I would say that's the highest risk. So I want to share with you some real examples of things we're working on, just to give you a sense of where AI is in healthcare delivery. Next slide. The first example is one that I consider lower risk, and it's in our contact center. We're using various forms of AI in our contact centers to help us improve efficiency and also improve the patient experience as well as the employee experience. We have a large contact center that handles

patient questions related to our electronic health record. Our patients are increasingly using the patient portal, and sometimes they have questions. Traditionally they called and our agents would answer their question; now we are using a combination of an interactive voice response (IVR) system with natural language understanding to have the computer listen, recognize the question, and then provide an answer to the patient. And we've actually seen about a third of calls to our patient portal support desk successfully handled by this IVR and AI combination, so it's really been showing a nice efficiency improvement and improvement in satisfaction. Next slide.
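To make that call-handling flow concrete, here is a minimal sketch of an IVR triage loop. The keyword matcher is a toy stand-in for a real natural language understanding service, and the intents, canned answers, and confidence threshold are hypothetical, not the actual support desk configuration described above.

```python
# Toy sketch of IVR + NLU call triage: answer automatically when confident,
# otherwise fall back to a human agent. Intents and answers are hypothetical.

INTENT_ANSWERS = {
    "password_reset": "To reset your portal password, use the 'Forgot "
                      "password' link on the login page.",
    "proxy_access": "To request access to a family member's record, "
                    "submit the proxy access form in the portal.",
}

INTENT_KEYWORDS = {
    "password_reset": {"password", "reset", "login", "locked"},
    "proxy_access": {"proxy", "family", "child", "spouse"},
}


def classify_intent(utterance: str) -> tuple[str | None, float]:
    """Return (intent, confidence); keyword overlap stands in for real NLU."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0.0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent, best_score


def handle_call(utterance: str) -> str:
    """Self-serve when the intent is recognized confidently, else hand off."""
    intent, confidence = classify_intent(utterance)
    if intent is not None and confidence >= 0.5:
        return INTENT_ANSWERS[intent]
    return "Transferring you to a support agent."


if __name__ == "__main__":
    print(handle_call("I'm locked out and need to reset my password"))
    print(handle_call("Why is my bill so high?"))  # unrecognized -> human agent
```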

Another example, and this is in the middle bucket of medium risk, where we use a human in the loop: many sites, including my organization, are very interested in using generative AI to help with clinical documentation. As providers, every time we see a patient we have to document, for many reasons including billing, legal requirements, and clinical care continuity, and that takes quite a bit of time. So the holy grail is: could you have a solution where AI does the documentation? These solutions take the form of a secure app running on a smartphone; the patient provides their consent to have the patient-provider conversation recorded. That recording is then securely sent to a commercial product in the cloud, and that commercial product does speech recognition, natural language processing, and natural language understanding, and then uses large language models to summarize the encounter and create a note that looks like any other provider note. We are in the early stages of testing this. I also want to emphasize that the AI-generated note has to be reviewed by the clinician to ensure that it's accurate and correct and has all the information included; it's then edited and signed off. And so we're in the early stages of assessing this solution. Next slide.
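A minimal sketch of that ambient documentation pipeline follows. The transcription and summarization functions are placeholders for the commercial cloud services mentioned, the note format is hypothetical, and the one hard requirement encoded here is the clinician review gate Dr. Landman emphasizes.

```python
# Sketch of the record -> transcribe -> summarize -> review -> sign pipeline.
# All service calls are placeholders; only the review gate is the real point.

from dataclasses import dataclass


@dataclass
class DraftNote:
    text: str
    reviewed: bool = False
    signed: bool = False


def transcribe(audio_path: str) -> str:
    """Placeholder for the commercial speech recognition step."""
    return "Patient reports three days of cough, no fever ..."


def summarize_to_note(transcript: str) -> DraftNote:
    """Placeholder for the LLM step that turns a transcript into a draft note."""
    return DraftNote(text="HPI: " + transcript + "\nAssessment/Plan: ...")


def clinician_review(note: DraftNote, edits: str | None = None) -> DraftNote:
    """Clinician verifies accuracy and completeness, optionally editing."""
    if edits is not None:
        note.text = edits
    note.reviewed = True
    return note


def sign_and_file(note: DraftNote) -> None:
    """Refuse to file any AI-drafted note that has not been reviewed."""
    if not note.reviewed:
        raise RuntimeError("AI-drafted notes cannot be filed without review.")
    note.signed = True
    print("Note filed to the EHR:\n" + note.text)


if __name__ == "__main__":
    # Consent to record is obtained in the app before this pipeline runs.
    draft = summarize_to_note(transcribe("encounter_audio.enc"))
    sign_and_file(clinician_review(draft))
```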

Another promising example is using AI to help with in-basket messages from patients. We're really excited to have seen large growth in the number of patients using online portals. One of the features our patients really like is sending in-basket messages, essentially secure email messages, to their providers and care teams, and across the country we've seen huge growth in the use of these tools, particularly after COVID. This is great news and great engagement, but it's been very challenging for our practices and our physicians to keep up with responding to these messages while still keeping up with their usual in-person clinic visits and other clinical load. So we're in the early stages of investigating AI, in particular generative AI, that can review the messages coming in from patients, classify them, and then, once they're classified, help route them to the appropriate person to handle each message. We're also in the early stages of investigating how well AI could actually draft a response to an in-basket message. Those drafts would need to be reviewed by a clinician, edited, and then ultimately sent to the patient by the clinician. Next slide.
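Here is a similarly hedged sketch of that in-basket triage idea: classify each incoming portal message, route it to the right queue, and attach an AI draft that a clinician must review before anything is sent. The categories, routing table, and rule-based classifier are hypothetical stand-ins for the generative AI under investigation.

```python
# Toy in-basket triage: classify, route, attach a draft, require sign-off.
# Categories and routing are invented for illustration.

ROUTING = {
    "medication_refill": "pharmacy_team",
    "scheduling": "front_desk",
    "clinical_question": "clinician",
}


def classify_message(text: str) -> str:
    """Rule-based stand-in for a generative-AI message classifier."""
    lowered = text.lower()
    if "refill" in lowered or "prescription" in lowered:
        return "medication_refill"
    if "appointment" in lowered or "reschedule" in lowered:
        return "scheduling"
    return "clinical_question"


def draft_reply(category: str) -> str:
    """Stand-in for an LLM-drafted response; never sent automatically."""
    return f"[AI DRAFT for {category}] Thank you for your message ..."


def triage(text: str) -> dict:
    category = classify_message(text)
    return {
        "route_to": ROUTING[category],
        "ai_draft": draft_reply(category),
        "requires_clinician_signoff": True,  # a human always sends the reply
    }


if __name__ == "__main__":
    print(triage("Can I get a refill on my blood pressure prescription?"))
```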

A final example, on the diagnostic side, is a research example: an exciting study recently published by researchers from Mass General Hospital, MIT, and Chang Gung Memorial Hospital. They developed an algorithm that can predict lung cancer risk from a single low-dose CT scan of the chest, and you can really see the power of this if you look at the image in the bottom right. When an expert radiologist looked at image A, you can see the area in the circle: they rated it as low risk of lung cancer, a Lung-RADS score of 2. When the AI algorithm developed in this study, which is called Sybil, evaluated image A, it rated it as high risk for cancer, in the 75th percentile. Image B is the same patient two years later, and now, as a human looking at image B, it's very clear there's a new spiculated solid mass very concerning for cancer. This is very early stage, it's a research study, but it's extremely exciting because it starts to show AI as a diagnostic tool that is exceeding human capabilities. Next slide.

And I think, as you know, we all want to move toward using AI for medical diagnosis at the point of care, and these are some of the characteristics we need to ensure exist before we can really start using all of these tools; I expect we'll have much more discussion on this. Most importantly, we need to make sure that the algorithms are safe and reliable. We also need to look at the benefit and the return on investment, because there are costs to these tools, and we need to ensure that value is being added. In some cases FDA approval may be necessary, and I think the rules and regulations regarding which types of algorithms need FDA approval are evolving, so there's a real opportunity here to better understand this. Then we need to ensure that these algorithms can be applied without bias to all patients, and if there is bias, we need to be able to understand it and ideally remove it. We also need to remember there is data involved here, and we need to ensure the privacy and security of individuals' data as well as the data used to create the algorithms. When possible,

these algorithms should ideally also be transparent, so clinicians and others can understand how they work. That may not be possible in all cases, but when it is, we should try to bring transparency, and certainly transparency about the algorithm's performance. And finally, in some cases we may need to be transparent with patients, explaining when we're using AI and how we're using it. So I look forward to talking with the entire panel and diving deeper into some of these issues.

[DAVID SIMON] Great, thank you so much, Adam. I'm going to turn it over to Michael.

[MICHAEL ABRAMOFF] I'm in a hospital environment, so I'm so sorry I'm not wearing a tie and looking decent like everyone else; I've been in the hospital continuously because of family circumstances for a few weeks now. I'm very excited to be here, thanks so much for inviting me, and Adam laid it out very carefully and very well. Let me begin with a bit about me. My name is Michael

Abramoff, and I'm a practicing retina specialist, as David mentioned in the beginning. I do notice we do not see my face right now, so as long as that's okay, I'm good. I'm also, as you mentioned, the creator of that first autonomous AI, and that took a long time. It started with a meeting in 2010 with FDA: hey, I want a computer to make a medical decision, how do we go about it? That led to a very fruitful meeting of minds for years and years, and it led to an ethical framework for AI that has, I think, been really important in getting stakeholder support, across all stakeholders in healthcare really, for the use of especially autonomous AI, because, as Adam mentioned, the perceived risk of autonomous AI is probably the greatest, and we can discuss that a little bit. I think it's been an interesting journey since then, because in 2018 FDA authorized this AI to be used for patient care without human oversight, specifically to diagnose a complication of diabetes called diabetic retinopathy, including diabetic macular edema, which are the most important causes of blindness. So, in the terms Adam laid out, it serves a need: this is a major cause of blindness, and the AI can help prevent it because it leads to early treatment and management. It's also traditionally a great source of health disparities,

and that's another reason to use AI to improve access for these patients. In fact, multiple randomized clinical trials, some already published and some coming out this year, are showing that health disparities are indeed improved; in the Baltimore area, Black Americans are now having diabetic eye exams at the same rates, thanks to autonomous AI, as white and Hispanic patients. So it really can resolve very persistent health disparities that have been plaguing us for decades and that seemed unsolvable even by throwing resources and money at them. That's exciting, because what it all started with was building autonomous AIs to hopefully improve outcomes, health equity, and population health, and it's now showing that all the trouble everyone went to is worth it. But like I said, they were not easy steps. There was FDA, but it also required, for example, the National Committee for Quality Assurance: many of you are providers and are probably familiar with measures like HEDIS and MIPS, which until then always said that a human needs to close the care gap, as it's known. That language was changed, and now a care gap can be closed with an autonomous AI. So there are all these small, detailed steps that need to be taken before you can actually say, well, this is now being widely deployed in clinic and is useful for solving these problems we have in healthcare. Another step was reimbursement. I didn't see it very explicitly on your slide, Adam, which was otherwise very helpful as a framework for the panel discussion. You may be aware of Pear Therapeutics, a company that was very successful, had FDA approval for an app (and I'm looking at Leah here) for treating addiction, and showed in randomized clinical trials that it worked.

So all the checkboxes were checked, except that, for various reasons, they had a hard time getting reimbursement, and ultimately that killed the company; it went bankrupt a few months ago. So a very useful AI technology that was shown to benefit patients and already had FDA approval didn't make it; now it will not benefit patients, and the technology essentially is lost. So I think reimbursement is a very important factor, but it of course requires every stakeholder in healthcare to be supportive. For example, physicians often fear job loss. It's not only job satisfaction but literally, will I still have a job ten years from now? I'm often asked these questions by residents and fellows, and so AI feeds these worries, which of course can lead to a lack of stakeholder support from physician organizations, and that is typically a problem if you want to get reimbursement.

And so payers of course need to support it, there needs to be an ROI, ethicists need to support it, and patient organizations need to support it. I think the work we did on the ethical framework I already mentioned, essentially making the step from talking about ethics to measuring ethics, a concept called metrics for ethics, where you can say, well, this AI meets this bioethical principle at 1.5 on some scale, and actually having various metrics (there are of course many of them, published a few years ago), really helped get these stakeholders on board and helped them understand that we were addressing any concerns they could have, many of them proactively rather than reactively, as has often been seen with other new technologies, take gene therapy for example. So it was a very worthwhile journey to stakeholder support, and that ultimately led to CMS, and later all payers, reimbursing this at a level where there's also a sustainable business model, which is of course important for sustainable R&D and continued investment by VCs and now also private equity.

I think, with the results of the randomized clinical trials coming out this year, the circle is almost closed. This can be done: you can really take an algorithm, take care of bias, address all the issues there may be with ethics, get stakeholder support, get reimbursed, and make it benefit patients, and so I think it's really worthwhile to discuss these various steps. That said, this first AI we created, and we have many more, is still prescription-only; it's not for home use, we're not there yet. I think the FDA is not yet comfortable with this being used in the home, and so how do we move from the current state to autonomous use, let's say, at home? What are the steps that need to be taken? I absolutely think we will get there, but step by step. So I will stop here. And maybe Leah is next?

[DAVID SIMON] Yes, Leah.

[LEAH FOWLER] Yes, I am next. Let me get my PowerPoint shared. All right, can everyone see that? Great. So hi, everyone, I'm really happy to be here. It's exciting to be on a panel with such great speakers, and it's exciting because of the topic: the promise of technology to move diagnosis out of the confines of the clinic and into the homes of patients is a really big and exciting topic. One facet of it that particularly interests me is the way consumer health technology is often regulated very differently than similar data and tools in a medical or scientific context. I like to think about this disconnect in two major buckets: one is privacy, security, and confidentiality, and the other is safety, efficacy, and accuracy. That's what I'm going to talk about briefly today, and I'll be pivoting to the "other digital tools," which certainly can but don't always include artificial intelligence, that are one of the subjects of today's event. These are tools that

maybe don't always live up to the potential to transform medical diagnosis in the home, especially in a consumer context, and they raise a couple of legal and ethical issues. Now, this is kind of an ambitious ten minutes, but I plan to start with a very high-level examination of our most basic assumptions about traditional notions of healthcare, going maybe a little further back than the evolution our previous two speakers have talked about, and then consider how we engage in health-promoting activities, activities that look and feel a lot like diagnosing and treating from the consumer perspective even if they technically aren't, specifically in a consumer context. As I alluded to when I described my research interests, the types of protections you get as a patient, the types of protections you may expect, are very different from the types of protections you get as a consumer, and I will illustrate that point with two examples of commonly used digital health tools. Now, when I talk about a healthcare context, what I mean is the settings we typically think about when we think about the provision of care, like a clinic or a hospital. These are among the most highly regulated settings in the United States, and because of those complex laws and regulations, we have certain expectations about the care we're going to receive and how our personal health data are going to be treated, at least as we traditionally think about it. And I

know these are things Dr. Landman mentioned when he was discussing what we need in place to advance AI diagnosis. One of the first big expectations is that the treatments we receive are going to be safe and effective, that we have enough evidence about them to make informed choices about the risks and benefits, that the diagnostic tools being used are going to be reasonably accurate and precise, and that we as individual patients don't generally have to do independent research to be sure of any of those things. The second thing we assume is that our data will be kept private and secure, and in some cases we have expectations about confidentiality. However, it is worth putting a huge caveat on all of that: just because these are expectations doesn't mean everybody gets them. For example, not

everyone receives the same quality of care, whether because of location or stigma or resources or structural barriers. But for simplicity: many of us can go to the doctor with these basic expectations about things like accuracy and privacy, and for the most part those expectations are going to be met. Of course, a medical encounter is not the only place people manage their health, and you would certainly not be here today if it were. For example, you may be tracking your calories or your steps in an app, or you may be using a wearable like a smart watch or a ring, and some of these wearables also sync with other apps that aggregate large data sets and can use artificial intelligence to do things like improve health predictions. This spans health categories: it could include weight loss, menstrual cycle tracking, mental health, sleep, and so much more. Truly, the space is full of products and services and advice that, viewed in their most positive light, can help us live our healthiest lives, tools that can liberate health care from the confines of the clinic and bring it directly to consumers in their homes. But one of the things I alluded to is that our

assumptions about privacy and accuracy in a medical context do not always translate into a consumer context, and whether they do depends on many variables that are not always particularly clear to consumers. In the interest of time I will give you only two examples, though there are many. The first is that in a consumer context your data are treated differently. Many of you watching know that HIPAA and its state-level counterparts protect health data privacy only in certain contexts. So while some states offer more robust protections,

HIPAA itself only provides privacy and security protections for certain types of identifiable information possessed or controlled by covered entities and their business associates, which is not everyone. Importantly, in the vast majority of cases HIPAA is not going to apply to consumer tech like your smartphone apps, or to any health information you're receiving or sharing on, say, a social media platform. Second, most apps and many wearables are not going to be FDA-regulated medical devices, even if they look the same as or similar to a device that is FDA-regulated. This is in part because the FDA has pretty broad discretion about how it interprets a product's "intended use," which is a special term of art in the law, and further legislation actually carved certain types of products out of the FDA's definition of a medical device. So it now excludes things like low-risk devices intended for maintaining or encouraging a healthy lifestyle, which covers a lot of consumer products. And that's all a very long-winded way of saying that

many products don't have to obtain any sort of pre-market approval or authorization, or even demonstrate that they work, before they enter the consumer tech market. Now, of course, a lingering question in the background of all of this is: why would it matter that things that look and feel like healthcare or health information are treated differently depending on the context? I would argue it certainly can be a big deal, especially as our interest in optimizing our health through consumer technologies grows and private, for-profit companies continue to offer oftentimes very promising technological solutions to the problems of health care in the United States. If we continue to position consumer technologies as substitutes for evidence-based care, it raises important legal and ethical questions, especially since, at least right now, the prevailing advice to consumers, in the absence of more robust legal and regulatory protections and unlike in a healthcare setting, is to do your own diligence and your own research on these technologies before you pick one to use. But I would offer that that's actually really difficult advice to follow. And I won't just tell you, I will actually show you, with two examples involving femtech, which is a really broad category of technologies that address female health needs. This most commonly refers to period and fertility tracking

apps, and I picked femtech specifically because it's super easy to understand why accuracy and privacy matter in this context. If an app you are using to achieve or avoid conception is not accurate, you will either not be able to become pregnant when you want to be, or you may unintentionally become pregnant when you don't want to be. And depending on where you live, you may have limited access to the full spectrum of reproductive care. Further, menstrual data are legally and

medically significant: for example, the date of your last period is relevant to determining gestational age, and period and fertility trackers at their most basic, taking away all of the technological shine, are often just repositories of dates of menstruation. So let's talk about what it might look like for a consumer to do their due diligence and try to pick an accurate and private app in this space. We'll start with accuracy. Most people make decisions about the digital health tools they're going to download at the point of download, and for most of us this is going to be the Apple App Store or the Google Play Store. One of the first things you might see is the images that the App Store shows you, and there's an example on the slide here. If you're looking for something that feels like an assurance of accuracy, your interest might be piqued by claims like the one my teeny tiny arrow points to, which says "automatic and accurate predictions of fertility." And of course, images are not the only thing you're going to find at the point of download; you'll also find things like the app description, and these words are even smaller, but what you need to know is that it's echoing the same guarantees. It's saying that one of the

features is "automatic and accurate predictions of fertility." So what can we conclude from this? Could we conclude that the product accurately predicts when you're fertile? If it can do that, can it also predict when you're infertile? And if it can do both of those things, couldn't you use it to achieve or avoid conception? But we can check one more spot just to be sure, and that's the terms of service. Of course, terms of service are not documents that people often read; this one in particular is not available in the App Store, I had to Google it. What you would find if you read it is a health disclaimer, and it tells a very different story than the images in the App Store. Suddenly it's all language about how the information and predictions can't be used for diagnosis or treatment, and you should not use this product for conception or contraception; if you trust it, you do so at your own risk. It's just interesting to think about how these documents tell very different stories than the most obvious consumer-facing advertisements.

And just to do this again for privacy: this is a screenshot from a different app, and what you'll see where the arrow is pointing, if you can read it, I know it's very small, is that it says the app never shares or sells your personal data. I want you to reflect on what you think the word "never" means, because this is an excerpt from the app's privacy policy, where it shows a non-exhaustive list of the ways the app does share your information with third parties. And I don't know about you, but that's not what "never" means to me. Now, I would hate for you to

think I'm just picking on a couple of questionable apps; I have no opinions on the products I've shown you here. I just want to show you something that is fairly common in the health app space, which is that apps advertise things that consumers want, even if those aren't necessarily things they truly offer. And while I just talked about femtech, I do want to be clear that this discussion goes far beyond it. It matters in a lot of contexts, especially ones that we might

think of as low risk but maybe aren't: contexts in which the risk of physical harm is greater if the product isn't accurate, or circumstances in which privacy and security are important because the risks of things like stigma and discrimination are higher. But no matter what, even if it doesn't fall into one of those buckets, we want products to do what they claim to do, because even if a non-working product can't actively harm you, it's a missed opportunity for improvement, and since we don't read the terms of service and privacy policies, the way companies advertise their products really does matter. My final point, because I know I'm coming up on my time, is what I want you to take away from this: if we want consumer technologies to be truly disruptive game changers in how individuals self-manage their health or diagnose in the home, it's really important to assess honestly where they live up to those promises and where they still fall a little bit short, and this disconnect between assumptions and protections, and the limits of consumer due diligence, is just one piece of that puzzle. And with that, thank you so much.

[DAVID SIMON] Great, thank you to all of the panelists for really terrific presentations touching on so many different issues. I think what I'd like to start with is a question that all of you can respond to if you'd like. Leah talked about an

unregulated zone of products, the zone where at least FDA is not doing the regulating. Dr. Abramoff talked about the product he helped develop in the context of FDA regulation, and Dr. Landman talked a little bit about both. So I'm wondering what each of the panelists thinks about the current FDA framework, the current framework for evaluating these kinds of products, and how we might think about changing or modifying it in the future. I'll pose that question first to Dr. Landman.

[ADAM LANDMAN] Thanks for the opportunity. I think the challenge is that there's ambiguity in the current framework around what is regulated and what's not regulated. So I think the crisper we can be about where the line of regulation sits is really important. And

ultimately, I think there are some aspects of regulation that can really help accelerate this work. Frankly, what I'm seeing right now is that a lot of centers are doing the same work on these AI tools, because we're all trying to adhere to the principles that were described earlier. We're all trying to do our diligence to test and validate these tools and ensure their safety and equity, and if there were ways we could agree, potentially through regulation, on standardized processes and expectations, with transparency into that process for those who are consuming these tools, I think it could help accelerate some of this work. So overall, particularly as AI advances and there's an increasing desire to use it at the point of care, either on the clinician-facing side or with patients, as Leah described, I think there's a real opportunity for us to bolster these processes, maybe even through a public-private partnership.

[DAVID SIMON] Michael, did you have any thoughts? You're muted.

[MICHAEL ABRAMOFF] Sorry. Let me pull out two small points, and otherwise I agree with Adam

and Leah. First, the notion that assistive AI is somehow safer than autonomous AI, or that autonomous AI is high risk, is debatable. The study I like to refer to is Fenton et al. from 2007. There was an FDA-approved mammography AI that had been validated under FDA purview essentially in an autonomous fashion, compared against radiologists at really high performance, and was therefore approved. But that was not the way it was used: it was used as an assistive AI in conjunction with a radiologist, where it indicated lesions such as calcifications and nodules on the mammogram that the radiologist would then supposedly look at more carefully. That combination, the assistive AI plus the clinician, had never been validated as a system.

Everyone expected that, well, obviously with an AI the clinician will be better. Fenton et al. studied that in over 200,000 women, and they showed that outcomes for women diagnosed by radiologists assisted by the AI were worse than for the radiologist alone. So even in this simple case, AI does not always make things better in an assistive fashion. So I'm not sure the risk of an autonomous AI is actually higher; it's perceived that way, but at least we can test it as a deterministic system, rather than a variable interaction of physicians with an AI. A relevant data point is the Boeing 737 MAX example, where Boeing developed an automated assistance system. It was tested with very experienced pilots and it was fine; then less experienced pilots started to use it, they overcorrected, and two planes were put into the ground, as you may remember from a few years ago. Again, an assistive

AI is really hard to validate, because you need a broad spectrum of expertise on the side of the physician, or sorry, the expert being assisted. So that is one aspect that is relevant, because we are also discussing LLMs and ChatGPT. The second point is that what Leah said is absolutely true: these apps have the potential to harm patients in some way, or at least not get them good care. But more importantly, as you said, David, and Adam, there is this tightly regulated space, and in my view we're in a sort of Goldilocks situation: there is tightly regulated AI right now, which is reimbursed, it's regulated, and people feel comfortable with it, but there is this other AI, which is actually harming patients, that is not regulated. Specifically, I'm referring to my friend Ziad Obermeyer's paper in Science in 2019, where he studied an AI created by a payer, I won't name names, that was used to determine care pathways for people with lung disease. It turned out that because cost was used as a proxy for the severity of the disease, Black patients, who had lower cost for the same severity of disease in the training set, would actually be directed to suboptimal care and harmed compared to other patients. So this AI in a non-regulated space was causing harm, and you might say, well, okay, we identified it, the manufacturer actually improved it, and that was that. The problem is that this is being cited widely by Congress and regulators, including the Office for Civil Rights at HHS, which, as you may know, has proposed a rule under Section 1557 that would essentially make liable anyone who uses digital health products that are biased. So the sheer fact that this existed and was harming patients in a totally different space can create backlash against all AI. We have seen this with gene therapy: years ago gene therapy was doing really well, then in the '90s some unethical experiments were done, it was shut down, it was dead for twenty years, and it took a long time to recover to where gene therapy is now. So, long story short, I think the non-regulated space is really important, and what happens there is already having an impact on the regulated space.
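The mechanism Dr. Abramoff describes can be shown with a few lines of synthetic data, in the spirit of the study he cites: if one group generates less cost at the same disease severity, a program that prioritizes patients by cost (the proxy label) will under-select that group even though "race" never appears as a variable. The numbers below are made up purely for illustration.

```python
# Synthetic illustration of cost-as-proxy-label bias; all numbers invented.

import random

random.seed(0)

patients = []
for group in ("A", "B"):
    for _ in range(1000):
        severity = random.uniform(0, 1)        # true health need, same in both groups
        access = 1.0 if group == "A" else 0.6  # group B generates less cost at equal severity
        cost = severity * access + random.gauss(0, 0.05)
        patients.append({"group": group, "severity": severity, "cost": cost})

# The program prioritizes the top 20% of patients by cost (the proxy label).
cutoff = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["cost"] >= cutoff]

for group in ("A", "B"):
    n = sum(1 for p in flagged if p["group"] == group)
    sev = sum(p["severity"] for p in flagged if p["group"] == group) / max(n, 1)
    print(f"group {group}: flagged={n}, mean severity among flagged={sev:.2f}")

# Far fewer group-B patients are flagged, and those who are flagged must be
# sicker to make the cut -- bias arises with no 'race' variable in the model.
```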

[DAVID SIMON] Thanks. Leah, do you have any comments about that?

[LEAH FOWLER] I do, and it's a way of building on and echoing two of the points that were made, about transparency and the role of reimbursement, because obviously regulating more tightly in this space raises a lot of new challenges. One of the benefits of a light regulatory touch is that products can enter the market more cheaply, and people can access them at a lower price point. The more regulation you add, the more you drive up the cost of good, tested, evidence-based products, which may in turn, especially in the health app space, drive people to download things that are free and use things that aren't tested. So that is a challenge and a tight line you have to walk. The other issue is, of course, transparency. I think a lot of consumers don't generally understand what a product's intended use is and whether or not it's going to be FDA-regulated, and you even see apps advertising things like "FDA-registered," which of course is not a meaningful claim; it really means nothing other than that the FDA knows the product exists.

And so until we're able to communicate to consumers what these types of distinctions mean, I think we're going to continue to have struggles between apps that are more heavily regulated and may have gone through the FDA approval or clearance processes, and these other ones that look visibly very similar, if not identical, but have had none of that oversight.

[DAVID SIMON] Great. There are a couple of questions in the chat that I wanted to try to combine in some way. They're more technical questions, so perhaps Adam and Michael might be better able to answer than Leah, but maybe Leah knows a lot about computer science that I'm not aware of. The questions are really directed toward the potential functions of AI tools. One question is: is it possible to assign a probability to a diagnosis using an AI tool? And the second question is: is it possible to design an AI that doesn't itself produce a diagnosis but suggests tests that could produce a diagnosis? So, similar questions relating to the process of AI.

[ADAM LANDMAN] I'm happy to start, if that's helpful. The short answer is yes: there are clinical decision support tools, or AI, that can suggest new tests; in fact, that's common. It might be a recommendation to say, consider these strategies for the patient. As for associating a probability, it may depend on what AI technique is used. In some of these tools, one of the best practices we've seen is showing the test characteristics for the tool overall and making them very transparent to the end users. Whether a tool can display a specific probability for the specific patient may depend a little on the technique, but let me see if Michael wants to correct or add anything to what I'm sharing.

[MICHAEL ABRAMOFF] It's interesting. I absolutely agree that it can be done. I think what patients want to know more than anything is the outcome: what is my clinical outcome going to be, and can you do anything about it? So I think that's really more relevant. A probability can be a tool for that: how should this adjust my risk, and is it worth doing a certain intervention, or a certain extra diagnostic with its own risks, weighing that against the risk of the disease I may get or the poor outcome I may have? So I can see where that might be useful. It's really interesting to look back at the discussions around the autonomous AI that we created, where FDA was actually concerned about output that was too complex. Meaning, you could give a very high-level output: there is this level of disease, it has these associations with other diseases, and these risks of progression to various end stages. This was primarily for a primary care physician, let alone the patient, and FDA considered all these outputs too complex. They really wanted a simple yes or no: bad disease, good disease, or "a referral to a specialist for more care, in this case an eye care specialist, is warranted." They were really focused on making this as simple as possible, and that was a really interesting process to go through. Rather than very sophisticated outputs, they think it's better to have outputs as simple as possible, and from the standpoint of interacting with AI outputs, that's probably the right decision. That may also have implications for apps in the non-regulated space, where clearly keeping it simple is often better. So I'm not sure that helps with answering your question, but I think it's an interesting aspect.
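To make the contrast concrete, here is a toy sketch of the two output styles being discussed: an underlying probability versus the simple refer/do-not-refer message the clinician sees. The logistic model and its coefficients are invented for illustration and are not any cleared device's actual algorithm; the refer/retest wording simply echoes the style of output described above.

```python
# Toy contrast between a probability output and a simple binary output.
# The model and threshold are hypothetical, chosen only for illustration.

import math


def disease_probability(feature_score: float) -> float:
    """Toy logistic model mapping an image-derived score to P(disease)."""
    return 1.0 / (1.0 + math.exp(-(2.5 * feature_score - 1.0)))


def autonomous_output(feature_score: float, threshold: float = 0.5) -> str:
    """Collapse the probability into the simple message clinicians see."""
    p = disease_probability(feature_score)
    if p >= threshold:
        return f"Disease detected: refer to a specialist (p={p:.2f})"
    return f"Negative: retest in 12 months (p={p:.2f})"


if __name__ == "__main__":
    for score in (0.1, 0.9):
        print(autonomous_output(score))
```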

[DAVID SIMON] Yeah, that actually leads to another question for Leah, which is: how much information is the appropriate amount for consumers? How do we know what the right amount is? And do we treat doctors as consumers, or do we treat them as a special kind of consumer, as the law traditionally has treated them?

[LEAH FOWLER] So there are obviously two different questions here, one of them being what's the right amount of information to give consumers, and I would offer that it's not just what the right amount is but how we give it to them. We have a lot of literature suggesting that people aren't reading things like privacy policies and terms of service, so inundating those documents with more information that people are not going to read is not going to be helpful. But if people are making decisions at the point of download, and we know that apps and digital products are advertising their products in specific ways, be it puffery or whatever you want to call it, we have to be very careful about the information shared there. If an app says that it's accurate, or that it never shares your data, I think one place we need to be clear is that that baseline should be true. So whether that means we need more information, making documents that are already 30 pages long into 60 pages, I don't think that's necessarily the right answer. But

we need to be more innovative in the ways we share information and ensure that the information people do see is correct. The other question you asked is whether we should be treating physicians as just a different type of consumer, and, in classic lawyer fashion, I would say it depends on the context in which we're talking about these digital tools. If we expect physicians to be making recommendations about apps for their specific patients to use, I believe we should be treating them as more of a learned intermediary, somebody who knows more about the product they're recommending. So yes, I would expect their understanding of the types of protections and regulations and the evidence behind that product to be greater than the average consumer's. That's really challenging when you talk about the consumer health tech space more generally, because there are so many products that it would be almost unreasonable to expect any physician to know every single app and all its nuances, but if they're going to recommend a specific one, yes, I would expect a higher level of knowledge behind that recommendation.

[ADAM LANDMAN] I'll build a little bit on Leah's great comments there: I actually think there's an opportunity and a need to educate physicians more on these tools. For instance, as you go through your medical training, you learn a lot about diagnostic testing. As an emergency physician, I got a lot of training on how to use a troponin, which is a blood test that looks for damage to the heart, and how to apply that test correctly in a variety of clinical settings; that was part of my medical school and residency training. For some of these AI tools, we are also going to need to train clinicians on how to use them appropriately, and we're going to need to think about standard ways of presenting AI tools so that physicians can understand them and manage multiple tools. So I think there's a huge opportunity here as we go forward.

[MICHAEL ABRAMOFF] Let me add that we have actually been working with FDA on a sort of AI facts label, like you have on every item of food, where there would be the level of evidence, the level of the reference standard, et cetera. Of course, first we need to agree on what a good reference standard is, and so on, but there is some movement, and I think that will be really helpful. But that's a few years away, I'm afraid.
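Nothing like this label is finalized, but as a thought experiment, here is a sketch of the kind of structured "AI facts" record such a label might carry. All field names and example values are hypothetical, chosen only to echo the level-of-evidence and reference-standard items mentioned above.

```python
# Hypothetical "AI facts label" data structure; fields and values invented.

from dataclasses import dataclass, field


@dataclass
class AIFactsLabel:
    name: str
    intended_use: str
    level_of_evidence: str   # e.g., pivotal trial vs. retrospective study
    reference_standard: str  # the ground truth the AI was judged against
    sensitivity: float
    specificity: float
    validated_populations: list[str] = field(default_factory=list)

    def render(self) -> str:
        return "\n".join([
            f"AI FACTS: {self.name}",
            f"  Intended use:       {self.intended_use}",
            f"  Level of evidence:  {self.level_of_evidence}",
            f"  Reference standard: {self.reference_standard}",
            f"  Sensitivity:        {self.sensitivity:.0%}",
            f"  Specificity:        {self.specificity:.0%}",
            f"  Validated in:       {', '.join(self.validated_populations)}",
        ])


if __name__ == "__main__":
    label = AIFactsLabel(
        name="Example retinal screening AI",
        intended_use="Autonomous diabetic retinopathy screening in primary care",
        level_of_evidence="Preregistered pivotal trial",
        reference_standard="Reading-center grading of retinal imaging",
        sensitivity=0.87,
        specificity=0.90,
        validated_populations=["adults with diabetes, multi-site US"],
    )
    print(label.render())
```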

[DAVID SIMON] Yes, and I'll just put in a plug for my colleague Sara Gerke: she wrote a paper on that very subject, so I'd recommend it to anybody who's interested. I did get a question in the chat here about reimbursement, and the question is basically asking: in light of Pear Therapeutics' bankruptcy, how do we ensure that there's a sustainable business model for digital therapeutics?

[MICHAEL ABRAMOFF] I love taking that one. Is that okay, or Leah, do you want to start first? Okay. So I think that is key. As in my example, they had everything done except that, and still they're dead. And so, like I mentioned, I think stakeholder support is crucial: if any group, a patient group, an access group, doesn't want it, even if everyone else wants it, it's dead in the water, in my view. And I've seen it happen. If you go to the CPT Editorial Panel, and I've signed NDAs there so I cannot disclose much, I do know there are companies that have been waiting eight years for a Category III code, and if you're aware, a Category III code does not even lead to reimbursement; it just means you're going to be allowed to measure utilization, then hopefully show enough utilization that a few years later you can move to a Category I code. So, depending on the stakeholder enthusiasm, they look at the effort, the level of evidence for improving patient outcomes, health equity, all these factors that go into every small item on the way to reimbursement. Then CMS, in our case, spent more than 30 pages across three different proposed rules in the Federal Register on what they call the guardrails around AI. What they didn't want was to set a precedent for all sorts of the bad AI that

we're discussing now being reimbursed and blowing up the budget, so they wanted to be very careful and ask: is bias addressed? That's something CMS normally doesn't really think about; they now have to think about all these different aspects of how AI can cause harm, data usage, et cetera. They were really very considerate in their decision to do this reimbursement. I think the framework matters because, essentially, and even most physicians are not aware of this, the Physician Fee Schedule, which is the core of CMS and, for many private payers, the example to follow, really turns on what the charge is: the physician submits a charge, and then it can be reimbursed by the payer. So as an AI creator you have to decide what charge to set. If it's very high, that looks very promising, but people will say, why should we pay for this, is this cost-effective, and so on; you get all these considerations. If it's very low,

typically, if you're an AI creator, your investors will say this is not a sustainable business model, we cannot support this. So you need to find the sweet spot. The model we proposed, which we call an equity-enhancing model, says: for this specific diagnostic procedure, right now, instead of 100% of patients getting it, it's a big source of health disparities, most underserved patients are not getting it, and only 15 to 30 percent are getting it. Clearly there's a willingness to pay for that 30 percent. So let's set the charge for the AI so that we can reach 100 percent of patients and not blow up the budget, meaning the same expenses currently paying for 30 percent will now pay for 100 percent, and we, as the AI creator, set our charge accordingly. So now you go into these meetings and they say, well, clearly you're trying to save money here, not increase spending; you're not trying to blow up the budget, you're actually trying to do what we all want, and you're doing it based on improved patient outcomes. I think that really helped, and we published that framework, with the equations, in npj Digital Medicine about a year ago. I think it is very useful and is being used by other AI creators to set a charge, because that's where it starts. People ask, what can I do for reimbursement? That's not what you should be asking. You should be asking: what is the appropriate charge that we should be asking of a health system, a provider, et cetera, that is grounded in evidence? That's what we did, that's why I think it happened, and I think that's a good path to follow.
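The budget-neutral arithmetic behind that equity-enhancing model is simple enough to work through directly. This sketch uses made-up dollar figures; the published framework is more detailed, but the core constraint is that spending on 100% coverage equals today's spending on partial coverage.

```python
# Worked example of the budget-neutral, equity-enhancing pricing constraint:
# new_charge * N = current_charge * utilization * N. Figures are hypothetical.

def equity_enhancing_charge(current_charge: float,
                            current_utilization: float) -> float:
    """Charge per AI exam so 100% coverage costs what partial coverage costs."""
    return current_charge * current_utilization


if __name__ == "__main__":
    current_charge = 100.0  # hypothetical charge for the traditional exam
    utilization = 0.30      # only ~30% of eligible patients get it today
    n = 10_000              # hypothetical eligible population

    ai_charge = equity_enhancing_charge(current_charge, utilization)
    print(f"AI charge per exam: ${ai_charge:.2f}")
    print(f"Old spend (30% coverage):  ${current_charge * utilization * n:,.0f}")
    print(f"New spend (100% coverage): ${ai_charge * n:,.0f}")  # identical
```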

[LEAH FOWLER] It's such a rich and wonderful answer that it's hard to elaborate too much on it, but for me what it really underscores is how intertwined reimbursement and regulation are. At least in the health app space, we have certain expectations that apps are available for free, so if you're going to start charging for them, being able to build in the ability to get reimbursed through insurance is something that could make evidence-based apps more accessible. And to me it also really highlights how accuracy and privacy are linked, because oftentimes when apps are developed and then offered for free, consumers themselves become the product, because the developer can transact in the consumer data it collects. So if we do want higher protections when we're talking about privacy and security, we're cutting off a revenue stream, and if we can't come up with another opportunity for reimbursement, we're not going to see the kinds of positive innovation in this space that really could move the needle for consumers and consumer health technologies.

[DAVID SIMON] So there's a... oh, go ahead, Adam.

[ADAM LANDMAN] No, I guess I would just add one thing, which is that I'm disappointed that both of these companies were challenged and did not succeed here, because they ostensibly did most things right: they actually validated their products, they went up for reimbursement. I also wonder if we're just not at the right time for them. As I look forward, we have to figure this out. If you start to look at the demographics, particularly in behavioral and mental health, we do not have enough people to care for the number of patients who need this care and these services. So I think we've got to figure out what services need

a human, and in particular physicians, and then where can we use automation? So I guess I would just say I hope we don't give up in this area, because I think it is going to be part of the solution. It may take us longer, but it's desperately needed, particularly in behavioral and mental health.

[DAVID SIMON] Let me push back against what you just said and propose something to the whole group; this will probably be the last question. Let's say I agree with you that automation is going to be part of the solution. But something you said at the

beginning of your talk stuck with me, which is the automated responses to incoming messages. You said that people were really engaged with these messages and really liked using the messaging service, and that maybe we can figure out a way to automate that in some respect. I submit that I, for one, hate using the messaging service, and I would much rather just talk to the doctor for 30 seconds; maybe the doctor doesn't have time to do that, but for me that would be more efficient. So I wonder if an over-reliance on artificial intelligence, auto-response automation, would not only further depersonalize the already depersonalized healthcare experience but lead to, maybe not worse outcomes, but more dissatisfaction with the healthcare system, and how would that influence the further growth of these technologies?

[ADAM LANDMAN] Yeah, I can start and then let my colleagues jump in. You're absolutely right to raise this. Look, ultimately we want to have multiple channels for the patient to be able to interact with not only the provider but the entire care team. What we want to do is figure out what they're reaching out about and balance their preference for how they want to interact. The nice thing is we can think about what language and what format they prefer, whether they prefer to use a smartphone and send a message, but also look at the clinical need and which format would work best for that clinical need. And I think what we're trying to solve for is how to do that as efficiently as possible, to address patient preferences but also address the really significant capacity constraints and challenges we have on the clinical care side. My colleagues may have other perspectives on this.

[MICHAEL ABRAMOFF] I think Leah wanted to respond.

[LEAH FOWLER] I mean, you're not going to find a lot of pushback from me, because I tend to agree with you. I know that one of the more recent examples we saw in the consumer digital health space had to do with chatbots for therapy, and people tended to react pretty well, until they found out it was a chatbot, and then statements like "I understand how you feel" stopped resonating with them. So

again, you're not going to hear any pushback from me, because I agree with you.

[MICHAEL ABRAMOFF] What we have been trying to do is bring high-quality health care as close to the patient as it can be. That can mean moving from a specialty clinic to primary care, which is really our focus right now, and then, as Leah is discussing, bringing it from the health system maybe even into the home, wherever that is appropriate and leads to better outcomes; that is where we should be focused. There's a lot right now where, what we're not doing, we're not even discussing, whether I'm comfortable with it or not. There's no

interaction of these patients with diabetes with an eye care specialist...
