So the next session is going to have a sort of hybrid format. It's going to start with lightning talks, followed by a conversation, and I would like to introduce the moderator for this session, Margaret Chon, who is the Donald and Lynda Horowitz Chair for the Pursuit of Justice and formerly associate dean for research at Seattle University School of Law. She's also the faculty co-director of the Technology, Innovation Law, and Ethics program here at Seattle University. In addition to her many achievements in the field of law, she holds a master's degree in public health. So please welcome her to the stage.

I'd like to invite my lightning round panelists to come up and join me, and we are going to engage in what I think will be a very stimulating sort of speed-dating format for the exchange of intellectual ideas and understandings. Thank you, Dr. Baker, for that lovely introduction. I'm so privileged to be part of this wonderful conference, with such stimulating ideas so far already, and I think the next half hour to 45 minutes will add to the intellectual feast. We're fortunate to have here with us today thought leaders in many different areas, but on this panel, three specific areas of biomedicine and the health sciences, all of which are impacted by artificial intelligence, machine learning, and large language models, what we might call, generally, computational technologies. Those three areas are protein biology, health care ethics, and pediatric critical care. I'm going to introduce briefly each of our three lightning round participants in the order in which they will give their five-minute, brutally short presentations. The first is Ian Haydon. He's with the Institute for Protein Design at the University of Washington. Ian is trained as a scientist; he has a master's in biological physics, structure and design from the University of Washington, and he currently leads the
strategic development and execution of communication initiatives around the Institute's pioneering work in protein design. In addition to a lot of writing, he also films and produces educational videos. So that's our first speaker. He'll be followed by Alex John London, whom you've already heard from. Just to remind you, although Dr. Baker already introduced him, he is currently the K&L Gates Professor of Ethics and Computational Technologies and co-lead of the K&L Gates Initiative in Ethics and Computational Technologies, and I just want to mention that K&L Gates is a very large law firm here in Seattle, so I'm interested in seeing that connection. And last but not least, Dr. Mjaye Mazwi, who is the division head of Cardiac Critical Care Medicine at Seattle Children's Hospital here in Seattle and co-executive director of the Seattle Children's Heart Center. He's a principal investigator at Seattle Children's, a professor of systems biology at the University of Washington, and a founding member of a research group focused on translational modeling and data visualization utilizing high-frequency inpatient monitoring data. So with that, I'd like to ask Ian to come to the podium and give his five-minute talk, and I think you'll get some warnings when you have one minute left from the table up here. Thank you. [Applause]

Five minutes; we'll see. All right, I have some... there we go, perfect. All right. So in the next five minutes, I hope I can give you some insight into the specific area of science where AI has already had the biggest impact. Everything I'm going to be talking about has been published in the last four years. It's been described as "stunning" by the UK's national academy of science, the Breakthrough of the Year by the journal Science, and, finally, an answer to the question of what AI is good for by the news site Vox. It combines chemistry and physics with biology and medicine and computer science, so it's okay
if you're not super familiar with it. I happen to have a background in this area, and it's my job to explain it, so hopefully I can do some of that today. So what is the area of science where AI has already had the biggest impact? In a word: proteins. So this is a look inside, a scientifically informed and, I think, beautiful rendering of a look inside one of your cells. If you could see down in there, you would mostly see proteins; here they're colored all kinds of different colors. Your body makes about 20,000 different varieties, and they carry out the functions of life. Your proteins right now are digesting your breakfast, they're firing your neurons, they're powering your immune system. In other organisms, they turn sunlight into energy and carbon in the ocean into limestone. So proteins really run biology. Proteins are also medicines: they're used to treat over 100 diseases, think insulin and antibodies, and they're the fastest growing category of medicine. They're also vaccines: they can be antigens directly, they can be part of a vaccine, or, in the case of mRNA, they can be made by your body in response to vaccination. So they're important. So where does AI come in? We are all living through an AI revolution in protein science. Let's jump to the next slide. So it's pretty rare to solve a grand challenge in science, but something like that has happened very recently with AI. The folks I work with at the University of Washington, and many others, have been building very powerful but very specialized AI tools that help us solve a particular problem, and that is: how does biological information, the stuff that is encoded in our DNA, turn into biological function, at least at the smallest scale, meaning the molecular level, meaning proteins? We've worked on this for many decades, and in the last four years we've made orders-of-magnitude improvements in our understanding of the system. There
are pretty significant implications for what this is going to help us understand about diseases, and also for how we make new medicines going forward. Next slide. The particular area of interest that I want to focus on in our conversation today is actually running this process in reverse. This also uses new AI tools built in the last four years, and instead of asking a question about biology, like what does the human genome encode, how do those functions arise, it reverses it to ask: if I wanted a new function, like an antibody that stops the flu, what biological information would I need to produce that function? How would I code for that antibody? And it turns out we can do that. So if we jump to the next slide: this is exactly what my Institute works on these days. It's not literally a text-based prompt, but it's something quite similar: make me a new antibiotic, make me a cheaper form of insulin, things like that. So this is the area we specialize in. Again, these problems have been worked on for decades, but in the last four years we've seen something like a 10- to a thousandfold improvement, the reduction of costs, the acceleration of research timelines. These are really incredible technologies. I could literally go on all day about the benefits of this and why I think it's going to improve medicine, but that's not really what we're here for. We're also looking at the challenges presented by these tools. I only have a couple minutes, so I won't give you all the answers, but I can talk about some of the ways we've been thinking about this. I'll focus on two points. One is that I firmly believe these technologies, which again are already published, will impact medicine in a very profound way, but they alone will not change the medical landscape. By that I mean fundamental questions like: is health care a human right, and if so, what does that mean? If these things are going to make drug and vaccine development faster
and cheaper, who should benefit from that? I don't have answers, but I think that these tools place a renewed urgency on those types of questions. These are old questions, but we are being confronted by new technology at a pretty incredible rate. And then the last point that I'll close with: we've noticed, as part of this tool-building community, that the power of these tools has surprised even us, and it's gotten us thinking quite a bit in recent years about topics like safety, security, and openness. We're part of a public university, and we are very proud that all of the tools we've made, we've made completely free and completely open source, meaning any scientist can use them, download them, build upon them. Open science has been a big driver of this type of research. Not everyone building these tools does science that way; some are private companies, and it's not in their interest to do so. But how open should a technology like this be? If it allows you to make a functional molecule, isn't that itself potentially dangerous? We hope it is used for good, but who's to say it couldn't be used for harm? So we convened a summit in October (I'm running out of time, so I'll just touch on it briefly) where we brought together many of the world's leading computational biologists who focus on this area of science, along with regulators, folks from the White House, and many senior scientists from a lot of the big tech companies that are increasingly interested in this area, to focus on both the opportunities here and the threats. What's around the horizon? What should we be worried about? How do we conduct this science for the benefit of the world? We can get into it when I have more time, but there are practical solutions to a lot of the most concerning elements of this; happy to talk about it. But there are also huge upsides of this technology that I wouldn't say allay the threats, but interact with those threats. One is that tools like this
can be used to make vaccines for an outbreak regardless of its origin. So if someone designed a dangerous thing using AI, or it occurred naturally, or somewhere in between, our ability to respond to pandemic threats is accelerating as well, and technology like that can potentially save millions of lives and trillions of dollars. So it is really important to get this right. Thanks for the time. [Applause]

Okay, thanks. So it's great to have a second bite at the apple, because I had to rush through a lot of my talk about the positive part, and I planned for this in case the talk version of me got too caught up. So again, I don't have anything to disclose, but I think working in this space is difficult because there are two kinds of misconceptions, misconceptions about drug development and misconceptions about AI, and they intersect. This is the way a lot of people portray drug development: that there's a pipeline that's sort of linear, you go from lots and lots of candidates and then it narrows down into a few, as though it's just dropout, that you start with the same candidates you end with and a few, or most, of them drop out along the way. But that's not actually true. It's not really a linear pipeline; it's much messier, because every substance that we use as a drug is a toxin: at the wrong dose it's going to kill you, and most things are going to kill you and not do anything good for you at any dose. So the struggle is to find things that have the potential to help you, but then what you have to do is develop the knowledge: what is the dose at which this will help you, what's the schedule at which this will help you, what are the contraindications, you can't give this to certain people, you can only give it to certain people, how do you find those people, what are the diagnostic methods you need in order to identify
who's going to be a responder and who's not. All of these things are part of what Jonathan Kimmelman and I call the intervention ensemble. Drugs alone are not effective; drugs with all of this ensemble of knowledge is what you need in order to have something that's either a poison or a treatment, and that's true for AI as well. So the dimensions here in drug development are things like: what's the indication, what's the dose, what's the schedule, what are the co-interventions. Okay, so what we've tried to do on the positive side is to say: in AI, what we need is to think of the AI as part of an intervention ensemble. It is the part that gets the spotlight, but in and of itself it's not going to achieve anything in the world. So it has to be implementable by people, who have to know: what is the target, what is the function, what is the application for this system, and why is that supposed to be good? What's the ultimate benefit, not just in terms of "it allows us to do this very little thing," it allows us to diagnose this condition? Diagnosis is one of the big things that people look for with AI, but diagnosis is often not the bottleneck; there are lots of things where it's like, yep, we can diagnose it, but we can't do anything about it. So you need, from the beginning, a very clear sense of the ultimate value that you want, but then: what are the conditions under which you can deploy this system? What's the window within which it's likely to work well, and what's the window within which it's likely to degrade? Also, what's the population for whom it's indicated, and what's the population for whom it's not? So this is the same notion of the intervention ensemble as we developed it in the drug context, but it's a general point about any tool that you're implementing. And so then there also have to be
the procedures that you need in order to say: once I have the tool, what do I need to do to be able to implement it and maintain it? So there need to be procedures, because you're going to have drift, and with some AI systems, as soon as you implement them, you're going to shift the distribution of what the patient population looks like and what the disease population looks like, because you're implementing the tool in your system. So being able to identify our plan for measuring and tracking distribution shift, and for dealing with it: all of those things are part of what you would need to have a functional intervention ensemble. That's why part of the message here is that this is not skepticism about the many scientific advances that are happening in protein folding and other places. Those are all wonderful. But in order to translate scientific advances into drugs, into interventions that are safe and effective, there are many, many more steps in between. Being able to do something at the protein level, at the cellular level, knowing that a certain receptor, a certain target, a certain gene is responsible for the life cycle of a tumor, for example, that's a key insight, but your ability to intervene on that in the world in a way that's going to produce a benefit for patients requires the development of this larger set of information. And so even if you have tools that narrowly can do exactly what you want, if you can't integrate them into a human system, where people know when to use them, when not to use them, the populations that are going to benefit and the populations that are not, then the hard-won gains that it took at the bench to develop all that science aren't going to translate into a benefit. Where am I in terms of... okay, thank you very much. [Applause]

Good day. I represent an area of science where AI has yet to be shown to have benefit, and I want to explain, by making you aware
of our context, why we are pursuing that question. I'm an intensive care doctor, and we are ever more reliant on technical systems to support our patients. This is a standard bed space now in my unit, and we have 24 of these, and what we ask clinicians to do in complex bed spaces like this is to integrate the data that's available to them to make some sort of reliable estimate of patient state. The reason that's such an important exercise is that it's only if you understand patient state that you can map that state to some category of risk, and it's risk that's action-forcing in our environment. If my assessment is that the patient state is well-compensated heart failure, then the most prudent action might be no action at all. If my assessment, however, is that the patient's in a state of imminent respiratory failure, then a series of actions need to happen, and they need to happen now. One of the real challenges in our environment is that virtually everybody in an ICU is an outlier. They are resistant to the sort of large, reliable, prospective randomized controlled trials that produce reproducible knowledge in medicine, and so a lot of the decision-making in our environment is guided by our intuition. We naturalistically make decisions under conditions of very substantial uncertainty. So on this certainty-uncertainty spectrum, we are always closer to uncertainty and further from control, and that actually locates the potential use of analytics in an environment like this. This is another bed space, an inadvertently HIPAA-compliant photograph: the two-month-old in this bed space is hidden behind the gigantic renal replacement therapy device in the foreground. And you can see, just taking a look at this picture, that there are both threats and opportunities here. The opportunity is that this is an unprecedented amount of data. A bed space like this generates something in the region of 350 million time-value pairs per
patient per day that we can use to refine our understanding of patients, disease mechanisms, and therapeutic efficacy. The threat, though, is that we know from cognitive neuroscience that even the most remarkable minds in this room can probably only simultaneously process somewhere between seven and eleven variables to support any given individual decision, and there are 76 simultaneous streams of patient-specific telemetry coming at a clinician from a bed space like this. And so what we do in these bed spaces is prioritize information. We use our intuition to say, I think that data stream is going to be more important than that one in supporting the decision I have to make, and I think often we get it right, but sometimes we get it wrong. Ideally, the place for analytics in an environment like this is deployable models that help us reason on all of this data, not at any specific point in time but at all points in time. Some of these models are relatively simple. This is an adaptation of Google's WaveNet model performing a heart rhythm classification task, something machine learning is remarkably good at, providing clinicians with certainty about what the heart rhythm is to support their decisions. Some of them are much more complex. This is a convolutional and recurrent neural network that's being used to map directly from a vital sign data stream to a specific category of risk, in other words, short-circuiting that clinician cognitive exercise I was describing, with the goal of providing clinicians with a patient trajectory through risk itself. The goal in deploying these models is that they collaborate with human clinicians in a collaborative intelligence, where the ultimate decisional authority still rests with the doctor. This raises, as you can imagine, a host of ethical issues. We've heard about saturation probes, but I would argue that the opportunities with this kind of data outweigh some of the risks.
One of the reasons we have chosen to focus on high-frequency data is that the observed behavior in an environment like the one I just showed you is actually not the patient behavior in isolation; it's the patient-device interaction as well, and we try to model the entire system. This is biometric data. It's continuous and pervasive, and it's very dense, allowing us to make that dynamic state estimation continuously for patients. And I think it is less biased than other forms of electronic health data: if you're measuring a biological property of the patient, either on or from inside a body cavity, you're closer to the biological truth of that patient than the kind of representation of that patient that we get from the electronic health record. And this is a huge worry because, as you've heard, because of the bias in historical medical data, if we use that historical data for training, we run the risk of hardwiring bias into our medical systems, making it even more pervasive and invisible than it is now. I want to end by saying something that I think is one of the most interesting aspects of doing research in this kind of area. The nature of innovation in medicine has always involved retrospective studies that make us much, much more certain about what happened in the past than about what's happening now or what might happen in the future. In a streaming paradigm, that is disrupted: the idea is that we can reason in real time, or near real time, on patient data in ways that augment the clinician making the decision. That changes the paradigm, the research pact, if you will, that we have with patients: from one where we ask them to participate in research with the goal of providing us with an insight that might apply to a patient like them at some point in the future, a secondary gain, where we're relying on their altruism, to one where we want to try to reason on your data in real time, because that actually
might benefit you, now, here, and today. Thank you. [Applause]

Thank you. Well, I want to thank each of our three panelists for giving such a great overview of their relative domains of expertise, and I just want to say that in getting to know their areas of research and familiarizing myself with their work, it was such a pleasure for me, and I hope it is also for you, just to hear a little snippet of what they do. I'm reminded a little bit, hearing these presentations, of what Christopher Lasch said was one of the most defining characteristics of modernity, which is reflexivity of knowledge: that as we gain knowledge, we feed it back into what we knew before. And I think with what we're talking about here today, across these three different areas, we're seeing this increased reflexivity within specific areas but also across these fields, which I think is very compelling. So to jump our roundtable off: you've given very compelling examples of how computational technologies can improve health outcomes and overall increase social welfare, but what are some examples, and you've given a few of these, of the critically important conditions or assumptions that must be met before we can really harness that potential for good outcomes? Starting with Alex, for example: you've cautioned that in the face of uncertainty, conflicting judgment, or novel circumstances, the duty to care can only be realized in practice if it is accompanied by a duty to learn, and you also mentioned that duty to learn this morning. As a lawyer, I'm very interested in these ideas of rights and duties; duties form the basis for tort law, in that we have to establish a particular duty of care, a standard of care, before we can assign liability. So can you elaborate a little bit on this duty to learn that you've mentioned?

Yeah. Part of it goes to just the pervasiveness of uncertainty. If you think
about the way we understood cancer 20 years ago versus the way we understand cancer now, there's no analog to that in architecture. It's not like people look at this building and go, well, we don't think about this in terms of trusses and load anymore, that's yesterday. And that's because architecture stands on the background of 20,000 years of trial and error. We're still doing that in medicine, and we'll keep learning. And I think it's really important to have examples like the ones you presented, where it's real-world, real-time telemetry of the features of the patient that we care about, versus the assumptions you get when you're using proxies, where you're trying to build a model that says, well, from the proxies that I'm seeing, I'm trying to hypothesize back to what the underlying causal structure is. So I think, in that sense, being able to pick which data streams we want to use, and where we're much more likely to actually monitor the underlying causal structure, that's where you need a much tighter intersection between the methodologists and the subject matter specialists, because not every data set is equally suited to support AI.

Great. Mjaye, would you like to elaborate on that a little bit? I know you've written and talked about the sort of incomplete and fragmented data landscape that you have to deal with, and you mentioned that today as well. So how would you address that?

Yeah, I think, you know, when you mentioned the duty to learn: one of the interesting barriers that we face in medicine is the absence of good benchmarks. I'll give you an example, heart rhythm classification. That's a prototypical machine learning use case in medicine, something industry is very interested in. Those of
you that have an Apple Watch might be subject to a heart rhythm classifier right now, estimating your risk of atrial fibrillation. But there actually are not good benchmarks for how clinicians perform in that task, and that's a real barrier for us in applying these techniques in medicine. The concept being: what performance threshold do you need to exceed for this to be a reliable and safe tool? And I find this to be a really interesting and complicated question for multiple reasons. One is that it limits our ability to know when any solution we've developed potentially has benefit. The other is that the few benchmarks that do exist are actually not that spectacular in terms of what they reflect about our aggregate performance as clinicians. Meaning, if you look at papers that purport to perform heart rhythm classification better than cardiologists, or pathological diagnosis better than pathologists, they generally have to identify what the aggregate clinician performance was, and it's sometimes distressingly variable. And I think that's an important observation, because I think of these techniques in medicine less as a new ceiling and more as a new floor. We aren't likely to constrain extraordinary clinicians in making the extraordinary judgments they're capable of; what we're hopefully going to be able to do with the first generation of these techniques deployed at a large scale is limit the variability at the lower end of the performance scale, so make care more consistently reliable across clinicians and across cohorts of patients.

Wonderful. Yes, jump in on that.

I agree with everything that's been said, and I want to highlight, from my weird little niche of protein science, that where AI has been transformative, it's not an accident; I think a lot of what you just said applies. There's a different story in protein science, meaning
the data sets in that domain of science, the quality of those data sets, is extraordinary. These are basically molecules; they're chemicals. There is no patient data, and there really isn't much subjectivity at all. These are empirical measurements of 180,000 proteins that thousands of scientists over the last 50 years have deposited into a central place and put in the public domain, creating an extraordinary data set, and it happens to be the kind of data set that deep learning is extremely good at learning from. So why do we see this technology in protein science in 2020? That's the reason.

So just to move on, though, to the ethical concerns that you stated at the very end of your five-minute talk: I know that you were responsible in part for the drafting of the statement that was made subsequent to the conference convened here in Seattle on responsible AI in protein design. Can you elaborate on some of the parameters of what responsible AI development means?

Yeah. So this effort was born out of us and others, including folks in the government, noticing, hey, these tools are getting pretty good pretty quickly, and we ought to have some regulation in this space; basically, we ought to ensure that they're used for good and not used for harm. And they came to us, came to the scientific community, and basically said, please help us. This stuff is very new and very complicated, and we don't want to make policy or law in a vacuum; you let us know what you think responsible science would look like in this space. So really, that was the focus of our October meeting, with government in the room, and it was a wonderful conversation, with months of follow-up work, and one of the work products we produced is a community statement. It's now been signed by 160 senior computational biologists who work in this space from all around the world, both academics and those in the public sector, and we aligned on an articulation of values, like: research
should be done for the benefit of people, and science should be open to the degree that it's safe to do so, things like that. But we also enumerated ten specific commitments: things that these researchers agree their teams will do and will not do, applications they will not pursue, and standards around openness and model sharing and distribution that they must follow. This is a very first step. It's entirely voluntary; no one can make us do it. But I think it is wonderful to see the scientific community internationally both recognize the potential of these tools and the need to have these conversations, get alignment around them, and hold one another accountable. I do think that is at least a first step towards the development of this technology in a way that will benefit all.

Thank you. So this is obviously an interdisciplinary panel, and that's very important, because as Dr. Baker mentioned in his introduction, we arguably need every single knowledge domain to come to bear on artificial intelligence in order to ensure ethical development and applications, especially in the health care realm. So, for example, Mjaye, you've stated in your TED Talk, which I very highly recommend, that the research side of medicine needs to work more, and handshake more, with the clinical side in order to optimize data gathering for equitable and accurate machine learning. So for each of you: if you could send one message from your discipline to the other disciplines represented on this panel, or in the room today, to guide ethical AI, what would you choose to say, and to whom? I'll open this question up to any of you to start the discussion.

Yeah, I have a simple one, but it's one that we try to emphasize to a lot of audiences, and that is that there is not one solution to this. We represent a very distinct application: we have weird, small models that are very powerful, and what you do with
that, how you develop those safely, what ethical use looks like: there will be a distinct set of answers for those models in that domain, and the answers we come up with for large language model chatbots aren't really relevant; they're very different tools. The differences in these technologies, I think, are really important. So I guess the recommendation is: it's not one-size-fits-all.

That's great.

I would say we need a better pipeline, in the sense that, for the applications of machine learning in medicine, we need a pipeline that does a better job of saying what the priorities would be, such that if we hit those priorities, we would really move the ball forward for the way our health systems function and what we can do for patients. Then we need degrees of maturity, so that we can say, this was an early-phase study, and we need to find ways to avoid the reduplication of effort. During COVID, there were literally thousands of groups across the country trying to develop diagnostic technologies to diagnose COVID from radiographs, and a lot of that work was, as the systematic reviews that looked at it concluded, low quality and redundant. There hasn't necessarily been the kind of iterative improvement you'd like to see in a healthy ecosystem. So what we need on the computational side is a set of incentives beyond just publishing your paper and presenting it at a conference, having to hit the four conferences a year. We need to find a set of incentives where people can invest over the long term in taking a technology, developing it to the point that it's mature, and bringing it from the bench to the bedside. I think we don't have that ecosystem right now, and the incentives aren't there for the kind of long-term slog it takes to make something an actual
deployable technology.

Great. Yeah, I agree with the comment about domain expertise, and I also couldn't agree more with you, Alex. One of my pet peeves is that we still exist in a paradigm in healthcare where what we consider research and what we consider clinical care are completely uncoupled from one another. At my own institution here in Seattle, Seattle Children's is way up on Sand Point Way and the research institute is downtown. They are administered differently. For those of us who have roles in both institutions, it's almost like belonging to two completely separate institutions: there are people at the research institute who will probably never go to the children's hospital, and people at the children's hospital who never go to the research institute. I think that becomes a real, objective translational barrier. It doesn't just create additional impediments to the kind of slog that Alex is referring to; it also limits our ability to provide patients with benefit, and I increasingly see that as an ethical issue. There's a growing gap between our actual capabilities and what patients experience at the point of care.

So, as a lawyer, I obviously believe in regulation, but this space is pretty lightly regulated so far in the United States. We now have a proliferation of different task forces and executive statements from the federal government, but nothing really in the form of what I would call hard law; this is mostly soft law and voluntary initiatives. So, in your ideal world, what would be one or two characteristics of an adequate and responsive regulatory framework, from where you're sitting right now?

To characterize your question a bit better: are you talking specifically about deployable solutions in healthcare? Mm-hmm.
Yeah, I think, from my standpoint, I see a lot of these tools, if we are going to deploy them responsibly, needing to pass through very well-defined stages. You develop your solution using historical data; there's a test/training split; you think you have a model that's well validated. The next step that we should all be mandated to undertake in healthcare is a well-designed silent trial, where the model is deployed in a prospective, non-interventional framework. The goal is to run the model in the application domain in healthcare, whatever that application domain might be, against real data, because, if you've ever done this, there's always a degradation in model performance when you run it in real-world conditions. What that actually allows you to do is begin to think in an objective way about what Alex was describing as this intervention ensemble. So if you think this is something that has benefit in whatever the application domain is: where does it have benefit? What is the breakdown in the current process that the model can target? That helps you design what the actual intervention ensemble would need to look like, which then should be subject to a true prospective interventional trial. I think that's actually one of the standards I'm hoping we in healthcare can hold ourselves to that other fields haven't. As an example, I don't know how many of you in the audience have cars with self-driving features, but I would ask you to raise your hand if you know what the performance thresholds for the deployment of those self-driving features are, or which benchmarks they've been characterized against. Does anyone know? "Elon Musk's mood?" But that's precisely the problem, and I think in healthcare we need to hold
ourselves to a much, much higher standard than that.

So, since we agree, and just to hear you say that brings tears to my eyes: I'm on a group that's working on promoting silent trials of these technologies, and I think that's exactly right. And I think the problem is that people don't want to hear what you're saying because of the allure of cheap learning: the idea that we can just bolt machine learning onto the healthcare system as it is, take the data that we have, and spin it into gold. There may be places in the back-end, business part of medicine where we can do that. But on the patient-facing side of medicine, let me put it the other direction: it's amazing that the healthcare system is still held together by fax machines, that there are medical records that are digitized and then get sent as a fax, so they go out of digital format. We need to modernize the way we do medicine, and I agree that we need more learning in medicine itself. In that sense I'm fully on board: really get on with the learning healthcare system. This pipeline, where you think you've got a good model, then you run it in silent mode, where you're going to learn a lot, because I agree the performance of the data sets we have is going to drop when you see real data, and then from there validating it, being able to randomize more of our patients, because equipoise exists: the idea that you're not going to be worse off by being randomized to this new technology, that it's ethical to randomize patients in healthcare in order to get the actual data. Because once we
randomize, very often that's when you see: oh, it was confounded; once you control for these confounders, you don't see your benefit. Being able to have a healthcare system that can support that kind of learning, to me, is the move into the 21st century.

Thank you. Any thoughts? So, I think we're running a little bit behind, but we have, how many more minutes? Maybe a couple more. I can't resist asking this question, and it's going to come from left field, I think, for all of you. As a Jesuit Catholic institution, Seattle University is committed to examining and taking action on what is called the preferential option for the poor, that is, prioritizing the needs of marginalized and disadvantaged communities. So how does your work address or engage with this particular social justice principle? Would you like to...

I could weigh in. We have been talking a lot about what these tools are and what we do with them; I think it also matters a lot who builds these tools, and why. And I say that, again, coming from one of the world's largest public universities, which engages in a lot of this research. We operate as a public university, and that allows us to make decisions accordingly. Increasingly, we are competing for computational resources and research talent with very large tech companies who are also contributing incredible science and incredible technologies. I don't mean to disparage anybody here, but I think the reality is that different institutions that make technology like this have different incentives, rightly so, and each are good at certain things. So it is not automatically the case that technologies like this will be built in every type of institution. It requires support, durable support. And it's self-interested, but I'll just echo: I think if we leave it only
to the five biggest companies in the world, I think we can expect a lot more market-driven decisions, which will ultimately affect who directly benefits from these tools.

Yeah, thank you. I think this is a great question. My book is basically about trying to understand healthcare not in terms of interventions but from the standpoint of creating systems that will meet the needs of the people in your population in an equitable way, so it's very justice-based. And the needs of the people who are the most marginalized, poor, and minoritized are the least well represented and the least well served by our health systems. Could AI play a role in closing that gap? Yes, and I would like to see that. But to the degree that AI relies on historical data, there's a fundamental challenge, because those people are missing from that data. That's what I mean when I say I don't think there's a short fix to this, and I don't think we want there to be a short fix. We need to make our health system more inclusive, more representative, and more available to every member of our population; that itself will create the basis for using machine learning and all kinds of other AI techniques to advance the needs of everybody in our population equitably. But if you just go from where we are now, you're going to continue to make treatments and interventions for the wealthiest and already most advantaged.

I actually don't have a whole lot to add to that extraordinarily eloquent answer. I've spent much of the last decade in the Canadian healthcare system, and looking at the differences in approach to things like access is quite a stunning thing to continue to be reminded of. We have an incredibly inequitable healthcare system, and unless everybody who is interested in unlocking insights understands that they have an advocacy role in addressing those inequities, we
aren't going to be able to perform equitable science.

Thank you so much. If I do say so myself, I think this has been a brilliant group of panelists, and I have lots of other questions I'd like to ask them, but I think we are now heading into lunch. I do want to remind people that if you have questions, you can use the QR code on the tables, and those questions will be lined up for our Q&A this afternoon. But please join me in thanking all three of these wonderful speakers. Thanks, everybody, really.
2024-08-04