DART-Ed webinar series 2 - Building appropriate confidence in AI to enable safe and ethical adoption

Show video

Cool, it feels like no more people are coming in, so I'll get going. Panellists, put your cameras on, and welcome, everyone. My name is Maxine Mackintosh, and I'm the programme lead for an initiative at Genomics England called Diverse Data, which is, unsurprisingly, all about making genomic data more diverse. This session is about building appropriate confidence in AI to enable safe and ethical adoption. It's a collaboration between the NHS AI Lab and the NHS Transformation Directorate, and Health Education England. You'll hear a lot more background on DART-Ed, this programme, and this research very shortly, but the primary aim is to identify the requirements for knowledge, skills, and capabilities to develop appropriate confidence in healthcare professionals so they can implement and use AI safely and effectively. This is one of three outputs: the first is a conceptual piece, the second is about educational pathways, and the third will cover the broader skills and capabilities required for adopting AI in healthcare settings.

We have a truly amazing panel. At the top of the list is me; I'm not on the panel, I'm just here to be the social lubricant, and I am not amazing, and I've already introduced myself. Next up we have Hatim Abdulhussein, clinical lead for the DART-Ed programme at Health Education England. We also have Annabelle Painter, clinical fellow for AI and workforce at Health Education England and NHSX; Mike Nix, clinical fellow for AI and workforce at Health Education England and NHSX; and George Onisiforou, research manager at the NHS AI Lab at NHSX. So a good mix of Health Education England, the AI Lab, and NHSX, with sufficiently complex names that I've probably mispronounced some of them. This is a 45-minute session, and as I said at the beginning, though a few people have joined since, this session will
be recorded, so anything you put in the chat or say out loud will be there in perpetuity. As a reminder, Mike's camera is off, just because of the scrambling that happens on screen when we record. And if you want to get involved on social media, the handle is @HEE_DigiReady. You've heard enough from me, so I'll hand over to the panel, and first to Hatim. Go ahead.

Thanks, Maxine. I'm just going to give a brief introduction to where we are with the DART-Ed programme. In the previous webinar I introduced what the programme involves and how it developed from its conception around delivering on some of the recommendations made in the Topol Review; today I'll just give a brief update on where we are before I pass on to my colleagues, who are going to do a lot of interesting talking around the core topic of today's webinar. I wanted to announce that our AI Roadmap was published last week. It gives a clear overview of the taxonomy of AI technologies in our current healthcare system, and you can see from this diagram that we found 34% of current AI technologies are around diagnostics, 29% around automation and service efficiency, and then a group of emerging technologies around P4 medicine, remote monitoring, and therapeutics. The report also looks at how these technologies will impact our workforce and our different workforce groups. Interestingly, you won't be surprised that radiologists and radiographers are at the top of the workforce groups likely to be affected by these technologies, but beyond that, general practice, cardiologists, and nurses are other workforce groups we feel will be impacted by AI. It also has a couple of case studies in there
which dig deeper to help us understand how a couple of technologies implemented in practice have affected the teams involved in using them. It's a nice way to bring some of this data to life, and it helps us think about how we're going to tackle the challenge of developing the skills and capabilities in our workforce to work effectively with AI, which is effectively what today's discussion is about. A reminder that all the webinars are free to attend, and they're all recorded and available on the HEE YouTube channel as well as on the DART-Ed website. We have a whole host of future topics to cover through the webinar series: a webinar specifically around nursing and midwifery; a spotlight on AI healthcare applications, where hopefully a few of our Topol fellows will come and share some of the problems they've been working on; a spotlight on dentistry and how dentistry is getting digitally and AI ready; and the work we're doing around robot-assisted surgery and robotics literacy with the Royal College of Surgeons. So keep a lookout on our social media channels and on the DART-Ed website for further information about these future webinars, and if there's anything you want to feed back to us, feel free to get in touch with me directly.

Amazing, thanks. Now we're going to have about a 15-minute presentation from Mike and Annabelle. Whilst they're presenting, do think about questions; I know a number of you pre-submitted them, and the thing about pre-submission is that it's very easy for me as chair to take them as my own, so by all means re-post or re-ask them in the chat, and depending on how the conversation evolves, we'll ask
you to unmute, reveal yourself, and ask your question in person. So whilst Mike and Annabelle are presenting, please do think about questions you have for the speakers. I'm noticing some technical problems, Mike?

Yeah, we seem to have a permissions problem on Teams. I've just sent my slides to Hatim by email, so hopefully he'll be able to share them, because I can't. Give us a few seconds and hopefully we'll be up to speed.

I'm really looking forward to "next slide, please". I'm bringing them up now, so I'll give you the full Chris Whitty.

Thank you very much. I can probably start off by giving a little bit of background until we see the slides, if that's okay. Hi everyone, my name is George Onisiforou, and I'm a research manager at the NHS AI Lab. We conducted this research with Health Education England, and I worked with Annabelle and Mike on the coming reports. As background, this research involved a literature review, and we also interviewed over 50 individuals across healthcare settings, regulatory bodies, the private sector (including developers of AI), and academia, and we tried to speak with professionals with different levels of experience with AI technologies. What we aimed to do with this research is to understand the requirements for knowledge, skills, and capabilities that will be needed to develop confidence in healthcare workers using AI technologies. The word "confidence" is important. From early on in our discussions, and in what we saw in the literature review, the terms "trust" and "confidence" can be used interchangeably, but we felt they need to be distinguished, and that confidence is the more appropriate term for the context of using AI technologies in healthcare settings, as it allows a more dynamic exploration of what is involved, particularly in the
different circumstances of clinical decision making that Mike will explain later. With this in mind, we went about trying to understand what influences levels of confidence, and developed this conceptual framework. Next slide, please. I should say that we will be focusing on this framework in our initial report, which is more conceptual in nature and sets the scene for a second report outlining suggested educational pathways to develop this confidence. What we're presenting today is a sneak preview, so please wait for the final information and visuals in the report when it comes out. What we're saying with this framework is that professionals can develop confidence in a technology through two initial layers, the baseline layer and the implementation layer, each with several factors that we'll go through in more detail. These two layers of confidence in an AI technology then enable clinicians to assess the appropriate level of confidence they should have in AI-derived information during their clinical decision making, which is the third layer. We'll now talk in more detail about each of these layers, and first Annabelle will take us through the baseline layer.

Thanks, George. Starting with the baseline layer; if you could go on to the next slide, that would be great, thank you. The baseline layer is really about the foundations of AI confidence: the key components that underpin confidence in the implementation and clinical use layers. What we're saying here is that each of these components needs a really strong and robust foundation, so that we can build from it with any kind of implementation or use in a clinical setting. There are five components within the baseline layer: product design, regulation and standards, evidence
and validation, guidance, and liability. I'll go through each of those components in a bit more detail. Starting with product design: what we're really talking about is how we create AI products in a way that inherently and fundamentally improves end-user confidence, and there are several facets to this. Some are fundamental: what is the task the AI is doing, and what level of clinical risk is associated with it? Is the AI making a decision autonomously, or is it part of a joint decision-making process with a human in the loop? There are also factors around usability: how intuitive is the AI product to use, and how seamlessly does it integrate with existing healthcare systems? Then there are technical considerations: for example, the type of algorithm used can influence confidence, as can how much we can tell about how the AI made a decision. That moves into the territory of explainability, which Mike will talk about a little later. Another important consideration is transparency: getting information from those who developed the AI about the type of algorithm being used, how it's been trained, the datasets that have been used, and any potential limitations or weaknesses in the model. Several transparency standards have been released that could be helpful with this.

Moving on to regulation: strong, robust regulation really is key to building AI confidence. What we've learned from our research is that healthcare professionals generally equate regulatory approval of a medical device with proof that the AI product is safe to use, but in reality the current regulatory standards often don't meet that bar, and in addition there is no AI-specific regulation at the moment. We
feel these are the two things that need to be addressed during regulatory reform, and that's exactly what the MHRA is currently looking at: they have a recently announced Software and AI as a Medical Device Change Programme, which is exploring several ways of addressing these issues. Within regulation there is also professional regulation, which is important to think about. As healthcare professionals, we generally look to our regulators for advice on how we should behave, and that extends to how we should interact with things like artificial intelligence. That applies not only to the clinicians using AI to make clinical decisions, but also to those creating these AI products and those involved in their validation and testing. There may also be an argument for some kind of professional regulation of the non-clinical professionals involved in making these products, and by that I mean software engineers or data scientists working on healthcare AI products.

Next, evidence and validation. It's essential that we know the AI products being released into the healthcare system work, and that they do what they say they do, and for that it's important we have good guidance on what kind of evidence to expect. At the moment, the regulatory requirements include no explicit requirement for external validation of AI products by a third party, or for prospective clinical trials of AI products. Our research suggests there's definitely an argument for having those as requirements for any AI product that carries significant clinical risk, and this is something NICE is looking at as part of its review of digital health evidence frameworks. Then, moving on to
guidance. Guidance is important for steering how AI is procured and how it's used, and there are several different types, including clinical guidelines. (Sorry, if anyone's got their mic on, do you mind muting?) Honing in on clinical guidelines: what we've heard from our research is that clinicians expect to be given clinical guidelines on how to use an AI technology in the same way they currently are for, say, a medication. The slight issue at the moment is that the processes involved in producing product-specific guidance for AI technologies are not really scalable and can't meet the demand or the volume of products coming onto the market. Again, this is something NICE is looking at, and ultimately a more agile guideline process may be required, potentially with a move away from product-specific guidance towards class-based guidance, in the same way we sometimes see for medications.

Finally, liability. At the moment it's unclear, from a liability point of view, who would be held to account if an AI made a clinical decision that led to patient harm. It could be the clinician using the AI product to make a decision; it could be the person or company that made the product; it could be those involved in commissioning it; and it could be those involved in testing, regulating, or validating it. This becomes even more complex when we think about autonomous AI, where a human is removed from the clinical decision-making process entirely. So some kind of guidance and steer on this will be important for building confidence in AI. That concludes our baseline layer, and now I'll hand back to George to talk about implementation.

Thanks, Annabelle. Can we go to the next slide, please? Thank you. The
implementation layer reflects one of the most consistent pieces of feedback we heard during the interviews for this research: that the safe, effective, and ethical implementation of AI in local settings contributes significantly to building confidence in these technologies within the workforce. The comments focused on four main areas, as you can see here. The first is strategy and culture. What we heard is that establishing AI as a strategic and organisational asset can enhance confidence, including through developing relevant business cases and maintaining a culture that nurtures innovation, collaboration, and engagement with the public. These conditions allow for confidence that the right decisions are being made, and that each setting can sustain this type of innovation. The second factor is technical implementation, which refers to arrangements around information technology, data governance, and interoperability. We heard that a lot of the current challenges in deploying AI relate to these arrangements, and in particular that agreement on information governance settings and on data management strategies to handle the data associated with AI technologies is highly important at this stage; we got the impression that unless these are clarified, many clinicians would hesitate to use AI. Annabelle talked about evidence and validation, and local validation, the third factor here, is an extension of that: local validation may be needed to ensure that the performance of an AI technology can be reproduced in the local context, so that we don't assume an AI system with good published performance data will generalise well to local situations. The last factor in this layer is system impact: essentially, being confident that AI is properly integrated into clinical workflows and pathways. What we heard here is about the importance of seamless integration with existing systems, of
clear reporting and safety pathways, and of ethical practices, and that all of these build confidence and address inhibitions to adopting AI. Now, the way AI is integrated into clinical decision making is particularly important, as it may affect those decisions, and this is something we explore further in the third layer, which Mike will explain now.

Great, thanks very much, George. Next slide please, Hatim. Thank you. The third layer is the point at which the clinical decision-making process and the AI technology interact, and this is the first point in our pyramid where we're really moving away from trying to increase confidence towards assessing confidence. The idea here is that an individual AI prediction, used for an individual clinical decision for an individual patient, may or may not be trustworthy, so it's not necessarily always appropriate to increase our confidence in AI predictions at the level of the individual clinical decision. What we're really trying to do here is retain a degree of critical appraisal, which as clinicians we would apply to any information involved in the clinical decision-making process, and to avoid both under-confidence, leading to inconsistency and a lack of realised benefit through rejecting AI information, and over-confidence, with the obvious risks of clinical error and potentially patient harm. So this is a nuanced problem. Next slide, please. Thank you. There are five factors in this clinical use layer which drive this interaction between the AI and human decision making. The first is clinicians' attitudes to AI, and we heard from our research that these vary a lot. There are some clinicians, digital leaders, who are very excited about AI, very knowledgeable, very confident in it, and who have a preference to drive it forward and include these types of technologies in as many
clinical contexts as possible. There are also people who are more sceptical, either through their own experience or, potentially, through their lack of experience. So there's great variation, which we need to be aware of and take account of as underpinning this confidence assessment. The other thing that underpins it is the clinical context. Obviously there's a huge variety of situations in healthcare, from primary services and general practice all the way through to emergency medicine in tertiary referral centres, and that has a great impact, not only because of the potential risks and benefits of employing AI in those different contexts, but also because of the timescales for decision making. Some decisions are made over many weeks, with the involvement of patients and families, and are very discursive; other decisions are made in an instant, in emergency situations. That will obviously affect the way we assess our confidence in AI, and what we do with that assessment in clinical decision making. As Annabelle pointed out earlier, there are also technical features of the AI system itself which will impact our confidence and our confidence assessment. AI can make various types of predictions: it can be diagnostic or prognostic; it can recommend a treatment strategy or some sort of stratification; or it can, in effect, be a pre-processing of images or other clinical data that adds a layer of information to a clinical decision-making pathway that already exists. The type of prediction the AI makes, the way in which it makes it, and the way that information is presented (whether it's categorical or probabilistic, whether uncertainty is included) are all features that affect the way we value that information as clinicians and the way we assess our confidence in it when making clinical decisions. There is
another factor here, separated out although it is a technical feature: explainability. Explainability, I think, is an area worthy of some examination at the moment, because it promises a lot. There's been quite a lot of interest in the potential for using explainable AI to get decision reasoning out of neural networks in particular, and to be able to see the reasons for individual clinical decisions. What we found through the literature survey, and from talking to experts, is that this is not yet ready for real-time use. We believe AI explainability has potential, and that at the model validation level it has value, but at this stage it does not appear to have value for individual clinical decisions, so I think we need to be quite cautious about using it as a way of assessing confidence in AI for clinical reasoning and decision making. What really underpins this confidence assessment is the fifth factor: cognitive biases. All of us as humans are subject to cognitive biases, and we may or may not be familiar with what they are, but it's important to acknowledge them and to understand the way AI-presented information may change the cognitive biases which, whether we are consciously aware of them or not, impact our clinical decision making. To give some examples, there are things like confirmation bias and automation bias; these are unavoidable. I think we have a tendency to assume we are less susceptible than the general population, but the research would suggest that's not true, and therefore it's important to understand how they factor into the AI-assisted clinical decision-making process. Next slide please, Hatim, and then over to Maxine for a couple of questions from the chair.

Amazing, thank you so much Mike, George, and Annabelle for a whistle-stop tour. I'm going to fill the gap with a couple of questions, so
please, whilst the time passes, do come up with your own and post them in the chat, and depending on how the conversation pans out, I'd also invite you to ask the question yourself. There's no question too stupid or too intelligent (well, there probably is one that's too intelligent, but probably not one that's too stupid), so please ask the full range of questions. This one is definitely planned, but: what is your top priority for improving clinical confidence in AI over the next one to two years? It drives towards the centre of that double-ended arrow you presented, Mike. What are you working on for the next 12 to 24 months?

Thanks, Maxine. I'm not sure it's what we're working on so much as what the whole community needs to be working on. I think the challenges over the next period are really at the baseline and implementation layers, as we presented in the pyramid. There's some work going on nationally at the moment around regulatory clarity, and that will definitely help with baseline confidence. Evidence standards, as Annabelle pointed out, are currently being developed and are changing all the time in this space, so we will see increased guidance and an increased definition of what levels of evidence are appropriate for AI in healthcare, and that's definitely going to be a positive thing. The other challenge is moving to a place where, again as Annabelle pointed out, we have some class guidance, because a lot of the products currently being produced or becoming available fill relatively small niches, and expecting product-specific guidance from bodies like NICE for every individual AI product that enters the healthcare arena is not a sustainable way to work in the longer term. So I think having more general guidance and standards around how to evaluate and implement AI once it has
achieved regulatory approval will be very helpful. The second part of the answer is at the local level. Considering a healthcare setting, whatever that might be, I think the challenge is really around people. We need the right people to drive adoption of these technologies forward, and to do it with appropriate levels of knowledge and critique, but also with sufficient motivation and positivity that we actually get these things translated into clinical practice. Around that, there's a need to define some roles, perhaps roles which aren't currently common in healthcare organisations, particularly around the task of implementing AI, and I think we need to take a multidisciplinary approach: we need clinicians, we need users, we need drivers, policy people, and people who hold the purse strings, and we also need technical people who understand the evidence and the challenges associated with robust and ethical implementation. So hopefully that ties in as an answer to what we were discussing in the framework.

Yeah, a busy two years, and it's also the kind of work that is never done: there are all these dependencies and everyone's working them out as we go, so it's a bit of a juggling act for everyone in the community. Two things. One is that Amanda's asked a question in the chat: be more Amanda, keep them coming. The second is that you talked about the need to define new roles, supported by multidisciplinary teams and diverse groups in decision making, and I know this is a bit of a sneak preview, because future reports are going to be on education, but we definitely need a pipeline to start filling those roles, and that's going to take quite a period of time. So, whilst not creeping into future webinar topics, can you give a
little bit of a hint about how you're thinking about the education piece? Maybe one for Annabelle.

Yeah, sure. As George mentioned, we're releasing two reports. The first is coming out in a couple of weeks and will focus on what we've just talked about, and the second will follow in the next few months, focusing on what this means for educating the NHS workforce. We have had a think about this and can give a bit of information now, so would you mind going to the next slide? Great. We're thinking about this across the whole NHS workforce, so it's not just about clinician end users: it's everyone from the most senior NHS management through to the people commissioning products and the people embedding them within the NHS. The way we think about this is by splitting the workforce into five archetypes, based on the role individuals will have in relation to AI technology. These archetypes are not exclusive (as an individual you can sit in multiple buckets at the same time), but we feel they're helpful because these different individuals have different educational and training needs, so they help us focus on what we'll need to do to prepare these different parts of the workforce. To explain them in a bit more detail: the shapers are the people setting the agenda for AI, coming up with the regulation, policy, and guidelines for AI; examples might be NHS leaders, the regulators of AI, and other people who work within arm's-length bodies. The drivers are the people leading digital transformation: they're involved in commissioning AI, and also in building up the teams and
infrastructure within NHS organisations that will be needed to implement AI; for example, they might sit on an ICS leadership board, or they might be CCIOs within ICSs. The next bucket is the creators: the people actually making AI. When it comes to the NHS workforce, these individuals may be co-creating AI with, for example, a commercial partner, and the kind of people doing this would be data scientists, software engineers, or specialist clinicians, researchers, and academics working on AI. Then we have the embedders, whose role is essentially to integrate and implement AI technologies within the NHS. These individuals might overlap with the creators in being data scientists, and they might also be clinical scientists, specialist clinicians, or IT and IG teams; that's who we really mean by the embedders. And finally the users: anyone within healthcare who is using AI products, which might be clinicians, allied health professionals, and also non-clinical staff. What's really important with all of these archetypes is that we capture everyone: we don't just want to capture the workforce in training, we also need to target those who are fully trained and already working, and we're going to need to give slightly different expert advice to each of these archetypes. The reason there's a box around creators and embedders is that we feel this is probably the area where we currently have the least skill within the NHS, so one of the things we'll have to think carefully about is how we bring people with these data science and clinical informatics skills into the NHS, and how we train up
existing people within the NHS to become specialists in that area. Moving on a little bit from this kind of baseline information and education these people are going to need, we are also going to have to think about product-specific training. This is about giving people knowledge and information about a specific product that they're using, and it really affects three main archetypes: the drivers, the embedders, and the users. The drivers need to know specific information about products so that they can make the right commissioning decisions about those products. The embedders need to know the technical information about products so they can make sure that they're integrated in a way that's both safe and effective. And finally, the users: the users are going to need training on the specific products they're using to make sure that they understand clearly what the indications are for that product, what the limitations are of that product, and, really importantly, how to communicate about that product with patients and how to facilitate joint decision-making amongst clinicians and patients with AI in the mix. So that's everything from me. Maxine, back to you.

Amazing, thanks. So I am conscious of time, and there's a couple of great questions, especially for, I guess, this first, more conceptual piece about, you know, what is confidence and what is appropriate. So I think I might look to smush some of Amanda's question together with some of James's. As you're thinking about what it means to be confident and what it means to be appropriate, how does ethics cut across this as well? Because ethics has its own, you know, "let's redefine the question" type of discussion that happens. So I'd love to pick up Amanda's question about where ethics intersects with appropriate confidence, and then I'm going to bundle that with the bottom of James's question, which is
as this conceptual work, which I think is incredibly important to underpin some of the harder work, that base layer, how does that intersect with already existing standards? I know that was touched on a little bit, but linking the conceptual with the hard would be a good thing to touch on for the last couple of minutes.

Perfect. Shall I take that one? Or all those ones, I think, is probably a better description. So, yes, let's start by thinking about ethics. I think our starting point for this work was the idea that if we want to do AI ethically, then we have to do it robustly, we have to do it safely, and we have to do it with appropriate confidence; anything else is not ethical, essentially, because what we're doing is ensuring that we achieve patient benefit, we're minimizing risk, and we're maximizing impact. And that includes ethical considerations like maximizing impact for different demographic groups and ensuring that we know what the performance of our AI is for different demographic groups. So it cuts through evidence and standards and regulation; I'll come back to that in a second in response to James's question. It also cuts through what George was talking about in terms of local validation. Local validation is absolutely key to ensuring that we have generalizability, that we understand the limitations of the algorithms we use, and that allows us to use them ethically. And then when you get to the clinical decision-making layer, that really is where individual critical thinking comes in: understanding what the ethical implications might be, and when it might be appropriate to disregard, potentially, an AI prediction. Whether that disadvantages people is something that needs to be considered at the workflow integration and implementation stage, so that we can try and be as
even-handed as possible with technologies that are not necessarily inherently even-handed in their performance, and that really is a big ethical challenge with AI. I think it's very important to always have alternative pathways that can be used in cases where we don't have confidence in the AI, and we need to make sure those don't result in detriment to certain patient groups. So I think that would be my response to the ethical question. In terms of regulation and the MDR: hi James, I'm a clinical scientist, which is probably why clinical scientists have got some representation here, to some extent. So I'm familiar with the MDR and the ISO standards. There are going to be some new standards, we expect, in the relatively near future, looking specifically at AI as a medical device (I'm sure you're aware of some of the discussion around that), and I think our hope is that these will not replace what's in the MDR and ISO 13485, but rather will clarify and extend it. I think the other thing that's really important to think about, in terms of CE marking as it used to be called, now UKCA marking, and the MDR, is that medical device approval does not tell you anything about performance. So it's not necessarily sufficient: it's necessary, but it's not sufficient. And I think where NICE and bodies like that come in is in providing evidence standards that say what our expectations should be of an AI product, so that we can have clinical confidence. Yes, of course it has to have regulatory approval, but that to me is a first step, not a final step.

Yeah, I think that's a great answer, and, in my opinion, appropriate confidence is a really nice way of knitting together things like ethics, which can sometimes feel a bit intangible and impractical, with things like the MDR, which have some shortfalls. So for me this felt like a really nice way to
turn some of these floating themes and standalone tools into something a bit more holistic and practical. So, I know we've run out of time, and it's not a competition of who's the most popular, but if it was, Tracy or Reagan would be winning: a number of the questions you sent in advance of this came about shared decision making. So, in 45 seconds, can one of you take (a) the question around shared decision making, or (b) how do we make sure that patients truly sit alongside the creators, who had a little bit of a dotted line around them? So, who's going to take that, swiftly?

I can try and do that very quickly. Just to say, in terms of the archetypes, patients are not in there; they're not an archetype. That's actually completely intentional, because this report is about the workforce and how we prepare the NHS workforce. It's also intentional because we think that conversation about patient involvement is really, really important and deserves its own attention, so it is intentionally excluded from here. However, what is really important about how we include patients is, first of all, that bit about how we design products in a way that enhances confidence: we need to make sure we have users involved and patients involved at that design stage, from very early on, because they're the ones ultimately who these products are going to be used on, so it's really important that we get their input all the way through. The second thing to say is that when we're talking about preparing the users, the clinician users, a huge part of their preparation is about how they can communicate about these products with patients: making sure they have conversations that bring patients in early, that make clear the limitations and the risks and the benefits of using AI, what it means for their data and their patient information, and how they can make decisions together as a group. So, you know, clinician, patient,
but also potentially AI, moving in there as, you know, a third agent in that mix.

Amazing, thanks, and thanks for doing it so succinctly. So, I'm sorry we haven't had time to get to some of your other questions, but thank you so much for posing them; there were some really good ones, and the individuals on the panel will be happy to follow up and answer, and obviously keep the conversation going. But that ends the first webinar of the series. The next one is happening at the end of March; as Hatim says, it's on nursing. The recordings of this will also be made available, in case, I don't know, your child came in halfway through demanding lunch or something catastrophic like that. Otherwise, thank you so much for tuning in, and do follow Health Education England on Twitter; I'm sure HEE would love to keep the conversation going. Thank you very much for your attention and your questions, and for coming and hanging out this lunchtime with us. Thanks all, bye, thank you.

2022-02-09