Audiology in the Age of AI: How ChatGPT and Related Technologies Will Transform Hearing Healthcare
Dave: Hi everybody, and welcome to another episode of This Week in Hearing. I'm very excited to be joined today by De Wet Swanepoel and Jan-Willem Wasmann, two of the most cutting-edge, forward-thinking researchers and scientists in the field of audiology. Today we're going to be talking about ChatGPT and, really, the introduction of large language models broadly speaking, and how these large language models might ultimately impact our industry, all the different professionals working within it, and the patient base. We'll get into all of that, but first let's start with some introductions. Jan-Willem, why don't we start with you: a little bit about who you are and what you do.

Jan-Willem Wasmann: Hi, I'm Jan-Willem Wasmann. I work at Radboud University Medical Center in Nijmegen as an audiologist, both as a clinician and as a researcher. I really like to be involved in how AI can be used in audiology; that's why we coined the term computational audiology and wrote a perspective paper on it three or four years ago. I'm really surprised that the predictions we made at the time have already come to pass and even surpassed our expectations. Things are moving really fast, and it's good to explore this and see what the benefits, but also the risks, could be for our community. I'm happy to discuss this with you today.

Dave: Awesome, thank you so much for being here. And De Wet?

De Wet Swanepoel: Yes, Dave, it's good to be with you again, and with you, Jan-Willem, on the show. My background: I'm a professor of audiology at the University of Pretoria in South Africa, and I also have an adjunct position at the University of Colorado. My area of research interest has always been around technological innovation and connectivity and how we can use them in hearing healthcare to make hearing care more accessible, and that's also where the link with the exciting technologies we've seen come online with ChatGPT over the past couple of months intersects. I wear a few other hats as well: I'm the editor-in-chief of the International Journal of Audiology, and I'm a co-founder of a digital healthcare company called the hearX Group.

Dave: Awesome, thank you both so much for being here. As I said, I couldn't be joined by two better thinkers on this topic. Just to set the stage a little: these things can feel abstract and esoteric, but I think we need to be conscious of just how pervasive and widespread they are becoming. OpenAI, the company behind the large language model ChatGPT, which has only been available to the public for about two months, has already amassed a user base of about 100 million users, making it the fastest application ever to reach 100 million users. This thing is growing like wildfire. You have people like Bill Gates saying that these large language models, whether ChatGPT or another one, will have an impact on the order of the internet and the PC. So some people are really calling this out as a seismic, forcing-function kind of change that is going to reshape a lot of professions and the way we operate, just like the internet did.
I think we need to start thinking about what this will all mean for audiology and how it will impact us. So why don't we start with you, De Wet. Could you frame the conversation, beyond what I just did very briefly, around these large language models and this notion of an AI-powered internet? Can you share your thoughts on what's going on right now and what these things really are?

De Wet: Sure, Dave. I agree, these are very exciting times. Anyone who has played with ChatGPT a little would agree that the power of these technologies is astounding; it's just remarkable. And apart from the personal exposure and experience, we're seeing massive shifts across the entire technology industry, but also in healthcare in general, in terms of how these technologies are changing the world around us as we speak. As you mentioned, it's the fastest-growing user platform ever, and it's certainly one of those massive technological changes that creates a new era. Once you're used to ChatGPT, doing a regular Google search feels like a two-dimensional exercise. Six months ago that wasn't the case; now that's what it feels like. So these technologies are super exciting.

AI chatbots are a type of generative AI that can generate text. They use these large language models, which allow them to provide answers to prompts or questions in a really human-like fashion. In essence they are just computer programs that use natural language processing to communicate with humans, but they are trained on tremendously large data sets, which means they draw from information that is almost limitless in terms of what is available on the net and in other large databases. So they are certainly very powerful technologies. What's also exciting is that, while we talk about ChatGPT, there is actually a wide range of other technologies that already existed before ChatGPT and that are now expanding exponentially because of what ChatGPT has done to bring them to the forefront. OpenAI has been brilliant in the way they marketed it, making it freely available and accessible to everyone, so interest has grown tremendously quickly.

But what's important to recognize, Dave, is that these technologies are not just siloed tools that you go and access; we are seeing them proliferate through integrations into other existing technologies. The most widely known example is the integration of ChatGPT into Bing as a search engine. Bing was almost a relic of the past, but now it's growing quickly and becoming much more widely used because it's integrating this AI technology into its platform. That's just one example, but everything around us is starting to integrate it. Every week we see new announcements: Salesforce is integrating it, Slack is integrating it into its platform. We're going to see these technologies pop up in everything we do, our calendars, our to-do lists, and so on. So it's an important trend to think through in general, but also as audiologists, hearing healthcare clinicians, and researchers, because it's going to change the way we interface with patients and provide our services.

Dave: That's really well said; thank you, De Wet, for that nice overview.
The Bing example is a really good one. OpenAI partnered with Microsoft to bring that technology to Microsoft and its search engine, Bing. What we're seeing, like you said, is that the status quo technologies such as Google, which once upon a time was revolutionary and groundbreaking in its own right, are being superseded by something that can generate search results with a level of context we've never really seen before. That context is derived from all kinds of different inputs, like Reddit and other sources of customer feedback. So when you search for something like "what is the best hearing aid," in the past Google would give you a bunch of paid advertisements and then weight the remaining results by some measure of authority. A GPT-style system instead gathers a lot of different inputs and will probably give a totally different answer than Google would. That's a very specific example, but we're going to see a lot more of it, and I think the key ingredient that makes all of this so different is that contextual understanding: going beyond the black-and-white, definitive, binary results you would get with Google and adding a layer of context. That opens up a giant can of worms in itself, because how does it get to these new answers that seem so authoritative, and are they inherently flawed? I'll kick it to you, Jan-Willem, and get your thoughts on all of this.

Jan-Willem: That's a really good question. I would say these large language models are actually excellent guessing machines. If you ask the machine to complete the sentence "Once upon...", it will probably guess "a time," and correctly. That's something simple everybody can do. But if you ask it not just to complete the sentence but to complete a whole story, as I just did when I asked it to make a story that children would like to listen to, it will create the story and then explain at the end why it's confident children will like it, because it used some story elements about magic. If you test it on these kinds of creative processes, it's really striking that, out of nothing, it can either hallucinate or create content. And because the main driver will probably be search engines, people will get used to Bing-AI-like applications and will ask them not only where to buy something, but probably also about hearing aids and about their health status. That was the reason I started creating some prompts, the questions you can ask a chatbot, to see what would happen if you ask these machines, "I have a hearing loss, what should I do?" I was actually surprised at how accurate the answers were, although there's no reason to assume they would be accurate, because the system I used at the time, an older version of ChatGPT, has no clue about the world around it; it just draws on its big set of training data.
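[Editor's note: as a rough illustration of the kind of prompt experiment Jan-Willem describes, here is a minimal sketch that sends a patient-style question to a general-purpose chat model through the OpenAI Python SDK. The model name, system message, and temperature are illustrative assumptions rather than his actual setup, and any answer would still need review by a hearing care professional.]

```python
# Minimal sketch: ask a general-purpose chat model a patient-style question.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; any chat-capable model works here
    messages=[
        {"role": "system",
         "content": ("You are a cautious assistant, not a clinician. Give general, "
                     "widely accepted guidance and always advise seeing a hearing "
                     "care professional for diagnosis and treatment.")},
        {"role": "user",
         "content": "I think I have a hearing loss. What should I do?"},
    ],
    temperature=0.2,  # lower temperature favours conservative, less creative wording
)

print(response.choices[0].message.content)
```

A system message like the one above is a design choice: it nudges the model toward generic self-help advice plus a referral, which is roughly the pattern of answer described here.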
Jan-Willem (continuing): Still, it came up with quite good answers that I could at least review and say, okay, this makes sense. So I see a lot of potential there, but at the same time it's important to realize that these are all answers that are likely, not factual. We really have to think about how to discern facts from hallucinations, and about what the ways to proceed are; those will be different for researchers, for clinicians, and for patients. On the research side, one example is the Evidence Hunt application, where a model like GPT-4 is constrained to data from PubMed, and it also shows which PubMed articles it used for its answer. What I just tried was giving my question, the prompt, to that system. One possible application would then be: once you have these evidence-based answers, ask a system like GPT-4 to rewrite them into layman's terms so they are clear to a patient, and then you check whether it makes sense. I see these kinds of integrations being used in healthcare with an expert still in the loop. For me it's easier, because I know the answer is based on fairly recent information, maybe not the last year but at least up to 2021, and it can help me explain it better to another person. Another way I test this is by simply telling a story to ChatGPT and seeing how it responds. If the response is good, then maybe when I tell the same story to a person, that person will also understand it the way I intended. So it's a way to get feedback, for instance.

Looking at hearing healthcare specifically, I see these developments merging. There was Siri in 2010, which was voice to text, so voice commands, and around 2016 we got automatic speech recognition, speech to text, which is of course a really helpful application for many people with hearing difficulties. A future application: many people with hearing loss are constantly guessing what other people are saying. If a model helps with that guessing, maybe built into your device, predicting what a speaker is going to say and feeding that as a prior to the noise reduction system, which has to work really fast, those kinds of interactions could be reality within five years, certainly given how fast things have been moving in the last few months.

Dave: There are two things you said there that really resonate. The first one I want to circle back to is this idea of a large language model being restricted to one vertical, like PubMed. I think we're going to see a lot of this. You have these broad-based LLMs like ChatGPT scouring so much of the internet; think of how much written text exists out there today. It's basically accessing all of the open gardens, if you will, but there are closed gardens too, and I think there will be a lot of advantages to having singularly trained LLMs within specific verticals. Healthcare is a really good example, so let's come back to that. The other thing you mentioned is that Siri has been around since 2010, and around 2016 we got the Amazon Alexa and Google Assistant era.
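[Editor's note: the two-step pattern Jan-Willem describes, answering only from retrieved evidence and then rewriting the answer in plain language, can be sketched roughly as below. This is not how Evidence Hunt is implemented; the abstracts are placeholders you would retrieve yourself (for example through PubMed's E-utilities), the model name is an assumption, and a clinician would still review the output before it reaches a patient.]

```python
# Sketch of a two-step, evidence-grounded workflow:
# 1) answer a clinical question ONLY from supplied abstracts, citing them;
# 2) rewrite that answer in plain language for a patient.
from openai import OpenAI

client = OpenAI()

# Placeholder abstracts; in practice these would come from a PubMed search.
abstracts = [
    "PMID 00000001: <abstract text on hearing aid outcomes> ...",
    "PMID 00000002: <abstract text on speech understanding in noise> ...",
]

question = "Does hearing aid use improve speech understanding in noise?"

grounded_answer = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {"role": "system",
         "content": ("Answer strictly from the abstracts provided and cite the PMIDs "
                     "you used. If the abstracts do not answer the question, say so "
                     "instead of guessing.")},
        {"role": "user",
         "content": "Abstracts:\n" + "\n\n".join(abstracts) + "\n\nQuestion: " + question},
    ],
).choices[0].message.content

layman_answer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": ("Rewrite the following for a patient at roughly an 8th-grade "
                     "reading level. Keep the citations.")},
        {"role": "user", "content": grounded_answer},
    ],
).choices[0].message.content

print(layman_answer)  # reviewed by a clinician before it is shared with a patient
```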
Dave (continuing): And De Wet, you mentioned something at the beginning, before we even started recording: a lot of this has been happening behind the scenes and percolating for years, and now we've seen it all be put together. I couldn't agree more with that. I've been following the voice user interface space for a while, and what we've really seen in this Alexa era is major improvements in natural language processing, text to speech, and speech to text; basically, computers beginning to actually interpret language and then give it back, whether by speaking it or in a chat interface. So it's not as if this is some sudden emergence of brand-new technology. It's a maturation of something like five technologies that have all developed to the point where they have now fused into this one thing, and we're seeing the byproduct of putting all of that together with the ability to capture so much of the internet. It almost reminds me of the movie Short Circuit 2, where Johnny 5, the robot, can read at lightning speed; he sits in the library and reads the entire encyclopedia. That's essentially what these things can do: read the whole internet and give you back a consolidation of it. So there is a superpower here, but obviously it opens a big can of worms: how do you make sure the information it gathers is accurate, and what kind of oversight is there? Those questions are going to become paramount as we move forward. And we know there are already parts of the world that want to either slow this down or remove it completely: Italy, and I saw just this morning that Germany is now considering banning something like this. So I'm curious about your thoughts, either of you, on this idea that you can't really put the genie back in the bottle. It seems like we have to work with what exists, but if there are governments actively trying to put the clamps on this, do you feel that's feasible in any way, and how do you see this shaking out?

De Wet: Dave, maybe I can respond to a couple of the things you mentioned; they're super relevant comments, and there's so much to talk about here. I like the Siri story that Jan-Willem introduced, because Siri gives us a little bit of context. When it came out it was absolutely revolutionary, but in a way Siri now looks like a young infant, and ChatGPT is maybe a five- or six-year-old that is still maturing. We haven't seen where this technology is going to go yet; we've only had a first taste, but I think there are a lot of exciting things to come. Obviously, with these new powerful technologies all kinds of concerns are raised, and those are important to mention and discuss. They are also the things being raised in different forums: you mentioned Italy raising concerns, and Germany raising concerns about where the data is coming from, about privacy, and so on. Those things need to be worked through and discussed.
There also need to be good bodies to help us get better transparency and insight into how these models work, where they get their data from, and how we can use them in a way that doesn't leave us with a biased view of what they are giving us. Because the one thing these technologies are really good at is sounding convincing and confident, and they sound like humans. That's one of the powerful things about these AI chatbots: they engage in an interaction that feels natural to us. But that also fools us sometimes into believing them too easily, because they do get things wrong. Someone compared ChatGPT to a really enthusiastic, young, inexperienced but super smart research assistant: very eager to help, very eager to collate information and give it to you, but it does get things wrong, and we've seen that happen. So you need a way to validate and check it.

Maybe some other general comments on what you mentioned about oversight and managing this revolution. In the research field, ChatGPT is amazing at supporting the writing of research documents and articles. When it went live on the 30th of November, researchers started using ChatGPT to write research papers; you give it some information and it is superb at generating text, even writing an article for you. Suddenly the big journals had to respond. You can already see ChatGPT listed as a co-author on research papers published at the moment, and then we saw influential journals like Nature come out and say they are not accepting ChatGPT as a legitimate author. They had to do some work to define what constitutes a legitimate author, and they came back with the position that an author needs to be able to take responsibility for what they write, which ChatGPT, or any AI chatbot, obviously cannot. I think that's a good line, and they have made some good recommendations about next steps: not banning the technology, not trying to get rid of it, not trying to get the genie back in the bottle, as you said, but finding ways to use it to help us be more effective, more efficient, more responsible, and to get information out to people more quickly, so that we can move faster in this era of knowledge generation. So we need some guidelines and the right processes in place, but I believe trying to ban it is not the right approach; rather, we should find ways to use it responsibly. It's also important for us to know how to acknowledge ChatGPT so that its use doesn't become plagiarism. It gives good information, but we need to be able to report that ChatGPT was a tool we used to generate this text, or to write this article, or to do this data analysis, or whatever it was; we need good, responsible ways of acknowledging its contribution so that we are transparent about it. I've added a lot of additional comments, so let me hand it back to Jan-Willem or to you, Dave.

Dave: Jan-Willem, I'll kick it to you. Thoughts?

Jan-Willem: Well, there is also fair critique from researchers, for instance, who say this is all unvalidated information and you cannot use it in a clinic.
I must say I agree that's true, but that's the ideal situation. In practice, people often come to me with questions, or they have already found some answers themselves, and there are a lot of errors in those too, and you only have limited time to give an explanation, so you focus on a handful of items to address further. What I found interesting is that ChatGPT also gave advice about healthy diets and about being thoughtful about sound levels, which are things that either we as clinicians take for granted or that would be a conversation in themselves when you only have ten minutes. It's interesting that these systems surface them, and that could be the key to a next conversation with your specialist. So I also see opportunities there: maybe these AI chatbots can help you digest all this information and prepare for your appointments and the explanations. We could also collaborate within our professional associations, for instance, to work out what good prompts are and maybe publish a set of frequently used prompts that we could recommend to patients, saying this is a good start, of course with some warnings about potential misuse and, in case of doubt, contact your clinician or another health provider. I think it's helpful if we start to experiment with this instead of banning these technologies, also because a ban is probably impossible: it will be built into many applications in the near future.

Dave: It certainly feels like one of those things that will be really hard to completely reverse and put back in the bottle, although there will probably be efforts to at least slow it down. And that is maybe what is both most astonishing and most concerning: the rate at which this is progressing. The first iteration, released in November as De Wet said, was really kind of mind-blowing, and the next version is even better, so it's just crazy to watch. There were a couple of things you said, Jan-Willem, that I thought were really interesting, and maybe we should get into the article that you two wrote. You mentioned these prompts; that's the terminology used to describe how you even communicate with these models: you are prompting the large language model. You ran some prompts from the perspective of the patient in a hearing healthcare setting as well as from the clinician's perspective, and a couple of really interesting things came out of that. To your point, and this is a very specific example that I think will be broadly applicable, there is this idea of unexpected answers. If the model gives you seven bullet points of recommendations for what to do if you detect a hearing loss, the first five are probably going to be pretty generic, but then there are things it is obviously sourcing from publications it weighs as authoritative, so it factors in diet and exercise and so on. Even if that is not verbatim in the guidelines issued by some standards committee, it is still adding it in.
I think we're going to see more of that. These models have an opportunity to go beyond the best practices and the status quo and introduce things that might be a little off the beaten path, which could actually be really significant in the grand scheme of things when you're thinking through all kinds of medical anomalies, and the role of the doctor is largely to determine what's going on with you. What really makes me excited about this is the idea of some off-the-beaten-path study, completely unbeknownst to the clinician, that this thing surfaces insights from. Maybe that is a real upside: because of the breadth at which it scours clinical data and studies, it might surface information that would not come up if you were strictly going off today's status quo. So I'll throw that out there and let you respond however you want, but I think it would be good to start talking through how this applies to audiology: the patient, the clinician, the researcher, really any of the participants here.

Jan-Willem: I'd like to reply to that. Transparency is also important here, because if it is using these different sources, you need to be able to somehow assess their validity. And something that I think is overlooked in these discussions: the company is called OpenAI, but it is not an open organization at all. These models are not open source or publicly available, and the exact training data is not available either. But if, in theory, such a model were openly available to researchers in hearing healthcare, it would be really interesting to train it specifically on parameters important for audiology, and maybe on some of the facts that are important for our patients, and to expose what the system is basing itself on. I don't know how good our databases are, but you can imagine that if ENT doctors and audiologists around the globe filled a database with their best practices and we constrained the model to those best practices, it would help clinicians who are not yet up to date with best practice to learn from it and would help disseminate these clinical approaches, while on the other hand people who don't have access to a clinic could use prompts and get information from this validated model. That would be really helpful. These commercial models have shown that the approach is versatile and usable, but hopefully we will get more community-driven approaches that are open and genuinely available, because OpenAI is now giving priority to people who pay, for instance, giving them more bandwidth. In the long term, in terms of how you organize your healthcare model, that could be a good, cost-effective investment if many clinics throughout the country, and patients as well, can benefit from better information, better access, and better clinical workflows.
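[Editor's note: a toy sketch of the "constrain the model to a curated knowledge base" idea discussed here. The best-practice statements, the naive keyword matcher, and the model name are all assumptions; a production system would use proper retrieval (for example embedding search) over a professionally curated and versioned database, with clinical review of the outputs.]

```python
# Toy illustration: keep vetted best-practice statements in your own store,
# retrieve the relevant ones, and instruct the model to answer only from them.
from openai import OpenAI

BEST_PRACTICES = {
    "sudden hearing loss": ("Sudden sensorineural hearing loss is treated as urgent; "
                            "refer promptly for medical evaluation."),
    "hearing aid follow-up": ("Schedule a follow-up within a few weeks of fitting to "
                              "verify benefit and fine-tune settings."),
}

def answer_from_best_practices(question: str) -> str:
    # Naive keyword retrieval; a real system would use embedding-based search.
    relevant = [text for key, text in BEST_PRACTICES.items() if key in question.lower()]
    context = "\n".join(relevant) or "No matching guidance found."

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system",
             "content": ("Answer only from the guidance below. If it does not cover the "
                         "question, say so and recommend contacting a clinician.\n\n"
                         + context)},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(answer_from_best_practices("What should happen after a sudden hearing loss?"))
```

The important design choice is that the constraint lives in what you put into the context, not in the model's general training data, which keeps the knowledge base inspectable and updatable by the profession itself.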
Dave: One thing that comes to mind here, and either of you can respond, goes back to the point I was making earlier about these verticals. Think of OpenAI's models as being able to scour anything, but if you start to put parameters on that, you can create a more or less definitive body of information it sources from. Maybe the way this evolves is that large medical institutions, like the Cleveland Clinic, establish that for anything related to cardiology the answers should be guided by the standards and best practices of, say, the American Heart Association, and for audiology it would be the American Academy of Audiology: whatever prompt pertains to hearing healthcare has to use this set of parameters and this information to guide the answers. That is one way these could be shaped: these large bodies defining what the language model is able to access in the first place. That is a level of oversight I could see being implemented here, defining what is being sourced.

Jan-Willem: Yes, and people should know what is in the data. For instance, I tried to use prompts to constrain the model to only using American standards or British standards, and I didn't see any effect. So even if you try to build it into the prompts, apparently that is not the level at which to constrain it; it has to sit deeper in the model. And that is where, as a field, we really need to not just embrace this technology but see how we can get to better alternatives and learn from it. It is clear to everybody that GPT-4, or Bard from Google, or any of these systems will not be the final shape. We are maybe now using five- or six-year-old AI advice systems, and it is good to consider them minors: if something is really important, you don't rely on the opinion of a five-year-old. Good to keep that in mind, and to see how to integrate these different systems in ways that keep the errors in check while also allowing applications to scale up and making the benefits accessible in countries where this is not affordable yet, or where the information is not even findable.

De Wet: Yes, I think that is one of the important applications for these AI chatbots: assisted diagnosis for clinicians. We have had such systems available for many years, but they haven't been as intuitive and they haven't relied on such large models, so we are seeing this taken to a whole new level. I agree about the transparency, but the exciting thing is that there are all kinds of ways in which these AI chatbots are going to improve, and are already improving, clinicians' engagement with patients. We've spoken about the diagnostic side, but there are so many other ways. They are perfect at doing case histories: they can take an amazing case history, ask the right open-ended questions and then narrow them down, so that you can have a thorough case history done before you even see the patient. We are also seeing them contribute to the efficiency of our engagements with patients.
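[Editor's note: a hedged sketch of the pre-visit case-history chatbot De Wet mentions. The prompt wording, the list of topics, and the model name are illustrative assumptions, not a validated intake instrument, and the resulting summary is meant for clinician review rather than for making decisions on its own.]

```python
# Sketch of a pre-visit intake assistant: a system prompt steers the model to
# ask one case-history question at a time, then produce a summary for the clinician.
from openai import OpenAI

client = OpenAI()

GREETING = "Hello! To prepare for your appointment: what brings you in today?"

history = [
    {"role": "system",
     "content": ("You are an intake assistant for an audiology clinic. Take a case "
                 "history covering onset and duration of hearing difficulty, tinnitus, "
                 "dizziness, noise exposure, ear infections or surgery, medications, "
                 "and family history. Ask ONE open-ended question at a time. When you "
                 "have enough information, output a bullet-point summary headed "
                 "'SUMMARY FOR CLINICIAN'.")},
    {"role": "assistant", "content": GREETING},
]

print("Assistant:", GREETING)
while True:
    user_text = input("Patient (or 'quit'): ")
    if user_text.lower().strip() == "quit":
        break
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4", messages=history)  # assumed model
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Assistant:", answer)
```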
De Wet (continuing): There was a recent article about Microsoft embedding this into a tool that lets clinicians have patient notes transcribed automatically and then organized and structured for them after the consultation, which saves a lot of time and increases the efficiency and effectiveness of our engagements. And then, of course, as we collect information about the patient, during the case history beforehand but also during our consultation and testing, it can correlate that information and actually start interpreting it for you, so that you have a cross-check and cross-validation when you speak to the patient, and it can make recommendations on treatment options that can then be validated by the clinician. As you mentioned, Dave, it has the advantage that it considers everything in its database, so it can surface things and suggestions that we may sometimes forget about. But we need to recognize, as Jan-Willem reminded us, that it needs oversight: we don't yet know about the transparency, how the data is being put together, and what potential biases some of these models have. Still, it is super exciting to see the whole clinical engagement being affected by these technologies, and I think in the next couple of years we are going to see them integrated into that entire journey.

Jan-Willem: Another example: I already asked GPT-4 to interpret an audiogram with a mixed hearing loss. We can expect that these models are going to output images too, or use images as inputs. So I can imagine that the clinician hands over an audiogram, which is clinical information meant for the expert, and as a patient you simply take a photo of that audiogram and ask a chatbot to explain it to you, to get a better interpretation or a repetition of what the expert told you in that brief conversation. There too it can help explain your patient journey. Of course errors can be made, but as long as you have checks and balances and ways to correct them; if it has taken the patient history, there is of course a follow-up moment with the clinician where you can set things straight.

De Wet: Any clinician will complain to you about the amount of admin they have to do, the report writing, and ChatGPT and these kinds of models are absolutely perfect for generating those reports based on the data you receive. So there is certainly potential there for efficiency and cost-effectiveness gains, in practices but also in large healthcare systems. And that is just the clinician's side, because there is a whole other side: what about the actual consumer, the patient? How can they engage with it and actually benefit from it, even before they see a clinician, or as a support system after they have seen a clinician?
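[Editor's note: a sketch of the "photograph your audiogram and have it explained" idea Jan-Willem raises, assuming a vision-capable chat model and the OpenAI image-input message format. The file name and model name are placeholders, and the output is an aid to understanding, not a diagnosis; as noted above, it still needs a follow-up check with the clinician.]

```python
# Sketch: send a photo of an audiogram to a vision-capable chat model and ask
# for a plain-language explanation. Assumes the OpenAI Python SDK (v1.x).
import base64
from openai import OpenAI

client = OpenAI()

with open("audiogram.jpg", "rb") as f:  # placeholder file name
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("This is a photo of my audiogram. Explain in plain language what "
                      "it shows, and list questions I could ask my audiologist about it.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)
```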
Dave: A couple of things come to mind here, and I know we're coming up on time. I just think that this idea of having your own large language model running on your own data would be really impactful. From the patient's perspective, what happens when you get to the point where all those different electronic medical records, in all their shapes and forms, from the audiogram you upload to your iPhone in Apple Health to all the other inputs you could be sharing, feed into a large language model of the future that is literally specific to your data? How powerful would that be, when it can take all of those different inputs and figure out what is correlating with and affecting what: your diet, your sleep, and so on. Look at the trend of the quantified self with your Apple Watch; there are more and more sensors and more data that you are capturing, and you are feeding a model right now, but you don't yet have the AI engine that is going to really make sense of it. I think that is coming as well; it's going to be really powerful and will change a lot of this, but again, we are at day one here. And even to your point about creating efficiencies for the clinician: a lot of that also had precursors, like the ability to transcribe a past meeting from your voice into a notes app. All of these things have enabled what is happening now. You have this large database, even for one patient and all of their records, and it's a matter of how you combine it, consolidate it, and draw insights from it. That is a task that is almost impossible right now because of how fragmented the data is, and because nobody has the bandwidth to do it. So that is a perfect application of these large language models, and we are only at the beginning: right now we're seeing them scour publicly available information across the internet; think of when they can start doing this for personal records. I've probably just opened up another giant conversation there, which I'm not sure we have time for, so let's go to closing thoughts, or maybe one short answer.

Jan-Willem: I think throwing more data at these models is not the solution now, because they are already trained on almost the entire internet. Getting more out of the same information, or constraining it, and wrapping other functions around it, other databases with validated information and so on, is, I think, the way forward. But we will see; I assume nobody knows what is in store for the rest of this year, let alone 2025.
De Wet: Maybe just one or two thoughts from my side as we close. We've covered a lot of ground in healthcare, touching a little on the hearing healthcare space, but of course this technology touches every field of occupation and health in general, and it is also changing the landscape of the tech giants we are so used to. You mentioned it is day one of this technology, Dave; I think it was Google's CEO who downplayed ChatGPT's prominence by saying it is minute one of an entirely new journey, and I think that's true. But we are also seeing these shifts and pushbacks between the different companies, everyone fighting for this space. As consumers we will have the advantage of really fast development and good products, but the downside is that we are going to have to monitor it, because I think some of these things have been released without enough information being provided about what data they are using and what privacy and regulatory constraints they are functioning under. In any case, there are all kinds of things to discuss, and it's an exciting new era that we are in.

Dave: Absolutely, couldn't agree more. On that note, we will end today's conversation. I'm sure this will be the first of many conversations like this as it all starts to unfold and become more pervasive in our lives. Thank you so much to De Wet and Jan-Willem for coming on today, and thanks to everybody who tuned in here to the end. We'll chat with you next time. Cheers.
2023-04-30 08:44