Panel Discussion: Reaping the benefits while managing the risks of the evolution of AI


Okay, I think we'll get started. Welcome, everyone, and thank you for joining our webcast today, "From Natural to Artificial Intelligence: Reaping the Benefits While Managing the Risks of the Evolution of AI." I'm Mike Tomah, vice president and national practice leader for Travelers Global Technology and Life Sciences. Our organization specializes in understanding and developing insurance solutions for companies in the advanced technology and life sciences industries. Artificial intelligence is no longer just a buzzword used in tech circles; it's become a part of our daily lives. From virtual assistants to personalized recommendations on streaming services and online shopping platforms, AI is changing the way we interact with technology, and generative AI models are now even engaging in household conversations. However, the rise of AI is not without its risks, and as its positive impact continues to grow, so do concerns about its potential downsides. The evolution of AI has transformed it from a futuristic concept into a practical reality, and it's clear we can no longer ignore its impact on every aspect of our lives. In this webinar we're going to delve into the ways that AI is shaping the world of technology companies, and how they can manage its risks while reaping its benefits. Today I'm joined by Avi Goldfarb. Avi is the Rotman Chair of AI in Healthcare and professor of marketing at the University of Toronto's Rotman School of Management. He's also chief data scientist at the Creative Destruction Lab, a faculty affiliate at the Vector Institute for Artificial Intelligence, and a research associate at the National Bureau of Economic Research. Avi is also co-author of the best-selling books Prediction Machines and Power and Prediction: The Disruptive Economics of Artificial Intelligence, with his University of Toronto colleagues, professors Ajay Agrawal and Joshua Gans. Also joining me today is Amanda Bond. Amanda is the vice president and chief underwriting officer at Travelers Global Technology and
Life Sciences. In this capacity she establishes the strategic underwriting direction of the practice and leads a team of underwriters throughout the United States specializing in technology and life sciences. As of right now we've got about 600 of you on the phone, so I'm assuming that if you're joining this call today you share my enthusiasm for this exciting and emerging topic, and I really hope that you find today's conversation interesting and informative. With that, I'm going to kick it off, and I'm going to direct my first question to Avi. Avi, it seems you can't pick up a newspaper, or in my case I can't pick up my phone, and not see multiple headlines talking about AI. With the heightened awareness and excitement around emerging AI technologies like ChatGPT and Bard, what are some of the watch-outs and current limitations for individuals and organizations using these services? Hi Mike, great to be here, and that's a fantastic opening question: when we're talking about artificial intelligence in 2023, what do we really mean? The first thing to remember is that we are not in a world of machines that can think like you might imagine from science fiction; we're a long way from The Matrix or The Terminator. What we have are prediction machines. They're machines that take advantage of advances in deep learning and computational statistics, and that use data we have to fill in missing information. What's changed over the past decade or two in artificial intelligence is that we've become much better at taking information we have and filling in missing information, and that means they're great when the data are present. When we have lots of data, prediction machines can fill in missing information in other similar situations. When do they break down? They break down when the data that we've used to train the machine is not relevant to the current decision in the current situation. Now, what we've seen overwhelmingly so far is the use of prediction
machines, the use of AI, in what we call point solutions. You think through your company's workflow, you identify some predictions you're already doing with perhaps some human process, you take out the human, you drop in the machine, and you don't mess with the workflow, because that's easier. Every time you change your workflow it's a pain; you've got to get all sorts of people to coordinate and cooperate, and it's hard. So typically what we've seen so far are these point solutions, where you take out a current process and you drop in the machine at the exact same point. Those work, but a lot of companies have implemented these point solutions and said, you know what, the juice hasn't been worth the squeeze: we invested millions or more in these data systems to make AI happen, and ultimately all we did was save one percent on our costs; that's not worth it. What we emphasize in our book Power and Prediction is that the biggest changes are going to happen when organizations are ready to find new ways to deliver value. This is what we call a system solution: rather than just doing the same thing you always did but a little bit better, you take advantage of what prediction technology offers, figure out where, if you had a little more information, you could do things differently, and that leads to an opportunity to deliver an entirely new kind of value to your customer base. Thank you, Avi. So if I can summarize what I think I heard you say: it's an emerging technology, there's lots of potential to improve outcomes, but it's not without risks. Yeah, pretty good topic for a webinar, I think. All right, I'm curious: how do you view the current state of all types of AI, the opportunities and the risks? So again, the starting point is to recognize that there are risks of machines taking over the world like they did in The Terminator, but those risks are not relevant to us
on a day-to-day basis, or to what you need to worry about in the short term. The risks we need to worry about come from the recognition that these are prediction machines, and that means a few things. Predictions come with uncertainty; there's variance. You're going to get a point estimate out of your prediction, but you're also going to get a confidence interval, and you need to understand and embrace that: when the machine gives you a prediction, it doesn't tell you for sure what's going to happen, just like any other prediction. You're in insurance; you understand that idea. So in making decisions based on prediction machines, you need to embrace that uncertainty and think through the fact that even though you don't know for sure, there's a lot you can understand. The second very important risk is that even though I think there are many reasons to expect machine predictions to be much more accurate than human predictions, they leave a trail. That's good in general: leaving an audit trail means you can improve them and make the world a better place by seeing what went wrong. But it also means that anybody can see what went wrong, and with human processes there's often an ambiguity about whether a mistake was made or everybody actually made the best decision they possibly could have and there was bad luck. With a prediction machine there's an audit trail, and that creates another whole set of risks when you implement them in companies. Yeah, so being in the insurance industry, having some of that transparency can certainly affect the liabilities downstream, so I think that's something that's interesting to everybody on this call. With these transformational solutions you're referencing, I imagine there are some economic benefits that companies can expect that will be compelling. What do you think those economic benefits will be? Okay, so what we've seen
so far is that there's a handful of companies that were well positioned to take advantage of these point solutions. There was a really expensive part of their workflow; in banking, for example, fraud detection was an expensive part of the workflow, and they had all sorts of people who tried to do fraud detection, and AI has been a point solution there. But the biggest economic benefits are going to be around what we call these system solutions. To get a sense of that, I think it's useful to look at a previous generation of technology, a general purpose technology, and how it played out. If you've been paying attention to the hype around AI, you've heard people say things like: it's going to transform the way we work and the way we live, it is the next big technology, it's like the internal combustion engine, it's like computing, it's like electricity. And if we think about AI as a new electricity, I actually think that metaphor is much more powerful than many people appreciate. What do I mean by that? Edison's patent for the electric light bulb was in 1880.
It was clear in the 1880s that electricity was going to transform the way we lived and the way we worked, but it wasn't until the 1920s, 40 years later, that half of U.S. households and half of U.S. factories had adopted electricity. It took 40 years of wandering to figure out how this clearly transformative technology would actually impact most people at home and at work. What took so long is that they had to figure out what the technology really could do. What do I mean by that? In the 1880s, the logic of a factory was determined by power needs, because the steam engine or the water wheel would have been the center of the factory. If you remember your high school physics (you may or may not), energy dissipates with distance, and since every single machine in the factory had to be connected to the steam engine by belts, they tried to locate the machines as close as possible to the steam engine. So the logic of the factory, the micro-geography of the factory, was determined by the power needs of the various machines, and the workflow in the factory was determined by which machines needed to be closest to the power source. In the early days of electrification of factories, all they did was take out the steam engine and drop in an electric motor at that exact same point, and that's it; they didn't change anything else. They might have saved five, ten, or even 15 percent on energy costs, but that's it, and for most factory owners it wasn't worth it to save a little bit of energy, given they had to figure out how to get electricity distributed into the factory, how to set up the wires, how to deal with new fire risks, and how to connect all the machines that used to be connected through belts to that central power source. So even by 1900, less than five percent of U.S. factories were electrified. Then around 1900, people started to realize that electricity wasn't just cheap power; electricity was distributed power. What electricity did is it decoupled the power source from the machine, so you could put your machines anywhere you wanted; they were no longer constrained by the need to keep them close to the power source. Once that happened, we invented what you think of as the quintessential 20th-century factory, with inputs coming in one end and outputs coming out the other, modular production where the organization of the factory is determined by a logical workflow from inputs to outputs. Once that happened, we saw a rapid increase in the adoption of electricity in factories and a huge increase in the productivity and output of those factories that did adopt. It required the invention of an entirely new system in order to really take advantage of what the technology could offer. So what does this have to do with AI? With AI, it feels like we're in the 1890s: we're in those times between recognizing the potential of the technology and figuring out what those new systems look like. Once we figure out what those new systems look like, the ability to deliver value to our customers becomes extraordinary. We've seen it in a handful of industries already. The advertising industry has been transformed by better targeting; today's advertising industry looks very little like the Mad Men industry of the 1960s, and that is largely because of prediction technology and targeting. We've seen it a little bit in personal transportation, where Uber, Lyft, and others combine digital dispatch and navigational predictions to enable almost anybody to be a professional driver, assuming they know how to drive. But in most other industries it hasn't happened yet, and where we're going to see the huge upside potential is as we go industry by industry, reinventing ourselves like what happened in the advertising industry and ad tech over the past
decade. Okay, that's fascinating, but we focus on the technology space, and I would think that the opportunity for change there is just as great, or greater. So if you think about the industries we target, what do you see for technology industries, including life sciences and medical technology companies? Okay, so I think the upside is even bigger in healthcare, medical tech, and life sciences, but there's a handful of underlying challenges that are going to be key sources of resistance to really, ultimately, delivering better care and better medicine. There are two core barriers here. One is going to be regulatory: for very good reasons, we are careful about what life sciences and medical technology we allow onto the market. Look, if you see an ad that you don't like, who cares, it doesn't really matter; but if you receive a medical treatment that's the wrong treatment, that's a big deal, and because the stakes are so high there are reasons for a much more cautious regulatory environment. So that's challenge number one. Challenge number two is that the decision makers in life sciences and healthcare are often people who have been selected and trained around diagnosis, and diagnosis is fundamentally prediction: you're taking data about symptoms and filling in the missing information of the cause of those symptoms. Because doctors tend to be so central to decision making in healthcare, you should expect some resistance to a machine that might displace some of the central role the doctors play, perhaps empowering nurses and pharmacists and others instead. But at the same time as those challenges I just described, there are incredible opportunities. If we have machines that can diagnose effectively and at scale, there's a whole bunch of new opportunities, for example for treatments that we might not have imagined before. If diagnosis of disease is slow, and not that personalized, not that targeted, it might only be worth it to develop a couple of
different treatments for lung cancer, for example. But if you have a prediction machine that can diagnose not just the high-level disease but be very specific, at scale, for the entire population, about which particular kind of cancer this might be, then it becomes useful and worth it for the pharmaceutical companies, for example, to develop treatments for these narrowly defined diseases. Rare diseases often don't get treatments because there isn't a big enough market, but if you start diagnosing at scale, that creates a business opportunity on the other side of things. So there are real barriers in life sciences and healthcare, but there are some incredible opportunities, and more generally healthcare is an industry with a lot of room for productivity improvement and a lot of room for better health and better treatment for patients. I think it's a really exciting place. All right, I think that's a great segue to our other panelist. Amanda, from your vantage point, what new or increased risks do you see for those companies that are developing these AI solutions, and for those companies that are using the technology? Well, no question, there is great promise in this technology, especially for life sciences and healthcare, like Avi was pointing out, but this innovation does come with quite a bit of risk, and that's kind of our tagline at Travelers Tech and Life Sciences: innovation creates risk, and we insure it. So we understand this quite a bit, and as business leaders and AI developers and users, we all need to understand the risks that this tech presents. AI has been around a really long time, and at Travelers, within the technology and life sciences practice, our underwriters are really familiar with it and how to approach it. But it sort of seems like things are shifting right about now; you might even say we're entering a bit of a perfect storm. Adoption is increasing exponentially, there's
this kind of lack of corporate accountability, there's a massive lack of regulation, and there are just so many unknowns. Like Avi said, the AI is predicting, and when the AI doesn't have the data, it gets it wrong. Underwriters predict things too, and believe it or not, we get it wrong sometimes too. There are so many things that developers and users really need to think about, and I'm only going to highlight a couple. To start, when I think about the developers, I think about three things primarily: the security and privacy risk, the explainability risk, and the risk to reputation. For security and privacy, the data could possibly be used for unintended purposes; there is just no way for the developer to predict or foresee all of the use cases. Secondly, the explainability risk: the uncertainty in the decisions that are made by the AI system, and the lack of understanding of that decision-making process, especially if it's a black-box system. What those are is systems where the inputs or the processes are either hidden from public view or are just so gosh darn complex that a human can't tell how the AI was trained or how it got something wrong. Lastly, the risk to the company's reputation: if they fail to mitigate the risk while pursuing all of these awesome benefits, it could lead to public criticism, reputational harm, or costly investigations and lawsuits. Then I think about the flip side, the risks to users, and I think it's important to point out that, you know, Mike, Avi, you and I, we're all users, right? We use AI in our day-to-day lives. But there are business leaders at companies that use AI solutions provided by a third party, and I think it's really important to remember that you can't assume you aren't responsible for potential mistakes or bad outcomes; in many cases the businesses that buy an AI-enabled tool are still accountable for the programs, their outcomes, and their effects. So with that backdrop, when I think
about users, I think about a few things: safety, accountability, bias, and accuracy. So, safety: the risks associated with unintended results could possibly lead to injury, to death, to property damage; it really depends on the use case. Another risk I think about is accountability: if the AI doesn't work as intended and it leads to injury or loss, who is accountable? Am I accountable as the user? Is it the company that made the AI? Is it the creator of the software program that embedded the AI? These are super tough questions, and I don't have those answers, and what complicates matters is that when you use a black-box system, we can't tell what caused the error or who is accountable. Then I think about bias risk: the data set just might not be diverse enough, or it just might be incorrect, or the data labeling might just be wrong, and researchers are raising a lot of ethical questions these days, suggesting that AI could perpetuate the existing biases that we already have in society, invade our privacy, or spread misinformation. The last risk I think about is accuracy risk. We have such trust in our technology these days; there is just a high assumption of accuracy, and what we're seeing in some of the AI these days is hallucinations: the AI is just making up an answer, kind of like my four-year-old when he doesn't know, he just makes up the answer. These hallucinations are a bit of an error with a heck of a lot of confidence, and in the absence of having a human who can apply judgment to review that answer, it's just so easy for these hallucinations to perpetuate themselves. You think about ChatGPT and Bard; they're getting a lot of these hallucinations, and no one in the field has solved for this, and the question of whether we will is a matter of pretty intense debate. So because these systems deliver all of this information with what seems like complete confidence, it's so hard for users to tell what's right and wrong, and the speed
at which this misinformation can spread has just vastly increased. Oh, Amanda, that's kind of scary; I have this vision of a confident four-year-old out there making all sorts of important decisions for big corporations. But there's this recurring theme that AI is really only as good as the data that supports it, and obviously, depending on the situation, the risks associated with that inaccuracy could be vastly different. So, Amanda, when you think about the spectrum of risk created by AI, are there characteristics that make some AI higher hazard than other AI? Yes, in short, Mike, yes there are. As underwriters, we want to understand how things work; we want to consume lots of information so that we can identify the risks and predict the losses. To do this, we think a lot about the end use of the AI. Avi mentioned stakes: are the stakes really, really high, or are they pretty low if something goes wrong? So we put them on a risk spectrum from low to high. An example of the low side of the risk spectrum would be AI that is optimizing a web server, or maybe predicting what show I should watch tonight on Netflix; those are pretty low stakes. On the other side of the spectrum is AI that's diagnosing a medical condition that I have based on the lab results that were input into it; those would be the high stakes. The other characteristics that might help us discern high from low, Mike, are these: if there's no human involvement or no human oversight, that could be pretty high risk, or high stakes. Is the system transparent? Can we see how it was built, or can we see how it makes its decisions? If we can't, I would consider that on the high-risk side of the spectrum. Making the system available for review or audit by external parties will really help determine the error, and also the liability, if something were to go wrong, and that becomes so much more important in that high-stakes environment, and one
could argue there's a bit of a responsibility on the AI developer to provide that information. The other things that come to mind: are there clear guidelines set for the users of the system outlining how this AI should be used? Do the systems have limitations, and are those limitations widely known and visible to the user? Because no AI is perfect; they're just predicting, right, which is what Avi mentioned. So if it's high risk and we can't tell what the limitations are, that could be concerning. The last thing that comes to mind is: is there a feedback mechanism, a feedback loop, so that the user can report information to the developer if something goes haywire or if they happen to observe a hallucination? Okay, that makes a lot of sense. I'm thinking about one of the more highly visible efforts in AI that gains a lot of attention today, and that's autonomous vehicles, and you think about the decisions that we're trying to program into those vehicles. I don't know about the rest of the folks on the phone, but I'm not sure I'm ready to jump into a car with no steering wheel or brake pedal, so I understand the risk there. Avi, I think that raises another question. I'm curious: what do you think are some of the barriers or resistance you see companies facing as they contemplate the adoption of AI? So, Amanda just went through a whole bunch of things that can go wrong, and all of those are sources of barriers and resistance. As we're thinking through those, let's not forget the big picture, which is that humans are terrible drivers, right? We get into accidents all the time; we're really quite bad at it, and there's reason to expect that machines will reduce the number of accidents, even while all those risks that Amanda described will happen. They are auditable, there are biases, all of that will be there, but at the same time, in aggregate, there's reason to expect that they're going to be better than
humans. On writing, in ChatGPT, there are risks and biases again, but again, there's reason to expect that in many places it will be better than humans. In medical diagnosis, we'll identify mistakes and there will be problems, but the 25th percentile radiologist is a lot worse than the 90th percentile radiologist, and we should expect machines to be at least as good as the 50th percentile, and that can save a lot of people's lives, especially those who currently don't have access to the very best medical care. So if all of this is so amazing, then the question is: why aren't we jumping on all of it? There are some technical barriers to it, for sure, and in autonomous vehicles that's a big part of it, but there's more to it than that. So let's take a step back and think: what are the AIs that we have right now? They're prediction machines, and they help us make better decisions. They don't make decisions; humans make decisions, by deciding which predictions to make and what to do with those predictions once we have them. Now, who's going to resist better and better and better predictions? It's the people who already have it great. The people who benefit from the biases and the way the current system operates aren't going to like change; people in power tend not to like revolutions, right? If we're comfortable with the way things are now, that's where the resistance is going to come from. There's a story that happened in Major League Baseball a few years ago that I think demonstrates this idea really well. Think about what we ask our human umpires to do. There's a tiny little ball, about the size of my fist, going at 95 miles an hour over a plate, a piece of wood that's roughly the size of your computer screen, depending on the size of your computer screen, maybe even smaller, and there's a human who's asked to decide whether it goes over that plate between
somebody's shoulders and somebody's knees. Every ten pitches or so that person changes, and the height changes, and they have to do this hundreds of times over the course of three hours or so. That's crazy; that is not a human task, and it's amazing that umpires can even attempt to be close to accurate, and they're pretty good. But about 20 years ago, Major League Baseball realized that umpires make some mistakes and thought they could bring in a machine to call balls and strikes better. They vetted various technologies, and they found a machine that could identify whether a pitch was a ball or a strike better than the human umpires. They started to experiment with it, and the human umpires didn't really like it. But ultimately, in Major League Baseball, the umpires aren't the ones with decision-making power, so if the system was going to work, they were going to use it even if the umpires didn't like it. But the umpires weren't the only people who didn't like it; the superstars of the day also didn't like it. Two of the most prominent superstars of the time were Barry Bonds and Curt Schilling, and they hated this new system. Why? Well, when Barry Bonds was at the plate and he didn't swing, the umpires gave him the benefit of the doubt: if it was close, it was called a ball. So when we brought in a fair system, when Major League Baseball decided that everybody now has the same strike zone, Barry Bonds had a lot more strikes called against him. He didn't like that; he benefited from the biases inherent in the old system, and bringing in a new, better, fairer system might have helped the nobodies, but it didn't help the superstars. They resisted so much that baseball ended up giving up on that system for a long time and went back to the human decision, because we liked our superstars benefiting from the inherent biases of the human umpires. So challenge number one is where the resistance is going to
come from: a lot of the resistance is going to come from the people who benefit from the way things are today. The second challenge is doing system-level change. If we're moving beyond a point solution, beyond taking something out of an existing workflow, dropping in the AI, and not messing with anything else, and actually trying to deliver a new kind of value to some of our stakeholders, well, that requires coordination across different parts of the organization, and that typically means you need CEO-level buy-in for what you're trying to do, because you need marketing to talk to finance, you need underwriting to talk to marketing, and once you have everybody talking to each other, they might not see the world in the same way. So system-level change is difficult, and if system-level change is what's needed to make the millions or more that it takes to invest in an excellent AI system worthwhile, then those coordination challenges are going to be a major barrier to making anything happen. Okay, that's interesting, Avi, but it feels like today the AI tagline is ubiquitous: everybody's product or service has AI. So how important is it for technology companies to invest in AI, knowing that the risks of AI are evolving just as fast? The starting point for any strategy shouldn't be the technology; the starting point for strategy should be your mission. What are you actually trying to accomplish as an organization? Then, when you think through what this new technology offers, don't think through how you deliver on your mission well; think about the various things that you do where you fail to deliver on your mission. How much of your standard operating procedures are about compensating your customers or other stakeholders for the fact that you don't do what you should do? What do I mean by that? Here's an example: think about airports. Take Seoul Incheon International Airport; by many accounts it is the best airport in the
world. It has fantastic shopping, great restaurants, big open spaces, greenery; it's about as spectacular as an airport gets. But this isn't how the super rich fly. The super rich don't fly through these beautiful multi-billion dollar structures; the super rich fly through tiny sheds. The airports at private terminals look nothing like these beautiful structures: they have low ceilings, they're cramped, they're dark, and if they have a magazine rack, it might be the same magazine over and over and over again. How does that make sense? How do the people who can afford the ultimate in air transportation get these crappy airports, while the rest of us get these beautiful multi-billion dollar structures? Well, the reason is that nobody wants to spend time at an airport. The reason we have these multi-billion dollar airports with fantastic shopping and great restaurants and all that is because these airports are failing to deliver on their mission. Seoul Incheon's mission is to deliver smooth air transportation. Restaurants aren't about smooth air transportation; shopping isn't about smooth air transportation. That's about the fact that you're stuck at the airport and not on the plane. The ultimate in smooth air transportation would be that you have a great prediction about how long it's going to take to get to the airport, through security, and to the gate, and you arrive at the airport, walk to the gate, get on the plane, and it takes off. That's how the super rich get to fly; that's the ultimate customer experience. And when you think about airports, these multi-billion dollar structures, so many of their standard operating procedures are about failing to deliver on their mission. In any industry, there are all sorts of things that you do that aren't about delivering what you really could, but about the fact that you try to compensate your stakeholders for your failures. Looking at those places, that's where the biggest opportunities for change arise, and also where the biggest challenges in terms of
startups coming in and disrupting the entire industry, are going to take place.

Okay, so what I heard you say is that successful adoption of AI requires a really thought-through process and improving competitive advantages, and as you said earlier, point solutions are really not going to drive huge competitive advantage; it has to be system-level change. But at the same time, given the potential that AI offers, I think it's safe to assume that we will see more and more companies incorporating AI into their products and services. So Amanda, with all of these companies exploring AI, what types of AI risk management resources are available?

Yeah, Mike, the development and adoption of AI is just going to continue to increase, and it's going to lead to more intense debate among big tech, politicians, and litigators. I think we'll actually see the risk management resources that are available increase a great deal, and maybe, Avi and Mike, we should come back in a year or two and do this again and see what else is available out there for these risk managers to help mitigate this risk. But for now, there are a couple of things that come to mind. First, there's an organization out there called NIST, the National Institute of Standards and Technology, and they created an AI Risk Management Framework that identifies the categories of risk associated with AI. Risk managers at companies that develop or use AI, as well as the agents and brokers that might counsel these customers, should really be familiar with this framework. What it does is break down the seven categories of risk associated with AI, some of which we already touched on today: accountability, safety, reliability, bias, security, privacy, and explainability. That's all well and good and incredibly helpful, and I think we should all be really familiar with the NIST framework. The other thing that comes to mind, though, is just good
old-fashioned contractual risk transfer, also known as good, strong contracts, to ensure that the liability is placed with the party that has the most control over it. A good risk management program can help organizations that develop AI better protect themselves by reducing the financial exposures they may face. And I'm going to add that at Travelers Tech and Life Sciences, we're producing a technical paper that walks through this in pretty great detail. It's coming out next month, and we'll be sure that all of you get a copy of it. So Mike, that's how I would answer that.

Fantastic. So Avi and Amanda, this has been a great conversation. I've got one last question that I'll ask both of you: if you could give one piece of advice to risk managers and organizations about the steps they can take to protect themselves and their organizations from harm as they start down this AI journey, what would it be? Avi, I'll ask you first.

Okay. I'm going to return to something I said way at the beginning, which is that these are prediction machines, and they're statistical predictions. This is computational stats. We might not have so many people on the webinar if we called it "understanding computational statistics," but that's what's happening. We talk about it as AI, but really it's computational stats, and the advances in computational stats that we've seen in the last 20 years are extraordinary. But once you recognize that it's computational stats, and not, you know, some artificial intelligence, you realize that what you have when you're using these systems is an estimate, and that estimate has variance. The biggest mistake I've seen companies make over and over again in their deployment of AI systems is to think that the estimate, the prediction that comes out of an AI, is ground truth, and to forget that it is an estimate with a confidence interval, with a standard error. Once you embrace, on the risk and risk management side of
things, that you have uncertainty, you can deliver much better products and much better services and mitigate the big-picture potential for harm. If you treat what comes out of the AI as for sure the right thing, you are for sure going to be overconfident, and if the stakes are high, that will lead to disaster. But if you recognize that there's uncertainty in those predictions, and you build in systems to account for and accommodate that uncertainty, then we can build systems that are much, much better than whatever we have now.

What I might add to that is, and Avi, it's a good thing you didn't name this webinar, because calling it computational stats, we wouldn't have had such great engagement. But Mike, I can't pick just one. When I think about a couple of pieces of advice for companies to protect themselves, the first, similar to what Avi said, is really recognizing limitations: the creators and developers being really transparent about the data's limitations to the users, so that the users understand what they're going to get when they use the system. And then, additionally, creating a feedback mechanism so that if users experience hallucinations, or something doesn't work in the system, they can circle back to the developers, and the developers can further improve the technology. The last thing I'll mention, for all of those tech and life science companies listening out there: I would really encourage you to partner with an agent and broker, as well as an insurance carrier like Travelers, a company that understands the technology as well as how to underwrite it, so that we can all work together in this AI evolution journey.

All right, thank you. Okay, with that, we're going to bring the webinar to its close. I do want to thank both Avi and Amanda for their participation and the insights they shared. I also want to thank all of you for joining us today. Hopefully you'll be able to take some of what you heard and put it
to good use. But with that, I thank all of you for attending.
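Avi's closing point, that an AI's output is an estimate with a standard error, not ground truth, can be made concrete with a minimal sketch. Everything below is hypothetical and purely illustrative (the function names, thresholds, and numbers are assumptions, not anything from the discussion): the idea is to combine several model predictions into an estimate plus a standard error, act automatically only when the uncertainty is small, and route uncertain cases to human review.

```python
# Minimal sketch of "treat the prediction as an estimate, not ground truth".
# All names and thresholds here are hypothetical, for illustration only.
import statistics

def predict_with_uncertainty(ensemble_outputs):
    """Combine predictions from an ensemble of models (or bootstrap runs)
    into a point estimate and a standard error of that estimate."""
    mean = statistics.mean(ensemble_outputs)
    se = statistics.stdev(ensemble_outputs) / len(ensemble_outputs) ** 0.5
    return mean, se

def decide(ensemble_outputs, threshold, max_se):
    """Act on the prediction only when its uncertainty is acceptable;
    otherwise defer the case to human review."""
    estimate, se = predict_with_uncertainty(ensemble_outputs)
    if se > max_se:
        return "human_review"      # prediction too uncertain to automate
    return "approve" if estimate >= threshold else "decline"

# A tight ensemble yields an automated decision; a scattered one defers.
print(decide([0.82, 0.80, 0.81, 0.83], threshold=0.75, max_se=0.05))  # approve
print(decide([0.95, 0.40, 0.70, 0.20], threshold=0.75, max_se=0.05))  # human_review
```

The design choice mirrors the discussion: the high-stakes failure mode is not a wrong estimate but acting on it as if it were certain, so the uncertainty check sits in front of the decision rule rather than being bolted on afterward.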
