Seattle University 2024 Ethics & Technology Conference: Designing the Guardrails


[Applause] All right, I hope everybody enjoyed their lunch and the lunch conversations, and that you will take those with you and keep them going. Now we're going to start off session four for today. Stephanie Simmons, who is Director of Regulatory Affairs and Risk Management for Responsible AI at Microsoft, is going to be talking about pursuing the potential while building boundaries.

Stephanie: Thank you for the introduction, and thank you for the invitation to be here; it's great to see you. I would like to share with you my five-year journey with responsible AI at Microsoft. Responsible AI is an imaginative process of discerning the pros and cons of AI and its applications. The goal is to make informed choices about the benefits and risks, and to take actions based on those choices. Responsible AI involves community deliberation among AI providers, teaming companies, stakeholders, and the world at large. It is both a discerning and a deliberative process. Imagine how the social network companies could have avoided their mistakes if they'd followed the guiding principles of responsible AI. Imagine if they had applied a guided thought experiment about their impact on children and others.

With the rollout of ChatGPT, some have echoed Dorothy in The Wizard of Oz, saying, "Toto, I have a feeling we're not in Kansas anymore." But for now we remain fairly close to home, and responsible AI is the best guide for our journey. The man behind the curtain is us: AI, while seemingly powerful and autonomous, is ultimately created and controlled by humans, and we are responsible for our choices and our future. I asked the Bing image generator in Copilot to depict humanity as the man behind the curtain, and this is what it came up with, which is super creepy if you actually look at the faces. I prompted it many times to try to get a better result, and this was the best that it did, making the wizard the witch.
Anyway, this makes the case for why we need to be in charge. Let me now share my journey with Microsoft Health Futures and responsible AI. I've been working with legal and regulatory issues raised by new technologies for the past 25 years, as an attorney in private practice and then in-house, and more recently as a business professional supporting regulatory strategy and research partnerships. We are all excited by what AI technology can enable today, and by its future potential in health and medicine, given our overburdened health care system and exorbitant drug development costs.

I've been working in Microsoft Research for seven years within a team called Health Futures. We are a global, multi-disciplinary team of diverse backgrounds, including computer scientists, clinicians, data scientists, regulatory professionals, and more. The mission of Health Futures is to enable a future of personalized medicine for all and a continuously learning health system. It's a big ambition. We learn by partnering with companies in research collaborations. These collaborations allow us to deeply understand our partners' challenges and top use cases. We learn together about effective ways to apply AI to their data and develop proof points for real-world impact; we are hearing about some of those applications today in this conference.

Within Health Futures we have many exciting areas of discovery. We have collaborations exploring clinical real-world data and multimodal models to generate insights into population and individual health, models to assist radiologists, scalable tools to analyze huge biomedical data sets, and models that process pathology images to make earlier and more accurate disease diagnoses, just to name a few. Another important research area for us is the impact of AI on society and its sociotechnical risks. Sociotechnical AI risks emerge from the interplay of technical development decisions with decisions about how a system is used, who operates it, and the social context in which it
is deployed. While the potential good seems unlimited, it is a daunting challenge to anticipate downstream impact and the unintended consequences of AI, and it has become harder in the last two years with the scaling of generative AI capabilities and models like GPT-4. Responsible AI is not a checklist exercise but rather a dynamic area of active research and discovery.

I was trained in responsible AI in 2019, when our Office of Responsible AI was established at Microsoft. This slide shows the development of our program and its important milestones. In 2018 we led the industry in creating a set of human-centered principles to guide the responsible creation of AI. We adopted our first Responsible AI Standard in 2019, and in 2020 we signed the Rome Call for AI Ethics. Our approach has been proactive and deliberative. Microsoft's responsible AI program is based on six principles, as you can see: accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness. We apply these to AI systems via our corporate standard, the Office of Responsible AI, and our Champs community. Responsible AI principles and practices help organizations address risks, innovate, and create value, and design systems that are safe and fair at every level, including the machine learning model, the software, the user interface, and the rules for accessing an application.

We're always learning and improving how we evaluate AI technologies. There's a lot that we don't know yet, but one thing we know for sure from being on the front lines of testing AI systems is that this work requires collaboration across many domain experts, as we've been talking about today. Recently Microsoft issued its first annual Responsible AI Transparency Report, which describes how we operationalize responsible AI. Our approach encourages stakeholders to think expansively about the benefits and risks, to account for intended uses and potential misuses, and to be clear about the limitations of the system.
Developers, deployers, and users of these systems all have a responsibility to optimize the intended uses while guarding against misuse and abuse. Our transparency report describes how Microsoft's program is operationalized across research, policy, and engineering teams through an iterative cycle of govern, map, measure, and manage. Microsoft's program is aligned with the NIST AI Risk Management Framework, which was released by the Department of Commerce in early 2023 and to which we contributed. I'll briefly describe that process.

Governance contextualizes the map, measure, and manage process. We've implemented policies and practices to encourage a culture of risk management across the development cycle, and we've grown our responsible AI Champs community. Mapping risks is the critical first, and iterative, step towards measuring and managing them. This step involves conducting AI impact assessments, threat modeling, red teaming, and planning mitigations. Once we have a sense for what the risk areas are, we systematically measure the prevalence of those risks, and we evaluate how our mitigations have performed against defined metrics. Examples of risks and associated harms for generative AI include jailbreak success rate, harmful content generation, and ungrounded content generation, and these results help us assess the appropriateness of a generative application in a particular context. Next, we manage the risks at the platform and application level. We safeguard against previously unknown risks by building ongoing performance monitoring, feedback channels, processes for incident response, and technical mechanisms for rolling applications back. A controlled release to a limited number of users, followed by additional phased releases, helps us to map, measure, and manage risks that emerge during use. As a result, we can be more confident that an application is behaving in the intended way before a wider audience accesses it. About a month ago, an update to the NIST framework was released to
address generative AI risks. That generative AI guidance document identifies 13 risks that are either novel or exacerbated by generative AI, and actions that developers can take to manage them. It notes that some generative AI risks are unknown, and others may be known but difficult to estimate given the wide range of stakeholders, uses, inputs, and outputs. This document is a great source for understanding generative AI risks.

As an industry leader in responsible AI, Microsoft has been working with government policy makers to establish principles and practices for the safe development and deployment of AI. In the summer of 2023, along with several other major technology companies, we announced with the White House a series of voluntary commitments to manage AI risks and advance responsible innovation. Our transparency report describes how we implemented systematic testing and evaluation procedures, and how we require all AI systems, models, and code to be released with transparency documentation describing how the system was developed, its intended uses, and known limitations. Later in 2023 we released a proposed blueprint for governing AI with five high-level recommendations. We're calling on governments to continue and build upon the work to develop safety frameworks, ensure that they can control and disable high-risk systems that control critical infrastructure, and regulate in a way that reflects the technology architecture.

As described in the blueprint, there are three broad, important points to our policy position on regulating AI. I'm going to take a sip of water before I say what those are. First, regulation should be risk-based, considering the technology architecture of AI and the role of each stakeholder in its development and deployment. The right regulatory responsibilities should be placed on the right stakeholders based on their role in managing the risk in a particular use case. Second, regulation should prioritize transparency: it's
critical that the entire ecosystem is equipped with comprehensive information to make informed decisions about the use of AI. The right guardrails for the responsible use of AI may not be limited to technology companies and governments; every organization within health and life sciences that creates or uses AI applications will need to develop and implement governance systems, and decide whether, when, and how AI should be used. Finally, regulation should encourage collaboration. It's important that the industry unites to share insights and engage in holistic deliberations about the impact of AI on healthcare. Microsoft shares its experience and knowledge through active collaboration with industry groups, customers, and regulators. This collective effort will ensure that all stakeholders are equipped to navigate the complexities of health AI.

Although we still don't have a federal regulation, last fall the Biden administration issued an executive order on AI, which reflected much of what was in our voluntary commitments and a policy blueprint that we had issued earlier in the year. The executive order requires agencies to develop programs to leverage the benefits of AI while mitigating the risks. The NIST generative AI document that I mentioned was released as part of the Department of Commerce's response to the executive order.

In our work with policy makers and standards organizations, we've been meeting with stakeholders in health to understand their concerns and provide potential policy suggestions. Some of the most poignant learnings have been: the need for quality data to train models, ensuring fair representation of underserved populations; the need for human oversight, even in some cases where the risk appears low; the need for clarity regarding accountability and liability; the availability of insurance coverage for AI risk; the need for workforce training and upskilling; and the need to promote greater access to AI tools and infrastructure. Microsoft engages
with industry associations and coalitions. Microsoft was a founding member of CHAI, the Coalition for Health AI. CHAI is a community of academic health systems, organizations, and expert practitioners of AI and data science that aims to develop guidelines and guardrails to promote the adoption of credible, fair, and transparent AI systems. In collaboration with leading healthcare organizations, we also recently helped to form a coalition called TRAIN, which stands for the Trustworthy and Responsible AI Network, one of the first health AI networks aimed at operationalizing responsible AI principles. TRAIN shares best practices, enables registration of AI used for clinical care or clinical operations, provides measurement tools, and facilitates the development of an outcomes registry.

Despite the tremendous progress and great work that has been done by these stakeholders, voluntary commitments and guidelines are not good enough for the healthcare sector. Stakeholders want rules requiring transparency, privacy protection, and protection against bias, as well as clear regulatory approval processes. Not only are domestic laws needed to provide clarity and assurance; international cooperation is also needed. Our government affairs teams have been involved in providing feedback on the EU AI Act as it went through the legislative process over the last several years. Our advocacy has been based on our belief that effective regulation is risk-based, is process- and outcomes-focused rather than prescriptive, and addresses the responsibilities of technology developers and deployers, calibrating requirements appropriately across the technology stack. Although more clarity is needed, the Act is a step in the right direction towards protecting many important human rights. We hope to see continued development of international standards and best practices.

Two weeks ago, Pope Francis spoke at the G7 Summit about the importance of guiding AI development to serve the interests of humanity. He's been the subject of deepfake
images and is concerned about disinformation and the potential of AI to change the way we conceive of our identity as human beings. The Vatican signed the Rome Call for AI Ethics in 2020, with Microsoft and others agreeing to voluntary commitments to promote transparency and accountability, and Microsoft credited those commitments as helping to guide us in making the decision not to release an AI voice-cloning system.

One recent example of international harmonization is the guidance issued jointly by the FDA, Health Canada, and the MHRA on transparency for machine-learning-enabled medical devices. This guidance dives deeply into the topic of transparency and examines the who, what, when, where, why, and how of ensuring that transparency is consistently supported to promote the development of safe and effective medical devices. Last month Microsoft released a monograph called Global Governance: Goals and Lessons for AI, on how international institutions might prepare for this challenge by examining lessons from the past in regulating other areas, such as civil aviation, financial markets, and climate science. It's another tool to make us better informed about how to harness the benefits of AI while keeping it safe and trustworthy.

Other encouraging news came from a recent AI Safety Summit co-hosted by the UK and Korean governments. Sixteen global technology companies, including Microsoft, OpenAI, Mistral, Inflection AI, Meta, Amazon, and companies from China and the UAE, have agreed to voluntary safety measures for frontier models. Frontier models are large-scale models that push the boundaries of AI and have broad new capabilities.

For further reading, these are some books written by Microsoft executives that provide context on how our company thinks about these issues and has been approaching them over the years. I was going to say a few words about those, but I don't know if I have time. How much time do we have? Are we out?
Anyway, they're all really good books; I recommend them. Tools and Weapons was published back in 2019, and it's really Brad's statement about these fundamental questions of how we balance the rewards of using new technologies as tools to improve our lives with the perils of those technologies being used as weapons. I really respect where he's coming from, and where our company's coming from, on this, because the premise is that companies like Microsoft that produce powerful technologies have a responsibility for the future, and for helping everyone come together and understand how to regulate. The AI Revolution in Medicine was written by the president of Microsoft Research, Peter Lee, in 2023, on some use cases they looked at for GPT-4's capabilities for data extraction, summarization, and reasoning. In that book he appeals to people who work in healthcare and life sciences to play an active role in understanding the ethical and legal questions being raised by these technologies, which is exactly what we're doing here. And the AI for Good book was just published this year and describes some projects that Microsoft Philanthropies has been doing with partner organizations.

Back to The Wizard of Oz. I've described some of the ways that Microsoft has been engaging to raise awareness of the promise and peril of AI. Using foresight and hindsight, we must continually discern its risks and benefits, how it can be used for the common good, and how it should be oriented to the dignity of the person. We must deliberate and collaborate; each of us has a responsibility. We are the man behind the curtain. Many difficult open questions remain. The most important thing I'd like you to take away from this talk is that we're on this journey together, and the responsible AI principles and frameworks are a guide. The level of international cooperation is encouraging and cause for optimism. Now is the time to get
engaged in learning and contributing to the work of making AI trustworthy and ethical. Today, let's continue to deliberate in a civil manner, raising important questions, debating, and testing new solutions to enable better health care for all. That's all I have. Thank you.

2024-08-05
