Responsible AI: Opportunity and Responsibility in the Era of Artificial Intelligence | Leigh Felton

>> I'd like to introduce Leigh Felton. She manages the Responsible AI Champs community and drives enablement of responsible AI across Microsoft by assisting organizations and driving implementation planning for the Responsible AI Standard, which supports our internal communications and awareness of responsible AI. She's been at Microsoft for, cumulatively, over 15 years, where she's worked as Chief of Staff and has led PR and AR programs for the company. During her time away from Microsoft, Leigh had the opportunity to work for the State of Washington, appointed by the governor to lead the Business Services Division, including all marketing and communications programs for the State, imports and exports, as well as economic development. Leigh has an MBA in Business Communications and lives with her husband and family in Woodinville, Denmark, and Mississippi.

So, Leigh, take it away. >> Thank you. Thank you very much, Wendy. I sincerely appreciate the opportunity to be here today and to share some of the stories from the journey Microsoft is on as it relates to AI technologies and the understanding that AI has the opportunity to have such a tremendous impact on all people in society.

And so, again, my name is Leigh Felton. I'm the Director of Community and Communications. I sit in the Office of Responsible AI at Microsoft, and I just want to take you through some of the work Microsoft has done over the last two and a half to three years with regard to really digging into AI technologies and helping to determine how they can benefit society and the communities within it. I'd like to start with this quote here, and this is from Satya Nadella, the CEO of Microsoft.

We've seen movies and we've seen TV shows about the future and about AI technologies. Typically, in those movies, one of two things happens. Either the future is completely overrun and destroyed by technology, so technology is this horrible, evil thing in the world, or our entire future is based on technology and we're so reliant and dependent on it that it has basically saved society. Those are two very extreme views. I love this quote from Satya because we have to remember, we're not talking about technology itself being good or being evil.

It's not the technology that is good or evil. It's the people behind the technology. It's the people who are designing, developing, distributing, deploying, and using those technologies. It's those people who, based on the values they're instilling and incorporating into the technologies, really have the responsibility. As we look specifically at Microsoft and our belief in what technology can do to bring communities together, to enable access to large amounts of data and information, and to provide the kinds of technologies everyone should be able to benefit from, we need to make sure that the people in our company, and beyond it, have those values and understand the impact the technology they're creating can have on the world. And so I often get asked the question, why AI? Instilling responsibility in the technologies we're creating could apply to any type of technology.

Why, all of a sudden, is everybody paying attention to AI and thinking AI is something that is different, that is special? The answer is because AI is different and it is special. Of all the technologies that have come before it, AI has the closest proximity to mimicking human intelligence. And a lot of it is a black box, so you literally don't know and can't understand what it's learning or why it's making some of the decisions that it's making. If you look at the pace of innovation and how fast these technologies are coming into society, growing, and expanding, it's really something that can quickly get ahead of us. Because it's different, and because it's something people come to rely on more and more, it's highly important that we make sure we're creating it in a responsible way.

Not to mention that in the past, you used a calculator: you put in two times two and got four. You trust that calculator. People have become dependent on trusting technology, assuming that whatever answer it provides must be the truth. That's not necessarily the case with AI, correct? Just because it comes back and says this person or that person is more likely to offend, or more likely to succeed in school, or more likely to succeed in this job, that doesn't necessarily mean that's actually the answer. It's one element, it's one input. We need to make sure that people aren't so dependent on AI that they forget it's something that is just providing data and statistical analysis.

As we think about the opportunities with AI, as I said, Microsoft really believes that AI can make a tremendous difference. Think about healthcare and being able to help predict diseases, help identify cases early on, help with triaging, help with filling prescriptions, help with medication management. Think about retail, helping to identify what is most valuable and most useful to particular customers and consumers. Think about financial services. There is a huge amount of AI technology being created these days specifically around financial services, lending institutions, how we're doing banking, and a lot of other things.

Then you get to manufacturing as well. As you think about manufacturing and the amount of time AI can save in helping people on the floor learn how to do their roles and how to put things together using these technologies, it can be a huge, huge benefit. But on the flip side of that, we also know that there are some concerns. There are some consequences if these technologies are not developed in a responsible way.

As I think specifically about deep fakes, I know that there are videos, for instance, of Obama. What's scary is that some of these videos are not real videos; they're AI-generated videos. Without the training or the technology to determine what's real and what's not, imagine what nefarious actors can do if they use those systems to trick people into thinking, "This is actually real, this is actually what's going on." How many of you have received, whether through Facebook or some other social media, an email or news clipping or some kind of story from a friend that says, "Oh, my gosh, have you seen X? Have you seen what happened?" It's something that is completely unbelievable, completely out of this world, and then you dig into it and you realize, "That's not real." You find out more information and determine that it is actually not real, but it's being spread out there as though it's a real thing.

Deep fakes are just one of the many ways AI technologies can actually have a detrimental impact. There's a long list here. I'm not going to get into it all because I don't have a lot of time, but I just want to, real quickly, go over some of the experiences we've seen, whether at Microsoft or while working with other organizations in the industry. As we think about our principles: Microsoft has six responsible AI principles. Those are fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For all of the products that we create, design, develop, and deploy, we think deliberately about each one of those principles in determining, is this technology aligned to our principles? Take, for instance, the principle of fairness. This is not just a Microsoft thing; it has happened on just about every search engine you can test. You type in the word CEO.

What happens? The majority of the images that come back are of white males. It's not representative, and it's not showing the true picture that there are other types of CEOs: there are female CEOs, Black CEOs, Black female CEOs. The images you receive from these technologies reflect the fact that the majority of CEOs are white men, so that's what gets pushed up. We have to be able to go in, identify these as issues, and resolve them. You can also think about other AI technologies that work with images and recognize people with diverse skin tones and diverse appearances. I know for me personally, being a Black female in the neurodiverse community, I have had my own personal experiences.

For instance, take Windows Hello. My husband is Danish, and when Windows Hello came out, he just loved it. Windows Hello, for those of you who don't know, basically means you can pick up your computer, your PC, and based on your camera, the computer just recognizes you and turns on. Your face is your password. For my husband that worked, I would say, 100% of the time, but I don't think we ever say anything works 100% of the time. So let's say it worked 99.9999% of the time for my husband.

For me, it worked maybe 0.999% of the time. Not only was it frustrating, and you could say, well, just turn it off and don't use it, it was a horrible experience. It actually frustrated me that every time my husband walked up to his computer, the machine would just turn on because it recognized him. So I wrote to the Microsoft team behind it, and I said, as a Black female, the experience is absolutely horrendous.

I know it's not just skin tone that the technology has issues with. As a Black female, for me personally, I change my hair a lot. Whether I wear it up, pulled back, or curly (I don't straighten my hair anymore), I wear it in a lot of different styles, and that shouldn't matter. I'm not the only Black female whose hair can go in a lot of different directions. I'm proud of that, and I don't want to have to wear my hair the way mainstream society believes hair should look, but I should still be recognized.
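A gap like the one between my experience and my husband's is exactly what disaggregated evaluation is meant to surface before a product ships. As a rough illustrative sketch only (this is not how the Windows Hello team actually tests; the data, column names, and the use of the open-source Fairlearn library are assumptions for illustration), you can slice a recognition metric by subgroup, such as skin tone and hairstyle, instead of reporting a single overall number:

```python
# Illustrative sketch only: disaggregating a face-recognition metric by subgroup.
# The data, column names, and groups below are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

# Hypothetical evaluation log: one row per unlock attempt by an enrolled user.
attempts = pd.DataFrame({
    "skin_tone":  ["light", "light", "dark", "dark", "dark", "light"],
    "hairstyle":  ["short", "short", "curly", "up", "pulled_back", "short"],
    "recognized": [1, 1, 0, 0, 1, 1],   # did the system accept the genuine user?
})
y_true = [1] * len(attempts)            # every attempt here is by the genuine user

mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=attempts["recognized"],
    sensitive_features=attempts[["skin_tone", "hairstyle"]],
)
print(mf.overall)       # the single headline number that hides the gap
print(mf.by_group)      # true-accept rate per (skin_tone, hairstyle) subgroup
print(mf.difference())  # largest gap between any two subgroups
```

The code is the easy part; deciding which subgroups matter and collecting representative evaluation data from people who don't look like the team building the system is the hard part, which is the point of the story.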

So it's important that we take into consideration individuals who don't look like a lot of the people designing these technologies, that we expand that, and that we make sure we have diverse groups of people who are not just testing these systems but are part of creating them. Here's another situation, in terms of fairness and transparency. I'm not sure if any of you are familiar with some of the AI systems used in the judicial system to assess which individuals with criminal histories are likely to re-offend.

There was a system that judges were using, and it basically brought back a score, on a range of, I believe, 1 to 10, based on all of the data fed into the system, predicting whether or not an individual was going to re-offend. What was found is that it was saying, at a dramatically higher rate, that Black people were more likely to re-offend, even when they had a lesser or an equal criminal record. It said they were way more likely to re-offend than a white person. Not only is that problematic for obvious reasons, it becomes even more problematic because of the transparency element: these judges, first of all, didn't know why the system was making the decisions it made.

Whether someone got a score of seven or eight, the judges didn't know what the difference meant: what does it mean if this individual has a score of seven versus a score of eight? Do you know that there can be only one little line in the code that makes an individual get one score over another? They had no idea why the system was making those predictions or what data was behind them. And number two, with systems like this, which we know exist not just in the US but around the world, over time people just come to rely on them. We have heavy workloads. If technology is supposed to make our lives easier, then we come to rely on it. It got to the point where judges were just relying on it: okay, this is the score that came back, and instead of digging into it and taking it as one input, I'm taking it as the truth and making decisions. What happens? We increase societal biases and we allow technology to replicate the biases that individuals have.
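To make the fairness and transparency problem concrete, here is a rough illustrative audit sketch (hypothetical data and threshold, not the actual system or its real methodology): among people who did not re-offend, compare how often each group was still flagged as high risk.

```python
# Illustrative audit sketch with made-up data; not the real system or its scores.
import pandas as pd

cases = pd.DataFrame({
    "group":      ["Black", "Black", "Black", "white", "white", "white"],
    "risk_score": [8, 7, 3, 4, 2, 7],   # 1-10 score returned to the judge
    "reoffended": [0, 0, 1, 0, 0, 1],   # observed outcome after release
})

HIGH_RISK = 7                            # assumed cutoff for "high risk"
cases["flagged_high_risk"] = cases["risk_score"] >= HIGH_RISK

# False positive rate per group: among people who did NOT re-offend,
# how often were they nonetheless flagged as high risk?
fpr_by_group = (
    cases[cases["reoffended"] == 0]
    .groupby("group")["flagged_high_risk"]
    .mean()
)
print(fpr_by_group)  # a large gap between groups is the disparity described above
```

And even a score that passes an audit like this is still only one input for the judge, not the decision itself, which is the transparency and reliance point.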

We have to get on top of that and realize it's not OK. Microsoft has done a lot, I would say, over the last three and a half to four years, of looking at this and saying that there has to be a change. I'm not going to say that we're there, that we've figured it out,

that we're done with this journey and this is what you need to do. We are absolutely not there. We're still on our journey, but the main thing is that the journey has to start somewhere.

There has to be a recognition that this actually is a problem and can be a significant problem. Satya recognized this back in 2016 when he published an article in Slate magazine that talked about the fact that organizations can no longer just create cool technology, say, "Hey, this technology is so cool," and then throw it over the wall and say it's up to the customer, that the customer or the end user is accountable for how it gets used. It is not about creating cool technology for cool technology's sake; it is about people being at the center of the technology we're developing. If you're not thinking about the end goal, about who is going to use this technology and how those individuals are going to be impacted by it, then you need to start over. That's what we launched at Microsoft.

We launched a human-centered approach to how we are designing, developing, and deploying AI technologies, which led us to creating our Aether Committee. Aether stands for AI, Ethics, and Effects in Engineering and Research. That committee was basically created as an advisory board to the SLT, the Microsoft Senior Leadership Team, to help the team look around the corner in terms of where technology is going, what types of things we need to be concerned about, and how technology can impact individual communities as well as society as a whole. From that committee, Microsoft created the six principles of responsible AI that I mentioned before. Those were published in a book called 'The Future Computed', which is still available in an online version. 'The Future Computed' laid out those six principles and why each one of them is highly important and should be considered. From those principles, we started as a company to socialize them, to make sure that it's not just our field teams, who are working with customers and partners on the technologies we're releasing into society,

but also our internal teams who are aware that we have these principles and that the foundation of everything we create needs to be based on them. But having principles wasn't enough. Take fairness: all AI systems should treat people fairly. What does that actually mean? How do you take that sentence and make it tangible to an engineer, who asks, "OK, all AI systems should treat people fairly; how do I make this system that I'm working on treat people fairly?" We realized we needed to take it a step further, and that's when the Aether Committee, as well as the Microsoft Senior Leadership Team, created the Office of Responsible AI. That was created last year.

Again, that's the group I sit in, and the Office of Responsible AI has the commitment of taking those principles, actually making them tangible, and creating the policies and tools that teams can use, really thinking through how we incorporate these principles into the way we're building our products. Again, these are the principles I've talked about. I don't have time to go into each one of them. We have lots of information available: there's an AI Business School, which is an external site if you're external to Microsoft. If you're internal to Microsoft, we also have the Office of Responsible AI site, which has lots of resources specific to these principles.

Regardless of anything else we're creating, the principles are the focus of the work. Anything we're developing, any tools we're developing, new resources we're developing, any policies we're putting in place, it all surrounds these principles and making sure that any of the technologies we're putting out there are aligned to the values within Microsoft. I'll pause there to say, as well, that we get asked the question, "Well, isn't that just a US-centric view or a Redmond-centric view?" We sometimes get asked that question from internal teams as well.

This is what I say to them. Microsoft is not a Redmond company. Microsoft is not a US-centric company. Microsoft is a global, worldwide company, and so it has global, worldwide responsibilities to make sure that the things we're doing have a positive impact for all of those communities around the world. As I think specifically about why I came back to Microsoft: I left Microsoft after being here for a long time because I wanted to go into consulting.

I wanted to do business strategy consulting, and I had a great opportunity to do that. I began strategy consulting on this work within Microsoft, and the idea that we actually have the opportunity to make a difference in the technologies we're putting out there was so near and dear to my heart that I did not hesitate to come back and work on this full time. I talked about the Aether Committee and the fact that it really looks across the principles and provides deep research, tools, and knowledge that it shares with the SLT, as well as with other teams involved in this work. We also have the Office of Responsible AI, which sets the internal policy. We do that through what we call the Responsible AI Standard.

The Office of Responsible AI also sets up the governance ecosystem, which recognizes that there's a whole community across the company that has this responsibility. It's not just in one office; it really stretches across the entire company. Where I sit, as Wendy stated, is in the enablement space, which specifically focuses on how we are enabling the people within our company. How are we giving them the tools? How are we making sure they're implementing the Responsible AI Standard? How are we creating a place where this becomes a natural part of our culture? It's not a checkbox to say, "OK, did I do responsible AI today? Check." It actually has to be built into our systems, into our programs, into our processes. How are we making sure people have the know-how, the ability, and the support they need in order to do that? Then we have sensitive uses. Sensitive uses are uses of technologies, specifically AI technologies, that can have a detrimental, damaging impact on society.

There's AI in general, so we need to make sure that, in general, the AI systems we're developing are aligned to our principles. But for sensitive uses, we're saying that there are specific categories of technologies that can have an even more detrimental impact on particular groups, and we want to take a closer look at those. Then there's also public policy. We look at where, from a regulation standpoint, governments across the world should actually get involved in regulating some of these technologies, such as facial recognition and how it's used, for instance, for surveillance.

Just real quickly, I was talking about the sensitive uses of AI. I think this is extremely important, because as we think about AI technologies used in some of these scenarios, take, for instance, what we call denial of consequential services. These are AI systems that are making decisions about people, giving opportunities to some people and not to others. Think about employment programs. There are AI systems now used in human resources that can go through a huge number of resumes and help determine who's going to be more successful in a job and who should get an interview. If those AI systems aren't built correctly, just imagine the communities and particular groups that are going to be far more impacted by those types of technologies than other groups.

Think about funding, housing, lending, and educational programs, as we use AI to determine who's going to be more successful in university, who should get accepted and who shouldn't. Any system being designed that's going to make a consequential decision about one group over another is considered a sensitive use and has to go through a special sensitive uses review. Then there's risk of harm. Typically people think about this from a healthcare standpoint: yes, healthcare AI systems can actually have an impact that means life or death.

Do you have a disease? AI systems are used for disease detection, to triage and determine which patients should be seen first, and to determine what medication people should have. It's also in military scenarios; in any military or policing scenario, there is a risk of harm. The same is true in manufacturing: on the manufacturing floors there are AI systems, and all of those can cause physical harm. Those are the physical harm elements, but there's also serious emotional and mental harm, from AI systems that particularly impact marginalized groups. Think about the Black and African-American community and how societal biases have impacted those groups in the past. Think about anything impacting our youth, so minors, or the elderly. Any systems that can cause significant emotional and mental harm also fall into the category of risk of harm.

We pull those technologies out and have a special review. Our other area is infringement on human rights. Microsoft believes in the global human rights of all individuals and all people, regardless of location, regardless of country, regardless of government. If there are systems that have the ability to infringe on individuals' human rights, and a lot of the time these are facial recognition systems, those also need to be pulled out for review. I'm not sure how I'm doing on time. I just want to make sure. I know we don't have time for Q&A. My slides aren't advancing.

There we go. I'm not going to get into the standards in detail. We do have a Responsible AI Standard, which breaks out what we call requirements. There are the general requirements, which all AI systems, whether it's just a component or a huge solution in itself, have to meet. Then we have specific requirements: if you have an AI system, for instance, for a military use or for facial recognition, there are additional requirements for those systems as well. I talked briefly about human rights. As we think about the Universal Declaration of Human Rights, these are the things Microsoft focuses on for any individual, for any person. To close things out, I just want to say that Brad Smith and Carol Ann Browne of Microsoft have recently released a book called 'Tools and Weapons'.

In it, there's a great quote from Brad that says, "The more powerful the tool, the greater the benefit or damage it can cause... Technology innovation is not going to slow down. The work to manage it needs to speed up." I would say, especially being in some of these conversations, and I told Brad this the other day, being in some of these conversations where I'm either the only female, or the only Black person, or the only...

I don't know, but feeling like I'm the only neurodiverse person, trying to represent all of these perspectives. It's just my perspective, but it's important that we have diverse people in these groups, in these meetings, in these design reviews, in these planning meetings, and that they speak up. I've gotten to the point now where I'm not quiet if I see something that just doesn't make sense to me, or something that I think can have a negative impact. I have different experiences, a different background, and probably a much different perspective than a lot of people at Microsoft, especially the ones that don't look like me, so I speak up. I'm no longer quiet.

With my closing words, I would just encourage you all, as you're going through this work, as you're doing your research or pulling together information: if something doesn't feel right, it's really important that you speak up, because you probably aren't the only person that's going to be impacted by it. With that, Wendy, I don't know if we have time for any questions, or if we should just do the Q&A at the end. Again, I appreciate the time to be able to walk through this. >> Yes, thank you, Leigh. Unfortunately, we don't have time for questions. We are going to switch the order a little bit. Ehi's fire alarm went off, so we're going to have Theresa go next.

But what I would like to do is encourage people to ask questions of Leigh. We will have time later to ask those, whether it's in our session all together, in the breakout groups, or in our town hall. We'll make sure your questions get answered, so please ask them. Leigh, thank you so much for the work that you're doing and for the presentation. We really appreciate it, and thank you for being here with us today. >> Happy to be here.

>> Awesome.
