Multi-lens technology ethics with NVIDIA’s Nikki Pope


- [Beena] Hi, my name is Beena Ammanath. I lead Trustworthy and Ethical Technology at Deloitte. Today on Solving for Tech Ethics, we have Nikki Pope, AI and Legal Ethics Leader at NVIDIA.

So great to have you on the show, Nikki. I am so happy we're having this conversation. Welcome to the show. - [Nikki] Thanks, Beena, it's great to be here. - [Beena] Can you share with our audience a bit about your background and how you got to your current role? - [Nikki] I guess you could say I got interested in AI because I've been involved in criminal justice. I'm a lawyer, although I'm a corporate lawyer, not a litigator.

I was on the board of the Northern California Innocence Project for a number of years, and so I started paying closer attention to issues involving criminal justice. I read about predictive algorithms, and that's what got me down the path of looking into AI, because of the way predictive algorithms are used in the criminal justice system: for bail, for whether someone gets a longer or lighter sentence, all sorts of ways. And there are all kinds of studies showing the bias that's inherent in those algorithms. That really drove me toward AI.

- [Beena] So Nikki, you've made this interesting transition from academia to corporate, anchoring on this ethics lens, right? How has that transition been for you? I also want to touch on your unique experiences with nonprofits, but first let's talk about how it has been moving from academia to the corporate world. - [Nikki] Well, it's an interesting and challenging transition. In academia, you have an idea or there's something you're interested in doing research on, and so you do it. You do the research, you write a paper or an article, and there really aren't a lot of constraints on where your mind goes and what you want to work on. You can work on anything.

I mean, I was the Managing Director of the High Tech Law Institute at Santa Clara Law School, at Santa Clara University, and I spent my time focusing on algorithms in criminal justice and the intersection between technology, criminal justice, and social issues. And no one stopped me, because that's what you do. There's not the same degree of freedom when you come into a company, because they have businesses that they are running, and so what you do has to advance that business objective in some way. I'm fortunate that I came into NVIDIA, which is a tech company working in AI.

So what I'm interested in is very important to the company, but there's not the same degree of freewheeling, do-what-you-want latitude inside a company as there is in academia. - [Beena] Yeah, what about between nonprofits and for-profit companies? Is there a similar set of differences? - [Nikki] I think there can be. I think it depends on what the nonprofit is.

I run a nonprofit in my spare time, such as it is, that works with exonerees, people who were wrongfully convicted, helping them transition back into society. That's a different sort of initiative; it has different objectives and goals. I think it's different when you're working in a company because, as I said before, a company has stakeholders: the owners of the company, the stockholders you have to respond to. There may be regulatory agencies you have to respond to. It could be just the SEC, the Securities and Exchange Commission, but there could also be the Food and Drug Administration. And if you're dealing, as we do, with autonomous vehicles, you have all the transportation regulatory agencies. In a nonprofit, there aren't a lot of regulatory agencies you're having to deal with.

At least in my nonprofit. So there are a lot more stakeholders, and there's a lot more interest, I guess, from outsiders in what you're doing. - [Beena] Right, and in addition to the stakeholders, there's also a more focused effort, right? To your point, you can't just pick up any topic and start looking at it. It needs to be aligned with the business goals.

And there's a more well-defined focus, as opposed to being completely greenfield. - [Nikki] I think so, and I think that goes right into one of the questions we discussed previously: how anyone in this position does their job inside a company. It's really essential to collaborate with others, especially if you're introducing ideas that are novel to the company, or if you're going to affect someone's business in a way that they don't particularly appreciate at the time. So I think it's essential to move slowly and deliberately, understanding what the business issues are and getting buy-in from the people whose businesses you're going to suggest changes to.

- [Beena] A role like yours is new to the industry, right? In general, these roles didn't exist in companies even five or ten years ago. So how would you describe your role? What does a day in your job look like? - [Nikki] Man, I don't know that any day in my job looks like the next.

One day does not look like the next. I'll give you yesterday as an example: I had about eight meetings. They started out with talking with engineers about risk management, risk assessment, and the assignment of risk. The next one was dealing with explainability and how we do that. We have our GTC conference coming up in November, and I had a meeting on that, talking about bias and how you assess bias or attempt to mitigate it. Then there was a meeting with a potential toolmaker on assessment tools and mitigation tools, with various people from engineering. Then I had a legal meeting where we were talking about licensing: licensing of our models, code, technology. And at the end of the day, I had a really short meeting on privacy policy.
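To make that bias-assessment work concrete: below is a hedged Python sketch of the simplest form of such a check, comparing a model's error rate across groups. The data, group labels, and numbers are invented for illustration; real assessment and mitigation tools go much further.

```python
# A minimal per-group error-rate check, the most basic bias assessment.
# All data and group labels below are synthetic and for illustration only.
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each group label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Toy example: a model that errs far more often on group "b" than group "a".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(error_rate_by_group(y_true, y_pred, groups))  # {'a': 0.0, 'b': 0.75}
```

A gap like that between groups is the kind of signal a mitigation meeting would then dig into: is it the data, the labels, or the model?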

- [Beena] Yeah, it's so varied, right? I completely agree. There is so much to do in this space that it's almost hard to figure out all the possible things you could do within a day, because of not just the company you're in, but also the breadth of topics you need to get into. There are so many lenses to look through when you start talking about technology and AI ethics. And NVIDIA works with so many industries.

So it is not just about figuring out ethics for a specific industry. Your supercomputers are powering the entire world, and the ethical implications have to be considered from all these different lenses. How do you ensure that the outcomes you drive are positive? What are some of the scenarios you try to avoid? How does your team prioritize these things? - [Nikki] Prioritization is a challenge because, as you mentioned, not only are we across a lot of industries, but we're global. So we have to consider that when we talk about ethics. And I really talk about AI being trustworthy or responsible, not ethical, because I think ethics is a charged word, and it means different things to different people.

But you understand what being trustworthy is, and you can understand what being responsible is. So when we build responsible technology or trustworthy technology, you have to consider not only what would engender trust in a particular industry, but in a particular region. For example, say an AI model, and we don't build these, is a recommendation model that's going to recommend you buy this red dress.

There's not a lot of harm that might come from that model, from recommending clothes or shoes and what have you. But if you have a recommendation model that is recommending a drug you should take, that's a completely different situation, with a lot of ethical questions and a lot of regulatory oversight. And there may be a difference between how that question is addressed in the US versus the UK versus Singapore or South Africa or Brazil.

You have to really consider not just the industry, but the society and its norms and values. - [Beena] So true. And the challenge is also that we don't have all the regulations figured out, right? It is still very nascent, very early days, for ethics regulation. What are the best practices? There's no playbook on how you solve for technology ethics across all these industries. What are your thoughts: when will regulation catch up, or is it going to take a while before we see policies emerging on this? - [Nikki] I think it's going to be a while. And I think it's going to be a while before the different regions have some consistency across their regulations and policies. But for me, the legal regulations, the legal obligations, are the baseline.

That's the bare minimum you should be doing, and you should be doing it because the law requires you to do it. But I think it's possible for companies to have these conversations now, talking internally about what's right for your particular business. It's important that they know what business they're in, what their mission is, what the ethos of their company is. One car company may have a different mission and a different ethos than another car company, even though they're competitors.

So understand what your company's ethos is and deliver on it. Be true to it. And you don't need the law to tell you how to do that. You need what my grandmother used to call common sense. Common sense can help you get to a lot of these solutions, doing what's right.

And I don't think it is inconsistent for a company to do what's right and still be profitable. Lots of companies do it. - [Beena] Yes, yes. Without naming names, do you have a favorite example of an organization that has applied tech ethics in the real world successfully? - [Nikki] I have an example, but it's multiple companies. There was a study done at MIT by a data scientist on facial recognition technology and the biases, skin-tone, racial, and gender bias, built into facial recognition technology. And the report was published.

The data scientist called a number of companies that build facial recognition software and convinced them to stop providing their software to organizations like law enforcement while such a bias was built into the software. And I think that's a huge win for AI ethics, for responsible AI. - [Beena] So true. And you and I have talked about this in the past, right? There is a lot of noise around tech ethics. You hear a lot of headlines, and there's a lot of fear being pushed out. What do you think are some of the misconceptions? What is misunderstood by people who are not actually working on solving for ethics? What are the big misconceptions you've seen in your experience that you'd like to share with this audience? - [Nikki] Well, I think the biggest misconception is one that I had before I started researching in this space.

And that is that the technology is a lot further along than it actually is, that Hollywood is the reality. It's not. There's no Cyberdyne, there's no Skynet. Machines aren't going to come and take over the world. I had this false sense of where the technology is, of what AI is and what it does.

What I think would help people to understand is that artificial intelligence, and it's odd that it's named artificial intelligence, is really a machine that learns what we teach it to learn. And it learns it over and over and over. If we don't present the information, it doesn't learn whatever the thing is.

So it reflects back what we put in, and to me, that's not intelligence. It's not really bringing in more information and thinking; it's looking for patterns of behavior in the data that we feed it.
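That point, that a model simply reproduces whatever patterns, including biases, its training data encodes, can be shown in a few lines. A hedged Python sketch; the "hiring" framing, features, and labels below are entirely synthetic and illustrative:

```python
# A classifier trained on skewed labels reproduces the skew; it does not
# "think". Feature 0 is years of experience; feature 1 is a made-up
# attribute flag that the (biased) historical labels happened to track.
from sklearn.tree import DecisionTreeClassifier

X = [[5, 1], [6, 1], [7, 1], [5, 0], [6, 0], [7, 0]]
y = [1, 1, 1, 0, 0, 0]  # same experience, different flag, different label

model = DecisionTreeClassifier().fit(X, y)

# Two candidates with identical experience get opposite predictions,
# because the model learned the pattern in the data, not "merit".
print(model.predict([[6, 1], [6, 0]]))  # -> [1 0]
```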

And I think when I understood that, it made all the difference: okay, now I know what I'm dealing with here. This is not a thinking machine; it is a pattern-finding, pattern-observation tool. I think it would help for people to understand that, and for it to be explained in a way that regular people, who are not scientists and data scientists, can understand. - [Beena] Yeah, that's so true.

And I know you and I have talked about this, right? The nonprofit I started, Humans for AI, is really about driving more AI literacy, so that everybody can have that basic fluency about what AI even is. And like other technologies as they come to the fore, as AI creates value, there are going to be negative implications and associated risks, right? When we talk about AI ethics, it tends to get focused very quickly on bias, and you've mentioned a little bit about explainability. What are some of the factors that you think fall under the umbrella of building trustworthy AI? - [Nikki] If you take a step back and think about what trustworthiness means and how you build trust, whether it's one party to another, human to human, or machine to human, the person who is receiving that benefit needs to understand how it works. You need to build the trust.

And getting to the question you asked just before, about what people don't know and what people need to know: I think we generally, as a society, accepted this idea that machines can do better than humans. Machines can solve problems better, and they can solve them faster.

Machines are better, and therefore a machine solution is better than a human solution. That's not necessarily the case, right? It's important to understand that AI has the ability to solve a lot of problems and to help us solve problems, but it's not the problem solver. And once we started to realize that there were problems in the way these algorithms or models or systems were designed, we sort of lost trust in them. We had a childlike trust originally: oh wow, these machines are going to be great. And then they didn't perform great.

They were biased against men or women or Black people, or people who speak with an accent, whatever it is. And we started to lose trust in the ability of those machines or systems to be fair. I think that in order to build trust, one of the essential elements is explainability. We need to be able to explain to a person who, for instance, was denied a mortgage loan why they were denied and how that happened.

And the loan officer who is making the decision needs to understand how that algorithm reached the outcome it did, how it got to that suggestion, and everything that went into it. And I don't think it's impossible to explain AI.
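One common way to produce the kind of per-decision explanation described here is to rank which inputs pushed an individual score up or down. Below is a hedged Python sketch for a simple linear credit model; the feature names, weights, and applicant values are all invented for illustration, and real lending models and adverse-action notices are far more involved:

```python
# Per-applicant contributions for a linear scoring model: each feature's
# weight times how far this applicant deviates from an average applicant.
# All names and numbers are hypothetical.
import numpy as np

feature_names = ["income", "debt_ratio", "late_payments"]
weights   = np.array([0.8, -1.5, -2.0])  # hypothetical trained coefficients
baseline  = np.array([0.5,  0.3,  0.1])  # an "average applicant" (normalized)
applicant = np.array([0.4,  0.6,  0.4])  # the person who was denied

contributions = weights * (applicant - baseline)
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")
# -> late_payments: -0.60, debt_ratio: -0.45, income: -0.08
```

The same numbers can serve two audiences: a plain-language reason for the applicant ("recent late payments lowered your score most") and the full breakdown for the loan officer or a regulator.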

You just have to do it in bites, and you have to figure out who is asking the question and what they need to know. An example I give is a self-driving car. If I'm buying a self-driving car, I want to know how it works, but I don't want to know exactly how it works. I want to know how it decides to take the freeway instead of a surface street, and I want to understand how it identifies a person versus a truck or a bicycle. But that's really the extent of what I need to know. If I'm a regulator, though, or I'm with the National Highway Traffic Safety Administration, I want to know the details of how the engineering works, how the technology works.

I want a completely different level of explanation. So when you talk about explainability, it's really important to know who you're explaining to and what they want to know. Your explanation is going to vary; you'll have different explanations for different audiences. - [Beena] Yeah, so true.

Explainability is crucial, and you make a very good point, Nikki: it has to be targeted for that audience. You need to build your explainability in a way that is understandable to the group you're targeting, right? It's not one size fits all. It's almost like you need to use AI to understand who the target audience is and then personalize the explanation. - [Nikki] It also comes into play when you're developing your model.

In privacy law, there's this concept of privacy by design, where you build the privacy into the product; you don't tack it on at the end. And I think companies are moving towards this idea of ethics by design, or trustworthiness or responsibility by design. So development teams have these conversations when they're thinking, oh, I've got this great idea for an AI model that will do X.

Have the conversation then, as opposed to at the end, going, oh, I need to check off that ethics box. By then it might be too late: you've built something that you cannot fix after the fact, and you have to go back to square one. So I like what I'm seeing, with more ethics by design being brought into the development process.
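A toy sketch of what ethics by design can look like mechanically: ask the questions at project kickoff and make the pipeline fail if they're unanswered, rather than bolting a review on at the end. The checklist items and function below are hypothetical examples, not any company's actual process:

```python
# A design-time gate: development cannot proceed until the basic
# responsibility questions have answers. Items here are illustrative.
REQUIRED_ANSWERS = [
    "intended_use",          # what the model is for, and what it is not for
    "affected_groups",       # who could be harmed by errors
    "bias_evaluation_plan",  # how fairness will be measured before launch
    "explanation_strategy",  # how decisions will be explained, and to whom
]

def design_review(answers: dict) -> None:
    missing = [key for key in REQUIRED_ANSWERS if not answers.get(key)]
    if missing:
        raise ValueError(f"Ethics-by-design review incomplete: {missing}")

# Run at kickoff; a CI job could call this on a checked-in config file.
design_review({"intended_use": "rank support tickets",
               "affected_groups": "customers"})
# -> ValueError: ... ['bias_evaluation_plan', 'explanation_strategy']
```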

- [Beena] Love it, and that's a great point, too. You need to look at the existing processes and make sure you're thinking of ethics early on. But how do you empower your organization, the people, the talent? Is it just the data scientists? Just the technologists? Or every employee? What are your thoughts on how you empower everybody in the organization to think about ethics and understand the company's standards on it? - [Nikki] It's a great question. I think people in companies, employees, well, they're people, right? They're consumers. And so that concern about ethics and responsibility already exists.

You can see it in things like climate concern. That sort of perspective is already there. It's just a matter of management in a company making sure that employees understand: this is important, this is important for the company, and you are all a part of this process.

So if you see something that isn't working, say something about it. If you have an idea of a way to make something better, say something about it. And that's true no matter where you are: the idea can come from someone on the development team, but it could also come from someone in marketing and communications, or someone in HR. You don't know where the idea or the suggestion is going to come from. And it's important, because if AI doesn't touch everything now, it's going to.

It's going to be in reviewing resumes and deciding what brand of coffee to buy in the cafe. It'll be in everything, in all sorts of solutions. It's already in Netflix and Amazon and every place else we use online. So we're already interacting with it, and it's just going to be the same inside companies and outside.

So we should start thinking about that, building the vocabulary for it and understanding what we can do to improve it. Because we are AI. I mean, we are it.

- [Beena] Yeah. - [Nikki] It's a reflection of us. - [Beena] It's everywhere. You know, going back to your point on ethics by design: that's great for companies that are very early in their AI journey, but there are companies that are more advanced, that hadn't really thought about ethics early on, and now they have to do some level of catch-up. What's your advice to companies that are more advanced in their AI and tech journey? How should they be thinking about ethics? How do they play catch-up? What's the best way to approach it? - [Nikki] That's a tough question. And it's going to be a challenge for companies that have a lot invested already in AI.

But there's one really good example, and that's the GDPR, the EU privacy regulation. That came well after lots of companies had already collected data on people and had databases of personal information. Companies like financial institutions, and those in the healthcare industry, already had regulations dealing with handling personal information and sensitive personal information, but companies like retailers and social media platforms didn't have that sort of regulation.

And all of a sudden, they had to have it. So my advice to companies that are further along in the process may sound a little cruel and heartless, but it is this: regulation is coming, so start now. You can look at the EU's Artificial Intelligence Act as a sort of roadmap for what could come, but there's likely to be much more stringent, much more restrictive regulation coming as well. Even in the US, you can look to the Federal Trade Commission: the FTC put out guidance on artificial intelligence with respect to anti-competitive behavior. Earlier this year, they said, and I'm going to paraphrase this, but it's basically: if you have an AI system that is biased or that does not treat people fairly, that is anti-competitive. And there's a way people can file a complaint against a company that does that.

So it's already here, and there are markers for companies that are more advanced to know what they should be doing. To me, there's no excuse for not starting the conversation. - [Beena] Yeah. You know, solving for ethics is hard. You need to understand the technology.

You need to understand regulation and policy. You need to understand human behavior and business models. There are so many aspects to it.

What are some of your tips on getting your arms around solving for ethics in a large organization? No matter which stage you're in, what are some of the dimensions companies should think about? - [Nikki] You definitely need to have cross-functional, interdisciplinary teams. I don't have a computer science background, a tech background, or an engineering background, but I do have a legal background, and I've been around for a while and worked for a few companies. It took me talking with our engineers to understand how a particular AI system works, and it's different from industry to industry, right? So when you're talking about financial services algorithms, I talk to the people in our financial services business to find out what's important for that industry, and how these algorithms and systems work. That informs what you need to address, because you have to prioritize.

There are so many things, so many businesses, so many models, and so many opportunities. You have to figure out where you're going to focus first. And I do that with the help of people in cybersecurity, because they've done this a lot already.

And engineering. I also turn quite a bit to the privacy folks, the privacy law folks, because they had to deal with this in a very dramatic way with GDPR. And the other thing I did when I first joined NVIDIA was reach out to my counterparts, such as they were, at other companies, mostly tech, but some not. Some of them were further along in this process than we were, and some were not as far along. It was really helpful to talk to people who have gone through a lot of what we're going through, to find out the best ways to move forward, and also things not to do, how to navigate in this space. - [Beena] Yeah, we are all in this together, and nobody has it all fully figured out.

So we come together, and I think that's how you and I met, Nikki: trying to connect. You know, I'm a technologist by training, but I have long admitted that this is one of those problems where you need diversity of thought across professional, educational, and geographic backgrounds. You need so many different experts coming together to be able to actually solve this extremely fuzzy gray area that we are all trying to get our arms around.

And you're so right: depending on the company and the industry you're in, it's going to be a different lens, a different solution, right? - [Nikki] Exactly. And I will say that one of the things that was surprising to me, and really welcome and delightful, is how much sharing of information there is between companies that are otherwise competitors. I have talked to people who are involved in tech ethics, or responsible AI, or just robotics, in all sorts of companies. And they've been really forthcoming in sharing what they've learned, and likewise I have as well.

I think it's because of the seriousness of what we're doing, and the opportunity. There's a tremendous opportunity to, I don't want to say save humanity, that's not really what it is, but to solve some of the big problems we have. It's possible, and it's all hands on deck.

- [Beena] Yeah, that is so true. I'm definitely seeing more collaboration on this topic than anything else, because we're all in this together. And I agree: it may not be saving humanity, but if we don't solve for this, I think we will hit a roadblock, right? At some point, we'll have to weigh the progress we're making with technology against the risks associated with it. So it is an absolute must for all businesses to figure this out. Nikki, any parting words for our audience as they think about this? I took away a few notes: build your network, find your peers, reach out, figure out ways to collaborate.

Nobody's got this figured out. You need to collaborate internally with stakeholders from different teams to be able to solve for this. What are some of your parting words for our audience today? - [Nikki] I would say, particularly to people who are going to come into a position like mine, or who are interested in doing this work: educate yourself.

I think it's the most important thing. There are lots and lots of books. I can't even tell you how many books I've read on AI ethics and machine ethics and business ethics and the future of humanity, all of those.

There are some really good books out there; some lean more to one side than the other. So I would say get a good balance across the board, from the "humanity is doomed" books to the "AI will save humanity" books and everything in between. Definitely educate yourself. And part of that education also includes attending webinars or conferences on this topic, because you learn an awful lot from just having conversations. You and I have talked a lot, and I've learned an awful lot talking with you about the everyday questions that pop up when you're doing business. And there aren't a lot of us in the corporate world.

There are a lot in academia, but there aren't a lot of us in the corporate world. So I think we should definitely reach out to each other and share the best practices, the knowledge, and the information, because we'll only move forward faster if we do that. - [Beena] Nikki, so well said. And just to add on: there are not many people in the corporate world really focusing on the solution. There are people, to your point, in academia, but there's also a lot of marketing and clickbait headlines that create hype around it, and it is just headlines. I think we need more warriors like you who are focusing on solutioning for it, right? Actually figuring out: okay, we understand there's bias, how do we solve for it? Moving the conversation forward towards actual impact. We need more warriors like you. And that was actually one of the big reasons for starting this interview series: to feature people like you who are actually moving the needle on solving for it in the real business world, whom everybody can learn from and connect with, so we can build our own tribe of warriors focused on solutioning. Nikki, I appreciate your time today. This was a great conversation, and I know these will continue.

So thank you so much for joining us today. - [Nikki] You're welcome, it was fun. - [Beena] Nikki, thanks again for being with us on the show, and I want to thank our audience for tuning in to Solving for Tech Ethics.

Be sure to stay tuned to Trustworthy and Ethical Technology at Deloitte for more of our latest thinking and market insights. Thank you, and take care. - [Commentator] This podcast is produced by Deloitte. The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte. This podcast provides general information only and is not intended to constitute advice or services of any kind.

For additional information about Deloitte, go to deloitte.com/about
