The Ethics of Artificial Intelligence with Amy Winecoff

Welcome, everyone, here today on behalf of the Center for Digital Strategies at Tuck. It is my pleasure to introduce our guest today, Amy Winecoff. Amy is a research fellow at Princeton University's Center for Information Technology Policy and the Center for Statistics and Machine Learning. She also serves on the advisory board of the eLab startup accelerator program hosted by Princeton's Keller Center for Innovation in Engineering Education. Her research focuses on human-algorithm interactions, taking into consideration how humans both shape and react to algorithms. She also conducts research on how psychological, social, and institutional forces shape how entrepreneurs develop algorithmic systems, as well as how algorithmic systems adapt to idiosyncratic user-level behavior and broader social influences over time. Prior to joining Princeton, she worked as an assistant professor of psychology at Bard College and as a data scientist for e-commerce technology companies, where she developed large-scale machine learning systems for providing product recommendations. She has published numerous academic articles and book chapters on psychology, neuroscience, machine learning, and human-computer interaction. She graduated from North Carolina State University and earned a PhD in neuroscience and psychology. Please join me in welcoming Amy Winecoff. [Applause]

As you know, we've got a lot of programming and a lot of interest in the AI space this spring. I'm also really excited to announce that at a to-be-determined date later in the second half of this spring term, Professor Tiller, faculty director, right there, will be teaching a sprint course on generative AI. It's hot off the presses, so to speak, and approved by the registrar and everyone. So I'm really excited for this space, and I'm really excited for Amy to join us today. I'll turn it over to you, Amy; it's all your show.

Okay, thanks for having me, everyone. I'm really excited to be here to talk with you all. The advertised title of this talk was "The Ethics of Artificial Intelligence," and, as is nearly always the case, I thought of a better title later. I think it embodies the approach that I'm going to be advocating for here today, which is how to build slightly less evil AI. Just a quick note: we're going to be doing an interactive portion of this talk today discussing AI as applied to mental health use cases, so take care of yourself, whatever that means for you.

Patrick already introduced my research a little bit. Much as I would personally love to bloviate on about my research and all of its academic components, I don't think that would be very interesting or useful to you all. What I will start with is a high-level insight from the research that I've done over the past three years. I've had in-depth conversations with hundreds of technology professionals, mostly in the AI space, although also in the blockchain space as well. What I talked to them about is what values they hold dearly, and what they perceive as either opportunities or obstacles for developing and building those values into the technology that they're working on. In these conversations, of course, it's a self-selecting group, so that's the caveat, but it is almost never the case that no one cares about any sort of broader ethical, social, or political value. In fact, I've only ever interviewed one person who openly admitted that he did not care about ethics whatsoever. So it's pretty common for people to care about something other than their business goals or their technology goals.
That said, the overwhelming majority of people that I have interviewed have not built those values into their design process, not thought about how their technology itself might stem out of those values, not discussed how organizational processes might be shaped as a way to better embed those values, nor, in almost all cases, have they ever even had an explicit conversation with their co-workers and co-founders about what they care about. Literally the most basic thing that one could do, most of these folks have not done.

It does not follow, therefore, that values don't become embedded in what you build anyway. People build technologies, and the choices that you make are going to be some sort of embodiment of what you care about, whether you talk about it or not. What happens, though, when you begin to talk about it, design around it, and make explicit decisions, is that you're more likely to build technologies and processes that are consistent with your values, and less likely to have ethical outcomes that are an accident. You can't perfectly embody all of your values, but if you're going to come into this space, you might as well, at a bare minimum, be deliberate about it. So I'm going to try to give you some tools today for thinking about how you might do that, within the scope of a 50-minute talk.

Okay, so you're probably not living under a rock, so I'm going to assume that you're all familiar with lots of these splashy headlines and developments, and with using the tools themselves, particularly around large language models but also generative image models. There are lots of lofty claims about what these technologies can do, and a lot of equally lofty claims about the harms that are going to be caused by these technologies. Misinformation is one that's received a lot of attention; bias in image models has received a lot of attention as well. These are things that have come out over the last year or less.

That said, these types of ethical implications of AI are not new to this current class of models. It has been nearly a decade since these types of concerns began to surface about machine learning models. You may be familiar, for example, with the instance where Google's image labeling tagged photos of Black people as gorillas, or you may be familiar with Flickr's model doing a similar thing, labeling Holocaust sites as jungle gyms. Other examples include bias in screening tools, such as Amazon's resume-screening process, which eliminated candidates who came from women's colleges. There are a lot of ways that the data itself can embed social biases into these models and create problems that are undesirable ethically and also from a business perspective. These are the types of things that people have been studying for a while.

So how do we begin to build machine learning or AI models that move in the direction of embodying values that we care about, and away from embodying values that we don't want them to have? There's a recent meta-analysis that I put up here, and I'll give you a link to these slides later, that surveyed the ethical AI frameworks out there. There are over a hundred principles, technical tools, and frameworks for thinking through this stuff. Those are very interesting, and you can peruse them on your own. Today I'm going to be talking about an approach called value sensitive design.
This is a principled approach to thinking through how we might incorporate human values into the design, development, and iteration process for building technologies from ethical principles or personal values. I've chosen to talk about it today for a couple of reasons. The first is that it's not prescriptive: it's not going to tell you these are the values that you should embody. There's a recognition that different people are going to care about different things and organizations are going to choose to prioritize different types of ethical values, so it's not going to tell you what values one ought to hold; it offers a framework for thinking through how you prioritize the values that you do hold. It acknowledges as well that values and priorities are not always going to line up perfectly, in a way where you can build beautifully privacy-preserving technologies that are fair, that are inclusive, and that you can also sell to someone who will actually buy your product. You're probably not going to be able to realize all of these things at once, so it offers some ideas about how to work through some of these conflicts. And two, per the title of my talk, building slightly less evil AI, it focuses on progress, not perfection. You're here at the business school because you actually want to do a thing, not because you're trying not to do a thing, so it's unreasonable to expect that you're going to achieve some sort of perfect status. That said, there are a lot of incremental, cumulative benefits from working on these types of small advances that move you in the direction you might like to go. The goal is not to achieve a state of perfection, but to make progress towards it, and you'll learn in the process.

Value sensitive design offers a lot of different tools that I considered bringing up today, something like 17 different techniques for doing it. We won't be able to do all of that today, so I'm going to focus on the two core techniques that are important for doing value sensitive design.

You will probably already be familiar with the idea of a stakeholder analysis. Where value sensitive design differs is that it parcels out different kinds of stakeholders in the system. From this framework, direct stakeholders are stakeholders who directly interact with the technology. The most obvious one would be your end user, but this might also include developers, hackers, designers, and maybe administrators, depending on the kind of system that you're building. It also talks about indirect stakeholders: people who themselves do not interact with the technology but are still affected by it in an indirect way. This could include an advocacy group, a family, a regulator, or society at large; sometimes people bring up things like environmental entities, so maybe whales are an indirect stakeholder of some particular kind of technology. One example that's particularly relevant in the AI case for an indirect stakeholder would be a data subject. For example, you can imagine that if someone is building a shopping analytics platform, they might get data traces with timestamps and interaction information from online retailers and then use that information to provide some sort of B2B analytical service. The users who generated that data don't use the service, but they are nevertheless affected because their data goes into building it, so they have a stake, albeit an indirect one, in that particular product.
The last category is excluded stakeholders. These are people who do not use the technology: they actively choose not to use it, or they're not able to use it. The reason could be physical, cognitive, or social constraints, other situational constraints, or a personal choice, but they may still be impacted even though they never interact with the technology directly or indirectly.

Just to give you a little bit more concreteness around this, imagine that there is an electronic health record system that providers use for billing and maintaining patient records, and that only the providers have access to. The direct stakeholders in this case would be doctors, because they use it to access patient records, take notes, read notes from prior appointments, those types of things. The same would go for nurses. If insurance companies also use these records to do billing, they would also be direct stakeholders. Maybe there are cases where researchers might be granted access to the data in that system as well, and maybe that would also be the case for regulators who are trying to ensure HIPAA compliance. Indirect stakeholders in this case, if the system is only for providers, would be patients, because they don't directly interact with the medical record system itself; this would also be the case for family members of patients, since they're still affected by that patient's health. Excluded stakeholders might be practitioners working in a setting without internet access or cellular coverage, where this system might not function, and users with low vision: if there are no accessibility considerations built into the system, then they will necessarily be excluded from this particular technology. We'll do more work on this a little bit later.
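To make that kind of stakeholder mapping concrete, here is a minimal Python sketch of how a team might record it as a working artifact, using the hypothetical electronic health record example above. The class name, groupings, and entries are illustrative assumptions, not a schema prescribed by value sensitive design.

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderAnalysis:
    """Working record of a value sensitive design stakeholder analysis."""
    direct: dict = field(default_factory=dict)    # interact with the system
    indirect: dict = field(default_factory=dict)  # affected but never touch it
    excluded: dict = field(default_factory=dict)  # cannot or choose not to use it

# Hypothetical electronic health record (EHR) example from the talk.
ehr = StakeholderAnalysis(
    direct={
        "doctors": "access records, take and read clinical notes",
        "nurses": "access records during care",
        "insurers": "use records for billing",
        "regulators": "audit for HIPAA compliance",
    },
    indirect={
        "patients": "records are about them, but they never log in",
        "family members": "affected by the patient's health outcomes",
    },
    excluded={
        "offline practitioners": "no internet or cellular access",
        "low-vision users": "no accessibility support in the interface",
    },
)

for group, members in vars(ehr).items():
    print(group, "->", ", ".join(members))
```

Writing the groups down this way mainly forces the team to name who is affected without using the product, which is the point of the indirect and excluded categories.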
The second technique that we're going to be drawing on today is a value analysis from value sensitive design. The goal of this is to try to figure out what values are at play. In an ideal situation, after you've identified your stakeholder groups, you would go talk to those people; that would be the ideal way to elicit what these values are. That's probably not feasible in all cases, so barring the ability to talk to stakeholders directly, you might consult prior research, policy documents, or any other source that gives a reasonably moderate-to-high-fidelity signal about what these particular stakeholders care about. Given that data, you're probably going to have more values than can be reasonably accommodated, so you would select some and figure out operational definitions for those values that are relevant to the context at hand. It's one thing to say we want technology to be inclusive; it's another thing to say we want this technology to be inclusive for this particular system, and this is what we mean by inclusivity. The last thing that's sometimes suggested as a way to think through this is, once you have these operational definitions, to identify what the source of these values is. Some values are explicitly prioritized by the project itself: it's a project goal, maybe it's tied to funding sources, maybe it's a guiding principle of the organization itself for how they operate. In other cases you have the personal values of the people who are building the technology, the designers or the engineers. And the last case is the other stakeholders you've identified: which values correspond to those stakeholders? That can be useful because it helps orient people in the design process, thinking through where there may be benefits, who might be benefiting, and where there might be harms, and so forth.

So, and this is totally hypothetical, don't hold me to a hypothetical system, let's say we did actually talk to stakeholder groups and elicited some values, and we find out that doctors value paternalism. This is the idea that they want to do what they know to be medically in the best interest of the patient, often with limited input from the patient themselves about what their personal desires are. Patients, on the other hand, are probably pretty likely to value autonomy: they want access to their information, and they want agency over how choices about their medical treatment are made. Lastly, engineers might value security: they want to protect against accidental data leaks, and therefore maybe they want to constrain access to credentialed providers. In this case there's a pretty obvious conflict between the doctors' desire for paternalism and the patients' desire for autonomy; you're probably not going to be able to satisfy both entirely. That said, you may be able to work with your design team to think through ways to provide patients with some access to their data, so that they're able to see what's been done in the past and what choices were made, without providing full access to all of their medical information, especially if that information can only be meaningfully interpreted in conjunction with the provider. That might be a way to think through how we could prioritize some of these things, at least a little bit, through the features that we build into the product.
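One way to write such an analysis down is sketched below, again using only the hypothetical EHR values above. The structure, field names, and the listed conflict are illustrative assumptions, not elicited from real stakeholders or taken from the talk's slides.

```python
# Hypothetical value analysis for the EHR example; values, definitions,
# and sources are illustrative, not elicited from real stakeholders.
value_analysis = [
    {"stakeholder": "doctors", "value": "paternalism",
     "definition": "act on medical best interest with limited patient input",
     "source": "stakeholder"},
    {"stakeholder": "patients", "value": "autonomy",
     "definition": "see their records and share in treatment decisions",
     "source": "stakeholder"},
    {"stakeholder": "engineers", "value": "security",
     "definition": "restrict record access to credentialed providers",
     "source": "designer"},
]

# Known tensions the team has to negotiate explicitly rather than by accident.
conflicts = [("paternalism", "autonomy")]

for a, b in conflicts:
    print(f"Conflict between {a} and {b}:")
    for row in (v for v in value_analysis if v["value"] in (a, b)):
        print(f"  {row['stakeholder']}: {row['definition']} (source: {row['source']})")
```

The value of the exercise is less the artifact itself than the explicit conversation it forces about which tensions exist and how features will trade them off.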
[Audience member] Two questions. The first is around the example you gave where you had a hacker as a stakeholder in the system; that's generally not how we would think of a stakeholder, so could you help me understand that a little bit? And the second is, how do you think about it when the underlying population, whether patients, doctors, or some other category, is heterogeneous, so it may be hard to come up with a general value?

[Amy Winecoff] Yeah, so the first question, about hackers. There are different sorts of hackers. Sometimes people are white-hat hackers, hired intentionally to try to exploit weaknesses in systems and often rewarded for doing so, through bug bounties and things like that, to try to test the system. That might be a stakeholder who, in some sense, is also a service provider, but it is still somebody hacking. In a more general sense, somebody might mean by "hacker" a person who is using, not for any nefarious purpose, a technology that's available open source; we might call that person a hacker, and they then become an end user of a technology that's put out into the public domain as open source code. In other cases, maybe it is somebody with a nefarious intent. It can still be useful to think through what a hacker might want to exploit our system to do, and what their goal would be in doing so, number one because it helps you mitigate against what that might be. Granted, if you don't hire somebody professionally to do that, it might be difficult to anticipate, but sometimes even just verbalizing what that might mean helps you think it through. And it's not to say that there aren't some stakeholders whose values you intentionally don't want to prioritize: if the hacker's value is "I want to gain access to personal financial data and put it on the internet," that is a value you want to downgrade in your system compared to the value of safety and security. So that was your first question, maybe at least partially addressed. And then the second question was heterogeneous stakeholder groups. This is really tough; you don't want to make overly general assumptions about any particular group, and there may be a ton of heterogeneity. Ideally you would have elicited at least one opinion from the different types of stakeholders in the group. If there isn't any sort of commonality within that group, maybe your groupings are wrong and there's a different axis along which you should be thinking about stakeholders. Again, you're not going to be able to prioritize everything, but doing something is a good goal, because doing something is usually better than nothing.

Okay, so we're going to be talking about AI today. I feel a little bit weird about using that word; I tend not to use it in my personal work when I'm building machine learning models. I'm going to use it today because I've heard from many entrepreneurs that it's sexy and people care about it, and the fact that all of you are here and this is a full room suggests they're right. That said, there is not a consensus definition of what AI means. Different people, when you ask them about this, might say it's a system that thinks or behaves the way a human behaves; other people, particularly computer scientists, will say it's a system that behaves rationally, or better than humans might behave. If you ask policymakers, they don't really know, so that's its own group. Not all of them, this is a general statement, you should read the paper, don't quote me on that, but they tend to emphasize thinking and behaving like humans, or care about other contexts that don't necessarily play into a technical definition. I've done research with a lot of AI startup entrepreneurs, and they hate me even asking a question about this; before I can even get it out of my mouth, it's "AI's not real, we use it as a marketing term, the people who use it don't actually build real models." In that case it's an instrumental tool for hyping things up to their customer bases. That said, and I need to put this in as a caveat, in order for us to do anything today we're going to have to have a working definition. So today, for the purposes of this workshop, we're going to call artificial intelligence a system that leverages machine learning algorithms to uncover patterns or statistical regularities within data that are useful for performing some task. I've tried to make that as non-technical as possible. We're going to be getting into a case study about AI, and you do not need to know how machine learning works in depth in order to do this exercise. If you're getting down into the weeds of "what is our loss function," you went too far, so just keep it at a high level for this.
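As a minimal illustration of that working definition, a toy sketch along these lines (invented data, with scikit-learn's logistic regression standing in for any learning algorithm) fits a model to labeled examples and uses the learned regularity to perform a small classification task:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy, invented data: hours of product usage per week vs. whether the
# customer renewed. The "pattern" is simply that heavier users renew.
hours = np.array([[1.0], [2.0], [3.0], [8.0], [9.0], [12.0]])
renewed = np.array([0, 0, 0, 1, 1, 1])

# The machine learning algorithm uncovers the statistical regularity...
model = LogisticRegression().fit(hours, renewed)

# ...which is then useful for performing a task: predicting renewal.
print(model.predict([[2.5], [10.0]]))  # expected: [0 1]
```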
As I mentioned, normally if we were doing a value analysis you would elicit the values from stakeholders. For the purposes of scope, we're actually just going to use Microsoft's AI principles here, because they're pretty good and encompass a lot of the concerns that people might have.

The first would be fairness: you want to treat all of your stakeholders equitably and prevent undesirable stereotypes and biases. We've already talked about this a little bit with models that might be racist or sexist, or that prioritize some stakeholders over others and treat some unfairly. Reliability may seem obvious but sometimes isn't: the system should be able to avoid worst-case scenarios even in instances where they're rare. It is not ideal, if you're building a radiology AI, to misdiagnose even one percent of patients, so you want your systems to be reliable. Privacy and security: again, you want to be able to protect the system from intentional misuse and to protect against possible data leaks. Inclusion is the idea that you want to empower everyone and provide opportunities for feedback: what are the channels through which people who might be affected by the technology can give you feedback about how it's impacting them? And the last is transparency: you want to create systems whose outputs are understandable to the stakeholder groups. This can be important for engineers debugging systems; if they don't understand why the system has made a particular choice, it can be difficult to figure out how to mitigate that. It's also important for users: if you think hypothetically about a system that is determining whether someone gets a loan or not, and you tell them no or you tell them yes, you should have a basis on which you have made that decision, so they can understand why it was made.

Okay, we're going to play a game here; I have a QR code to access the slides. We're going to be doing a game called Judgment Call that's designed for product teams to think through how to prioritize some of these six ethical principles within the design of an AI system. If we were to imagine a system that uses facial recognition to verify that a ticket holder at a concert venue does indeed hold that ticket, this is an example of a five-star review: "I love this ticket app, it's a lot faster than digital or paper tickets. The facial recognition weirds me out a little bit, but the app seems really secure; there are two-factor authentication settings, it's PIN-protected by default, and you can delete your data at any time." In this case the person who wrote the review took the general scenario and thought about some specific features that might fit with it. So I'm going to present you with an intentionally vague AI scenario, and then you, as the person writing the review, can employ some creativity to think about what specific features might pertain or not pertain to that value.

Okay: the AI startup Stork is developing a chatbot for providing therapy to parents experiencing postpartum depression. The application is informed by forms of therapy that are known to be effective treatments for postpartum depression, such as cognitive behavioral therapy. The therapeutic content delivered to the users will be predetermined and will be developed by Stork in collaboration with clinical experts. However, in order to lend a more naturalistic, more typical therapeutic context, it's also going to leverage, under the hood, some machine learning natural language processing techniques that respond when the user puts in a free-text response like "I'm really sad"; it will then give back some sort of empathic response that mirrors what a real human therapist might do in that context.
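The mechanism the scenario describes, scripted therapeutic content paired with an NLP layer that acknowledges the user's free text, can be pictured with a toy sketch like the one below. Everything here is invented for illustration: the keyword lexicon stands in for a real trained sentiment model, and the CBT prompt is a placeholder, not clinically vetted content or anything attributed to the hypothetical Stork.

```python
# Toy illustration only: a real system would use a trained NLP model and
# clinician-approved content, not a keyword lexicon and placeholder text.
NEGATIVE_WORDS = {"sad", "hopeless", "exhausted", "anxious", "alone"}

SCRIPTED_CBT_PROMPT = (
    "Let's look at one thought you've had today and examine the evidence "
    "for and against it."  # placeholder for expert-authored content
)

def empathic_reply(user_text: str) -> str:
    """Acknowledge the user's feeling, then deliver the scripted content."""
    words = set(user_text.lower().split())
    if words & NEGATIVE_WORDS:
        acknowledgment = "That sounds really hard, and it makes sense you feel that way."
    else:
        acknowledgment = "Thanks for sharing that with me."
    return f"{acknowledgment} {SCRIPTED_CBT_PROMPT}"

print(empathic_reply("I'm really sad and exhausted today"))
```

Separating the learned acknowledgment from the predetermined therapeutic content is exactly the kind of design choice the exercise below asks you to evaluate against principles like reliability and transparency.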
I'm going to give you maybe a minute to think of a couple of different stakeholder groups that fall into the direct, indirect, or excluded categories. The dichotomy may not be perfect, so if you're thinking "I'm not sure," that's fine; it's true.

All right, dealers, I would like for you to shuffle up those stakeholder cards and randomly pass one to each member of your group. Do the same for the rating cards, and likewise for the ethical principle cards. Once you have your set of three, take five minutes to write a review from that stakeholder's perspective, for that ethical principle, with that rating. Once you are done writing, read the reviews from the group, and, even as you're reading, here are a couple of discussion questions you can take up: Are there both positive and negative reviews of the same feature? Are there any concerns that you might not have thought of had you not gone through this sort of perspective-taking exercise? And this last one is going to be particularly important: what are some changes to the product that you might make based on the exercise that you have done?

Hopefully this was somewhat instructive. I'm going to ask you, if you're comfortable with me using your review card in subsequent workshops, just leave it down and I'll pick it up; if you're not, don't leave it down, because I'm going to take it. So thanks so much to everybody again for doing this; hopefully you came away with some ideas of how you might be able to incorporate this into your practices. I've got my information up here if you'd like to reach out. Also, legitimately, if you want to talk to me in a meeting, hit me up; I'd love to chat with you about what you're up to and what your concerns are. And I'll share these slides with Patrick and other folks here so they can get them out to you all. Thank you so much. [Applause]
