Data Science Education, Physics, and Ethics



Hello everyone, and welcome to another exciting DSECOP webinar. This time we will be discussing ethics. A few housekeeping items before we get started. First, this series is in partnership with the APS Topical Group on Data Science, so please join us: you can find us on Slack and follow us on social media. For the Q&A we'll be using an app called Slido; we'll post that link in the chat several times so you can get your questions answered. The other thing is that we take the code of conduct very seriously here. I'll sum it up with something from another group I'm part of: be hard on ideas, gentle on people. If you disagree with somebody's ideas, that's fine, that's wonderful, but no personal attacks. Otherwise we follow the standard APS code of conduct.

Here's a brief overview of our game plan for today. After this welcome we'll launch right in with our distinguished panel, who will each give a brief presentation on their work and their thoughts on ethics, and then we'll save the Q&A for the end, where we'll have a combination of interaction among the panelists and questions from the audience. Because you'll have the Slido link, you can enter a question as soon as you think of it, so it won't escape you.

With that, let me stop my sharing and we'll start with our first panelist, Aishik Ghosh. He is a researcher at UC Irvine and a fellow at LBNL working at the intersection of artificial intelligence and fundamental physics. His research has direct impact on questions of AI bias, fairness, uncertainty quantification, and the statistical evaluation of AI models. He has been a strong advocate for including discussion of AI ethics and the limitations of quantitative methods in the data science curriculum. As part of a network of AI experts, he has worked with the OECD to develop a framework to compare tools for trustworthy AI systems, and he continues to work with them on policy related to science and AI. He is concerned about democratic accessibility of technological resources and the education required to equitably share the benefits of AI in our society. So with that, let's get started. Aishik, over to you.

Hi, thank you very much. I hope you can see my screen. Before I start, one disclaimer: unfortunately I got pretty sick yesterday, as you can maybe still hear in my voice, so the delivery might be a bit slow or without as much energy, but I hope the content is still interesting to everyone. I'm very happy to be here.

First of all, I think it's great to have this kind of forum and to talk about data science education for physics students. That is incredibly valuable and very important, especially with how quantitative everything in the world is getting. We often say a physics education gives us a lot of transferable skills, and data science is part of that: playing with these kinds of problems gives us the ability to use statistical reasoning, to visualize and explore data, and it builds intuition for what kinds of measures make sense for a given problem. A pathological example: if you have a likelihood that is multimodal and you use the mean to describe it, the mean is simply not a representative quantity.
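A minimal numerical sketch of that pathological example, added here for illustration (toy numbers only, not from the talk): for a bimodal sample, the mean falls between the two modes, where almost no data lives.

import numpy as np

rng = np.random.default_rng(0)

# A bimodal "likelihood": two well-separated Gaussian components of equal weight.
samples = np.concatenate([rng.normal(-3.0, 0.5, 5000),
                          rng.normal(+3.0, 0.5, 5000)])

mean = samples.mean()

# Almost none of the probability mass sits near the mean, so quoting the mean
# alone badly misrepresents this distribution.
frac_near_mean = np.mean(np.abs(samples - mean) < 0.5)
print(f"mean = {mean:+.2f}, fraction of samples within 0.5 of the mean = {frac_near_mean:.3f}")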
These kinds of intuitions, I think, remain useful even if, after an undergraduate degree in physics, somebody moves on to another field in industry or research. More and more, it is also important to understand what machine learning is actually doing. Machine learning has applications all around us; it is not something you can avoid, and there is a lot of mystery around it. If the broader society had a slightly better understanding of what it can do and of what is probably some kind of snake oil that people are selling, it would help us make better decisions. And when you want to learn about the limitations and implications of these statistical techniques and of machine learning, that is directly related to ethics. I'll talk later about how it's actually a two-way street: it's not just that we do the data science and then think about the ethical implications for society; those ideas come back all the way to pure physics research, and I'll show you an example of that.

For context, what is machine learning? These are generally algorithms that learn patterns from training data: you give them data, they learn the patterns and correlations there, and then they use what they have learned to make predictions on new data. Typically you can use it for classification, trying to sort things into categories; for physics it might be data we want to keep versus not, while on the internet it might be filtering spam from real content. There is also regression, predicting a quantity, and generation. What is generation? If you're in love with Impressionism you certainly know The Starry Night by Van Gogh. Recently an AI has been able to "outpaint" it: to imagine, or hallucinate, how Van Gogh might have painted the rest if he had painted it bigger. The original painting is just one small piece of a much larger image that looks almost seamless, created by a machine learning model. These are fun things to look at to appreciate the capacity of machine learning these days.

Before I talk about my opinions, let me tell you who I am. I am a postdoctoral scholar who works primarily on machine learning for particle physics and astrophysics. The image on the right is my experiment; if you look at the tiny red circle, you can see the size of two human beings, so our experiment is rather big, and it sits in a ring that is even bigger. Besides that, I work with the OECD on guidelines for trustworthy AI and on the impact of AI on science, and I am concerned about AI ethics and would like to make a substantive contribution in that direction as well.
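As a minimal sketch of the learn-from-data, predict-on-new-data idea described a moment ago (an illustration with synthetic data and a stand-in model, not anything from the speaker's work):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy "keep vs. discard" classification: the model must learn a hidden rule
# from labeled training data and then apply it to data it has never seen.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy on unseen data:", round(clf.score(X_test, y_test), 3))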
So, at the Large Hadron Collider, where we have our detector, we smash particles together, and the detector acts like a camera that takes an image of this completely chaotic event, which we then try to make sense of with machine learning. At this collider we care deeply about biases and risks. On the right is an example of how applying a machine learning classifier often completely reshapes a distribution, and you don't want that: the classifier has a certain kind of bias, and we find ways of removing that bias so the sculpting you see is turned back into the smooth, falling distribution.

Not just that: we train our models on simulations, and the way we generate those simulations is fully transparent, or we try to make it transparent and auditable, so you can go back and see how a sample was produced. We spend month after month testing our machine learning models for systematic biases. Does the model perform similarly on subsets of the data? Is there something we need to understand better? Will the model generalize to new data, and when we apply it to real collisions, what are the differences between simulation and collision data? We find almost all the time that our models are overconfident; that is just how neural networks, trained the traditional way, tend to behave. We know that, and we calibrate for it. We also try to find out what the model is really learning; it's a hard problem, but we try our best to do all of this.

And this is just collision data; it doesn't affect anybody's life that directly. So when I was first invited to a multi-stakeholder conference on AI policy, I had quite a shocking experience: I was categorized as "mission critical," among the people for whom the biases of AI are considered very important, me as a physicist, alongside people working on machine learning for controlling transport systems. I agree it is critical there, because lives could be lost. But on the other side I saw people deploying machine learning models that actually affect society, and those were not categorized as critical, which I thought was really strange, because they do affect people, and they affect marginalized people disproportionately, women and people from minority communities. Just as an example, there is software out there that claims to be able to predict whether somebody is a criminal or not, and these models are not made publicly available or auditable. We in physics are actually doing a better job of that than models that directly affect society. Similarly, there was a machine learning model used to screen job applications, and it turned out it had learned historical biases against women from the data. All of this was quite surprising: these things are sometimes not tested nearly as robustly. All of this makes me believe that it is important for us, as people who study or do physics, to also care about ethics.
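A rough sketch of the calibration diagnostic the speaker mentioned a moment ago, when noting that neural networks tend to be overconfident (toy data and an off-the-shelf classifier, not code from the experiment): a reliability curve compares predicted probabilities with observed frequencies, and a held-out recalibration step can pull an overconfident score back toward the diagonal.

import numpy as np
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Stand-in for any score that may be poorly calibrated out of the box.
raw = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
# Recalibrate on held-out folds so predicted probabilities match observed rates.
calibrated = CalibratedClassifierCV(raw, method="isotonic", cv=5).fit(X_train, y_train)

for name, model in [("raw", raw), ("calibrated", calibrated)]:
    prob = model.predict_proba(X_test)[:, 1]
    frac_pos, mean_pred = calibration_curve(y_test, prob, n_bins=10)
    # For a well-calibrated model, frac_pos tracks mean_pred in every bin.
    print(f"{name:>10s}: worst-bin calibration gap = {np.abs(frac_pos - mean_pred).max():.3f}")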
If we say that a physics education provides a strong foundation for jobs in diverse fields, and I believe that is true, then our data science education should not just be about how to use some techniques for solving physics problems; it should be a holistic education that includes the ethical implications of the kinds of models you are developing and of how things might go wrong. We know that statistical techniques and metrics often find their way from science into social applications, and some of the time the assumptions that make sense when using those techniques in physics break down in those applications. There is a famous example, PredPol, which I think picks up a model originally from geology to help the police, and it clearly violates the assumption that the data is collected in a uniform manner, so it carries all kinds of biases.

I also feel that physicists bear some responsibility for the technology we build. Where there is a dual-use possibility, we know the kinds of things we have built in the past, and now we are building fast inference technology, for example, so I think it is always important to keep that in mind. We have also seen that scientific jargon is sometimes used to justify biased algorithms when no one knows any better. I talked about how our machine learning models are overconfident, but that kind of confidence has been used in the justice system to say, look, the model is 90% confident that this person is a criminal. If more domain experts were to talk about this, it would help society understand the context better. There is also the problem that sometimes the best solution isn't a technological one: by investing all of our focus and energy in trying to come up with a better prediction system using machine learning, we may miss that the solution is not better prediction but investment in social systems and in opportunities for education, and so on. And last, but not at all least, AI ethics is an active and growing area of research; it is actually very exciting, and people should consider it as a career.

Going back to the point about metrics: as social issues get "scientified," you tend to focus on the parts you can actually quantify into metrics, which means the things you cannot quantify start to get ignored. That is one danger. The other is that if you have a proxy metric that tells you roughly what you want but not exactly, it is dangerous to interpret it as the exact metric, and this is something people with physics or technical training are especially prone to doing. I will give an example from my own field to demonstrate it. In physics we usually think we have exact statistical metrics for everything, but this is one exception. I won't go into the physics, but there is a kind of bias in our experiments coming from certain calculations. Look at the diagram on the left and think of the green dots and the red dot as our performance on different categories: an unbiased technique would mean all the dots lie at the same point, and since they are far apart, the volume they span is our uncertainty, our bias. We have often thought, why don't we just use machine-learning-powered debiasing methods to reduce this? What you want from that recommendation is the image in the middle: you debias, all the points shrink toward the same place, and you reduce your bias. But what actually ends up happening is that, since the uncertainty is not truly estimated (there is no way to calculate the full volume), you estimate it by the distance between Gen 1 and Gen 2 alone, and when you try to shrink that proxy metric you shrink it in only one dimension. You have shrunk your estimate of the bias, not the true bias. This is a clear case, within physics, of the kind of danger involved, and of course in social applications, unlike physics, it is much harder to demonstrate and prove these situations, but they definitely exist.
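To make that proxy-metric danger concrete, here is a small, purely illustrative simulation (not the actual analysis): the "true bias" is the spread of several generators' predictions, but in practice we can only measure the distance between two of them, and an optimization rewarded for shrinking that proxy can leave the true spread essentially untouched.

import numpy as np

rng = np.random.default_rng(1)

# Predictions of the same quantity from five "generators" in a 2-D toy space.
points = rng.normal(0.0, 1.0, size=(5, 2))

def true_bias(p):
    # Full spread of all predictions: the quantity we actually care about.
    return np.linalg.norm(p - p.mean(axis=0), axis=1).max()

def proxy_bias(p):
    # What we can measure in practice: the distance between generators 0 and 1.
    return np.linalg.norm(p[0] - p[1])

print(f"before: proxy = {proxy_bias(points):.2f}, true = {true_bias(points):.2f}")

# A "de-biasing" step rewarded only for shrinking the proxy: pull generator 1
# onto generator 0 and leave every other generator where it was.
debiased = points.copy()
debiased[1] = debiased[0]
print(f"after:  proxy = {proxy_bias(debiased):.2f}, true = {true_bias(debiased):.2f}")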
Now, a real case in society. The Netherlands government, back in 2013, deployed a model to detect welfare fraud by the people receiving benefits. Before this was deployed, cases were looked at individually by humans, so if you felt you had been wrongly judged as fraudulent there was a way to bring it up; once the model was deployed, there was no recourse. It turned out the model had wrongly accused 30,000 parents of welfare fraud, partly because of inaccurate data, and it also learned strange things: it was much more likely to flag someone for fraud if their nationality was linked to Turkey, Morocco, or Eastern Europe. It took six full years to get rid of this system, and for those six years it harmed the most vulnerable people in society. The other thing to think about is: even if these kinds of prediction models were accurate, is that enough, or do people deserve a right to an explanation? This is something the EU and European countries are increasingly engaging with; France in particular has some laws about it, and I think people will talk about it more as machine learning is deployed in these kinds of situations.

Next I'll touch on misinformation. I'm sure many of us have seen the large language models that have basically taken over Twitter. They talk like us, they can chat with us, make jokes, write poetry, and people talk about how this could be a cheap way of getting yourself an AI therapist, or how it can teach you programming, and so on. But there are dangers here, and you can clearly see two fundamental problems. First, the data it is trained on comes from the internet, and we know the kind of content that is on the internet, so it is going to pick up all of those biases from Reddit and elsewhere. Second, the model itself is not trained to be factually correct; it is trained to sound plausible. That is how it is built, so it quite often produces confident-sounding but inaccurate statements. In the sciences, or in computer science, it is easier to quantify these things, which is why I'm showing a coding example on the right. ChatGPT was released a few days or weeks ago, and it has already had to be banned on Stack Overflow, one of the main websites where people suggest solutions to somebody else's programming problems.
People tried to use this AI bot to propose solutions, and what they found was that it proposed solutions so confidently that humans thought they were good; only experts were able to notice the mistakes. So they have banned these kinds of submissions. And of course, if you try to use the same bot for social applications, you will see all kinds of problems, and they will be even harder to quantify.

So, to conclude, and hopefully I'm not taking too much time: discussions of the importance of ethics in data science are, as you can see, really a two-way street. Ideas that come from ethics are useful, and it is easier to demonstrate these things on physics or scientific data, which is why my talk focused on that, but you will see much more discussion of the actual impact on society in the upcoming talks by the other speakers. I would just like to end with this: there was a philosopher in the early 1900s, Rabindranath Tagore, who back then talked about the machine, which has no flexibility toward individuals; it dehumanizes people and forms a rigid system that people must follow, and it is justified by science. He was talking about how we were automating things and making them rigid, and it is striking that he said this in 1917, because now we are doing it even more with machine learning. There is a rush to automate things with technological solutions, and this is going to entrench systemic problems into unquestioned and unquestionable machines that offer no explainability, no flexibility, and no human in the loop, unless we do something about it. Some people are doing something about it, and I think the broader society also needs to be aware of which direction we need to go. Thank you very much.

All right, thank you for an excellent presentation; I'm sure others enjoyed it as well. Just as a reminder to everyone, please put your questions in the Slido and we will get to them, don't worry. For now we'll move on to our next panelist, Savannah Thais. She is a research scientist at Columbia University's Data Science Institute, where she focuses on machine learning. She is interested in complex system modeling and in understanding what types of information are measurable or modelable, and what impacts designing and performing measurements have on systems and society. This work is informed by her background in high-energy particle physics and incorporates traditional scientific experimental design components such as uncertainty quantification, experimental blinding, and decorrelation and debiasing methods. Her recent work has focused on geometric deep learning methods to incorporate physics-based inductive biases into ML models, regulation of emerging technology, social determinants of health, and community education. She is the founder and research director of Community Insight and Impact, a non-profit that focuses on data-driven community needs assessments for vulnerable populations and effective resource allocation. She is passionate about the impacts of science and technology on society and is a strong advocate for improving access to scientific education and literacy, community-centered technology development,
and equitable data practices. She was the ML knowledge convener for the CMS experiment from 2020 to 2022, currently serves on the executive board of Women in Machine Learning and on the executive committee of the APS Group on Data Science, and is also a founding editor of the Springer AI and Ethics journal. She received her PhD from Yale in 2019 and was a postdoc at Princeton until very recently. Savannah, good to see you, and we are looking forward to hearing what you have to tell us. And you're muted, Savannah.

Okay, sorry, let me make my slides big again. Does it look right, or do I need to swap the display? (We're still seeing your presenter view.) Okay, perfect, awesome. Right, William already said a lot of this, so I'll try to go over it quickly, but I'm a research scientist, a research faculty member, in the Data Science Institute at Columbia. I did a math and physics undergrad, then my PhD in physics on ATLAS, the same experiment Aishik just talked about, and then a postdoc focusing on physics-informed machine learning as well as AI ethics. My current work, as was mentioned, goes in several directions: some physics-informed machine learning, some interpretability work, complex system modeling looking mainly at public health and policy, and then contextualizing machine learning systems and research, advocating for a really holistic approach to development that is proactive about a lot of the issues Aishik mentioned and that go into building an ethical socio-technical system. I'm happy to talk later, in the question part, about how I made the transition from very pure physics into the AI ethics space, if people are interested.

In this short talk I'm going to introduce one of the projects I've been working on that shows how we can take skills we build in physics and bring them to other types of model building that are more societally focused. Then I'm going to talk about what can happen when we're not careful about how we do that, and finally I'll talk quickly about some things I think are important for physicists in particular to know and ways we can try to avoid some of these mistakes, although it's certainly not exhaustive, because this is fifteen minutes. So, like Liz mentioned, a couple of years ago I founded this non-profit research institute. It looks at several different things, but our first project was focused on trying to understand different resource needs in communities and what sorts of risks, associated with COVID in particular (although we've expanded well beyond that now), different communities were especially vulnerable to, so we could understand which communities needed which types of interventions. We first developed these metrics informed by a lot of domain knowledge, working with public health researchers, urban planning researchers, sociology researchers, and so on, and then, as I'll show, we did a bunch of model validation and impact assessment. The reason I bring this up as an example is that it is very similar in some ways to what we do in particle physics:
we are trying to make valid, and I want to put the emphasis on valid, statistical inferences about things that we cannot directly observe or measure. Like Aishik was mentioning with the ATLAS detector, we go through a lot of layers of software and statistical processing to probe what is actually happening in those proton-proton collisions, and when we look at things like social systems we also go through levels of abstraction like that. So it's really important that we take some of these scientific principles with us: incorporating domain knowledge and diverse types of experts into model building, and thinking carefully about data collection and cleaning and about model validation.

We've written two papers about these particular metrics, in which we were looking both to validate them and to uncover what they could mean for public policy aimed at effectively addressing some of the risks we found. This is a fairly preliminary approach, and we're working on more nuanced projects now, but these are the first things we've published. For the validation studies, what we did was define proxy outcomes, again informed by domain knowledge, for each of our metrics; we basically asked, what statistical measure can we use that is closest to, say, economic risk under a public health crisis? Then you can build a model to try to predict that proxy outcome, which is commonly done in computational social science work, and you can look at things like feature importance and do the same kind of model assessment we do in physics: compare the expert-driven variables to a wider set of variables, try to understand what is driving the model in certain directions, what has the most predictive power, and what the uncertainty is on all of that, which I won't get into too much, but it is very difficult in public health and social systems modeling.

We also did some unsupervised studies where we clustered communities by their similarities on the underlying variables, our social determinants of health, and we developed a longitudinal public health history dataset and looked at how those variables fluctuated over time, to try to understand how communities could build resilience to a shock like the pandemic. We found some really interesting things, and I won't spend too much time on them since this is the physics community, but here is my favorite: we found, and other people have looked at this as well, that the percentage of kids enrolled in free and reduced lunch is a hugely important metric for identifying at-risk people who are missed by most of the ways we traditionally measure poverty in these social systems modeling approaches. We looked at different ways of understanding the causal graph for why that might be the case, what information this variable captures that is otherwise missed, and looked for policy-driven explanations for it.
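A minimal sketch of that validation workflow (synthetic data and made-up column names, not CII's actual metrics or code): fit a model to predict a proxy outcome from community-level variables, then use permutation importance to see which variables carry the predictive power.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical community-level variables (placeholders, not the real metrics).
X = pd.DataFrame({
    "pct_free_reduced_lunch": rng.uniform(0, 1, n),
    "pct_part_time_workers":  rng.uniform(0, 1, n),
    "median_income_k":        rng.normal(55, 15, n),
    "pct_uninsured":          rng.uniform(0, 0.3, n),
})
# Synthetic proxy outcome standing in for, e.g., economic risk under a shock.
y = (1.5 * X["pct_free_reduced_lunch"] + 0.8 * X["pct_uninsured"]
     - 0.01 * X["median_income_k"] + rng.normal(0, 0.2, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Which variables drive the prediction of the proxy outcome?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>24s}: {score:.3f}")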
We also found some other policy suggestions for how communities can build crisis resilience through different policy shifts, like investing in community college infrastructure, improving protections for part-time workers, and increasing community infrastructure more generally. I want to emphasize, before I switch to the next topic, that the approach I take to all of this work is that our models can inform things, but they are never modeling the true underlying system: we have uncertainty, approximations, and proxies throughout the model building. So a model can perhaps give us insights that we then discuss with experts, but I do not work on things that fully automate any kind of decision or prediction, and that is because of a lot of the reasons I'm going to talk about now, namely that all of these models we build do not exist in isolation, and they can often make mistakes.

In my work I focus on what I call a holistic approach to model building, across six different areas of AI ethics: data collection and storage practices; how we actually design the tasks and learning incentives that we build models for; bias and fairness, and model robustness, which I won't really talk about here because Aishik did a great job of covering them; equity in system deployment and outcomes; and more downstream and diffuse impacts. That is how I conceptualize it to myself, and I think that research, regulation, oversight, and advocacy should touch on all of these topics; there are ways we can start to improve in all of these areas. So I'm going to show some examples of what can go wrong when we don't think proactively and holistically about those aspects of AI ethics.

Looking first at data collection, storage, and sharing: this is a short list of examples of things that have gone very wrong in this space. For instance, there's a great series from MIT Technology Review detailing what they call AI colonialism, describing how the data labeling companies used by many tech companies in the US and other Western countries to build the large foundational models we're all familiar with can exploit workers, and political strife in the Global South, to maximize profit for this very essential part of the machine learning pipeline; yet it's not what we traditionally consider research work, so it isn't put on the same level. We can think about data sharing and collection: when you collect data for one purpose, should you be able to use it for another? There was the very shocking recent example of the Crisis Text Line, which took text conversations from people in crisis and shared that anonymized data with a for-profit spin-off it had started, designed to improve customer-service chatbots for other for-profit companies. So we can ask what one may do with data once one has it, and whether there should be regulations or constraints on that. Data brokerages are another huge issue: they pool aggregated sources of data about people, and those data are sold fairly indiscriminately to
different organizations, which may use them for quite problematic surveillance purposes. We have companies that require drivers and workers to submit to various kinds of biometric data tracking and surveillance, in part to try to prevent things like unionization and organizing. And we even have direct partnerships between tech companies and law enforcement: for instance, Ring distributed its doorbell cameras by working with police departments and precincts, incentivizing getting them placed in neighborhoods, and there have been recent reports that Ring will share recordings from those cameras directly with law enforcement without the owner's consent. So there are lots of concerns about data collection, sharing, and repurposing.

We can also think about what we are asking our models to do. There has been a lot of work on how news feed and information curation algorithms are designed for attention and user retention, which can lead to downstream unintended behavior that a sociology or anthropology researcher could perhaps have predicted, but we don't always talk to those people when we're model building; so we get things like the viral spread of misinformation, radicalization pipelines, and information silos. We also have researchers and others pursuing learning goals that are not grounded in any kind of science, like predicting sexual orientation from people's faces or voices, or predicting trustworthiness from a video of someone. Then we can think about how the systems we build are used and deployed: who is subjected to them, who are mistakes made on, and who gets the benefits. For instance, Rite Aid deployed facial recognition software, in an effort to prevent shoplifting, only in low-income areas. You might say, well, maybe low-income areas have higher crime rates, but we still face questions like: who gets privacy, is it a right we should all share equally or an economic privilege, and who is subjected to surveillance algorithms? I want to make sure I'm not taking up too much time, so I'll go quickly.

Then we have more diffuse issues. As Aishik mentioned, a focus on technological solutionism can pull resources away from other types of recourse or system design: if we focus on traffic surveillance to optimize routing and signaling so we can move more cars more quickly through our current road systems, are we disincentivizing investment in public transit or other kinds of solutions? There are other examples here that I'm happy to talk more about, of how algorithmic tools have completely transformed industries, created new classes of exploited workers, and reshaped how cities work and who has access to housing, all consequences that were maybe not the initial point of the model building but that happen as a direct result. Then, quickly, what I think we can do from within physics to address some of these issues. This is a set of guidelines I put together for some of my students: as you're working on projects, I think it's really important to think about the context of what you are doing.
You can start doing this even in HEP research, which we might think is isolated from these concerns but, as Aishik pointed out really well, is not always actually isolated, even though we're just working on scientific modeling. You can think about documentation and reproducibility for your datasets and your models; you can think about whether your work is helping us learn how machine learning works on a fundamental level, so that we can build better debiasing or auditing tools, and about what technology transfer might happen from what you're working on. If you're working on industry collaborations or side projects, the concerns get even more complex: think about all of the data questions above; think about whether there are more transparent, or even non-technical, solutions for the problem you're looking at; where could bias come in; what guarantees can you give on model performance; how are the things you're developing going to be deployed; and how will the benefits and harms be distributed. These are principled questions, so once you've gone through this list you really have to ask whether the answers align with your personal code of ethics and with how you want technology to influence the world.

I think I've maybe talked for too long already, so I'll skip these last points, but treating machine learning and data scientifically is super important. As I've mentioned, we do model building in physics in a very principled way, and we can bring some of those principles over to other types of data science and machine learning model building; we can perhaps also come to understand more about what machine learning is doing in the first place, in order to build safer, more transparent practices in the future. And I want to end on the idea that these are not just technical and mathematical problems. Although we are technical, scientific people, there are always other things we can do besides the technical approaches, which are nonetheless very important: things like literacy building, advocacy, leveraging our institutional and collective power with our communities to work against these power structures, and contributing to legislation design; we are experts in these areas, so we can share that expertise with legislators to develop more meaningful regulations. So thank you; this is a slide I always end with when I give this talk, and I think it's very true: these are really big responsibilities, and we should take that to heart. Thank you.

Thank you, Savannah, for a great talk. Just as a reminder to everyone, please post your questions in the Slido; we'll keep posting the link. With that, we will move on to our last speaker, but just before introducing him I'll note that this session will actually continue until 1:30,
so don't worry, you will have a chance to interact with the speakers and get your questions answered; don't be too concerned about the time, we have plenty left, about 45 minutes. With that, we'll move on to our last speaker, Ian René Solano-Kamaiko. Ian is a PhD student in information science at Cornell University. His research is focused on building and evaluating computing technologies that aim to improve the lives of underserved and marginalized populations; in particular, he is interested in community and in-home healthcare, automation and the future of work, and climate resilience. Prior to attending Cornell he received his MS in computer science from NYU, and he is a member of the Center for Responsible AI. So, Ian, we're looking forward to what you have to say, and again, everyone, right after Ian's presentation we'll launch right into the panel discussion and address your questions.

Awesome, thank you so much for that introduction. Hopefully you can all see my screen and also hear me. Great. So yes, as mentioned, I'm Ian, a PhD student at Cornell Tech, which is here in New York City on Roosevelt Island; not a lot of people know that Cornell has a campus here. It's in the background of that picture: a pretty campus, a small campus, kind of a weird place. I'm also a graduate fellow at the Center for Responsible AI at NYU. Before I get going, I want to say that, given the nature of this talk, I'm going to start with a land acknowledgment; I think it's important for this kind of conversation around ethical data science. I'd like to acknowledge that I work at Cornell Tech, which occupies part of the unceded homeland of the Lenape people, and that Cornell University is located on the traditional homelands of the Cayuga Nation. I want to recognize the long-standing significance of these lands for these nations, past and present. It is important that we acknowledge the forceful dispossession of both the Lenape and Cayuga people and honor them as the original inhabitants of these lands, of which we are uninvited settlers. As we talk about ethics within data science, we'll see throughout this conversation that technology is rooted in people and in the decisions we make, both as a society and as individuals, so I feel it's important to acknowledge our past injustices in the United States as well as their systemic nature. I'm going to move my screen over; hopefully that doesn't ruin the screen share.

Anyway, a little bit about me: as mentioned, I'm a PhD student at Cornell, and I work with Dr. Nicola Dell and Dr. Aditya Vashistha. I'll skip over the intro since we've already covered it, but I want to give a quick shout-out to Dr. Julia Stoyanovich and Falaah Arif Khan; they make some amazing comics and have built amazing coursework on responsible AI and ethical data science, and I'd encourage you to check those out. Lastly, before getting into academia I worked for a number of years as a software engineer, so this AI/ML area is actually fairly new to me, but I have a long history of building software and systems, and I take that approach to a lot of this work.

Right, so this is my "I'm going to convince you" slide; I'm not a physicist, and I didn't major in physics. As
mentioned, my background is in computer science. You see the slide, "AI is the future and the future is here," like every internet article. Hopefully this convinces you that the material is relevant: physics majors who don't go into academia, at least from what I've seen in software, are being employed as data scientists and software engineers, so it matters for that reason alone, and from the outside it also seems that a lot of these data science methods are being used within physics itself. That's my spiel to convince you this is relevant.

A quick overview, since we've gone over this a few times now: what is data science, and why now? Data science is impacting, and has the potential to impact, every facet of our lives. In particular, we're seeing unprecedented data collection capabilities, increasing computational power and access, and a maturing field with broad societal acceptance, and together these are driving the trends happening right now, touching everything from targeted advertisements to life-saving medicine. Here's a little comic I like, from Falaah and Julia, where at the bottom it asks, "where are all the women and people of color in this tech utopia?"

That leads me to: what could go wrong? We've talked about this already, so let's quickly cover some examples of bias in algorithms. This first one was alluded to earlier, and it tends to be the canonical example of bias in ML. It's an article by ProPublica on a commercial software tool called COMPAS, which automatically predicts some categories of future crime to assist in bail and sentencing decisions, and essentially what ProPublica found was that Black defendants were almost twice as likely as white defendants to be labeled high risk but not actually reoffend. This is one of the canonical examples of bias in a deployed system in the real world; if I'm not mistaken, this was Broward County, Florida. Another example is automated hiring, which I'll talk more about later since it finds its way into my own work: Amazon scrapped an AI recruiting tool because it showed bias against women. Technology companies tend to be much more male dominated, especially in technical roles, and the Amazon system essentially taught itself that male candidates were preferable: it penalized resumes that included the word "women's," as in "women's chess club" or women's organizations, and graduates of two women's colleges were also marked down.

So what is ethical data science? Savannah covered a lot of this. Obviously there's a lot going on here, including data profiling and cleaning, integration, data protection and privacy, and legal frameworks, but as I said I'll focus mostly on fairness, accountability, and transparency, and in particular on explainable AI.
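A hedged sketch of the kind of disparity check behind the ProPublica finding mentioned above (entirely made-up numbers, not the actual COMPAS data): compare false positive rates, people labeled high risk who did not in fact reoffend, across groups.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10000

# Toy audit table where the model labels group "B" high risk more often even
# though the underlying reoffense rate is the same for both groups.
group = rng.choice(["A", "B"], size=n)
reoffended = rng.integers(0, 2, size=n)
p_high_risk = np.where(group == "B", 0.6, 0.3)
high_risk = (rng.uniform(size=n) < p_high_risk).astype(int)
df = pd.DataFrame({"group": group, "high_risk": high_risk, "reoffended": reoffended})

# False positive rate: among people who did NOT reoffend, how many were labeled high risk?
for g, sub in df.groupby("group"):
    negatives = sub[sub["reoffended"] == 0]
    fpr = (negatives["high_risk"] == 1).mean()
    print(f"group {g}: false positive rate = {fpr:.3f}")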
I like this definition from IBM; it's not perfect, but it suits the purpose of this presentation: explainable AI is a set of processes and methods that allow human users to comprehend and trust the results and output created by machine learning algorithms; it is used to describe an AI model, its expected impact, and its potential biases, and it helps to characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision making. That seems like a decent overview of where to start.

In particular, I think about explainable AI in terms of a holistic approach, and I want to highlight three critical junctures that were put forth quite a while ago now, in 1996, by Batya Friedman and Helen Nissenbaum: pre-existing, technical, and emergent bias. Pre-existing bias exists independently of, and usually prior to, the system; it has its roots in society. We live in a systemically unjust world, so the data that is collected may be biased in itself; that is what pre-existing bias captures. Technical bias is bias introduced or exacerbated by the technical properties of the system, its technical design, for instance statistical bias in the model. Emergent bias arises only in a context of use: the system produces some output, that output influences society, and the resulting societal changes feed back into the system and influence it further. I think these three junctures are important to keep in mind when we think about explainable AI.

Next, a little about explainability and transparency within explainable AI. There are a lot of different questions we are ultimately asking. In this diagram, user data is fed, often into a black-box model, and some decision is output. When we explore explainable AI we ask: what are we explaining, who are we explaining it to, and why? We also ask questions like: how well does the system work; why was a person misdiagnosed, or not offered a discount, or denied credit; and are these decisions discriminatory, or are they illegal? On the left-hand side is a nice chart of these different types of questions and of the explainable AI methods applicable to interrogating them. Obviously in a fifteen-minute talk we won't go over it all, but it shows that the world of explainable AI is much larger than what I'm covering here.

And, as I alluded to, why is this important? First, because we have laws in the United States. They don't necessarily regulate all of AI, but they do regulate certain facets: we have the Fair Housing Act, the Equal Credit Opportunity Act, the Civil Rights Act, which say you cannot have disparate impact on, or disparate treatment of, certain groups or individuals based on, for example, race, in areas like housing, credit, or hiring. And we also have legislation
coming down the pike, both in North America and in the EU. The EU AI Act, for example, attempts to apply transparency obligations proportionally within a predefined risk categorization framework, so it relies directly on explainable AI and transparency techniques to enforce that type of regulation, and this approach is becoming more and more popular; the Biden administration in May 2021 issued an executive order that includes informing consumers about the security capabilities of IoT devices and consumer products. A lot of this legislation and regulation is coming, especially in Europe, and hopefully we'll see more here in the United States; some of my work is advocating for that, and we'll look at a bit of it later.

One thing I did want to touch on, and I know this is a physics crowd: in terms of unpacking and opening up these black-box models, there are methods for doing so, and the two most popular ones I want to highlight are SHAP and SAGE. They do slightly different things: SHAP looks at local feature importance, an individual example run through the model, and tries to explain and show you the feature importance values, while SAGE looks at global feature importance, considering the entire dataset and the model. As it says on the left-hand side, SHAP answers the question "how much does each feature contribute to an individual prediction?" and SAGE answers the question "how much does the overall model depend on each feature?" I wanted to highlight those as some of the tools within explainable AI; there are many more, including counterfactuals and all sorts of other methods, as you can see in the previous chart.

Now a little about my research and what I've been up to. One thing we've been working on a lot, especially at NYU and the Center for Responsible AI, is nutritional labels. This slide highlights nutritional labels for recruiting and automated decision systems, but we've been looking at nutritional labels in algorithmic hiring more broadly as well; it's a heuristic that has been getting tossed around in the community for a few years now. In particular, we're trying to think of nutritional labels as a method of public disclosure that ultimately influences policy change. Here is one example where we're looking at nutritional labels for LinkedIn Recruiter: recruiters are given a ranked list, and we try to provide some introspection into what is going on; recruiters already know they can't fully rely on the system, but they don't really understand why, and that is something we've been exploring. Nutritional labels obviously come from the can of beans at the grocery store, and the reason this paradigm is being put forward is that such labels are short, simple, and easy to comprehend; they are actionable, you can draw actionable insights from them; and they are standardized. That's why we've been advocating for nutritional labels.
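As a concrete illustration of the SHAP-style explanations mentioned above (a sketch with a placeholder model and dataset, using the open-source shap package; this is not the LinkedIn Recruiter work, and the "global" summary below is only SAGE-like in spirit, not the actual SAGE algorithm):

import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=2000, n_features=6, noise=0.1, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)

# Local explanation: per-feature contributions to a single prediction.
local = explainer.shap_values(X[:1])
print("local attributions for one example:", np.round(local, 2))

# Global summary: average magnitude of each feature's contribution over many
# examples, answering roughly "how much does the model depend on each feature?"
global_imp = np.abs(explainer.shap_values(X[:500])).mean(axis=0)
print("mean |attribution| per feature:", np.round(global_imp, 2))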
Another piece of work is a little more complicated to explain in this time frame. There's a paradigm that has been put forth, especially within computer science and machine learning, that the more complex a model is, the more accurate it is, but the less explainable. We're interested in high-risk decisions in public policy, and in this research we explored that paradigm. We hear it a lot, and there have been papers, in particular one by Cynthia Rudin, essentially saying: stop using complex models, use simple interpretable models for high-risk decisions in public policy. We took that and wanted to explore the trade-off further, and we essentially found that explainability is not directly related to whether a model is black-box or interpretable; it's more nuanced than we previously thought. Using both objectively measurable criteria and subjective criteria, we found a few influencing factors. One is that there are weaknesses in the intrinsic explainability of interpretable models. By interpretable models we mean things like decision trees and linear regression: a low-depth decision tree makes sense to a reader, but a really deep decision tree becomes complex and is no longer inherently interpretable, and it also matters, for instance, what the root node of the tree is. We also found that more information about a model sometimes confuses users. Those are some of the findings from that work.

Lastly, wrapping up, since I'm low on time: this is some new work I've been doing at Cornell. We know AI and ML are coming to the Global South; a lot of companies are continuing to invest time, energy, and money into building tools for the Global South. It's not quite there yet, but we know it's coming, and we're fairly concerned about it. Our motivating question was: how can XAI tools help support community healthcare workers in the Global South? In going through this whole narrative and thinking about my past work, I was really interested in approaching explainable AI more holistically, all the way through the entire system, and that led us to some ideas around interactive visualizations. We think that by integrating interactive visual affordances into a risk-prediction mobile application, community healthcare workers are better able to understand what the AI does and how to operate it. This was motivated by the fact that these are very low-resource communities, often people with limited formal education, so the idea that you can hand them a SHAP diagram and expect them to understand it is just not going to happen, and even trying to communicate what confidence or probability means is not realistic. So we are thinking about how to treat explainability much more holistically, and about what agency looks like for the individual and for the community healthcare worker as experts in their own right, because these folks are experts and tend not to be regarded as such.
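A small sketch of the intrinsic-interpretability point made above (an illustration on synthetic data, not from the study): a shallow decision tree can be printed and read directly, but the same "interpretable" model class quickly stops being human-readable as it gets deeper.

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=8, random_state=0)
feature_names = [f"x{i}" for i in range(8)]  # placeholder feature names

for depth in (2, 12):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    rules = export_text(tree, feature_names=feature_names)
    # A depth-2 tree prints a handful of readable rules; a depth-12 tree prints
    # hundreds of lines, so interpretable in principle is not interpretable in practice.
    print(f"depth {depth}: {len(rules.splitlines())} printed rule lines")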
I'll end there. There are a couple of takeaways I want to leave you with. One: we need regulation, point blank; regulation hasn't kept up, and there is not enough of it currently. Two: data science ethics courses should be required at universities. This is starting to happen; it happened at NYU while I was there, and it should happen across the country and across the world, and we should increase data science and technical education more broadly. Savannah pointed to this as well: as technical folks we can do our part by helping, volunteering, and participating in some of these educational initiatives. And this ties in: as we push for better regulatory policy, technology professionals have an obligation to hold these systems accountable and the p
