The starting point of our book is the observation that although machine learning has been around for a long time, we are now starting to use it for increasingly consequential tasks, as many people here at Google will know. For example, as was in the news recently, lending decisions, like whether you will get a high or a low credit limit when you apply for an Apple credit card, are now often made without any human intervention at all, just by an algorithm. HR departments make hiring and compensation decisions informed by machine learning algorithms, and in a number of states, including Pennsylvania, bail and parole decisions are informed by trained models.

So it's natural, when you start using machine learning to make important decisions about people, to start worrying that maybe those algorithms will violate some of the social norms we would expect of human decision makers making those decisions, and indeed there's lots of evidence that this happens. We see news articles every week, and there are a number of very good popular books that have pointed out the problem; here are three books that we admire. But what these books do is very much point out the problem. Where they say less is about what you can do to fix these problems, what the solutions are. They talk about the need for regulation, for example, which we by and large agree with, but they don't talk about what you would do, technically, to make the algorithms better behaved. That's the goal of our book: to explain, in plain English, the emerging science of what we describe as embedding ethical norms into algorithms. There's now a community of hundreds of people, including us, working on these problems.

One comment we often get, and in fact something one of our early reviewers asked us, is whether the very premise of the book, the title "The Ethical Algorithm," makes any sense, because algorithms are, in the end, tools. They're human artifacts, like hammers. And algorithms, like hammers, can be used to do bad things; they can be used as instruments of violence. I could whack you on the hand with a hammer, but if I did that, we wouldn't think of it as some moral failing of the hammer; you would attribute that action directly to me. And that's basically how we regulate, and write law about, violence induced by hammers: if I whack you on the hand with a hammer, I'm very likely going to have to go to jail, and I'll take that into account when I'm deciding whether I want to do it.

But algorithms (and when we say algorithms we really mean models, the trained models that are the output of a machine learning pipeline) are different. They're different in a number of ways, but one that's salient for this discussion is that it's very difficult to predict the outcome, in every situation, of an algorithm you've trained using the principles of machine learning. Many of you are aware of what the machine learning pipeline looks like, but let's briefly recount it. You start with some data set. A data set these days might consist of records of millions of people, and you might have hundreds of thousands of features for each person.
In the best case (if you're lucky; you're not always in the best case), you understand this data set, as the data scientist developing the algorithm, in the sense that maybe you know how the data was gathered and maybe you know what all of the features represent. But it's hard to say that you really understand all of the information contained in such a massive object. Then you use this data set to formulate some usually narrow objective function, some proxy for classification error or maybe profit, and you use some tool, something like stochastic gradient descent, to search over some enormous class of models to find the one that is best, or at least very good, at maximizing your narrow objective function. And then you get out some model. If you're training a deep neural network, this might consist of millions of parameters.
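A minimal sketch of that pipeline, with synthetic data and a scikit-learn-style workflow (all of the names and numbers here are illustrative, not taken from the talk or the book):

```python
# Minimal sketch of the standard pipeline: a data set, a narrow proxy objective
# (classification error via a convex surrogate loss), and stochastic gradient
# descent searching over a class of models. Data are synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 50))              # 10,000 records, 50 features each
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in label, e.g. "repaid the loan"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SGDClassifier(random_state=0).fit(X_train, y_train)

# All we can easily say about the result is that it does well on the stated
# objective; nothing here constrains privacy, fairness, or any other social norm.
print("test accuracy:", model.score(X_test, y_test))
```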
It's hard to say anything at all about this model except that it's probably very good as measured by the objective function you specified. So the problem is that when and if this model goes on to inflict some harm on some person or some group of people, it's typically not the case that this harm was the result of some mal-intent of the software engineer or scientist sitting behind the scenes building the algorithm. If that were the case, the situation would be much simpler: existing regulatory tools could be used to weed out bad actors. The problem is that the harms we see from algorithms are typically the unanticipated and unintended side effects of optimization over large classes of models, of the very basic premise of machine learning. So if we're going to prevent this bad behavior by learned algorithms, we need to figure out how to embed our social values, the actions we want the algorithms not to exhibit, into the design process itself.

And that's hard, because words like privacy, fairness, and accountability are big and vague; they mean many things. But it's important to be precise about definitions when you say privacy or fairness, and in particular when you say you want an algorithm to be private or fair. It's not enough to speak about these things as a philosopher might, because at a very practical level, if you're going to embed these things as constraints into some optimization, you need to be mathematically precise. It's also an enlightening exercise, even apart from the need to embed mathematical constraints when designing algorithms, to think about what you really mean: what are the different kinds of privacy, what are the different kinds of fairness? The very act of trying to be precise about these things is illuminating and can reveal trade-offs that maybe weren't immediately evident.

So we've written these words in decreasing degrees of greyscale, starting with privacy and ending with morality (you can't even see it, but Michael assures me he wrote "singularity" in white at the bottom there), in proportion, basically, to how much progress we've made trying to understand these things at a mathematically precise level, and how much progress we've made thinking about the consequences of embedding constraints representing these notions into algorithms. That's not to say we've solved privacy, or that we have precise ways of thinking about all of the many different kinds of privacy, but as we'll talk about in a moment, we've made some progress. Fairness isn't there yet, but it's on a good path. For the other things, accountability, interpretability, and even more so as you go further down the list, people are working on them and they're important, but we feel we don't yet have the right definitions, which are a necessary prerequisite to making the kind of scientific progress we talk about in the book. Okay, thanks.
So what we want to do with most of the remaining time is go through two quick vignettes, one on privacy and one on fairness, which, as per Aaron's last slide, are the areas of work that we feel are, in relative terms, the most mature for the type of scientific or algorithmic research we're discussing.
As Aaron said, sometimes the very exercise of having to think precisely about the definitions of these social norms is itself greatly beneficial, and it can reveal not only trade-offs you weren't aware of but flaws in your intuitions about these ideas, if you had only talked about them at the level that, say, a moral philosopher might. Privacy is a good case study. We argue in the book that there is a definition of privacy, at least for the type of privacy I'm going to talk about here, that is the right definition, namely differential privacy. But it was preceded by definitions that we and others feel are fundamentally flawed, and unfortunately those fundamentally flawed concepts are the ones almost exclusively in force in practice these days. If you look at an end-user license agreement or the privacy policy of a large company, it will normally refer, if it's precise at all, to various forms of anonymization, or to removing PII, personally identifiable information.

To give you a sense of why we think those definitions are fundamentally flawed, I have here a toy example in which there are two different databases of medical records from two different hospitals. Due to privacy concerns, some anonymization has been done, and the anonymization largely consists of operations like redaction, just entirely removing certain columns from the database, or coarsening, in which you fuzz up the information, and the hope is that somehow, when you're done, you have some sort of privacy guarantee.

In this top database, somebody has gone in and decided: let's entirely redact the name; rather than giving precise ages, let's group them into decades, so are you 10 to 20, 20 to 30, and so on; let's give some information about zip code but redact the last two digits; and let's keep some of the medical information, like whether you're a smoker or not (we'll come back to smoking in a minute) and the particular diagnosis you were given during your visit. Of course, in reality these databases would be much, much larger; for a large hospital like the University of Pennsylvania's there might be tens of thousands of records.
But the conceptual flaw can already be demonstrated in this toy example. Suppose you have some additional information aside from this database: you have a neighbor named Rebecca who you happen to know is female and 57 years old, and you know this because she's your neighbor and you're friends with her. With that side information, if you also manage to get hold of this allegedly anonymized database, then there are already exactly two records in it that match your knowledge about Rebecca, the two highlighted in red, and notice that from this side information alone you can infer that your neighbor either is HIV positive or has colitis. She might reasonably consider that by itself to be a violation of her privacy.

Now, in a real, large database, and in a real application of these methods, you might go for a criterion like what's called k-anonymity. k-anonymity basically asks that you do enough of this coarsening and redaction that any row of the remaining, allegedly anonymized database has at least k matches, at least k identical records. So rather than this two-anonymous database, in general you might hope to get more privacy by asking for one-hundred-anonymity rather than two-anonymity. The real problem comes when your neighbor Rebecca happens to also have had a visit to a second hospital, whose database is at the bottom, and this hospital, also in an effort to provide some kind of privacy, has done the same kind of redaction and coarsening in its database, and now three records there match Rebecca. The real problem, of course, is the join of these two databases, which is sometimes called linkage analysis, or triangulation, or various other names: when I take the intersection of the top red records and the bottom red records, I now know uniquely that Rebecca is HIV positive.

You might try to wish these problems away with fancier definitions or by appealing to scale, but the real problem with these types of definitions is that they pretend the data set in front of you is the only data that is ever going to exist, now or forever, in the world. They don't anticipate attacks on privacy that come from triangulation of multiple databases, or from other information you might have about people, even publicly declared information they weren't particularly trying to hide. Many of you might have seen the mainstream news frenzy over articles that I think surprised very few people in this room. One, about a month ago, basically said: here are eighteen apparently innocuous attributes that, if I know them about you, serve as a fingerprint for you among all US citizens. I'm not sure exactly what they were, but you can imagine: tell me what kind of car you drive, your zip code, what color your eyes are, whether you have dogs or cats. Each of these things is, of course, exponentially cutting away the remaining possibilities, and it doesn't take long for that sort of innocuous information to undo the privacy promises of these anonymity methods.
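To make the linkage attack concrete, here is a toy version in code; the records and the side information are hypothetical, mirroring the Rebecca example rather than any real data:

```python
# Toy version of the linkage ("triangulation") attack described above. Two
# independently "anonymized" hospital tables with the schema
# (sex, age bracket, zip prefix, smoker, diagnosis). All records are invented.
hospital_a = [
    ("F", "50-60", "191**", True,  "HIV"),
    ("F", "50-60", "191**", False, "colitis"),
    ("M", "40-50", "191**", True,  "flu"),
]
hospital_b = [
    ("F", "50-60", "191**", True,  "HIV"),
    ("F", "50-60", "191**", True,  "ulcer"),
    ("F", "50-60", "191**", False, "broken arm"),
]

# Side information about the neighbor: female, 57 years old.
def matches_neighbor(record):
    sex, age_bracket, _zip, _smoker, _diagnosis = record
    return sex == "F" and age_bracket == "50-60"

diagnoses_a = {r[-1] for r in hospital_a if matches_neighbor(r)}
diagnoses_b = {r[-1] for r in hospital_b if matches_neighbor(r)}

print(diagnoses_a)                # two candidate diagnoses in the first table
print(diagnoses_b)                # three candidates in the second table
print(diagnoses_a & diagnoses_b)  # {'HIV'}: the join removes all ambiguity
```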
So these are bad privacy definitions, as we discuss in the book. What would a good privacy definition be? Let me start by proposing a definition that has been around since at least the 1970s, which, if you could get it, would be a nice definition, but which we argue in the book is basically asking for too much, in the sense that if you enforced this kind of privacy, we would never be able to do useful, interesting things with data, including things like medical research studies.

So what is the definition I have in mind? Imagine (and you can make this mathematical, but I won't bother here) we basically said the definition is that no harm of any kind should ever come to you as the result of a data analysis in which your data was included. Let's think about that as a privacy definition for a second. It's certainly a strong privacy guarantee: I'm allowing the notion of harm to be entirely general, and I'm basically saying that if your data was used, no harm should come to you from that study. So why is this asking for too much?

Imagine that it's 1950 and you are a smoker. And if it's 1950, you are a smoker, because in 1950 pretty much everybody smokes. There is no social or medical stigma associated with smoking; in fact it's seen as a glamorous habit, so you do it openly in public. Everybody who knows you knows that you're a smoker, maybe even your health insurer knows, and who cares?
And suppose you're asked to contribute your medical record to the famous series of studies done in the 1950s in England that firmly established a connection between smoking and lung cancer. So your data was included in this analysis, and the analysis announced to the world that there is a connection between smoking and lung cancer. Now we can say real harm has come to you as a result of this study: everybody's posterior beliefs about the likelihood that you have cancer go up, and your data was part of the study. In particular, real harms of the financial variety might come to you; your health insurer might decide to double your premiums, for example. So if we adopt this definition, this study would have been disallowed; it would have been a violation of the privacy of everybody whose data was included.

The key observation, though, is that of course it's not the case that your particular medical record was the crucial piece of data that allowed the link between smoking and lung cancer to be established. Any sufficiently large collection of medical records would have been enough to establish this fact, because the fact that smoking and lung cancer are connected is not a fact about you in particular, or about your data. It is, we might say, a fact about the world, one that can be discovered provided we have enough data.

This brings us to what we claim is the right definition of privacy, which is differential privacy. It slightly refines the definition I gave to account for the fact that your data wasn't the crucial missing piece in the analysis. This is a schematic, but in English, what differential privacy asks is this: consider two alternative worlds, one in which an analysis is done and your data is included (say there are n medical records total in the analysis), and another in which the same analysis is done on n minus 1 medical records, where the missing one is yours. What we want is that the harm that comes to you is basically identical in these two situations. Whatever your definition of harm is, whatever it is you're worried about, the chance that that harm comes to you in the case where your medical record is included, compared to the case where only your medical record is excluded, is controllably close.

As many people in this audience know, differential privacy is a property of an algorithm, not of a particular data set; an algorithm either is or is not differentially private. Differential privacy is generally achieved by adding noise to computations, so you move from deterministic to randomized algorithms, and you typically add noise in a way that obscures the contribution of any individual piece of data to the analysis while preserving broad statistics.
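For reference, one standard way to state this two-worlds requirement formally (this is the textbook definition of epsilon-differential privacy, not something shown on the slide):

```latex
% A randomized algorithm M is \varepsilon-differentially private if, for every
% pair of data sets D, D' differing in one person's record, and every set S of
% possible outcomes,
\Pr[\, M(D) \in S \,] \;\le\; e^{\varepsilon} \cdot \Pr[\, M(D') \in S \,].
% Here \varepsilon is the privacy parameter, the "knob" mentioned later in the talk.
```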
Aaron has been working in differential privacy much longer than I have, and I remember that the first time I saw the definition I thought: well, that's a great definition, but I'm still worried it's too strong. It has many universal quantifiers: the algorithm has to provide differential privacy on absolutely any input database, the definition of harm can be anything you want it to be, and still the increase in harm as a result of including your data is controlled. So my first reaction was that maybe you still wouldn't be able to do anything useful under this definition either. Luckily, that has turned out to be far from the truth, and in particular pretty much any technique from statistics or modern machine learning has a variant (it is not differentially private in its original form, but it has a variant) that gives differential privacy. So, for example, backpropagation in neural networks and stochastic gradient descent have differentially private variants.

Differential privacy has, just in recent years, started to make it out of the lab, or maybe more precisely off the whiteboard, into practice. The big moonshot for differential privacy is coming up next year, when the US Census has decided that every report or statistic it releases based on the raw underlying census data will be released under the constraint of differential privacy. This is a huge engineering effort, and it will be interesting to see how it turns out.
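A minimal sketch of the noise-adding idea, using the Laplace mechanism for a simple counting query (the data and the choice of epsilon are made up for illustration; real deployments such as the Census system are far more involved):

```python
# Sketch of the noise-adding idea behind many differentially private computations:
# the Laplace mechanism applied to a counting query over synthetic records.
import numpy as np

rng = np.random.default_rng()

def dp_count(records, predicate, epsilon):
    """Release an approximate count with epsilon-differential privacy.

    Adding or removing one person's record changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: a noisy answer to "how many people in this synthetic data set smoke?"
people = [{"smoker": bool(rng.integers(2))} for _ in range(10_000)]
print(dp_count(people, lambda p: p["smoker"], epsilon=0.1))
```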
And I'm going to turn it over to Aaron now to talk about fairness a bit.

Yeah, so we're not there yet on fairness. We sort of assert that if you think about differential privacy for a while, if you read chapter one of the book, many of you will agree that at least for a particular kind of privacy, statistical privacy in a data set, it's somehow the right definition; it captures what you want. There's nothing like that in the fairness literature yet. There are dozens of definitions of what we might mean by fairness, and for each one I could tell you a reason why it's lacking, why it's not capturing everything you want. In fact, we already know that the study of fairness is going to be more complicated than the study of privacy, because there are already different, reasonable definitions of fairness, definitions that in isolation you would nod your head at and agree, yes, that's something I would like, that are known to be incompatible with one another. So maybe you should think of the study of fairness in machine learning as being where the study of privacy was fifteen years ago.

Nevertheless, it's an extremely important problem. Here on the slide are two headlines, just from the last week, about two applications that have attracted New York State regulatory scrutiny. One is the lending application, the Apple credit card, that you might have heard about: there were a number of tweets from prominent people alleging that the algorithm that determines what your credit limit will be exhibits gender bias. The other article was about a widely deployed algorithm targeting healthcare interventions that seems to exhibit racial bias.

I don't want to talk too much about definitions of unfairness, because I don't think we've yet hit upon exactly the right ones, but I do want to give some idea of why machine learning might be unfair in the first place, because I think a lot of people's first reaction is: well, bias of the sort we talk about when we talk about racism or sexism is a human property, and we remove it just by removing human beings from the decision-making pipeline and using objective optimization procedures. It's a little more complicated than that, and here's a little cartoon to illustrate why.

Suppose that Michael and I volunteer to help out with Penn admissions, and we're going to design a machine learning algorithm to help admit students to Penn. In this cartoon we've got two observations about each applicant, their SAT score and their GPA, and there's some concrete thing we're trying to predict. Maybe we're trying to predict whether students, if admitted, will graduate in at most five years with at least a 3.5 GPA; maybe we're trying to predict whether, within 30 years of graduating, they'll donate at least ten million dollars. Whatever it is, there is some concrete thing such that we're trying to admit the people we've labeled as plus and reject the people we've labeled as minus. Now, there are all sorts of problems you might imagine in gathering this data; you might imagine that the biases of past admissions officers are embedded in it. But let's wish all that away and imagine, for this cartoon example, that the data really is what it says it is, because I want to show you that things can be a little more complicated even in the best case, when you've got good data.
There are going to be two populations. You're looking at the green population now, and there are a couple of things I want you to notice about them.
First, slightly fewer than half of the green population is qualified for college, by which I mean there are slightly more minus signs on this slide than there are plus signs. Second, there's a pretty good, although not perfect, decision rule: there's a line I can draw through the space such that, by and large although not exclusively, the positive points lie above the line and the negative points lie below it. So that was the green population.

Here's the orange population, and again there are a couple of things I'd like you to notice about them. Maybe the first one you notice is that the orange population is a minority, by which I mean literally just that there are fewer orange points; in this context, all it means to be a minority is that there are fewer of them. The second thing you might notice is that the points seem to be drawn from a different distribution. In particular, they're shifted downwards on this plot: they seem to systematically have lower SAT scores. That could be for any number of reasons. For example, maybe the green points come from a wealthy population; they take SAT tutoring classes, they take the SAT three times and report only the highest score, while the orange points take it once, cold. That naturally results in a higher distribution of SAT scores for the green population, but it doesn't necessarily make them more qualified for college. In fact, when you look at the labels, the actual thing we're trying to predict, it's the orange population that's better here, and it's better in two ways. First, on average they're more qualified for college: half of them are positive examples, compared to fewer than half for the green population. Second, it's even easier to tell who's who: there's a linear decision rule I can implement that makes no mistakes at all.

So we've got two populations, and in this example the minority population is the better one; when I say better, I mean they're more qualified on average and it's easier to determine who the qualified ones are. And yet, here are the two populations together. Remember, we're only giving the algorithm SAT score and GPA, so you can see the colors of the points but the algorithm cannot. Suppose what we ask for is the standard objective in machine learning: we would like to find the model, in this case the linear decision rule, that makes as few mistakes as possible. What could be more objective than minimizing the number of mistakes? And what you get is just the rule that best fits the green population. You can think about why that is: if I were to shift that decision boundary downwards, I would make fewer mistakes on the orange population, but I would make more mistakes on the green population, and that wouldn't be worth it from the point of view of minimizing overall error, because there are more green points, and so mistakes on the green population count more toward overall error.
So we had an example where the orange population was better than the green population but drawn from a slightly different distribution, and when I asked for the model that minimized overall error, it ended up rejecting every single member of the orange population, despite the fact that they were more qualified and despite the fact that there was actually more signal in their features. Note, by the way, that if I were allowed to use group membership, color in this case, in my model, for example if I were allowed to build a decision tree that said "for green points use the blue line, for orange points use the purple line," then I could have improved things for everybody. I would have had a more accurate model; it wouldn't have changed the decisions for the green population, and all of a sudden I'd be making the right decisions for the orange population.

So there are two things I want you to take from this cartoon. The first is that if you just blindly optimize for error, that will tend to fit the majority population, typically at the expense of the minority population, not because there's any kind of underlying racism baked into the objective function, but simply because larger populations contribute more to overall error. And second, although it's a knee-jerk reaction to say that if I don't want racial or gender bias in my algorithm, I shouldn't use those features, that's not always right. This is an example where using those features can actually make things better, not just for fairness (whatever that is; we haven't defined it) but for accuracy as well.

This is an example of something that intuitively seems unfair: we have this better population, and we've learned a model that nevertheless rejects all of them, simply because there are fewer of them. If we want to design algorithms that correct this, we have to pick a definition; we have to specify what we mean by unfair. I don't want to dwell too much on definitions, but in this application, for example, you might decide that the people being harmed by the mistakes our algorithm makes are the qualified applicants, the positive examples, who are mistakenly rejected. These are the people for whom it's really too bad that our algorithm rejected them; they would have done well had they come to our college. And maybe the thing you object to in this model is that the rate at which the algorithm does this harm in the two populations, in this case the rate of false rejections, the false negative rate, is drastically different between them: a hundred percent on the orange population, close to zero on the green population. So you could imagine asking, and this has become a popular thing to ask for, that we find a model that comes close to equalizing these false rejection rates. Maybe it equalizes them exactly, or maybe it equalizes them up to five percent, or ten percent, or fifty percent. So you've got some quantitative notion of unfairness that you can ask for, and there's a knob you can turn, trading off this notion of unfairness against other things you care about, like error.
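Here is a toy simulation in the spirit of the admissions cartoon: an error-minimizing, group-blind rule fit to a large group and a small group whose SAT scores sit on different scales, followed by the false rejection rate computed separately per group. All of the numbers are invented for illustration, and unlike the cartoon, the two groups here are equally qualified by construction:

```python
# Toy simulation: minimizing overall error with a group-blind model can produce
# very different false rejection (false negative) rates across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def make_group(n, sat_inflation):
    ability = rng.normal(0, 1, n)                        # unobserved true qualification
    gpa = 3.0 + 0.3 * ability + rng.normal(0, 0.2, n)
    sat = 1200 + 100 * ability + sat_inflation + rng.normal(0, 50, n)
    qualified = (ability > 0).astype(int)                # the label we want to predict
    return np.column_stack([sat, gpa]), qualified

# Majority group with inflated SAT scores (tutoring, retakes); minority takes it once, cold.
X_green, y_green = make_group(5000, sat_inflation=150)
X_orange, y_orange = make_group(500, sat_inflation=0)

X = np.vstack([X_green, X_orange])
y = np.concatenate([y_green, y_orange])
group = np.array(["green"] * len(y_green) + ["orange"] * len(y_orange))

# Group-blind model chosen to minimize (a convex proxy for) overall error.
pred = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y).predict(X)

def false_rejection_rate(y_true, y_pred):
    qualified = y_true == 1
    return np.mean(y_pred[qualified] == 0)

for g in ["green", "orange"]:
    mask = group == g
    print(g, "false rejection rate:", round(false_rejection_rate(y[mask], pred[mask]), 3))
```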
What you find when you start designing algorithms that achieve these goals (and then you've got this knob that you can tune; by the way, differential privacy also comes with such a knob, so you can draw similar pictures when you're thinking about privacy) is that although there are inevitably trade-offs you have to grapple with, you can illuminate what those trade-offs are.
So these are Pareto frontiers, on different data sets, for a real machine learning task: the optimal rate of unfairness you can achieve, here measured by the difference in false negative rates between populations and plotted on the y-axis, against the optimal rate of error you can achieve, plotted on the x-axis. For a particular class of models, you can achieve an error-unfairness trade-off represented by any point on this Pareto frontier, and it is not possible to go beyond it, to get a model that simultaneously improves on both of these metrics. And what you can see is that if you're lucky, as in the plot on the left, you can sometimes get a dramatic decrease in the unfairness metric, in this case the difference between false negative rates, at only a very small cost in error; that's what happens when the curve looks very steep. Of course, these trade-offs become more severe as you start asking for more and more stringent conditions.

So, as we describe in the book, the science can only take you so far. It can elucidate what these trade-offs are, but it can't tell you where on the trade-off curve you want to live as a society in a particular application, and there aren't going to be universal answers. We will want to prioritize fairness or privacy more in certain applications, and accuracy or other things in other applications. There's no avoiding the fact that we have to make hard decisions; what the science can do is help us make those decisions with our eyes open.
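A toy way to trace the kind of error/unfairness Pareto frontier shown on those slides: sweep a one-parameter family of models (here, admission thresholds on a single score), record overall error and the false-negative-rate gap for each, and keep only the undominated points. The data and the setup are again purely illustrative:

```python
# Toy sketch of an error/unfairness Pareto frontier over a family of threshold rules.
import numpy as np

rng = np.random.default_rng(1)

def make_group(n, score_inflation):
    ability = rng.normal(0, 1, n)
    score = ability + score_inflation + rng.normal(0, 0.5, n)  # observed, group-inflated score
    return score, (ability > 0).astype(int)

score_a, y_a = make_group(5000, score_inflation=0.75)  # majority, inflated scores
score_b, y_b = make_group(500, score_inflation=0.0)    # minority

def evaluate(threshold):
    pred_a, pred_b = score_a >= threshold, score_b >= threshold
    error = (np.sum(pred_a != y_a) + np.sum(pred_b != y_b)) / (len(y_a) + len(y_b))
    fnr_a = np.mean(~pred_a[y_a == 1])
    fnr_b = np.mean(~pred_b[y_b == 1])
    return error, abs(fnr_a - fnr_b)

points = [evaluate(t) for t in np.linspace(-2.0, 2.0, 201)]

# A point is on the frontier if no other point is at least as good on both
# coordinates and strictly better on at least one.
frontier = sorted({p for p in points
                   if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)})
for error, gap in frontier:
    print(f"error={error:.3f}  FNR gap={gap:.3f}")
```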
What we've described so far, plus the introduction, gets us to about the halfway point of the book. Midway through the book we take a wide left turn that we think is interesting and well motivated, and I just want to give you a teaser for what that wide left turn is. In the different scenarios and applications we've talked about so far, it was fair, to a first approximation, to think about individual people, consumers, as the victims of algorithms. You might be denied admission to a college you wanted to go to, unfairly, or you might have your privacy leaked by a data set or a computation and not even know it; you might not even know that your data was being used to build models that are applied to decisions made about other people.

There are other situations in which there's an algorithm, or maybe more precisely an app, and a large base of users of that app, and it's not so easy to entirely blame the algorithm alone for the antisocial behavior it exhibits, because that antisocial behavior is a function of the algorithm but also of the incentives of the users who are using the app. This takes us into the realm of game theory. In particular, there are many apps these days that we can really think of as doing what is often called personalization, but the game theory term would be computing your best response. One concrete example is commuting using apps like Waze and Google Maps, where, in response to real-time traffic, mainly the activity of all the other drivers on the roads, the app computes your best response: it basically tells you the lowest-latency, or shortest, driving route to take from point A to point B. And you might think, well, what could be better than that? I've got this thing that uses real-time traffic information right now and tells me which route to drive.
But it is driving us all, collectively, toward a selfish equilibrium of some very large, complicated multiplayer game, literally the Nash equilibrium of that game, and any of you who have had any basic game theory know that just because something is an equilibrium doesn't mean it's a good thing for you, or necessarily for any of the players in that game. In particular, in the case of driving apps, there are well-known toy examples, and evidence that this happens in the real world, showing that even though we're individually optimizing all the time with these apps, we might collectively be driving more, because we're at this competitive equilibrium. In the book we take this semi-metaphor and apply it to areas that are less clearly formulable, mathematically, as a game than commuting is, including things like product recommendation on services like Amazon, or what you see in your Facebook news feed, and we talk about the tensions between individual optimization and self-interest on the one hand and the collective equilibrium we end up at on the other, say in the form of filter bubbles or, in the case of Facebook, vulnerability to fake news.

Then, in the final chapter of the book before the catch-all chapter (which discusses everything from interpretability to every AI alarmist's favorite dystopia, the singularity), we talk specifically about the competitive sport that machine learning has become. In particular, we talk about game-theoretic ways of thinking about that, and the consequences it has for things like the reproducibility crisis in the sciences. In a very quick nutshell: many people in this room will be familiar with the fact that machine learning has, in some sense, become a competitive sport. There are benchmark data sets, there is selection bias in the reporting of results because journals for the most part won't publish negative findings, and there are so many people in the field right now that we really have no idea how many experiments are actually being run, or how to correct for the complexity and number of those experiments, to make sure we're not going down the road that food science has already gone down, where some significant fraction of the published results are not reproducible and are, in a sense, false discoveries.

So that's a teaser for the second half of the book, and now we wanted to invite Emily back up to chat with us.

I really wanted to start with one of the major theses of your work, which is that the solutions to the ethical concerns arising from this prevalence of algorithmic decision-making systems should themselves be, in large part, algorithmic. I'm curious if you could talk a little bit about how you came to this perspective, and whether your thinking on this has evolved at all in the past few years.

Yeah, I think we came to that perspective through our technical research work. We were relatively early adopters of the whole FATE (fairness, accountability, transparency, and ethics) view of machine learning and algorithms, like many people in this room.
So even while we were reading reports of our field violating basic social norms, we and others were thinking: well, you could wait for better laws and regulations, or you could go fix the problem in the code, right now. I definitely think our view has evolved, even over the drafts of the book, because we've talked to many people outside the computer science and machine learning community who care about these issues: regulators, policy makers, people who work in social agencies and see firsthand the damage caused by, for example, criminal sentencing models that have gender or racial bias in them.
I think the main evolution, at least as it shows up in the book, is to point out that we don't think algorithms can solve every problem, and that there's still a great deal of room for, and importance to, laws, regulations, and more traditional solutions; also, some of the really hard problems that remain are fundamentally social. If the police on the street are racially biased in who they decide to arrest or stop and frisk, that's going to show up in the data, you may not know it, and the only solution is to make the police less racist. That's not an algorithmic problem; it's not even an easy regulatory or policy problem.

The only other thing I would say is that, of course, all of these problems are complicated, and their solutions probably can't be derived from thinking only about some very narrowly scoped algorithm, without thinking about the broader social and algorithmic ecosystem in which it lives. But many of the issues that have come to light when thinking about, for example, algorithmic fairness, like the trade-offs between different reasonable notions of fairness, are not specific to algorithmic decision-making. They've only come to light now because, when you're using algorithms, there's no avoiding making quantitative measurements and specifying precisely what you want. These trade-offs are fundamental to any decision-making process; they apply to human decision-makers too. I think many people think of "algorithm" as a scary word; as computer scientists, and as folks at Google, we probably think of it as less scary. But it's not just that simple tweaks to algorithms can't fix complicated problems; saying "get rid of algorithms" also isn't a workable solution. It doesn't fix anything.

Something else you talk about in the book is how a lot of these outcomes are the results of professional scientists and engineers very carefully and rigorously applying principled machine learning methodology to massive, complex data sets. You do get at what's missing from that standard methodology, and I'm really thinking that this points to how many aspects of the kind of rigorous scientific practice we would strive for actually fall outside the standard machine learning framing and educational training. For example, one of the examples you gave just now on the screen was the algorithmic healthcare system that was reproducing racial biases in the healthcare sector, and if I recall correctly, one of the problems with that system was that it equated health care with healthcare costs.
This is something that's been discussed a little in the algorithmic fairness community: the failure to really precisely articulate and justify the operationalization of abstract social constructs into the precise variables that are then predicted by the machine learning system. So I'm wondering if you could talk a little bit about the kind of new machine learning education and practices that would get at these things that fall slightly outside traditional machine learning thinking but are still within this algorithmic frame.
Yeah, I guess in some ways it's fair to characterize the fairness and privacy chapters of our book, at least, as a kind of tutorial on what you can do to make things better without leaving the field of machine learning and becoming a social worker. And by the way, in writing this book we often talked to people who would basically say to us: well, if you really want to help, you should quit your day job and go become a social worker. And, okay, I'm not going to do that. But I think one of our points, especially to an audience like this, is that there are things we can do that are just adjacent to what we're doing already. The hard part will be things like the Pareto curves that Aaron showed; there will be hard trade-offs between error and fairness, or error and privacy. But it's not a different kind of beast. If I had to phrase it very dryly, it's the difference between solving the optimization problem you're solving now and solving a constrained optimization problem where your objective is the same but there are now fairness or privacy constraints. So in many ways, for the machine learning community, this is low-hanging fruit. It's low-hanging fruit that will result in perhaps difficult conversations with the leaders of your business units, when you tell them that this will make our ad placement more fair but CTR prediction will be this much worse, which translates into this much less profit every year. But at least you've put the discussion on scientific grounds, and I think those parts are appropriate to put on scientific grounds.

Yeah, and I think there are two separate things here. The discussion we had on the slide was in an idealized world where the data was clean, it was correct, the labels were right, and even there, there's something to learn; but that's really the scenario in which you're talking about constrained optimization problems and having to deal with trade-offs.
In the UnitedHealth case, the problem, for those who aren't aware, is that the model was supposed to predict health outcomes, given a patient with some collection of symptoms, so that new interventions could be targeted. But they didn't have outcome data; instead they trained on health costs, how much the patient cost the healthcare system down the line, with the thought that patients who are sicker cost more. And the thinking about why the model exhibited the bias it did (for similarly sick patients, one Caucasian and one Black, the model would tend to suggest more healthcare intervention for the Caucasian patient) is that Black patients who were similarly ill tended to cost less, not because their health outcomes were better but because they had less access to healthcare. So this is a case where you don't necessarily have to deal with a trade-off, in the sense that you trained your model on the wrong data; if you were able to go out and get the right data, that might solve this sort of unfairness problem and simultaneously make your algorithm better at predicting the thing you really wanted it to predict. This is something that was made salient because people were thinking about fairness in this case, but it's a part of data science education that would have been important even if we didn't care about fairness, in the sense that you could have made the model more accurate, even if you only cared about overall accuracy, by training it on the correct data. And it's only because someone wrote an article in Science about the unfairness of the model that it was brought to light.

Yeah, that's kind of what I'm getting at: there are really rigorous scientific practices in related fields for turning these abstract constructs into measurable variables, and that kind of interdisciplinary work, I think, is not being adopted as much as it should be. I don't necessarily view it as entirely separate from the algorithm design; I think it really should be integrated.
Yeah, I'm just glad to hear that; I think it's also important.

Okay, one more high-level question. You detail a lot of troubling practices prevalent within the machine learning community, and how these practices, like biases in reporting and reliance on a small number of data sets, lead both to the reproducibility crisis and to a lot of ethically questionable design and development of algorithms. So I'm curious what your thoughts are on how the community as a whole can start to shift its practices, and what types of new incentive structures you'd like to see in place. Obviously this is not a quick fix; it's a very long-term thing. But a lot of us are members of this academic community, so I'd love to hear your thoughts on how we, as a group, can shift in a more socially beneficial and ethically informed direction.

Yeah, my quick answer would be that we do suggest some technical things in the book; we talk about the pre-registration movement and things like that, which I think we view as too restrictive a solution. But maybe, to answer that question with a broader social comment, I think it would be good for the field of machine learning to become less like a competitive sport. This is a relatively recent phenomenon, and it's, I think, a byproduct of the tremendous empirical successes that areas like deep learning have had, and the need for these massive data sets, and a kind of concentrated, focused effort by a large number of people over an intense period of time. That's all been great, and it's no knock whatsoever on the actual advances in those technologies, which are large, I think, in vision, speech, and NLP. But I'll use my seniority here to point out that in the field of machine learning it used to be that people were considering many, many different types of learning frameworks and different learning models. There wasn't this uniformity of data sets to the extent there is now, or if there was, they were really toy data sets that nobody considered a serious benchmark for developing and deploying services. It was things like the UC Irvine data sets, which you went to check your results on, but it wasn't a case of "okay, on the UC Irvine data set I've now developed this service that I'm going to unleash on a billion users." I'm hoping the field will organically rebalance itself back toward an earlier era where there isn't this single-minded focus on one framework for learning and a few data sets.
So maybe things like pre-registration, or smarter leaderboards, which we do discuss in the book, are part of the solution, but maybe part of it is just a cyclical move back toward a more diverse research landscape in the field.

Question: Jack Dorsey made the announcement that they're not going to run political ads at all, and machine learning algorithms are being used to target users with those ads. On algorithmic accountability and fairness, we keep getting the question, and keep chiming in with, "not all algorithms are bad; we are trying to make them better." So does a player leaving the field create added pressure on the other players in the field to be really accurate about it?

Briefly, I don't have deep thoughts on this particular issue. I think the policy of pulling those ads entirely is better than having no policy whatsoever. On the other hand, I'm not convinced that pulling things designated as political ads eradicates the whole penumbra of worries people have around the politicization of social media; I don't think it directly addresses things like fake news and the like. But it's better to have a clear policy than no policy at all. And secondly, I do think it's good for the competitive landscape of the tech industry to have actors that take stands on issues and try to create internal pressure in the industry to think harder about these issues and adopt them. To give an example, Apple (you can debate how deserved it is) has carved out a reputation for greater concern about consumer privacy and was an early adopter of differential privacy, and I think that does create an environment where there's more internal pressure from within the industry, rather than just from regulators.

Thank you. I was wondering how much you think your book is a snapshot of a current moment in time (certainly it wouldn't have made sense to publish this book, say, ten years ago) and how much you think it addresses an enduring set of problems, where, even as the list of problems we understand less and less about grows, it's really creating an outline and a framework that's going to have a significant impact over time.

So, as we say at the beginning of the book, this is an emerging science, and you might reasonably think that means it's too early to write such a book. But we think it's exactly the right time, because it's when the ideas are developing that the intellectual process of thinking about them is most exciting. I certainly think that, especially as you go down that list, and maybe even already for fairness, which was the second thing on the list, if you look at what the technical approaches are going to look like fifteen years down the line, they might be quite different from what they look like today.
But I think the basic premise of what needs to be done, which has been successfully carried out for privacy, from whiteboard to product to national-scale deployments, is enduring. What you need to do is think very hard, in a rigorous, precise way, about what you mean when you say you want algorithms to be "blah," where "blah" can represent any word you want, any word where a human being would just know what you meant if you told them you wanted accountability, fairness, transparency, but which is not obvious to an algorithm. And then, after you come up with a plausible definition (and coming up with the definition is the hard part), you have to think about the scientific problem of how to design models satisfying that definition, and think quantitatively about trade-offs, because typically these things don't come for free. I think that general methodology is going to be enduring, and even if the specifics of how people think about these things fifteen years down the line are different, they will be thinking about these things.

I'm going to throw in this Dory question really quickly: what skills other than computer science are most needed for work on ethical algorithms, and what advice do you have for successful interdisciplinary collaborations?

Let's see. If by skill we mean an academic or specific technical skill, I think it's more that what I would advise most is a willingness, and an actual interest, in talking to people in adjacent fields who think about the same issues but from a non-technical perspective. I think we benefited greatly, for instance, from conversations we've had with people at the law school at Penn, who think hard about fairness and privacy, including in technological settings, from a legal perspective: just understanding their views, and especially understanding the constraints that come from their world. And in talking to regulators, it's quite revealing to talk to tech regulators and realize the handicaps they face. These are smart people, but they're smart people with many, many shackles on what they can and can't do, which really force them to lag, in many ways, the companies they're regulating. So I think it's important, in working in this area, even if it's not that you talk to some regulator and then you've got a research idea you go work on, to understand that landscape, more than any other particular field outside of CS or machine learning.

One thing I seem to notice is that when people notice that, say, an algorithm is maybe not fair, then what algorithm designers, and even society at large, tend to do is come up with quick solutions for how to fix it, when the solution itself may not inherently be fair. To take an example, in the college selection slides you showed, where you have the two populations, clearly the initial solution of having a single cutoff was a problem.
But, like you suggested, one thing that could have been done is to consider the two populations separately, which seems like an okay thing to do; people might even agree with it. But then, when you look at the data, you do see that even in the green population there may be some data points that are positives but fall within the cutoff range of, say, population two. And I do think this happens a lot in real life, too: the easy solution is to condition on one very easily observable variable, the population type, when maybe the actual hidden variable you should consider is, like you mentioned, income, or maybe that one group takes the SAT three times and the other takes it once. So what I see is that people from the green population who fall below the cutoff may end up completely screwed as well. So I'm wondering, as algorithm designers, how can this problem be solved? Because it seems like we're trying to optimize for maximum efficiency, you have the curve, but then, as a result, there might be some fraction of the people who always get left behind, even though overall it might be the optimum.
Yeah, I think you've put your finger on one of the main weaknesses of these statistical notions of fairness. We talk about this a little in the book, and it's actually one of the main focuses of our research. And by the way, this is why we think of the fairness-in-machine-learning field, as an academic field, as being fifteen years behind privacy: the claim is not that any of the existing definitions are very good. What you're putting your finger on is that with these statistical notions of fairness, which say things like "I'd like the false rejection rate to be similar between, say, orange people and green people," the first step of even enunciating that was to say, okay, there are these two groups I care about, orange people and green people, and usually it's not so easy. And just because I guarantee some notion of statistical equality in aggregate over two large groups doesn't mean that the solution we come up with is fair, in various technical senses, to you as an individual, or even to other large groups of people that you think of yourself as a member of, if they weren't the exact groups we specified up front.

So, without saying too much about it: this is an active area of research. There are things you can do; there are fairness notions that are somewhat more satisfying than these, that don't require naming a small number of pre-specified, coarsely defined groups up front; there are ways to talk about fairness at an individual level; and maybe we can talk a little bit offline. But this is the research frontier. We don't understand that much about methods that guarantee protections of this sort, or about their implications. So it's a very good question, and I'd say there are people thinking about it, you should go off and think about it, and it's not a settled science yet.

Thank you so much.
2019-12-23