Regulating Artificial Intelligence: How to Control the Unexplainable

Thanks, everybody, for coming. This is a really exciting program. It's being sponsored by the Graham School and three of our master's programs: we have a master's in biomedical informatics, a master's in data analytics, and a master's in threat and response management. These programs are all geared in some way toward the analysis of data, using machine learning and analytics, and part of the thrust for all of them is to understand how to use data in an ethical way and how to use the algorithms we're developing in a thoughtful way. I met Andrew a couple of years ago at South by Southwest through a mutual friend. We were both giving talks that year, and we just had this amazing conversation about our applications of machine learning. As a physician, I'm seeing the use of machine learning algorithms all over the hospital and all over medicine, from predicting cardiac arrest to predicting sepsis to predicting which patients are going to be readmitted to the hospital. We just plow ahead with developing our models and our predictions, and it wasn't until I spoke to Andrew that I really stopped and thought about the implications of these algorithms and how they could be used in good but also in bad ways. It was a really eye-opening experience, and ever since that meeting I've been excited about bringing Andrew here to talk. Andrew is a fascinating guy: he used to work for the FBI cyber division, and he's the chief privacy officer at Immuta, a data science company doing really fascinating work. He is very tuned in both to the technical, analytic side and to the legal and ethical implications of the work that we do. The title of Andrew's talk is "Regulating Artificial Intelligence: How to Control the Unexplainable," and as you listen I want you to keep in mind not only the science and computer science of what we do but also the social implications, and I hope the questions we discuss afterwards will run the whole spectrum of the implications of this kind of technology. So with that, we're excited to hear your talk; take it away.

Wonderful, thank you so much. Let me just switch here, and while I am switching I will say thanks, everyone, for coming out; thank you Sam, Wendy, Suzanne, everyone. I should also say that this is a condensed version of a longer talk, so I want folks here to keep me honest, and I'm going to try to keep this a little casual, so I'm going to do my best

to go off-script. I want to start today, in fact, by introducing you to a horse. More specifically, this is Hans, or Clever Hans as he became known, and Hans was one of the most famous horses in the world about a hundred years ago. He was raised by a man named Wilhelm von Osten. Hans lived in Germany, and he was thought to be incredibly, incredibly smart. This is Hans at a public fair demonstrating his intelligence; for the folks who already know about him, no spoilers. He was thought to speak German, he could perform arithmetic, he could count objects, and much more. Here's a first-hand account of how he communicated numbers: small numbers were given with a slow tapping of the right foot; with larger numbers he would increase his speed; after the final tap he'd return his right foot to its original position; and zero was expressed by a shake of the head. One example of a question he'd answer was: I have a number in mind; I subtract nine and I have three as a remainder; what's the number I have in mind? And Hans would unfailingly tap out the number twelve. Hans was, quite simply, the most interesting horse in the world. This is Hans with his owner in front of a board he used to help him communicate, and Hans became world famous for his clear display of animal intelligence. This (I didn't realize there were slides over there) is an article in The New York Times from 1904 attesting to Hans's feats of intelligence, and in it the reporter recounted all of these feats and stated, and I'm going to quote: the facts here are not drawn from the imagination but are based on true observations and can be verified by the world's most preeminent scientists. So what is going on here? Why am I starting a talk about machine learning by introducing you to a horse? Two reasons. The first is that Hans illustrates something really profound in the way humans approach the problem of intelligence, in animals, in machines,

and in humans. In 1911, a psychologist named Oskar Pfungst published this book, in which he demonstrated that Hans wasn't actually that clever at all. In every case, Hans was watching the reactions of his trainer and reacting to involuntary cues in that trainer's body language. And this wasn't a hoax: von Osten didn't know he was creating these cues. But while we were assessing the intelligence of Clever Hans, Clever Hans was actually demonstrating something very deep about our own intelligence, and that is that we have significant cognitive biases; the way we process information is prone to irrational choices. One of the biases baked into our brains is called confirmation bias: it causes us to look for things that confirm our existing hypotheses or beliefs. So Clever Hans is really a testament to the fact that we don't process information rationally ourselves, and confirmation bias is one among many different types of bias. I bring this up as a starting point because it's something we need to be acutely aware of, and acutely sensitive to, when we think about this problem of intelligence, artificial or otherwise; there's a lot, frankly, about this topic that might lead us astray. But Hans also illustrates something else that is specifically relevant to the way we approach AI today. In the early 1900s we simply could not understand the way Hans was processing information, and yet almost all of his answers appeared to be correct; indeed, he seemed to know everything that we knew. And the inability to fully explain why answers are correct, or how reasoning is occurring, is exactly the type of problem we face with artificial intelligence today. That, I would posit, is the fundamental challenge we're facing when we think about deploying AI. With AI, input goes in in the form of data, the so-called black box of AI makes its decision, and that decision is usually right, and in practice there's a deep sense of discomfort about what this actually means. When proposals are brought up to regulate AI, this is what they're focused on: this type of unexplainability is what they're focused on, and it's frequently what they're actually fighting. So today I'm going to talk about a number of different dimensions of this type of unexplainability, but what I'm really going to contend, in my overall message, is that the very idea that we can explain how decisions are being made, the very idea that we need to explain how decisions are being made, is actually something we're going to have to move beyond if we're going to fully embrace this technology. So when we talk about regulating AI today, and what that might actually mean in practice, I'm going to make some suggestions about what it might look like to move beyond explanations, beyond explainability, or at the very least to put less weight on the importance of explainability. I'm going to be talking, frankly, about what a world might look like with a lot more Hanses, and how we might seek to effectively regulate, manage risk, and manage the ethical dimensions of a world that looks like that. That brings me to three points I want to make today. First, I want to talk about what AI is, specifically, and the major challenges it presents. And when I say AI, especially
for this crowd, I want to be very specific about what it is that I mean. I'm really using the colloquial term, the kind of pop-culture term, for what in practice I think is machine learning. And to be even more specific, what I'm really talking about is the increasing use of neural networks in a variety of different settings. So when I say AI, if you're like some of the data scientists I work with and that makes you cringe, what I'm really talking about is the increasing prevalence and prominence of neural networks, which I'm going to

talk about in a second. The second point I want to make is that we've been here before; in fact, not all of the challenges we face when we think about regulating AI are new. So what I want to do is talk through past attempts to address these challenges and what those attempts can teach us. And then, lastly, I'm going to set forth some constructive suggestions on what I think we should actually be doing, moving forward, to regulate this technology, and I'm going to focus on three particular concerns I have beyond just the challenge of unexplainability.

So what are the challenges of AI? Our story really begins in 1955, when a group of researchers got together to think through how computers could simulate human intelligence. They called the concept artificial intelligence, in one of the first actual uses of the phrase, and that summer conference at Dartmouth is considered one of the seminal moments in the history of AI. And here's an example of a neural net, which is what this approach led to. Again, when people talk about AI, when I talk about AI, this is really what I mean. By show of hands, how many people are familiar with neural networks? Okay, enough that it might be worth actually going through and explaining in general terms how they work. In brief, this is a visual depiction of a relatively simple neural network. We have an algorithm composed of a series of nodes, represented here as black circles, otherwise known as neurons. They make weighted decisions, and they pass the results of those decisions on to other neurons throughout the network. The weights behind those decisions are created from training data: you feed a neural network like this some training data, input data along with the resulting conclusions about that data; you train the network so that it has the correct weighting; and then you can give the network new data it has never seen before, and it will be able to pretty accurately tell you things about that data. So, for example, once a network like this is fully trained, you'd feed it some data, images for example, and certain nodes in the network might get activated, let's say by the curve of a nostril, and the network would be able to tell you something about those images, like whether one of them contained a face, if the network was performing image recognition. The first really important point for folks in this room, technical and non-technical, to take away is that this is a drastic change from traditional programming. Traditional programming is based on giving a computer a series of step-by-step logical instructions. This is a complete departure: models like neural networks work by finding patterns in training data and applying those patterns to new data, and these patterns are frequently patterns that humans themselves can't see. And it turns out that this type of programming, based on feeding data into neural nets, is actually beginning to replace traditional programming in a variety of domains; it seems to me, at least, like we're on the cusp of this replacement. Different audiences will react to that statement in different ways.
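As a rough illustration of the idea just described, here is a minimal sketch of a tiny feed-forward neural network in Python with NumPy: neurons that make weighted decisions and pass them forward, with the weights learned from labeled training data rather than written by a programmer. The data, layer sizes, and training loop are hypothetical toy choices, not anything from the talk.

```python
# A minimal sketch of the idea described above: nodes ("neurons") that make
# weighted decisions and pass results forward, with the weights learned from
# labeled training data. Everything here (sizes, data, learning rate) is a
# hypothetical toy example, not a model from the talk.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training data: 4 examples with 3 input features each, plus known labels.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])          # the "resulting conclusions"

# Two layers of weights: input -> 4 hidden neurons -> 1 output neuron.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

# Training: nudge the weights so the network's outputs match the labels.
for _ in range(10000):
    hidden = sigmoid(X @ W1)                    # weighted decisions, layer 1
    output = sigmoid(hidden @ W2)               # weighted decisions, layer 2
    # Backpropagate the error to adjust both layers of weights.
    d_output = (y - output) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_output
    W1 += X.T @ d_hidden

# Inference: data the network has never seen before.
new_example = np.array([[1., 0., 0.]])
print(sigmoid(sigmoid(new_example @ W1) @ W2))  # a probability-like score
```

The point of the sketch is the structure: the programmer never writes the rule mapping inputs to outputs; the weights are inferred from examples, which is exactly the departure from step-by-step instructions described above.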
A great example of that replacement comes from a computer scientist named Jeff Dean. For folks who don't know about him, he's an idol in the computer programming world; if you just Google "Jeff Dean" and "meme," hours will be taken away from your life. Basically, what he's saying is that his team at Google is proving that neural networks can be used for basically any task. Sam is about to release a wonderful paper that includes folks from that team, and that's one end of the spectrum where they're using neural networks; but they're also using neural networks for completely different things, like big database indexing problems. They're really using neural networks for everything. And this quote is where Jeff Dean says, basically, that neural nets are the best solution for an awful lot of problems, and for a growing set of problems where we either previously didn't know how to solve the problem (my guess is that's going to be a good deal of how neural networks are applied to medicine) or, back to Jeff Dean, we could solve the problem, but now we can solve it better with neural nets. This comes from an article from last fall on that team; it's a little hard to read, but basically, approaches based on data are teaching computers, teaching software programs, how to create their own software. So, a few concrete examples of how important this actually is in practice. Again, I think in the medical community folks

understand immediately how powerful some of this is; other audiences, I think, less so. So, two examples, one medical, the other not. This comes from an article in The New York Times not that long ago: basically, Pittsburgh's hotline for child abuse and neglect is using methods like this to detect children who might have fallen through the cracks. So in some cases, models are literally being used to find and prevent instances of harm against children. This comes from a team at Stanford: they put together a model called CheXNet that has some incredibly powerful abilities to detect pneumonia through chest X-rays. I like that they named this particular model, they named the actual model, and it led to some headlines about AI replacing radiology and radiologists. I think that was a bit misleading, but the broader point here is that we're deluged by data, doctors, social workers, many incredibly important professions, and models like this have an incredibly important ability to help us find patterns in that data, to both augment the work that humans are doing and, in some cases, replace it.

So now, on to that major problem. All of these advances come with this incredibly difficult, if not fully impossible, problem of explainability. This is a cartoon of a police officer asking a driver why he's just been pulled over, and no one knows the answer. And the fact is, it's not actually too hard to imagine a circumstance where something like this might occur in the medical realm: it's not too difficult to imagine a diagnosis made with a very high level of accuracy where neither the physician nor the patient really has any idea why. This is actually my favorite article, I think, ever written on the subject; it's from a philosopher, published the day after IBM Watson won Jeopardy, and again, the point is that these types of models are not self-aware. Now, Watson, for folks who actually know how Watson was made, is a bit of a Frankenstein system, but the point is that these models aren't self-aware; they don't know what they're doing, and we can't really look under the hood and ask, why did you make the decision you made? For technical folks who want to talk about explainability, we can dive into that; there's a little more nuance there. But the upshot is that they can't exactly answer.

And so laws around the world hate this; they hate this type of opacity. To give you a few examples, there's the General Data Protection Regulation. Before Mark Zuckerberg gave testimony this week, I think fewer people knew what that law was; now more people seem to. It's basically a gigantic data regulation coming out of Europe, and fines for violating it are up to 4% of global revenue, which is insane. Apple has had revenue of over 200 billion dollars every

year for the last few years; if Apple Spain, a subsidiary of Apple, violates this, global Apple could be fined upwards of 8 billion dollars. So, quite intense. And the key here, the key connection between GDPR and machine learning, is that, more or less, with some exceptions, it basically prohibits automated decision-making of the type we're really talking about today without express human consent. It's going to make using artificial intelligence in practice incredibly difficult, and a lot of this, I think, is geared at underlying concerns that are ethical concerns. So although folks in the tech community like to scoff at it, and I think for some good reason, I think it's important to really think about the motivations behind it. That's GDPR. In Congress, a bipartisan bill was proposed at the end of last year focused on some of these issues; it was the first federal law ever proposed that focused specifically on AI. And then the city of New York itself has stood up a committee, as of January, to examine these issues. I'm really just hitting the wave tops here; during the Q&A, if anyone wants a deeper dive into any of these particular laws, I'm happy to do that. But the point for now is that these are just a few of the growing efforts to regulate AI and to address this new problem, which is really the increasing adoption of AI on the one hand and the increasing difficulty of understanding it on the other. And a good deal of these approaches actually seek to tackle this problem head-on, and in some cases to mandate certain levels of explainability directly. This quote comes from not too long ago, from the French digital minister, and he basically stated that any algorithm that can't be explained can't be used by the French government. Again, blanket proposals like this may be well intentioned, and I think they are, but these types of reactions are going to deprive us of some very significant opportunities if they're actually implemented. The risks here, frankly, are huge, and if we focus too much on explainability, I think we're going to lose some very important opportunities.

So that was the first point. The second point is that we've been here before, and as scary and as new as these challenges seem, they're not completely new: we've faced similar challenges in regulating opaque technology, opaque or unexplainable software systems, in the past. So I want to run through some of these examples and the lessons they teach. Specifically, I want to talk about three parallels. I want to talk about a law called ECOA, which was used to govern credit decisions and was passed in the 1970s.

I'm going to talk about a law called SR 11-7, which is used in the financial system to govern black-box models; when we talk about this subject, I think it's one of the most overlooked regulations out there. And then I'm going to talk about some frameworks for governing our own minds, which are the ultimate black boxes, and how we can learn from some of the legal lessons surrounding liability in humans.

So let's start with this. This is the cover of Newsweek; the article is "Is Privacy Dead?" I've blacked out the actual date, and my question for folks in this room, and feel free to shout it out or just formulate it in your head, is: when do you think this article was published? Okay, so we've got 1960, 1970, 1980, and then the 2030s and '40s; so basically we've got a span of a hundred years. The answer is 1970. This came about in reaction to the rise of statistical credit rating methods in the financial sector, and the article described that as literally a massive flanking attack of computers on modern society; the idea was that we were all under assault by this new type of intelligence. And it was popular back then to say things like this (this is a senator from that same year): it was popular to say we need a regulatory department specifically focused on these challenges. Here the idea was that we'd set up a federal department of computers to regulate all software and computing. There are clear parallels to the way some folks think about AI: for those following the debate about regulating AI, there are people who advocate setting up a federal Department of AI. In the 1970s, instead of setting up something like this, Congress actually passed a series of specific laws targeted at specific problems, and one of those laws was the Equal Credit Opportunity Act, passed in 1974. The problem it focused on was that lots of groups faced discrimination in credit scoring decisions; in addition, these algorithms were incredibly complex and difficult to understand, with deep explainability issues. ECOA's solution was to mandate a basic level of transparency; its design was to decrease discrimination on the one hand and increase consumer education on the other. As a result of ECOA, credit applicants would be able to understand why a particular adverse decision was being made: they were entitled to something called a statement of specific reasons, and that is what you are entitled to see if an adverse credit decision is made about you. This quote comes from a Senate report on the bill, basically explaining the importance of the statement of reasons. And this form is actually the sample form included in ECOA's enforcement documents; it is in fact the template for how credit decisions are communicated to this day. If you get an adverse credit decision, it's based on this form: there's a list of potential reasons, and applicants get notified which reasons contributed to a specific result. Now, this type of template doesn't fully break down how a decision is being made, but the statement of specific reasons does give us a basic template to understand what's going on; it makes black boxes, so to speak, just a little bit less black.
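As an illustration of what a statement of specific reasons can look like mechanically, here is one hedged sketch of how a lender might derive adverse-action reason codes from a simple linear scoring model, by ranking each feature's contribution to the applicant's score against a baseline. The feature names, weights, and reason texts are invented for the example; this is a sketch of a general technique, not how any particular lender or bureau actually does it.

```python
# Hypothetical sketch: deriving adverse-action "reasons" from a simple linear
# credit-scoring model by ranking each feature's contribution to the score.
# Feature names, weights, and values are invented for illustration.
REASON_TEXT = {
    "utilization":      "Proportion of balances to credit limits is too high",
    "delinquencies":    "Number of accounts with delinquency",
    "history_length":   "Length of credit history is too short",
    "recent_inquiries": "Too many recent inquiries for credit",
}

WEIGHTS = {"utilization": -2.0, "delinquencies": -1.5,
           "history_length": 0.8, "recent_inquiries": -0.6}

def adverse_action_reasons(applicant, baseline, max_reasons=4):
    """Return up to four reasons, mirroring the 'no more than four' guidance."""
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - baseline[name])
        for name in WEIGHTS
    }
    # The features that pulled the score down the most become the reasons.
    negative = sorted(contributions.items(), key=lambda kv: kv[1])
    return [REASON_TEXT[name] for name, c in negative[:max_reasons] if c < 0]

applicant = {"utilization": 0.9, "delinquencies": 2,
             "history_length": 3, "recent_inquiries": 5}
baseline = {"utilization": 0.3, "delinquencies": 0,
            "history_length": 10, "recent_inquiries": 1}

for reason in adverse_action_reasons(applicant, baseline):
    print("-", reason)
```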

So when we think about ECOA, I think the first takeaway is that ECOA gets us transparency: we can see some of how these credit scoring algorithms are working, and that's important, in fact crucial, in many ways. But on the other hand, we can't necessarily understand it. Transparency, or seeing how an algorithm works, is not the same thing as explainability, and I think ECOA's enforcement documents make that pretty clear. The Federal Reserve, which enforces it, has stated that more than four reasons for any one adverse credit decision is not actually meaningful: more than four reasons are too many for a human credit applicant to understand meaningfully, in a way that might let them change their behavior. So ECOA might have succeeded in inserting some transparency into these new and powerful algorithms, but at the same time, ECOA teaches us that even transparency has its limits. Transparency is not explainability.

On to the second regulatory framework. This is called SR 11-7; it stands for Supervisory Guidance on Model Risk Management. Again, this is the nerds' nerd law for some of these issues, but I think it happens to be one of the most overlooked. It is focused specifically on model risk. It came about after the 2008 recession, and this is really when regulators around the world (this is an American regulation, but there are equivalents in the EU) started to notice banks using more complex algorithms and, as a result, having less of an understanding of how and why they were making particular decisions. It's enforced by the Federal Reserve and a key regulator within the Department of the Treasury. This statement comes from the regulation: it's basically an acknowledgement that banks are relying on more and more complex algorithms, some of which might fall into the category of AI, and that banks and financial institutions are using these types of algorithms for a wide variety of reasons. And then the regulation makes this direct, very nuanced admission of the cost of this trend, and I want to read the quote. The regulation states that models also come with costs: there's the direct cost of devoting resources to develop and implement models properly (that's intuitive), and there's also the indirect cost of relying on models, such as the possible adverse consequences of decisions based on models that are incorrect or misused. I think this is really the meat of what we're talking about when we talk about regulating AI: controlling models whose inner workings we can't fully understand. The regulation identifies two major risks. The first is errors made by the models themselves; for this community, that's a false negative, a diagnosis that shouldn't have been made. This is fairly intuitive.

The second risk is, broadly, misuse. Models, due to their inherent opacity, can very easily be used for purposes other than their original intentions. And the regulation makes this incredibly important assertion that, again, I want to read to you: it states that models, by their very nature, are simplifications of reality, and real-world events may prove those simplifications inappropriate. I think this admission is one of the most important admissions in understanding AI. There's a famous quote, which I'm sure a lot of folks here are familiar with, from the statistician George Box: all models are wrong, but some are useful. What that means is that every model is based on correlations in data, not causation. Those correlations might be useful to us; they might tell us about the likelihood of a particular answer being correct; but that's not a substitute for actual reasoning, not a substitute for actual intelligence. These models do not know what they're doing, just like Watson didn't know it won Jeopardy. In fact, I wonder whether Watson knows it won Jeopardy right now; I suspect it does, but we'll never know. So what does SR 11-7 say we should do to fix some of these problems? The solution lies in a concept it calls effective challenge, and this is really the central thesis of the regulation. What effective challenge means is critical analysis of really every step of the process, from creation, to testing and validation, to deployment of a model. It means outlining your assumptions, questioning those assumptions, and more, and the regulation has very, very specific guidance about how to carry out all of these procedures in practice. In the national security world we might call a concept like this red teaming, and red teaming is basically a process: it's something you do in a world with incomplete knowledge and incomplete facts, something you do to address uncertainty. I'm going to revisit the importance of effective challenge shortly, but before I do, I want to get to that last legal framework, the one governing our own minds.

To do that, I thought I'd introduce you to Florida Man. For people who don't know about him, he's been dubbed America's worst superhero. This is an article from The New York Times highlighting his ascendance as a meme on Twitter a few years ago, and the basic idea is that, for whatever reason, the beaches or the weather, men in Florida seem to generate newspaper headlines that are outlandish, and as a result the more generic Florida Man has become a fictional superhero on social media. So to introduce you to Florida Man, I thought I'd give you a few of my favorite headlines. This is Florida Man tossing a gator through a window; the window happened to be the drive-through at a Wendy's, and the gator happened to be very much living. This is Florida Man calling 911 repeatedly because the clams he ordered at a restaurant were too small; in this case, Florida Man actually got arrested on misdemeanor charges for calling 911 multiple times. And so I scoured the internet for Florida Man headlines, and this is actually my all-time favorite; the full description is a bit of a mouthful. So: Florida
Man, at the age of 82 years old, is arrested for slashing the tires of an 88-year-old woman with an ice pick during a bingo dispute. Don't ask me why Florida Man had an ice pick, given the weather. But if these aren't examples of completely unexplainable, unpredictable, outlandish decision-making, I don't think those examples exist. And indeed, the parallels between our own minds and machine

learning are actually quite strong. From the time we're babies, we ingest new data on a daily basis, images, sounds, sensations, and then we make conclusions about correlations in that data; that's how learning works. But there are a number of problems with this model, highlighted by Florida Man's incredibly bizarre, unpredictable behavior. So the question is, how does our legal system handle this? How do we think about regulating Florida Man's behavior, behavior we can't come close to explaining? There are two real answers to that. The first is that we treat human decision-making in different stages according to age. The first thing we do is ask whether Florida Man is making his own decisions: under a certain age, basically, Florida Man's parents, I should say, are on the hook. There's an age where children become responsible for their actions, usually in the double digits; it varies by state, and I don't actually know what it is in Illinois; some parents here might be anxiously awaiting that date. Then there's an intermediate stage where children take partial responsibility for their actions: this is the status of being a minor. And then, eventually, there's adulthood. This is when Florida Man becomes an adult in the legal sense of the term; he's entirely liable for what he does. The key point here is that we can extend this approach to thinking about models like neural nets, classifying them in terms of their maturity. Certain models might actually need to reach a certain level of maturity before they can be deployed in certain circumstances: we don't let Florida Man drive, for example, until he's reached a certain level of maturity, until he has processed a certain amount of input data. And I think the same is really going to need to be done with AI; we use age as a proxy for training data. But even once humans are adults, our brains still don't process information completely rationally; they're still full of cognitive biases, as Hans highlighted. So the second question is, how does the law deal with our own inability to explain decisions, our own decisions, even as adults? The answer there is a standard called the reasonable person standard. It's used really, really widely throughout different areas of the law, and this slide comes from a great law review article on that standard. Basically, it places judges and juries in the position of asking: given all the data the person had at the time, given all of the context, was what this person did the right thing, was what they did reasonable? Now, it's an incredibly subjective standard, and it can evolve over time, but subjective standards need not be perfect, and they can be incredibly useful when engaging with things we don't fully understand. So why

is this so important when we think about regulating AI? A few reasons. First, the way we think about Florida Man learning and gaining responsibility as he becomes an adult is, I think, a crucial lesson for us as we think about regulating and managing the risks of AI. Secondly, I really think it's worth drawing out the point that, in the law, we are using age as a proxy for input data: as children grow older, they have more input data. And maturity of training data, I think, is really going to be a central focus. There's a great RAND study, in a different area outside of medicine, on self-driving cars, and that study focused on basically how many miles of training data an autonomous vehicle is going to need before we can start certifying it as safe. So I think this is really a key point as we think about controlling risk and deploying AI effectively. And then, lastly, I think it's very, very important to highlight the role that subjective standards have to play when we think about governing unexplainable decisions. Specifically, my real point here is that we need new standards: common standards, subjective standards that might evolve, but standards that help us evaluate how machine learning systems are being trained, deployed, and maintained in the real world. And right now, frankly, I haven't seen any examples of common standards that exist in the world of data science. I'll revisit that shortly.

Very quickly summing up: from ECOA, we can learn that we might need to mandate a certain level of transparency at times; at the same time, transparency and explainability are very much not the same thing. From SR 11-7, we can learn that even when there's no explainability, there's still a host of ways of controlling models: this is effective challenge. And from the way the law treats human minds, we can learn the importance of maturity and of subjective standards of reasonableness.

So that was the second point. Lastly, I want to get a bit more specific. I want to focus not simply on what we've already learned or on what laws already exist, but to go a little bit beyond the challenge of unexplainability alone, and to talk about how we should respond. Because, again, the Jeff Deans of the world are starting to use neural nets for basically everything, and the question is what we do as a result: as governments, as health care providers, as people who are seriously worried about the risks of all of these approaches. So I'm going to start with my most general point, and then I'm going to talk about three sub-points. My first point is that AI should not be regulated in one place through one regulation. We should not stand up the federal Department of AI today, just like we should not have stood up the federal department of computers in the 1960s. We're going to need, frankly, a host of different regulations and different approaches. A few examples, geared toward the medical community just for this talk: I think this is going to translate into a few different areas. One is that I think there need to be specific data-sharing regulations around medical data, beyond just HIPAA. I think HIPAA is woefully underprepared for
the type of data sharing, and really the scale of data sharing, that we're going to need to train some of these models and deploy them, if we're really going to make use of AI in medical environments. I think it's going to translate into specific types of regulatory review for machine learning models being used in diagnostic settings. And there need to be specific transparency and third-party auditing requirements for some of these models, placed on vendors, third-party vendors, or on the hospitals that rely on these models, so that patients can understand what's going on and so that third parties can actually properly assure that they've been validated in the right way. Those are just a few examples of potential areas that I think need to be addressed for the medical community. This slide is from an op-ed I wrote earlier this year in The New York Times, basically making the same point: that I think it's a very bad idea to respond to AI, and the challenges it's creating, with one single response, with one regulatory silver bullet, so to speak. So

beyond that general point, I want to get into specifics. What I want to do is talk about three of, frankly, my own personal greatest concerns when we think about the risks posed by AI, talk about those challenges, and then try to actually, constructively, suggest how to solve them.

To start with is the issue of liability. Right now, I think it is simply not clear exactly how, why, and where a deployed model holds its creators liable, and I don't think we're going to be able to safely deploy these models if data scientists don't actually know where that line is. It needs to be crystal clear, from the outset, exactly where this liability lies. In medical environments, I think we're looking at a future where models created and trained by third parties are increasingly used by physicians, and again, I don't think it's clear enough where liability lies. In many cases, for example, these models will be more reliable, more accurate statistically, than human physicians. So is the burden then going to be on physicians, on health care providers, to default to the most accurate solution, even if they don't understand it, even if they can't come close to understanding the technical reasons behind how the model works or where the data came from? And what if these models then make an error, a false positive or a false negative: who's responsible? Let's make things a little more complicated: say a model trains continuously during deployment, so the model is reshaping itself based on the data it's actually being exposed to. Who is responsible in that circumstance? Is it the creators of the model? Is it the people whose data the model is reacting to? These are all really big questions. I don't profess to have the answers; I have some suggestions, which we'll talk about a little later. But at the least, I would say the basic framework for how liability exists in practice needs to be clear before we can start using some of these advances in the areas where, frankly, I think they might have the biggest impact.

The second biggest concern for me is this big, bulky word: interrogatability. It comes from a friend, Dan Geer, and to me it means a couple of different things. The first thing it means is explainability, or interpretability, largely in the way I've been talking about it: do we know what caused this outcome? Can we create a causal explanation for why specific input data created specific output data, or a decision? A bit more background here: this comes from DARPA's Explainable AI project, and what this graph is saying is really that different models make different trade-offs in terms of accuracy versus explainability. On the x-axis, I believe, we have explainability; on the y-axis we have the level of accuracy; and the key takeaway is that different models have differing levels of both. The level of explainability is always going to be the result of a trade-off; explainability is not simply black and white. And the fact is, as I'll talk about shortly, there are different ways we can make this trade-off, and that fact, that trade-off, that optionality, so to speak, needs to be clear when we start building these models.
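One way to see that trade-off concretely is to fit two models to the same data: one chosen for readability, one chosen for accuracy. The sketch below assumes scikit-learn and its bundled breast-cancer dataset purely for illustration; the specific models and dataset are not from the talk.

```python
# Hedged illustration of the accuracy-vs-explainability trade-off using
# scikit-learn (an assumption of this sketch, not a tool from the talk).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A depth-2 tree: a handful of human-readable if/then rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# A boosted ensemble of many trees: usually more accurate, far harder to
# explain to the person affected by its decision.
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("interpretable tree accuracy:", tree.score(X_test, y_test))
print("boosted ensemble accuracy:  ", boosted.score(X_test, y_test))
print(export_text(tree))  # the entire "explanation" of the simple model
```

Whether a few extra points of accuracy are worth giving up a model whose entire logic fits on one screen is exactly the kind of conscious, documented decision being argued for here.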
There's more to interrogatability than just this trade-off alone, though. There's a more human, more procedural side to this problem, and it relates to who we can ask, who we can interrogate, if something goes wrong, or if we need to get an accounting for any specific model output.

This is the cover page of one of my favorite papers ever written about machine learning, and it talks about the concept of technical debt applied to this realm. In software development, the idea of tech debt comes from basically prioritizing deployment, getting your software to market, over sustainability, and so tech debt is something that gets progressively worse over time: you're basically mortgaging complexity, and it compounds. In machine learning, tech debt is similar, but I think it's deeply challenging and deeply vexing in a variety of different ways; the paper goes into some of those ways. For us, I think the main point is that machine learning is deployed in incredibly complex IT environments, and because of that, these models can be dependent on data in ways we don't fully realize, and that can make them react in strange or unpredictable ways. This fact, the type of tech debt that accrues in machine learning environments, can make it very difficult to figure out why a specific outcome happened. This interrogatability problem, this inability to interrogate and fully account for a particular decision, I think is really greatly influenced by tech debt in machine learning systems.

And then here's the third and final challenge; in fact, I think I would rank this as the biggest long-term challenge I'm worried about when I think about deploying AI. I have this on the slide as failures that are silent, more frequent, and more violent. The fact is, I think we often won't know what counts as a failure once we've deployed a model, and even if we do, I think oftentimes we won't be able to understand exactly why that failure has occurred. So I think, frankly, we're looking at a world where we might be lucky to know if something has actually gone wrong. There's a lot, obviously, to say on this topic, but one of my favorite examples is Move 37. Move 37 took place in the second Go game between AlphaGo, the series of ensemble methods that was used to beat human experts at Go, and Lee Sedol. Go was supposed to be one of the most sophisticated games humans have ever invented, and AlphaGo basically wiped the floor with our best Go mind. Move 37 is particularly powerful: this is a move that AlphaGo made, and nobody understood it; it was completely, completely bizarre. As a testament to how unexpected it was, Lee Sedol was so flummoxed by the move that he reportedly had to stand up and leave the room, and it took him 15 minutes to recover. At the time, people thought this was a bug, which models, you know, are prone to, but it turns out, over time, we now understand it was actually a feature: there was genius to this move that humans just did not understand. So understanding what is a failure and what is not a failure, and keeping track of that difference in ways that are meaningful, is really going to be one of the biggest difficulties brought about in practice by the deployment of AI over the long term.

Okay, those were my three biggest concerns. Those were areas which some alarmists might
And say okay we just can't do this in risky environments, which is not what I'm trying to do that's not the goal of my talk here so, I promised, that I would actually have some constructive suggestions, going, forward and so that's what I want to outline, here so, in general I think this this point is pretty clear, we just need clear, liability, we need it from a regulatory perspective we. Need it from a development perspective, everybody. Needs to understand where the lines are, the. Lines to start out with don't have to be perfect but, they need to be clear if we're gonna move forward, secondly. That trade-off between explained ability and accuracy, again, needs to be clear and it needs to be documented, and, it needs to be the result of a conscious decision now, this is something, that I've learned. A lot dealing, with, engineers but quite frequently in engineering. We. Default, to the most accurate solution, the ultimate, goal is accuracy, and in. Many environments especially in medical environments, that.

can't be the case. We need to think consciously about what accuracy we're gaining and what explainability we're losing when we make these decisions. There are a variety of different ways we can balance that trade-off, a variety of different ways we can cut specific decisions into smaller decisions to help us strike the right balance, but again, we need to very consciously understand what decisions we're making and the implications of all of those decisions. And then, lastly, we need to be thinking about what counts as failure, and we need to be extremely creative about how we monitor, alert, and intervene with potential failures. Just a few examples of what that actually looks like (this is something I'm quite focused on in my day job): some of this, I think, is going to include best practices like constantly snapshotting input and output data, and comparing those snapshots against benchmarks or statistical ground truths for how we think the input data (the world) or the output data (the decisions) should be behaving in practice; and it means very consciously thinking about how to insert humans into the loop when we think there are potential deviations or anomalous activities. I also want to make the point that all of the suggestions I've just made are going to be in a white paper we're going to release; I don't know exactly when, but my guess is probably in two months. So for anyone who's hungry for more specific details on putting these recommendations into practice, my contact info is on the last slide; just reach out to me and I'm happy to make sure you get the paper. As I said, there's no reasonable standard for deploying machine learning, or for controlling risk in machine learning, and our goal is to at least get the ball rolling on creating version one of something that could turn into that standard.
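To make the snapshotting suggestion a little more tangible, here is a minimal, hypothetical sketch of that kind of monitoring: keep a snapshot of the input data the model was validated on, compare live inputs against it, and pull a human into the loop when the distributions drift. The threshold, the statistical test, and the alerting function are placeholder assumptions, not a description of any real system.

```python
# Hypothetical sketch of the monitoring practice described above: snapshot the
# input data a model was validated on, compare live inputs against it, and
# pull a human into the loop when things drift. The threshold and the alert
# function are invented placeholders, not a real system.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01          # hypothetical alerting threshold

def snapshot(data: np.ndarray) -> np.ndarray:
    """Keep a copy (or a sample) of the data the model was validated on."""
    return np.asarray(data).copy()

def check_for_drift(baseline: np.ndarray, live: np.ndarray) -> bool:
    """Two-sample KS test per feature; True if any feature has drifted."""
    drifted = False
    for i in range(baseline.shape[1]):
        res = ks_2samp(baseline[:, i], live[:, i])
        if res.pvalue < DRIFT_P_VALUE:
            print(f"feature {i}: shift (KS={res.statistic:.2f}, p={res.pvalue:.4f})")
            drifted = True
    return drifted

def alert_human(message: str) -> None:
    # Placeholder: in practice this would open a ticket, page someone, or
    # route the affected decisions to manual review.
    print("HUMAN REVIEW NEEDED:", message)

rng = np.random.default_rng(1)
baseline = snapshot(rng.normal(0.0, 1.0, size=(5000, 3)))   # validation-time data
live = rng.normal(0.0, 1.0, size=(500, 3))
live[:, 2] += 0.5                                            # one input quietly shifts

if check_for_drift(baseline, live):
    alert_human("input data no longer looks like the data the model was validated on")
```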

All of that brings me back to Hans the horse. I wish I could tell you that Hans had a happy ending, that after the world learned he wasn't as intelligent as he first seemed, he still had a long and distinguished career, but that is emphatically not the case. At the beginning of World War One, in 1914, Hans was actually drafted as a military horse by the Germans; he's believed to have been killed in action, or eaten by hungry soldiers, sometime in 1916. Neither outcome, obviously, ideal. But here's an important aspect of Hans's story that I haven't had time to talk about today, and that is that there was really a ten-year period where the best scientists in the world thought Hans was the real deal; they thought they'd found a new kind of intelligence. In 1904, seven years before his intelligence was officially debunked by Oskar Pfungst, the German Board of Education set up a commission to study his intelligence, and after a year and a half of study, eighteen months, they concluded that it was the real deal. And the challenges Hans posed really mirrored the challenges we face today with AI. For example: how do we approach a new type of intelligence we can't understand? How do we harness it without stifling its potential? Should we harness it at all? How do we understand when it's wrong? How do we hold it to account when it creates a negative circumstance? How do we control the unexplainable?

The parallels between Hans and AI, of course, only go so far. What we call AI today really is quite capable; new models really can achieve levels of pattern recognition that humans simply can't. We are looking at a breakthrough, and that's to say the technology is ready, and it's ready right now. But what isn't ready, as I hope I've convinced folks here today, is the law. The laws in place governing AI are not ready. We don't yet have any agreed-upon, practical methods for deploying these types of models in real-world, important, and potentially sensitive scenarios. We have frameworks we can draw from, as I've tried to show today, but we don't have any clear legal response to the rise of AI in all the areas where it's being deployed. So when you think about the success of AI, I would actually ask that you think about the laws governing AI instead. I'd ask that you think about this gargantuan task of regulating AI, which is going to shape the benefits we can draw from this technology as individuals, as organizations, as health care providers, as patients around the world. Because AI is becoming ready, it is becoming ready to be used in myriad environments, and what's not ready is our laws. The good news is that the way our laws respond is up to us. So on that note, I think Sam and I are going to talk, and I'm happy to answer any questions.

I was interested in your topic about liability and silent failure, and I was wondering if you could give a very practical example. You know, we're using algorithms to detect cardiac arrest in the hospital, and the algorithm group here, for instance, has developed an algorithm to detect when patients are going to have a cardiac arrest, and there's a pager

that goes off and everybody runs to the room. So, you know, that algorithm is very rules-based, but as these algorithms mature and become more machine learning and deep learning based, how do you see those issues of silent failure and liability taking shape around those kinds of specific examples?

So there's a liability answer, and then there's a silent failure answer. The silent failure answer is the one that's easier to talk about, frankly, because that point relates to deploying a model like that over time. At first it might be intuitive: you're going to know when it fails because it makes a mistake. I think a silent failure is when the input data changes over time such that the model is making correct decisions, but for reasons that don't make full sense, until suddenly they don't, and then there's a change: suddenly what has been working for a while no longer works, and no one understands exactly why. So maybe in that case the silent part of the failure is that the model is actually working, but it's working in ways that are pushing it toward failure, and once it fails, debugging it becomes incredibly complex, if not impossible. Outside the medical world, there's an example of debugging, and really of ethical liability, that folks here might have heard of. Google has an image classifier, and I believe it was 2013 when it started classifying African-Americans as gorillas. Do folks know about this? Okay. So it started doing that, obviously a huge problem; the engineers had no idea it was going to do this when they deployed the algorithm, and there's a Wired story from January of this year which says that, after all these years, they still have not been able to debug it and figure out exactly why, and so their answer right now is basically to not allow the label "gorilla" at all in the image classifier. So this debugging issue, this confronting of failures, is going to be huge and incredibly difficult, a challenge we're going to have to figure out. But I realize I'm only talking about one of those questions, the liability...

I guess the silent failure won't always be obvious, because I was thinking about your Move 37 issue. So if the algorithm says this patient has cancer, use this kind of chemotherapy, and no one's ever thought of that before, and you use it or you don't, are you going to know whether that was a Move 37 or a mistake?

I think right now you're not, and that's why understanding when human review comes in is going to be incredibly important. I don't know, excuse me, the right way to deal with those. I suspect that what you do in a medical context is, if there's a Move 37, and it's the first Move 37, you have an alert: this is anomalous activity, don't do it. In the medical world, one of the reasons why I think AI, and data science itself, is so fascinating is because it can be employed in almost every context, and there are some interesting articles on, like, the death of expertise and the rise of data science. So it's fascinating, but I think for someone like me who's focused on risk, there's no better

environment, or a more conscientious environment, than the medical environment, where no one understands risk, I think, like physicians understand it. My gut for an answer like that is that what you would do is say no, no to Move 37, until it happens enough times and there's enough human review that we can somehow validate that there's some genius there.

So one of the most popular questions right now, along those lines, is: do you think physicians will be found liable for not using the best available model?

There's a great paper that was just published on this, and I think right now, legally, the answer is yes. I don't know if that's going to change, but I think the way legal liability works right now is that physicians are held liable if they are not using the methods that are most trustworthy.

But the best practices, or at least current best practices, are transparent models where we know, with evidence-based medicine, why they're the best. So I guess the question is, when you start to have opaque models that have been shown over time to make the best decisions, but then a Move 37 comes up and the physician doesn't do it, will there be liability there?

So, right. I don't know; these are questions that are incredibly important, and I don't think they have clear answers. My sense is that what's clear is that physicians are legally liable to be using the most trustworthy, accurate methods. I think, frankly, the answer that folks in the technical community and the data science community give, the answer I hear most from them, is the same answer you get with self-driving cars, where one in however many self-driving cars drives off a bridge and does something crazy, and the answer is that's the cost of doing business on the one hand. And in fact I've seen some of this with the recent incidents with Uber; there have been a few incidents in the last few weeks that have brought this to light. So people say, on the one hand (and the Tesla statement, in reaction to one of the Tesla crashes, actually was this): we're

sorry for the loss of life that's caused, but overall, from a utilitarian perspective, we're going to be saving more lives by relying on things that don't make this type of mistake. I can't say I'm comfortable with that. I don't know if that level of discomfort is just something we need to accept, but I think that's a huge, huge question, and it's potentially one of the trade-offs. What I asserted today, and what I'm 100% comfortable with, is that silent failures, and the need to insert human review and do some of these types of anomaly detection so that we at least know a Move 37 is occurring, that needs to happen. How exactly we should respond, I don't know. I think there are arguments to be made that there's going to be some collateral damage, and that over the long run that collateral damage is going to be, you know, less harmful to society as a whole than if we just let humans make mistakes like they do now.

Well, no, but that's not really how we make our decisions. You know, we were just talking about this in our ethics class: you can't design a clinical trial and say, well, 5% of the people are going to die from this therapy, but we hope to learn from it anyway. You have to have a reasonable expectation, based on the Helsinki criteria, that what you're testing is not going to be more harmful. So that's a sort of different test.

In this scenario, I'm not so sure. In constructing this talk, I was thinking about what it would look like if I came in and threw out all of these slides and just totally focused on machine learning in medicine, just my thoughts on machine learning and medicine, and the first thing I thought of was: what's the future of the Hippocratic oath? Will doctors really be able to say, in every instance, we're not going to cause harm, for precisely this reason? And then, on second thought, I think doctors do this with medication every day.

It's a balance, yeah.

The fact is, it's a balance. There are a lot of medications that are prescribed widely that nobody knows how they work, and sometimes people die. By and large, these medications seem to work pretty well, and we just accept some people dying as a cost of doing business. So I think that might be a more analogous scenario than some of these clinical trials. But I think the end statement is that it's going to be a balance, and I don't know if we're going to be comfortable with that balance, but we need to think through the balance, because, again, these tools are being developed and they're being employed, and, as you know, they can be incredibly effective in ways that humans can't be.

Yeah. Somebody asks, people ask: is the regulation of artificial intelligence different if we must consider the impending artificial generalized intelligence that everybody's worried about?

Somebody asks, a few people have asked: is the regulation of artificial intelligence different if we must consider the impending artificial general intelligence that everybody's worried about?

Thank you, Mr. Musk, for asking that. I brought up Hans for a couple of reasons, and one of them is cognitive biases. I know a lot of people are very worried about this idea of artificial general intelligence. To those people I would say: spend more time thinking about how IT systems fail for really dumb reasons, and then come back to me and talk. I don't want to minimize it; there's a lot of fear out there, and I think there's a lot of misunderstanding. I think what Elon Musk is doing publicly is a bit distracting, but this point is something I can do a little bit of a deeper dive into. It's really a question of: should I do a deeper dive into why people think Skynet is going to kill us, or should we just move on? I mean, I think we're all worried about the impending singularity. Yeah? I am. You're worried? Okay, then.

So basically, this question comes from a cognitive bias, which is that humans can't understand, or can't grasp, exponential change. When you look at the growth of computing and the ability of computers to simulate human intelligence, what you see is a clear exponential growth curve, and from that one observation we then get concerns that, well, if this is true, then in ten or twenty years we're going to have a Skynet that's artificially intelligent, that's going to maximize its own capability rather than human survival, and that assumption is then going to lead it to kill us. And to be fair, the alarmists make the point that from the time it happens to the time we realize it, it's going to be very short.

Yes, but that's still based on the assumption that, because we are bad at understanding exponential change, there is going to be some god-like intelligence. I think it's a distraction; I think there are other problems we need to be focused on right now, today. From the world that I live in, where I'm seeing real risk and real potential harms every day, I think it's a total distraction to think about how computers are going to kill us, just as I think it was a distraction in the 1970s to think we were being attacked by computers. Which was kind of true; there is some truth in these worries. Yeah, but the worry in the 1970s that we were being attacked by computers was: okay, how do we make them more useful, how do we control the ways they're being applied, how do we, for example, pass laws like ECOA that try to tackle discrimination, not how do we stop computers from taking over life on Earth.

But to that point, one of the biggest concerns people are asking questions about is: how do we control, or protect against, the discrimination these algorithms will likely produce if they're taking the available input data and then making supposedly unbiased decisions? Looking at socioeconomic status or loan applications, how do we regulate that, or how do we prevent it?

That's a really complex question, and I think there are going to be a bunch of answers. It is a basic fact that underprivileged and underrepresented communities don't generate as much data, and AI is based on data. So I think, one, we need to understand that all data is biased; we need to try to quantify the ways the data is biased; and I think we need to be thinking about more creative ways to try to level out that bias. There's a good example of this: either the city of Boston or a nonprofit in Boston created an app that was designed to detect potholes based on one of the sensors in smartphones, and, surprise surprise, the wealthiest communities had the most smartphones. So as a result, to start with, all of these potholes were getting fixed in wealthy neighborhoods, which wasn't the intention. Once the developers realized that, they were able to insert some fixes that helped minimize that bias (a small worked example of that kind of adjustment appears at the end of this exchange). Ideally, everyone from every community would be generating the same type, quantity, and quality of data, and I think that would reduce the problem greatly. I don't know if that's realistic, so in the interim I think we just need to be aware of bias in all the areas we can.

Right, and that's not a new problem. We look at clinical trials, and who's in clinical trials, and the data we collect is not always a fair sampling of who will be taking the medication or using the intervention. But sticking with this theme of sort of nefarious intervention: I saw a really cool example, I can't remember the exact one, but it was where an AI would identify a picture, then somebody put some noise into the picture that you couldn't even tell was there, and when they ran the same AI over the picture it found something entirely different.
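The "noise you couldn't even tell was there" in that last question is what the research literature calls an adversarial example. Here is a minimal sketch, an editor's illustration rather than anything from the talk, that stands in a simple linear classifier for a real image model: a tiny, uniform nudge to every input value, chosen using the model's own weights in the spirit of the fast gradient sign method, flips the prediction even though the input barely changes.

```python
# Minimal sketch (illustrative assumption: a linear classifier standing in for
# a real image model) of an adversarial perturbation in the spirit of the
# fast gradient sign method: a tiny, uniform nudge per "pixel" flips the label.
import numpy as np

rng = np.random.RandomState(0)
w, b = rng.normal(size=784), 0.0      # weights of a hypothetical classifier
x = rng.uniform(size=784)             # a "picture" flattened to a vector

def predict(v):
    return "cat" if v @ w + b > 0 else "not a cat"

score = x @ w + b
# Smallest uniform per-pixel step, in the gradient-sign direction, that crosses
# the decision boundary; for typical draws it is a few hundredths per pixel.
epsilon = (abs(score) + 1e-3) / np.abs(w).sum()
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print("original prediction:  ", predict(x))
print("perturbed prediction: ", predict(x_adv))
print("largest pixel change: ", float(np.abs(x_adv - x).max()))
```

Against deep networks the same idea uses the network's own gradients, which is why an image can look unchanged to a person while the model's answer changes completely.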
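Going back to the pothole example earlier in this exchange: the talk does not say what fixes the developers actually inserted, but one common, simple correction is inverse-probability weighting. The sketch below uses made-up neighborhood names, report counts, and smartphone-penetration figures purely for illustration.

```python
# Hypothetical sketch (numbers invented, not the actual Boston fix) of one way
# to correct reporting bias: weight each neighborhood's pothole reports by the
# inverse of its estimated smartphone penetration, so repair priorities track
# potholes rather than phone ownership.
raw_reports = {"high_income_a": 220, "high_income_b": 180,
               "low_income_a": 96, "low_income_b": 75}
smartphone_penetration = {"high_income_a": 0.90, "high_income_b": 0.85,
                          "low_income_a": 0.30, "low_income_b": 0.25}

adjusted = {
    hood: count / smartphone_penetration[hood]   # inverse-probability weighting
    for hood, count in raw_reports.items()
}

# Rank neighborhoods by the adjusted estimate, not by raw report counts.
for hood, estimate in sorted(adjusted.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{hood}: estimated potholes ~ {estimate:.0f}")
```

With these illustrative numbers, the neighborhoods that looked low-priority by raw report counts come out highest once reporting rates are accounted for, which is the sort of rebalancing the speaker is gesturing at.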
