# Regulating Artificial Intelligence: How to Control the Unexplainable


To go off-script for a moment, I want to start today by introducing you to a horse. More specifically, this is Hans — Clever Hans, as he became known — one of the most famous horses in the world about a hundred years ago. Hans lived in Germany, raised by a man named Wilhelm von Osten, and he was thought to be incredibly, incredibly smart; hence his name. This is Hans at a public fair, demonstrating his intelligence. For the folks who already know about him: no spoilers.

He was thought to speak German. He could perform arithmetic, he could count objects, and much more. Here's a first-hand account of how he communicated numbers: small numbers were given with a slow tapping of the right foot; with larger numbers he would increase his speed; after the final tap he'd return his right foot to its original position; and zero was expressed by a shake of the head. One example of a question he'd answer: "I have a number in mind. I subtract nine and I have three as a remainder. What's the number I have in mind?" And Hans would unfailingly tap out the number twelve. Hans was, quite simply, the most interesting horse in the world.

This is Hans with his owner, in front of a board that he used to help him communicate. Hans became world famous for this apparently clear display of animal intelligence. This — I didn't realize there were slides over there — is an article in the New York Times from 1904 attesting to Hans's feats of intelligence. In it, the reporter recounted all of these feats and stated — I'm going to quote — "The facts here are not drawn from the imagination but are based on true observations, and can be verified by the world's most preeminent scientists."

So what is going on here? Why am I starting a talk about machine learning by introducing you to a horse? Two reasons. The first is that Hans illustrates something really profound in the way that humans approach the problem of intelligence, in animals and in machines.

For the last few years — under the GDPR, if a subsidiary of Apple, say Apple Spain, violates the regulation, the fine is calculated against Apple's global revenue: Apple could be fined upwards of eight billion dollars. So, quite intense. And the key here — the key connection between the GDPR and machine learning — is that, more or less, with some exceptions, it basically prohibits automated decision-making of the type we're talking about today without express human consent. It's going to make using artificial intelligence in practice incredibly difficult, and a lot of this, I think, is geared at underlying ethical concerns. So although folks in the tech community like to scoff at it — and I think for some good reason — I think it's important to really think about the motivations behind it. That's the GDPR.

In Congress, a bipartisan bill was proposed at the end of last year focused on some of these issues; it was the first proposed federal law ever focused specifically on AI. And the city of New York itself has stood up a committee, as of January, to examine these issues. I'm really just hitting the wave tops here — if anyone wants a deeper dive into any of these particular laws during the Q&A, I'm happy to do that. But the point for now is that these are just a few of the growing efforts to regulate AI and to address this new problem, which is really the increasing adoption of AI on the one hand and the increasing difficulty of understanding it on the other. A good deal of these approaches seek to tackle this problem head-on, and in some cases to mandate certain levels of explainability directly.

This quote comes from not too long ago, from the French digital minister, who basically stated that any algorithm that can't be explained can't be used by the French government. Again, blanket proposals like this may be well intentioned.
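To make the eight-billion-dollar figure concrete: the GDPR's most serious administrative fines are capped at the greater of EUR 20 million or 4% of total worldwide annual turnover. Here is a minimal sketch of that calculation; the EUR 200 billion revenue figure below is a rough, hypothetical stand-in for a large parent company, not a number from the talk.

```python
# GDPR administrative fines (Art. 83(5)): the greater of EUR 20 million
# or 4% of total worldwide annual turnover of the preceding year.
# The revenue figure used below is hypothetical.

def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine for the most serious violations."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# A parent company with ~EUR 200B in worldwide revenue:
fine = max_gdpr_fine(200e9)
print(f"max fine: EUR {fine:,.0f}")  # 4% of 200B -> EUR 8,000,000,000
```

Note that the cap applies to the parent's worldwide turnover, which is why a violation by a single subsidiary can expose the whole group.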
I think these proposals are well intentioned, but these types of reactions are going to deprive us of some very significant opportunities if they're actually implemented. The risks here, frankly, are huge, and if we focus too much on explainability, I think we're going to lose some very important opportunities.

So that was the first point. The second point is that we've been here before. As scary and as new as these challenges seem, they're not completely new: we've faced similar challenges in regulating opaque — or unexplainable — software systems in the past. So I want to run through some of those examples and the lessons they teach. Specifically, I want to talk about three parallels. The first is a law called ECOA, the Equal Credit Opportunity Act, which was used to govern credit decisions and was passed in the 1970s.

goes wrong, if we need to get an accounting for any specific model output. This is the cover page of one of my favorite papers ever written about machine learning, which applies the concept of technical debt to this realm. In software development, the idea of tech debt comes from prioritizing deployment — getting your software to market — over sustainability, and tech debt is something that gets progressively worse over time: you're basically mortgaging complexity, which compounds. In machine learning, tech debt is similar, but I think it's deeply challenging and deeply vexing in a variety of different ways. The paper goes into some of those ways, but for us the main point is that machine learning is deployed in incredibly complex IT environments. Because of that, these models can depend on data in ways we don't fully realize, which can make them react in strange or unpredictable ways. So this type of tech debt that accrues in machine learning environments can make it very difficult to figure out why a specific outcome happened. This interrogability problem — this inability to interrogate and fully account for a particular decision — is, I think, greatly influenced by the tech debt in machine learning systems.

Here's the third and final challenge; in fact, I would rank this as the biggest long-term challenge I worry about when I think about deploying AI. I have it here as silent failures — more frequent, more violent failures. The fact is, I think we often won't know what counts as a failure once we've deployed a model, and even if we do, I think oftentimes we won't be able to understand exactly why that failure occurred. So frankly, I think we're looking at a world where we might be lucky to know if something has actually gone wrong.
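One small, concrete way to surface the silent data-dependency failures just described is to monitor production inputs against statistics recorded at training time. This is a minimal sketch I'm adding for illustration — not from the talk or the paper, and the feature values are invented:

```python
# Minimal input-drift check: compare a feature's recent production values
# against statistics recorded at training time. A large shift in mean
# (measured in training standard deviations) flags the kind of silent
# upstream change that ML tech debt makes hard to notice.
from statistics import mean, stdev

def drift_alert(train_values, live_values, threshold=3.0):
    """Return True if the live mean drifted more than `threshold` train-stdevs."""
    mu, sigma = mean(train_values), stdev(train_values)
    shift = abs(mean(live_values) - mu) / sigma
    return shift > threshold

train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # feature as seen in training
live_ok = [10.1, 9.9, 10.4]                   # looks like training data
live_bad = [1000.0, 980.0, 1010.0]            # upstream unit change?

print(drift_alert(train, live_ok))   # False
print(drift_alert(train, live_bad))  # True
```

A real deployment would track many features and use a proper two-sample test, but even a crude check like this turns an invisible dependency change into a visible alert.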
There is a lot to say on this topic, but one of my favorite examples is Move 37. Move 37 took place in the second Go game between AlphaGo — the system that was used to beat human experts at Go — and Lee Sedol. Go was supposed to be one of the most sophisticated games humans have ever invented, and AlphaGo basically wiped the floor with our best Go mind. Move 37 is particularly powerful: this was a move AlphaGo made that nobody understood; it was completely bizarre. As a testament to how unexpected it was, Lee Sedol was so flummoxed by the move that he reportedly had to stand up and leave the room, and it took him fifteen minutes to recover from it. At the time, people thought this was a bug — the kind of thing models are prone to do. It turns out, we now understand, it was actually a feature: there was genius to this move that humans just did not understand.

So understanding what is a failure and what is not, and keeping track of that difference in ways that are meaningful, is, I really think, going to be one of the biggest practical difficulties brought about by the deployment of AI over the long term.

OK — those were my three biggest concerns, areas where some alarmists might digest all this and say, "We just can't do this in risky environments." That is not what I'm trying to do; that's not the goal of my talk. I promised I would have some constructive suggestions going forward, so that's what I want to outline here. In general, the first point is pretty clear: we need clear liability. We need it from a regulatory perspective, we need it from a development perspective — everybody needs to understand where the lines are. The lines don't have to be perfect to start out with, but they need to be clear if we're going to move forward. Secondly, the trade-off between explainability and accuracy also needs to be clear, it needs to be documented, and it needs to be the result of a conscious decision. This is something I've learned dealing with engineers: quite frequently in engineering, we default to the most accurate solution — the ultimate goal is accuracy — and in many environments, especially medical environments, that's the default.

I can't think of a riskier environment, or a more conscientious one, than the medical environment, where no one understands risk like I think physicians understand it. My gut answer to a question like that is that what you would do is say no — no to Move 37 — until it happens enough times, with enough human review, that we can somehow validate that there's some genius there.

So one of the most popular submitted questions right now, along those lines, is: do you think physicians will be found liable for not using the best available model?

There's a great paper that was just published on this, and I think right now, legally, the answer is yes. I don't know if that's going to change, but the way legal liability works right now is that physicians are held liable if they are not using the methods that are most trustworthy — the best practices. And best practices are transparent models, where we know, with evidence-based medicine, why they're the best.

[Audience] So I guess the question is: when you start to have opaque models that have been shown over time to make the best decision, but then a Move 37 comes up and the physician doesn't do it — will there be liability there?

Right. So — I don't know. These are incredibly important questions, and I don't think they have clear answers. My sense is that what is clear is that physicians are legally liable to be using the most trustworthy, accurate methods. Frankly, the answer I hear most from folks in the technical and data science communities is the same answer you get with self-driving cars: one in however many self-driving cars drives off a bridge, or does something crazy, and the answer is that that's the cost of doing business. In fact I've seen some of this with the recent incidents involving Uber — there have been a few incidents in the last few weeks that have brought this to light. Tesla's statement, in reaction to one of the Tesla crashes, was essentially this:

We're sorry for the loss of life this caused, but overall, from a utilitarian perspective, we're going to be saving more lives by relying on things that don't make this type of mistake. I can't say I'm comfortable with that. I don't know if that level of discomfort is just something we need to accept, but I think that's a huge question, and it's potentially one of the trade-offs. What I asserted today, and what I'm 100% comfortable with, is about silent failures and the need to insert human review — to do some of these types of anomaly detection so that we at least know a Move 37 is occurring. That needs to happen. How exactly we should respond, I don't know. I think there are arguments to be made that there's going to be some collateral damage, and that over the long run, that collateral damage is going to be less harmful to society as a whole than if we just let humans make mistakes the way they do now.

[Audience] Well, no, we don't — but that's not really how we make our decisions. We were just talking about this in our ethics class: you can't design a clinical trial and say, well, 5% of the people are going to die from this therapy, but we hope to learn from it anyway. You have to have a reasonable expectation, based on the Helsinki criteria, that what you're testing is not going to be more harmful. So that's a different test than in this scenario.

I'm not so sure. In constructing this talk, I was thinking: what would it look like if I threw out all of these slides and just totally focused on machine learning in medicine — just my thoughts on machine learning and medicine? The first thing I thought of was: what's the future of the Hippocratic oath? Will doctors really be able to say, in every instance, "we're not going to cause harm," for precisely this reason? Then, on second thought, I think doctors do this with medication every day.
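The "at least know a Move 37 is occurring" idea I asserted above can be made concrete with a very simple routing rule: any decision the model makes with unusually low probability, relative to its own behavior, goes to a human instead of being executed silently. This is a hypothetical sketch — the names and the threshold are invented, not from any real system:

```python
# Sketch of "know when a Move 37 is occurring": model decisions made with
# unusually low probability get routed to a human review queue instead of
# being acted on silently. Names and thresholds are hypothetical.

def route(decision: str, model_probability: float, review_floor: float = 0.05):
    """Send rare, surprising model outputs to a human instead of auto-acting."""
    if model_probability < review_floor:
        return ("human_review", decision)  # a surprising move: pause and look
    return ("auto_execute", decision)

print(route("approve_loan", 0.87))  # ('auto_execute', 'approve_loan')
print(route("move_37", 0.0001))     # ('human_review', 'move_37')
```

The point is not the threshold itself but the design choice: surprising outputs are never both rare and invisible.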
It's a balance — prescribing is a balance. The fact is, there are a lot of medications prescribed widely that nobody knows how they work, and sometimes people die. By and large, these medications seem to work pretty well, and we just accept some people dying as a cost of doing business. So I think that might be a more analogous scenario than some of these clinical trials. But the end statement is that it's going to be a balance, and I don't know if we're going to be comfortable with that balance. We need to think it through, though, because these tools are being developed and they're being deployed, and, as you know, they can be incredibly effective in ways that humans aren't.

Somebody asks — a couple of people ask — is the regulation of artificial intelligence different if we must consider the impending artificial general intelligence that everybody's worried about?

Thank you, Mr. Musk, for asking that. I brought up Hans for a couple of reasons, and one of them is cognitive biases. I know a lot of people are very worried about this idea of artificial general intelligence. To those people I would say: spend more time thinking about how IT systems fail for really dumb reasons, and then come back and talk to me. I don't want to minimize it — there's a lot of fear out there, and I think there's a lot of misunderstanding. I do think what Elon Musk is doing publicly is a bit distracting, but this is a point I can do a deeper dive into. Should I do a deeper dive into why people think Skynet is going to kill us, or should we just move on? I mean, are we all worried about the impending singularity? Yeah? You're worried? OK.

So basically, this question comes from a cognitive bias, which is that humans can't grasp exponential change. When you look at the growth of computing and the ability of computers to simulate human intelligence, what you see is a clear exponential growth curve. From that one observation, we then get the concern that, if this is true, in ten or twenty years we're going to have a Skynet that's artificially intelligent, that's going to maximize its own capabilities, and that assumption is then going to lead it to kill us.

[Audience] To be fair, though, the alarmists make the point that from the time it happens to the time we realize it, it's going to be very short.

Yes, but that's still based on the assumption that, because we are bad at understanding exponential change, there's going to be some god-like intelligence that exists. I think it's a distraction. I think there are other problems we need to be focused on right now, today. From the world I live in, where I'm seeing real risk and real potential harms every day, I think it's a total distraction to think about how computers are going to kill us — just like I think it was a distraction in the 1970s to think we were being attacked by computers.

[Audience] Which was kind of true — there's some truth in these worries.

Yeah, but the worry in the 1970s that we were being attacked by computers became: OK, how do we make them more useful? How do we control the ways they're being applied? How do we, for example, pass laws like ECOA that try to tackle discrimination?

Not: how do we stop computers from taking over life on Earth.

[Audience] So to that point, one of the biggest concerns people are asking about is: how do we control, or protect against, the discrimination these algorithms will likely produce if they're taking available input data and then making biased decisions — looking at socioeconomic status, say, or loan eligibility? How do we regulate that, or how do we prevent it?

That's a really complex question, and I think there are going to be a bunch of answers. It is a basic fact that underprivileged and underrepresented communities don't generate as much data, and AI is based on data. So, one, we need to understand that all data is biased, and we need to try to quantify the ways in which our data is biased; and I think we need to be thinking about more creative ways to level out that bias. There's a good example of this: either the city of Boston or a nonprofit in Boston created an app designed to detect potholes using one of the sensors in smartphones. Surprise, surprise — the wealthiest communities had the most smartphones, so to start with, all of these potholes were getting fixed in wealthy neighborhoods, which wasn't the intention. Once the developers realized that, they were able to insert some fixes that helped minimize the bias. But ideally, everyone from every community would be generating the same type, quantity, and quality of data.
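"Quantify the ways the data is biased" can be made concrete in the pothole-app style: compare each neighborhood's share of reports to its share of the population. The numbers below are invented purely for illustration:

```python
# Quantifying reporting bias, pothole-app style: if smartphone ownership
# skews by neighborhood, raw report counts over-represent wealthy areas.
# All numbers here are made up for illustration.

population_share = {"wealthy": 0.30, "middle": 0.40, "low_income": 0.30}
report_share     = {"wealthy": 0.60, "middle": 0.30, "low_income": 0.10}

# Representation ratio: 1.0 = reports match population share;
# below 1.0 = that community is under-reported in the data.
for area in population_share:
    ratio = report_share[area] / population_share[area]
    print(f"{area:>10}: {ratio:.2f}")
```

A fix of the kind the developers inserted could, for instance, reweight reports by the inverse of this ratio so under-reported neighborhoods aren't starved of repairs.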
Equalizing the data like that would, I think, greatly reduce the bias, but I don't know if that's realistic. So in the interim, I think we just need to be aware of bias in all the areas we can.

[Audience] Right, and that's not a new problem. Look at clinical trials — who's in clinical trials, and the data we collect: it's not always a fair sampling of who's taking the medication or using the intervention. But sticking with this theme of nefarious intervention: I saw a really cool example — I can't remember the exact one — where an AI would identify a picture, and then somebody put some noise into the picture that you couldn't even see. But when they ran the same AI over the picture, it found something completely different.
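That imperceptible-noise trick — an adversarial example — is easiest to see on a toy linear classifier: nudge each input feature a tiny amount in the direction that pushes the score across the decision boundary, and the predicted class flips even though the input barely changes. The weights and input below are invented for illustration; real attacks do the same thing against deep networks using gradients.

```python
# Adversarial perturbation on a toy linear classifier: a small FGSM-style
# step against the sign of each weight flips the prediction while leaving
# the input almost unchanged. Weights and input are invented.

w = [0.5, -1.2, 0.8, 0.3]   # classifier weights
b = -0.1
x = [0.9, 0.4, 0.2, 0.5]    # original input, classified positive

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

eps = 0.25                   # small per-feature perturbation budget
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- nearly the same input, different answer
```

Each perturbed feature moves the score by exactly -eps * |w_i|, so a budget of 0.25 per feature shifts the total score by -0.7 here — enough to cross zero without any feature changing by more than 0.25.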

2018-05-09 04:45
