Steve Grobman - Technology Has No Moral Compass (3/26/2019)


Join me in welcoming our speaker.

Good evening. It's really great to be back at NC State; it has been 25 years, so it's great to be back home. What I'm going to do this evening is talk about how, in the security industry, we're using artificial intelligence as the foundation for really all of our products. But one of the things we have to recognize is that the technology has a dark side. We really need to understand it, and it's not something that's new to artificial intelligence; it's something technology has had throughout its history. Given that we're here in North Carolina, it's fitting to start with the technology of flight. If you think about what flight has done for humanity, it has totally changed the way humans live: we can now move between continents in hours, as compared to days or weeks, and if you think about the way all of our economies and businesses operate, everything is global. But that exact same technology also changed the impact of war. In World War II we saw over two million casualties caused by the invention of air power, and tremendous destruction beyond anything we'd seen in any other conflict. So put all the pieces together: at the end of World War II, in 1948, Orville Wright was still alive, and after seeing all of this destruction he was asked about his invention. The question was, "Orville, do you regret inventing the airplane?" And what Orville said was that he didn't, because technologies can be used for good or bad. It's a lot like fire: yes, it can cause tremendous harm, but it can also be put to many important uses. It's that insight that will really frame what we talk through with modern technology. If we pivot into my primary domain, which is cybersecurity, one of the areas where we have this dual use of technology is encryption.
For the comp sci side of the audience, you should recognize this algorithm: it's RSA. Think about it this way: the RSA algorithm is the exact same algorithm that can be used to protect data from theft, or to hold individuals for ransom. And if you think about the scale at which we use this technology today, we're encrypting 156 exabytes of data on the web every month. But at the same time, we're seeing a level of ransomware against consumers, organizations, and businesses beyond anything we've ever seen in the past. And this debate is not new. Right when I got out of NC State and went into industry, in the '90s, there was actually a debate over whether you could control the use of encryption; they called it conditional use. A lot of government proponents were pushing to treat encryption as a munition. There was a law in the 1990s that made it illegal to export software containing encryption; the US government actually classified it as a munition, just as if you were shipping bombs or bullets or other ordnance. So you had to be very careful as a software developer, because simply doing what is essentially math could make you a weapons provider, and you could potentially end up in the custody of the FBI. It was this big debate that we had in the '90s, and just as today, when there are controversial issues, there are forms of civil disobedience. There's a large conference that the cybersecurity industry holds every year, called the RSA Conference, and back in the '90s people would actually wear shirts that looked like this. Essentially, the shirt carried a minimal implementation of encryption, really calling out the absurdity of the US government regulation by having a bare-bones implementation of encryption.
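As an aside, a bare-bones RSA of the kind that fit on those shirts really is just a few lines of arithmetic. Here is a toy sketch in Python with tiny fixed primes; it is purely illustrative and nothing like a secure implementation (real RSA needs large random primes and padding):

```python
# Toy RSA with tiny primes -- purely illustrative, never use in practice.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c = encrypt(42)
assert decrypt(c) == 42        # round trip recovers the message
```

The whole scheme is the two `pow` calls, which is exactly the "it's just math" point.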
They were saying: hey, my shirt is classified as a munition, and it's illegal to even show this to a foreign national. The good news, and why all of you can go write some code and ship it off without worrying about going to jail, is that at the end of the '90s they figured out this made no sense. At the end of the day, encryption is just math, and you can't stop someone from doing math. So you have to recognize that encryption will be used for good and it will be used for evil, but at the end of the day it's technology that exists and can be used for a wide range of purposes. Now let me pivot a little and get into the heart of what I want to talk about for most of this evening, which is artificial intelligence. We hear a lot about artificial intelligence; I'm sure many of you have taken classes in AI and machine learning and other forms of data science. But one of the challenges we have in the industry is that it's often positioned as the silver bullet that can solve any problem.

The nuance required to understand its limitations is often not examined to the degree it needs to be, and one of the things I try to make clear is that the field of AI doesn't have to be complicated to build an effective model for many purposes. If you think about what artificial intelligence or machine learning is largely about, it's about building models that take inputs and predict an output. Let's look at a really simple example. I know there are some non-computer-science folks in the room, so I wanted to build a really simple model, and I do this to show that a model does not have to be complex. Suppose I wanted to build a model to predict whether an Olympian is a volleyball player. Think of it as: you've got all of the Olympians for, say, the 2020 Olympics standing outside, and we're going to march them through. We don't know what sport they play, but we want to build a model that says: are you a volleyball player? You can use all sorts of fancy things, but a model that actually works pretty well is one line of code: are you taller than six feet? If you use that model, it will correctly identify 71 percent of the volleyball players as volleyball players. You won't get them all; this is one of the public data sets, I think from the 1996 Olympics, or it might have been 2000, and there were athletes like one Olympic volleyball player listed at five foot two. And then, clearly, the problem is the people taller than six feet who are not volleyball players. The nomenclature there is false positives: people we've identified as volleyball players who are not. Our basketball players are going to be incorrectly identified. What's interesting about this simplistic model is that it's actually pretty good, because our false positive rate is only 27 percent while we're getting a true positive rate of 71 percent. The reason I like this example is that it also lets you understand how easily you can adjust the sensitivity of a model, and what the consequences of that adjustment are. For example, if I challenged you to use this model to detect a higher percentage of volleyball players, what would you do? You would simply slide this red line to the left, and now you're going to get more of the volleyball players on the right side of that red line. You can set the sensitivity of the model to detect any percentage of volleyball players you want; the challenge being that as you slide the red line to the left, you also start identifying a lot more athletes who are not really volleyball players as volleyball players. This is really just to introduce how simple a model can be. When we get to the next level and think a bit about the taxonomy of how we look at data science, there are three very large areas. Artificial intelligence as a field, the way I think about it, is using any form of intelligent machine capability; it can be for automation, it can be for predictive analytics. A lot of the field is now focused on specific elements: machine learning, where you're training a model based on things you've seen in the past, and then a sub-element of that, deep learning, a specialized form of machine learning where you're typically using things like deep neural networks. But the models themselves aren't any good unless you have data.
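The one-line model, and the effect of sliding the red line, can be sketched in a few lines of Python. The athlete heights below are invented for illustration; the talk's 71 percent / 27 percent figures come from the real Olympic data set, not from this toy sample:

```python
# The model from the talk is one line: "are you taller than six feet?"
# These heights are made up -- not the real Olympic data set.
athletes = [  # (height in inches, actually_a_volleyball_player)
    (78, True), (74, True), (62, True),   # includes a 5'2" volleyball Olympian
    (80, False), (70, False), (68, False),
]

def rates(threshold_inches):
    """True-positive and false-positive rate at a given height threshold."""
    tp = sum(h > threshold_inches for h, v in athletes if v)
    fp = sum(h > threshold_inches for h, v in athletes if not v)
    pos = sum(1 for _, v in athletes if v)
    neg = len(athletes) - pos
    return tp / pos, fp / neg

print(rates(72))  # the six-foot model (72 inches)
print(rates(60))  # slide the red line left: catch every player, and everyone else too
```

Lowering the threshold raises both rates together, which is exactly the trade-off a ROC curve plots.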
One of the other interesting things we see is that in 2019 there are immense quantities of public data available for open analysis. When I gave a version of this talk a few weeks ago in San Francisco, we checked what sort of data the city of San Francisco makes available for open analysis, and it turns out that if you go to datasf.org/opendata there are over 400 data sets: on energy, on all sorts of fields, including things like public safety. So we started thinking about how you can use this new field and these new techniques around AI and machine learning, with publicly available data, for good. There's a lot you can do to make the world a better place by using this technology on this open data. For example, you can optimize where police focus their patrols, and you can identify hotspots of crime and focus citizen training in those areas, really making the city safer.

But what we also recognized is that the exact same data could be used by a criminal to make them a better criminal, and we did that too. With 50 lines of Python and a machine learning library out of Spark, we built a model that would predict whether or not a criminal would get arrested based on the parameters of a crime. Even with a very simplistic approach, we came up with a pretty good model. What we see here on the left, if you've taken a machine learning class, is a ROC curve; it plots the false positive rate against the true positive rate. Think back to my volleyball example, where I said you could slide that red line anywhere from a sensitivity perspective: you can do the exact same thing with this model as a criminal. Think of it this way: the further to the left they go on this chart, the fewer opportunities they have to commit crime for specific parameters, but the less likely they are to get caught. The further to the right they go, the more opportunity they have to commit crime, but they also increase the probability that the model predicts they won't be arrested when they actually do get arrested. So recognize that, just like the other technologies, the underlying artificial intelligence or machine learning here, figuring out whether or not a criminal is going to be arrested, is just math; it doesn't know whether it's doing something good or bad. That gets us thinking: what are some of the other areas where artificial intelligence could be used by an adversary to make their techniques more effective? One of the things that's a hot topic today is deep fake video, and the use of AI for information warfare.
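We don't have the talk's actual 50 lines of Spark code, but the shape of the exercise can be sketched with the standard library alone. Everything here is synthetic: an invented "risk score" stands in for the real crime parameters and model:

```python
import random

random.seed(0)

# Synthetic stand-in for the open crime data: one "risk score" per incident
# plus whether an arrest was actually made. Purely illustrative -- not the
# talk's Spark model and not the San Francisco data.
data = []
for _ in range(1000):
    arrested = random.random() < 0.3
    score = random.gauss(1.0 if arrested else 0.0, 1.0)  # score correlates with arrest
    data.append((score, arrested))

def rates_at(threshold):
    """One point on the ROC curve: (false-positive rate, true-positive rate)."""
    tp = sum(s > threshold for s, a in data if a)
    fp = sum(s > threshold for s, a in data if not a)
    pos = sum(1 for _, a in data if a)
    neg = len(data) - pos
    return fp / neg, tp / pos

# Sweeping the threshold traces the whole curve, exactly like
# sliding the red line in the volleyball example.
curve = [rates_at(t / 10) for t in range(-30, 41)]
```

Picking an operating point on that curve is precisely the left/right trade-off a criminal would be making.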
My colleague who co-presented this talk and I literally put together a fairly crude deep fake video in a weekend. I'll go ahead and play it, and hopefully the sound works. [Video: "Unfortunately, AI is well suited to generate false content that's highly believable. For example, deep fake video can create realistic footage that is one hundred percent fabricated, making it appear that individuals are making statements or involved in activities that have not occurred."] Okay, so it's not perfect, and if you look there are some artifacts; if you go through it frame by frame you could detect that it was a deep fake. What I think is more interesting is how easy it was to build. As I mentioned, we did this all with open-source software; we wrote a little bit of Python to do some of the compositing, but it literally took a weekend to get to that state. And we learned a lot. Like everything else in engineering, when you do something once you learn a ton, and you would do it better the next time.

One of the challenges we had in making it glitch-free was that my co-presenter and I are fairly different from a physical-characteristics perspective, so getting everything to match was difficult. If you used an actor of the same gender, more similar in features to myself, it would have looked more believable. Here's essentially the recipe we used. We went out and, very high-tech, we googled what different open source deep fake video creation tools are available. I forget exactly which one we ended up going with, but one we found was iperov's DeepFaceLab, so we used that. Then we had to get both source and target videos. What we wanted to show in this case was that you can use pre-recorded content, so we went back to 2017, to some comments I had made publicly that were already on the web, and we grabbed those to show you can build something off of video that already exists from the distant past. Then my colleague recorded the words that you heard me speak, and we used the open source deep fake application to train on the video. Because of the challenges I mentioned, we did use some custom compositing: on the video you saw the face-tracking markers. What we actually did is track both of our mouths, chins, and noses, take the AI-built image, and composite it onto the source image so they matched, with the right mouth expressions, for each part of the video. One of the things I really want to point out here is that this adversarial use of machine learning is going to make our adversaries more effective. In my field of cybersecurity, it's not just a technology problem; a big part of the problem is the human factor. Many cyberattacks start with phishing: convincing somebody to click on something, or convincing somebody that something that is not real is real.
Deep fake video is just one area where AI will help attackers; another is the automation of targeted content, or phishing attacks. Think about what a cyber attacker had to do in the good old days. They could choose a mass phishing campaign, sending a million identical emails to a million individuals: something like "a package has been shipped to you, here's the UPS link, click the link to track your package." The problem with those is a fairly low victim conversion rate. Or they could do spear phishing, where they research an individual, find information about their background, and send a very targeted email with information pertinent to them: "Hey, I know you went to this lecture on Tuesday evening, it was great meeting you; you mentioned a really cool application to download, here it is." What AI does in this case is allow an adversary to have the scale of traditional phishing but with targeting close to spear phishing: essentially using automation of targeted content to get a higher victim conversion rate. One of the other things we're trying to point out in the cybersecurity industry is that using AI for cybersecurity is different from using AI in other fields of computer science. Think about using machine learning to track hurricanes: when you get good at tracking hurricanes, it's not like the hurricanes all of a sudden change the way they operate; it's not like the laws of physics change and water evaporates differently. But in cybersecurity, that's exactly what happens. So we're studying this new field called adversarial machine learning, which is the technology behind confusing machine learning algorithms.
Let's start with a simple example in image classification. What we started with here is an image on the left of a rockhopper penguin, and when we run it through one of the standard image classifiers, it correctly classifies it as a rockhopper penguin with a confidence of 99-point-something percent. Using adversarial machine learning, we can figure out the minimal changes to the image that make the machine learning model completely fall apart. The image on the right is now classified as a frying pan. What we did is run an adversarial machine learning algorithm whose two inputs were the original image and the instruction: we don't want you to classify this as a penguin, we want you to classify this as a frying pan. What you see in the middle is the difference between the two images, but we had to amplify it by a hundred x just to see it. The image on the right looks identical, to us as humans, to the one on the left; there are actually slight differences in the underlying RGB values of the pixels that make the machine learning algorithm get completely confused. This whole field of adversarial machine learning is about finding the flaws in how machine learning algorithms work in order to get them to reach incorrect conclusions. If you think about why an adversary in cybersecurity would want to do that, it's because they're generating the data that is ultimately sent to a machine learning model. For example, if you're using machine learning as a malware classifier, is it malware or is it benign, and they can figure out what to tweak in their application to get it classified as benign, that makes their job much easier. So where else might this cause problems, in the physical world? We have these new things today like autonomous driving, so one of the research projects my team worked on was: what could we do to fool image recognition as it pertains to things like street signs? The way we did this project, we started digitally. We started with a machine learning algorithm that would classify a stop sign as a stop sign, but then we used the same type of algorithm to figure out the perturbation required to create an adversarial image. Just as with the penguins, there's really no difference, or very little difference, to the human eye, but the image classifier now sees it as a speed limit sign.
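The penguin demo targets a deep network, but the core move can be shown on a toy linear classifier with nothing but the standard library. The weights and inputs below are invented; this is an FGSM-style sketch, not the actual demo code:

```python
import math

# Hypothetical linear "classifier": probability the input is class 1.
w = [2.0, -3.0, 1.0]
b = 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))   # sigmoid

x = [1.0, 0.2, 0.3]
p_before = predict(x)               # confidently class 1

# FGSM-style step: move each feature by epsilon in the direction that most
# lowers the class-1 score (the sign of the gradient). With only three
# features the step must be sizeable; on an image the same budget spreads
# almost invisibly across thousands of pixel RGB values.
eps = 0.9
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]
p_after = predict(x_adv)            # now confidently *not* class 1
```

The perturbation is chosen against the model's own parameters, which is why random noise doesn't trigger the failure but a crafted change does.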
Think about an autonomous vehicle that pulls up to a stop sign, doesn't see it as a stop sign, and drives right through. The question you might be asking is: okay, it's great to do it digitally, but could you do it physically? That was the next thing we did. Instead of manipulating the images themselves with digital manipulation, we started researching physical attacks (we've actually progressed a lot since I put these slides together, and now it's not nearly as pronounced). We used an algorithm to figure out exactly where we would need to put stickers on a sign in order to, just as we did in the digital form, make the machine learning algorithm completely fall apart. What you see here on the left is a stop sign without any modifications; it's really one hundred percent correctly classified as a stop sign. By putting this one sticker in exactly the right location, it now sees it as, number one, an added-lane sign with 81 percent confidence, and then a speed limit 25 sign with nine percent confidence. What's really interesting when we do this demo live is that I can take the regular stop sign, turn it, wave my hand in front of it, and it doesn't stop the classifier from classifying it correctly. It's when you know exactly how to change the image in a way that directly exploits the underlying machine learning algorithm that you're able to make it fall apart. So why is McAfee studying this? Not because we're going to get into the business of automated or autonomous driving, or image classification, but because we are in the business of detecting malware, and just like the rest of the cybersecurity industry, we are leveraging advanced machine learning capabilities to do things like malware classification.

Taking some of the learnings from adversarial machine learning on things like image classifiers, we're now looking at how you could apply that to malware classifiers, and we see that many of the same techniques work just as well. If we have a malware sample that we've extracted features from, which get fed into a model, we can use a very similar adversarial machine learning approach to figure out the exact minimal perturbations required to change it so that it's viewed as benign, because that is essentially what the adversary is going to do. We're also studying how to defend against this, and there are a lot of interesting properties of adversarial machine learning that were a bit surprising to us until we really got into the details. Number one, there's this interesting property called transferability, which essentially means an adversarial attack on one type of machine learning will actually be effective on other types of machine learning. That was counterintuitive to us at first, because we were thinking of ways to defend against somebody using adversarial machine learning against us, and one of the naive approaches was: we'll just run a set of models, take the ensembled result, and make that our underlying classification. That had issues precisely because of transferability. There are techniques that do work better. One of the techniques we're now using is: you attack yourself. You take a portion of your training set and apply adversarial techniques to it, and then you train your model on a combination of adversarial samples and unchanged samples. That gives you a more resilient model, because the model is now trained to be, at least to some degree, immune to those adversarial challenges. One of the other big challenges that the cybersecurity industry, and actually the entire AI industry, has is this issue of understanding why a model predicted what it predicted. In my trivial volleyball example from the start, it's very explainable: I can tell you exactly why the model predicted you are or are not a volleyball player. But when you get into a deep neural network, and the deep neural network is saying this is a dog, this is a cat, understanding why it came to that conclusion is a lot more difficult. And think about things that are a lot more serious: a machine learning model says you have a very high probability of a serious disease that you might want to start getting treatment for.

But the doctor can't actually explain the underlying symptomology that led to that conclusion. So understanding why machine learning is making the decisions or predictions it makes is incredibly important. There's an entire field called explainable AI, or explainable machine learning, and there's a lot of nuance, so I want to start with an example. I don't know if you can see this, but this is an example where at first glance it looks like a pretty good model. This is a classifier predicting whether something is a husky or a wolf, and we see six images; from an accuracy standpoint it got five out of six correct, so pretty good model, right? Until you look at why it classified things as husky or wolf. One of the techniques we can use in explainable machine learning is to look at, in something like an image classifier, the underlying pixels that drove the decision it actually made. It turns out that in this model, this was the underlying data that actually fed the classifier, and, I see some people smiling: basically, you didn't build a husky-wolf detector, you built a snow detector. If you look at the underlying pixels actually used in most of the images, it was really looking for how much white there was in the image, because the wolves were generally photographed in snow and the huskies were not. It was really an awful model, not very good at all at distinguishing between the two, but because of the way the training set was constructed, you could perceive it as better than it actually is. So, just in closing, and then I'm happy to take questions on really anything: there is this notion that artificial intelligence is able to solve all sorts of problems and that artificial intelligence is actually intelligent.
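The snow-detector failure is easy to reproduce on toy data. Below, each "image" is reduced to two invented features, and the wolf label is made to correlate with a snowy background exactly as in the husky/wolf training set; a crude stand-in for pixel-attribution tools (correlating each feature with the label) immediately exposes which feature a model would lean on:

```python
import random

random.seed(1)

# Each "image" is reduced to two made-up features: (fur_darkness, snow_fraction).
# In this training set, wolves are almost always photographed in snow,
# mimicking the spurious correlation from the husky/wolf example.
def sample(is_wolf):
    fur = random.gauss(0.6 if is_wolf else 0.5, 0.2)    # barely informative
    snow = random.gauss(0.9 if is_wolf else 0.1, 0.1)   # strongly correlated
    return [fur, snow], 1 if is_wolf else 0

train = [sample(i % 2 == 0) for i in range(200)]

def correlation(idx):
    """Correlation of feature idx with the wolf label: a crude attribution."""
    xs = [x[idx] for x, _ in train]
    ys = [y for _, y in train]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (c - my) for a, c in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((c - my) ** 2 for c in ys) ** 0.5
    return cov / (sx * sy)

fur_r, snow_r = correlation(0), correlation(1)
# snow_r dwarfs fur_r: the "wolf detector" would really be a snow detector.
```

Accuracy alone would look fine on data like this; only the attribution step reveals that the background, not the animal, carries the signal.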
One of the things that I'm trying to point out to the industry is that artificial intelligence is not really intelligent. It doesn't understand context. At the end of the day it's math: models that are predicting outputs based on their inputs.

But the context of those inputs is often not understood by the model. In my field, we can have things that work very well when they're tested against things we've seen in the past, but when we see something very new, they don't work nearly as well. We also need to recognize that there's a high level of fragility; the models are a lot more fragile than many people understand. In many fields that doesn't matter, because there isn't an adversarial element, but in things like cybersecurity, and things like defense, where you might be using artificial intelligence as part of a weapon system, you do have an adversary, and recognizing that the adversary can use techniques to confuse the model is absolutely key. And finally, recognize that the technology itself will make adversaries more effective. AI will make adversaries more effective at defeating some of our new technologies, and for things like finding victims to go after, they can use some of the same problems AI is very good at, like classification: a cyber criminal can look at their sea of victims and say, out of all of these, which ones are most vulnerable, and focus their attacks on those. All of these things will make my industry, and hopefully many of yours (it's great to hear that you're adding a new focus area around security), much harder, and that is one of the things that makes this industry a lot of fun to be a part of. With that, I will take questions on anything.

We've got a couple of mics; if you have a question, raise your hand and one of our ambassadors will bring you a mic. We just ask that you hold your question until you get the mic, that way we can capture it on video.
Okay, I can ask you questions if you don't ask me questions.

You mentioned how, with a lot of models now, there's no appreciation of context. Do you know of any current research about building context-aware models, and how do you think that would change the security landscape for you?

So there definitely are models that attempt to comprehend context; a lot of the deep learning models are trying to predict context. For example, instead of a simple image classifier that says it's a dog, it's a cat, it's a bird, it'll come back and say it's a dog jumping over a fence, implying that it comprehends context. But part of the challenge we're finding is that the context is still very subject to the limitations of the training set. Inferring a new situation that has never occurred, which to any of us as human beings would be completely obvious, is much more challenging than simply putting the pieces together to have something that looks like context.

You talked about how you can trick these models into seeing other things, and how part of avoiding that can be almost like vaccinating against it: giving it examples of these problems as part of the data set allows it to be immune to that sort of attack. What's the best way to find the best attacks on the model, to try to get it to be immune? Is it a lot of brute force?

At least so far, we've used a lot of trial and error. Part of the good news, or bad news depending on how you look at it, is that there are a decent number of open source adversarial ML libraries; the one we used for the penguin was, I think, Foolbox, and there are others out there, and we've tried different algorithms. I think at some point, as the cyber criminals start using adversarial techniques more, we'll work towards optimizing our defense. Right now, what we're seeing in the wild is very limited use of adversarial techniques outside of academia and outside of research, so the good news is we're not impacted by it quite yet.
We are trying to get in front of it, so right now we're really trying to understand it rather than optimize our vaccinations against it, but we will flip at some point in time.

I was kind of wondering how vulnerable you think the automobile industry is to this type of attack, especially given that Tesla is trying to train their AI for self-driving cars; that could cause a lot of problems.

So it's a great question, and I would put it this way: there's very little issue with random noise impacting the model. One of the things we found is that the models are highly resilient to random interference. If you have a stop sign and a big cement truck goes by and blows a lot of dirt onto it, that doesn't all of a sudden make it look like a speed limit sign. But if there were a reason for an adversary to start messing with things like traffic signs, it could be very bad. I think if adversaries had a reason to have a negative impact on things like autonomous driving, it would be bad; the good news is I don't think there are strong incentives to disrupt autonomous driving. You could definitely have kids painting dots on stop signs in exactly the right place with infrared paint and causing mayhem, but I don't know that that's much different from throwing nails and tacks or other things in the road; you can cause lots of mayhem today without self-driving cars. So out of all the things I worry about, it's something we need to be aware of, and just as we're trying to immunize our malware-based detection capabilities, the autonomous driving industry should at least study this. But I don't think the incentives to tamper with autonomous vehicles are there the way they are in other industries. Right.
Very interesting talk. So, having seen the extent to which AI can be used from an adversary's perspective, generally speaking, do you think it is the right time for regulation of how AI can and should be used? Or is there a downside to regulation? Should it be open source? How do you see it as a security expert?

So, it's a fabulous question, and my opinion is: it is just math. The technology is out there, the technology is well understood, and what we see with regulation is that legitimate businesses and organizations will follow the regulations and the bad actors will not. It's very similar to my view on whether we should regulate the use of strong encryption, where there's this big debate that's been going on for the past few years on things like: should the device makers put backdoors into things like phones, in order to give law enforcement the ability to recover keys in the case of criminal or national security incidents? Similarly, in that case, my perspective is that bad actors can download application-level encryption applications that do encryption outside of the device. So putting backdoors into a device for law enforcement or other government officials is going to make it so that all of the innocent people are putting their privacy potentially at risk, while the terrorists and criminals will simply download strong encryption apps and still have their data unavailable to law enforcement. I think artificial intelligence is exactly the same. It's well understood, the techniques are already out there, so creating regulation to try to pull it back is really impractical. The other thing that I'll say on this, which I think is a good point, is that we need to treat software differently than other forms of technology in regulation, in that you can build software out of nothing. You can basically write an RSA algorithm in a couple of lines of Python even if you have nothing to start with, which is very different than building an explosive device or a nuclear weapon, or even other forms of technology controls. Building high-performance semiconductors requires multibillion-dollar fabrication plants that somebody
just couldn't build in their garage. So thinking about software differently than other types of technology we might consider regulating is, at least in my view, a very different ballgame.

So we can say that humans have achieved intelligence over the years only because we have been fed this much data, right from the day we are born, along with all the analysis. So if we are feeding the same amount of data and analysis results to computers, do you think that we could actually achieve artificial intelligence?

I think that we sometimes fail to recognize how much of everything we do is driven by things that we've learned, even very basic functions. When we think about things like vision, there are cases where people who have been blind for the vast majority of their life regained their sight and struggle with things like seeing stairs as just a set of lines on the floor, and that's because their brain hasn't been trained to comprehend what the inbound images should actually be turned into in the physical world. So I do think there is this interesting question when we think about artificial intelligence and machines versus the way that the human brain works. There's lots of research on putting the two together so that we can make AI think more like humans, but I also think we need to consider how humans actually learn, which is based on a number of different things: partially things that are inherent in genetics, as well as what they learn from their environment. And the complexity of the human brain is something that, at least as I see it, won't be possible in AI anytime in the very near future. It's one of the reasons that in cybersecurity we believe AI is very effective at helping with automation and some coarse-level analytics, but we still need human responders to understand the context of complex
situations, and to be able to reason about and rationalize things that have never been seen before. So I think we're still a ways away from the two really coming together.

I don't know if I saw a hand or not.
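One of the answers above notes that you can write an RSA algorithm in a couple of lines of Python, starting from nothing. A toy sketch of that claim, using the standard textbook-sized numbers: these primes are far too small to be secure and there is no padding, so this is illustration only.

```python
# Toy RSA with textbook-sized numbers -- insecure, for illustration only.
p, q = 61, 53               # two small primes (real RSA: hundreds of digits)
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent, chosen coprime to phi
d = pow(e, -1, phi)         # private exponent: modular inverse (Python 3.8+)
m = 65                      # the "message", an integer less than n
c = pow(m, e, n)            # encrypt: c = m^e mod n
print(pow(c, d, n))         # decrypt: c^d mod n recovers 65
```

The point stands: everything here is plain integer arithmetic available in any language's standard library, which is why regulating this kind of software is so much harder than regulating fabrication plants.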

Thank you very much.
