AI Security: Robots or Cyber Criminals Running the Show? | Intel Business

Where we take the confusion out of tech jargon and encourage more meaningful conversation about cybersecurity. Here is your host, Camille Morhardt.

Hi, and welcome to this episode of What That Means. Today we're going to cover artificial intelligence trustworthiness, or decentralized artificial intelligence and its trustworthiness. I've got three professors with me today who are part of the Intel-sponsored Private AI Collaborative Research Institute: Farinaz Koushanfar from the University of California San Diego, N. Asokan from the University of Waterloo, and Ahmad-Reza Sadeghi from TU Darmstadt. We're going to get into a conversation about everything having to do with security and artificial intelligence, pretty much everything from software down through the hardware layer, and we'll also talk a little bit about what's at the frontier with respect to privacy and ethics in this space. So welcome, the three of you, to the show.

In today's conversation we're going to talk about artificial intelligence, and specifically security with respect to it, and then I think we can dive in a little deeper as we go and talk more about the different kinds of models of artificial intelligence, whether they're centralized or decentralized, how that may change security, and what you three are looking at from that perspective. But somebody help me out here: can somebody give an overview of the ways that artificial intelligence is adapting to improve security in general?

We are at the brink of the next industrial revolution, which is rightfully called the automation revolution, and it is enabled by the power of artificial intelligence. We see that many things that were done manually in the past are now getting automated, and this automation is driven by AI algorithms learning from the data we feed them. This revolution in automation enabled by AI is, of course, also happening in the area of security and privacy: we are using smarter, more intelligent algorithms to automate security and privacy processes. But since artificial intelligence is really an engine for automation, there is another side to it: is your engine reliable? Is it secure? Can somebody take this engine and do malicious things to it? When you're training your engine to perform these automated tasks, can someone influence it to make wrong decisions, now that all the decisions are automated? There are still scenarios of human-machine interaction, which is another frontier of artificial intelligence, but this is really what is happening: on one hand we are automating many processes, including security and privacy processes; on the other hand, artificial intelligence itself has vulnerabilities that we need to identify. And last but not least, this abundance of data and of models is also exposing a lot of sensitive information about people, so there is a privacy risk involved as well.

As for artificial intelligence, in the '80s, when I was a young student, I was fascinated by neural networks, so what we are talking about is not really new. The technology has changed, and automation has changed in that sense, because we now have the basic technology for it: better and more efficient hardware, and also software. There have been a lot of technological advancements that bring us to another reincarnation of AI, and the hype of AI.
These so-called artificial intelligence systems are getting integrated into many decision-making processes, and that doesn't mean at all that these decisions are the right ones. There are lots of security holes in these algorithms, because they were not made with security in mind, and they can simply be manipulated. That is where I am very much interested, usually from a more destructive side, because I believe that AI will be a big, big danger, something that I call an "AI-demic," like a pandemic. It will bring all of us into a crisis, a digital crisis, where the decisions these neural networks, or whatever they are, make are not clear to any human being. We cannot verify them, we cannot really understand them, and those who want to manipulate them can manipulate them. I'm talking about Wall Street: they have been doing manipulation for many years, but now they have more powerful tools. So looking at it from a destructive side, I would prefer to look into their weaknesses rather than their strengths.

What defines system security is thinking about how to secure systems in the presence of what we call an adversary: somebody, a human being, who is trying to undermine the system using all the powers of intellect they have. I started getting involved with machine learning and security initially purely as an application area, asking how I can use machine learning to improve security and privacy, and this is something that has been done for more than three decades. But what's different about how we approach problems is that we don't just apply machine learning to get a faster, better solution; we also think about what the bad guy would do to undermine our solution, and this is something that machine learning and AI people normally don't do. I can give an example. Think about a face recognition system. We want a system that will recognize our faces correctly as belonging to ourselves. An AI expert would say the way to validate the system is to collect ten images of my face, ten images of Ahmad's face, ten of Farinaz's face, and so on, and then show that my face will always be recognized as me, Farinaz's face will always be recognized as hers, and my face will not be accidentally recognized as Ahmad's. This is where they would declare success. But if Ahmad is trying to break the system, he is not going to oblige this system by using his own face to pretend to be me. He's going to wear glasses, put on makeup or lipstick, or something like that, so that he looks like me, and this is an aspect that hasn't been taken into account in AI-based systems. And AI is everywhere, not only in security but in human resources, in jurisprudence, in policing, so making mistakes here and designing systems that can be easily circumvented is going to impact us in ways we haven't imagined before. That's how I pivoted from just figuring out how to use AI to improve security and privacy to studying how to make AI-based systems more robust and more trustworthy.

As security researchers, we also recognize that standard security approaches do not always apply to AI in general, because we need to get inside these neural networks to understand them. Like the human brain, they may have many layers, and we cannot always understand the information that passes from one layer to the next, and then to the next. Because of this, we cannot immediately say we have standard solutions; there is a lot of room for research in applying security to AI systems in general.
Do we have to actually be able to explain the AI in order to protect it, or can we come up with protection mechanisms without truly understanding how it works underneath?

The explainability aspect is very important, and it's also a hot research topic. According to some recent studies by consulting companies, many companies, around 40 percent, don't know what the use of AI is for their daily business. But more than 90 percent of the CEOs of small, mid-sized, and big companies in Germany, to take Germany as a use case, are extremely concerned about explainability. They want to know: what happens if I cannot explain what is happening? How can I have assurance about the trustworthiness of these algorithms? We are right now at a stage where we apply AI, it works great, we don't quite understand how, and there are companies, bigger and smaller, trying to sell this and make money from it.

Explainability and interpretability are important, like Ahmad said, but I don't think it's even going to be a choice that we can make. I think ten years down the line, governments are going to require that we cannot deploy systems without being able to explain them, because issues like fairness come into play. I would imagine that the next iteration of privacy or data protection regulation, like the European GDPR, might include strong requirements for explainability and interpretability: if you can't explain why you are making decisions that are going to be used in recruitment, or in deciding who gets parole, and so on, then you can't deploy them. I think that will be enforced by policy; it's not going to be a matter of choice, it's going to be a requirement.
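To make "explainability" concrete, one of the simplest techniques researchers use is a gradient-based saliency map, which highlights the input pixels that most influence a classifier's decision. Below is a minimal illustrative sketch, assuming a trained PyTorch image classifier; `model` and `image` are placeholders, not anything from this conversation.

```python
import torch

def saliency_map(model, image):
    # Which pixels most affect the predicted class score?
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)  # add batch dimension
    scores = model(x)                      # raw class scores (logits)
    top = scores.argmax(dim=1).item()      # the model's chosen class
    scores[0, top].backward()              # d(score)/d(pixel) via backprop
    # A large gradient magnitude means that pixel strongly sways the decision.
    return x.grad.abs().squeeze(0).max(dim=0).values  # one value per pixel
```

Maps like this are only a first step toward the kind of assurance discussed here, but they show what "explaining" a model's decision can mean in practice.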
So, Farinaz, would you be able to help us understand some of the security perspectives? I know you work on securing, at the hardware level, what I'll call decentralized artificial intelligence and its algorithms. Maybe you can explain what those are and how you're looking at that.

I'm really a systems person, so the solutions I work on span all layers of the system. You can think about securing any system, including systems that have AI in them, like securing your house. If you want to secure your house so that nobody comes in, what would you do? You can close all the windows and all the doors and try to shut everybody out, but that's not practical. The truth is, in securing computer systems the story is exactly the same: the vulnerabilities are really at the interfaces, and if your system has vulnerabilities at its interfaces, people can attack it and try to extract secrets through them. I also think hardware has a role in making things much faster and much more efficient, because right now, to make artificial intelligence more efficient, there are a lot of accelerators out there, and many of these accelerators are not very robust or reliable. One part of my research, which connects to this, is the following: when you make your AI models more robust, be they distributed AI models or centralized ones, can you simultaneously make sure that this robustness doesn't add overhead to your system, so that you can enforce robustness in real time and still have an end-to-end system that performs really, really well?

So at the most basic level, you're saying that encryption, or various forms of encryption, or similar hardware-level, software-level, or algorithm-level protections, can tend to slow a system down, not always but often, and so part of the role then is to improve the performance once you've added that protection?

Yes, and security and robustness here, definitely in terms of AI systems, go far beyond encryption. Consider, for example, adversarial attacks on AI systems, where people provide inputs that look legitimate but actually fool the model into making a wrong decision. The reality is that AI systems work by gathering a lot of data and extracting statistics from it, and at the end of the day we are only as good as the data we get and the model we learn. The problem with learning these models is that the space of what we are trying to learn is huge: AI models are trying to build a lower-dimensional representation of a very high-dimensional space. When they do that, if they don't have enough data in all corners of this multi-dimensional space, if there is noise added to the data, or if the model building itself is not well regularized, then these models don't have well-defined boundaries that are always correct. Attackers exploit that to construct samples with a little bit of structured noise in them, so the input looks completely legitimate. For example, you can see a picture of a cat that looks exactly like a cat, but it has a little noise in it that is not detectable by the human eye, and the AI model could classify it as a car. That's a really nefarious attack, because just imagine projects in the auto industry where these algorithms are making real-time decisions as your car is driving: now it's no longer a cat-versus-dog-versus-horse mistake, it's a dynamic attack on a cyber-physical system like a car, and it can have really nefarious consequences. There is a lot of beautiful theory about adversarial attacks and a lot of nice algorithms that try to avoid them, but one aspect that we've uniquely introduced, and have been working on for about five years now, is how to integrate these solutions all the way from the system level down to the hardware, so that you can detect attacks in real time without impacting the performance of the AI system, which by itself is quite demanding on the computational resources we have. This is why a lot of companies so far have focused on building accelerators. Now imagine you need to use these accelerators in your automotive system, but they are not very robust, and if you try to introduce robustness, you lose the real-time aspect of detection. So this is another big aspect of how hardware can be used, and there is more and more awareness of that.

At some level, adversarial examples like this are fundamental. They are not a problem with just AI-based systems, but with any system that tries to build a model that approximates reality, and human brains are no exception: optical illusions are adversarial examples against the human brain, where something that looks as if it's a square is in fact a circle, or something like that. All of us are familiar with optical illusions, and they arise from exactly the same theoretical considerations. So at some level you can't avoid them, but you can do better by detecting them or compensating for them, by looking at the system as a whole and not just the model at its heart.
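The structured-noise attack described above can be made concrete with the fast gradient sign method (FGSM), a classic way to craft adversarial examples. The sketch below is illustrative only, assuming a trained PyTorch classifier; the names and the perturbation budget `eps` are placeholders, not details from the episode.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, eps=0.03):
    # Nudge every pixel by at most eps in the direction that increases the
    # model's loss, so the image looks unchanged to a human eye but may be
    # misclassified (the cat that the model calls a car).
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)
    loss = F.cross_entropy(model(x), torch.tensor([true_label]))
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # the "structured noise" step
    return x_adv.clamp(0.0, 1.0).detach().squeeze(0)  # keep valid pixel range
```

With a small `eps`, the perturbation is invisible to people, which is exactly why detecting such inputs in real time, without slowing the system down, is the hard systems problem described above.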
Well, I have to say, as you're talking I'm thinking this is horrifying: the thought of, in real time, a traffic-direction or functional-safety system using automation to enhance decisions and actions, or maybe even taking action autonomously. So help me understand: how far are we from having something we can consider secure, or is this just very, very early?

We are very far. When it comes to the safety and liability of an industry, you are far away from the point where regulators allow you to put a vehicle on the street that is connected to a cloud and to other vehicles, as a self-driving car or even one without any driver in it. When we are there, the safety of many human beings will rely on it. The problem is dependency: if a whole country depends on a communication network provided by a specific company, then that company has overall control of that communication, and it can be misused, either because the company wants to make money or because governments abuse it. So this is a sword with one side that I think is sharper than the other.

In that sense, I'm optimistic. I trust human ingenuity. We are still in the early stages of AI-based systems, and eventually I think humanity as a whole will figure out how to do these things properly, so that things that look scary now will be harnessed and used in the right way a decade or two down the line.

Are there specific things that you three are looking at that are not well known in the world of security and AI?

I was talking about one of them, which is looking into full-stack solutions for accelerated, robust AI, something rather unique. And robustness has multiple facets: we talked about inference-time attacks, and there is also the big aspect of data poisoning, where, among parties that are trying to, say, collaboratively learn a model, some are trying to poison the data to pervert the model into doing something malicious for them. Just the confluence of all these objectives, privacy, security, and robustness, in one system that has to work consistently and efficiently, is, I really think, one of the frontier challenges.
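One standard defense against the data-poisoning scenario just described is robust aggregation in collaborative (federated) learning: combine client updates with a coordinate-wise median instead of a plain average, so a few malicious participants cannot drag the shared model arbitrarily far. The toy NumPy sketch below is for illustration only, not the Institute's actual method, and the update vectors are made up.

```python
import numpy as np

def median_aggregate(client_updates):
    # client_updates: one parameter-update vector per client.
    stacked = np.stack(client_updates)  # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)   # per-coordinate median resists outliers

rng = np.random.default_rng(0)
honest = [np.array([0.1, -0.2, 0.05]) + rng.normal(0, 0.01, 3) for _ in range(8)]
poisoned = [np.array([50.0, 50.0, 50.0])] * 2  # two attackers send huge updates

print(median_aggregate(honest + poisoned))  # stays close to the honest updates
print(np.mean(honest + poisoned, axis=0))   # a plain mean is dragged far off
```

Robust aggregation addresses only one facet; combining it with privacy and efficiency in the same system is the confluence of objectives mentioned above.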
I can give an example of something one of my postdocs is doing. He's trying to answer the question: what do deep neural networks really learn? Are they learning rules, are they learning the ability to do symbolic computation, or are they just doing association, essentially statistics on steroids? And he's not alone; others are also looking at this kind of fundamental question of what these things learn, even though a lot of the publicity and glitz is on "look, Ma, what neural networks are capable of doing." They are doing fantastic things, but understanding why and how they work is what will pave the way for interpretability, explainability, and so on. Perhaps not enough people are working on that, because applications that show dramatic improvements are so much sexier than trying to understand how neural networks work, but I think it's an important area that is evolving now: many people are starting to think about the basics of what's under the hood.

One thing I have been thinking about for a long time is the poisoning of data and models. Suppose we have an algorithm that we push through another algorithm that mitigates the poisoning; assume we have that. The question is: can I have a filter that I push any algorithm through? Take recruiting, for example. Somebody like Amazon or anybody else comes to you and says: buy this, this is my algorithm, it's good for recruiting; if 10,000 people apply for a job, it can decide for you, at least in the first phase, before a human in HR looks at the applications. And then I say: but how do I know that it's fair? So you have to provide data, I have to check it, and all of this is very messy and not efficient. So how about I have a filter, I push your algorithm through my filter, and I add just enough noise that it doesn't hurt the accuracy but is enough to obfuscate certain aspects of the algorithm that may be unfair. This is just an idea, and we are starting very small, because there is a big research community on fairness, and defining what is fair and what is not fair is very complex, especially when it comes to the legal aspects: when you go to court, how do you prove what is fair and what is not? That fascinates me, because it touches a number of disciplines that you need to work with, which makes it more challenging. So this is the known unknown that I personally am very interested in looking into.

My goodness, we've covered a lot today: fairness, privacy, security, decentralized, centralized, and federated learning; full-stack approaches and the interest in them; and explainability and its relevance. I really appreciate the conversation. Thank you, Asokan, Ahmad, and Farinaz, for joining me today, and I hope to have more conversations about AI and security in the future.

Never miss an episode of What That Means with Camille by following us here on YouTube. You can also find episodes wherever you get your podcasts. The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

2022-09-22 03:41
