AI: Safer, Smarter, More Secure | Direct Current - An Energy.gov Podcast


Welcome back! This is Direct Current, an Energy.gov podcast. I'm your host, Matt Dozier, still broadcasting from inside my coat closet. I hope you're well. If you're like me, you've probably found it hard to transition to a fully digital social life. We're relying on virtual tools to stay in touch with our loved ones more than ever, with all the glitches and hiccups and awkwardness that come with it. Does anyone else find seeing their own face during video calls incredibly distracting? Can't be just me.

So, one side effect of us living our lives through the internet is that we're interacting more and more with artificial intelligence. Food delivery apps, streaming services, telemedicine: they're all using some form of AI to bring dinner to your door, recommend a new show to binge, or connect you with a doctor. The same is true for almost any digital service that isn't totally reliant on individual humans. And while these AI-powered tools can be incredibly useful, they aren't perfect. They can make mistakes, or have security flaws that leave them vulnerable to hackers. My guests in this episode spend a lot of time thinking about the risks of handing over so much responsibility to AI systems, and how we can improve them: make them safer, smarter, and more secure. This is the second of our two live episodes, recorded pre-quarantine at the American Association for the Advancement of Science, or AAAS, meeting earlier this year. Thanks for listening, and stay safe out there. It's science, for the people.

Direct Current!

Hello everyone, this is Direct Current, an Energy.gov podcast. I'm your host, Matt Dozier, with the U.S. Department of Energy. We are here live at the 2020 AAAS meeting in Seattle, on the Sci Mic podcasting stage presented by This Study Shows. I'm delighted to welcome my guests today, Kyle Bingman and Court Corley. Thank you so much for joining me today.

Yeah, thank you so much.

Let's start by having you introduce
yourselves. Tell us where you work and what you do. We'll start with you.

Sure. So my name is Court Corley, and I am a data scientist at the laboratory. I lead a bunch of our AI research, as well as a group of data scientists that apply AI and machine learning across energy, science, and national security type domains. And it's a really fantastic way to see just how far we've come with AI, which we'll totally hash over the next 30 minutes of this podcast.

So this is Pacific Northwest National Laboratory, in the neighborhood?

Absolutely. So our main campus is in Richland, Washington, and we also have a larger presence in South Lake Union in Seattle, where half of my group is as well.

And so you're also at the lab, right?

I am. So I actually work at our Seattle office, just a mile away or so. My name is Kyle Bingman. I'm an advisor on assured artificial intelligence here at the lab. What that means is I'm essentially figuring out our research direction, our research goals: how do we develop and deploy artificial intelligence that's trusted, safe, and secure?

So, okay, we're talking about AI today: artificial intelligence. It's a big area of research for the Department of Energy and the National Labs. I've heard people say we're living in a golden age of AI. Just how widespread is AI in our lives today?

So, it's really everywhere. Imagine your phone, if you've ever used a photo app on either Google or iOS. The other day I wanted to see what sushi I had eaten, so I opened up my Photos app and I typed in "sushi," and lo and behold, back came all these photos of sushi. And what that is, is an AI. It's a machine learning algorithm that goes in, detects objects in images, and then categorizes them and makes them searchable, so I can go back later and find pictures of sushi that I had, or dogs, or anything else I want. So whenever you say AI is everywhere, that's one example. Yeah.
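The photo-search behavior described here (a model tags each image with the objects it detects, then a text search runs over those tags) can be sketched in a few lines. Note this is a toy stand-in, not the real Google or Apple pipeline: the "classifier" below is a nearest-centroid rule over made-up two-dimensional feature vectors, and the category centroids and file names are invented for the illustration.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Made-up category "centroids" in a toy 2-D feature space.
# A real system would use a deep vision model over pixels instead.
CENTROIDS = {"sushi": (0.9, 0.1), "dog": (0.1, 0.9)}

def label(features):
    # Classify a photo's feature vector by its nearest category centroid.
    return min(CENTROIDS, key=lambda name: dist(features, CENTROIDS[name]))

def index_photos(photos):
    # Tag every photo once, up front, so later searches are just lookups.
    return {name: label(features) for name, features in photos.items()}

def search(index, query):
    # "Typing in sushi": return all photos whose tag matches the query.
    return sorted(name for name, tag in index.items() if tag == query)

# Hypothetical photo library: file name -> invented feature vector.
photos = {
    "IMG_001.jpg": (0.8, 0.2),    # sushi-like in this toy feature space
    "IMG_002.jpg": (0.2, 0.8),    # dog-like
    "IMG_003.jpg": (0.95, 0.05),  # sushi-like again
}
index = index_photos(photos)
print(search(index, "sushi"))  # -> ['IMG_001.jpg', 'IMG_003.jpg']
```

The key design point the example captures is that detection and categorization happen once at indexing time, which is why the search itself feels instant.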

What are some of the places that people would be surprised, you think, to learn that AI is at work?

Yeah, so that's one thing I was actually doing this morning: seeing if I could brainstorm a list of all of the places that I see AI day to day. Janelle Shane, she's this researcher that looks at AI in the world, she's amazing. One of the things she says is that if you've been on the internet, you've probably interacted with AI and not known it. So it's everything from getting your driving directions, to getting matched with a doctor in a live health service, to things like figuring out how to get custom playlists. And then even outside of that, it's stuff like getting your pictures to look better on your phone. Some of the best AI in phones is actually in the camera. There are all of these kind of weird, unexpected places where we're actually using it all the time.

Right, so what is the AI doing to my pictures in my phone?

It's making them look better. So you can essentially have a lower quality camera that is able to make photos look like they are from a really expensive camera.

So, there are lots of different forms of AI, right? We're talking about all these different applications that are already in use. How has our definition of what constitutes AI changed over time?

So I think it's grown and it's morphed, but it's also stayed the same. If you go to the Wikipedia page, it says AI goes back to antiquity, with automatons and Greek mythology. But I think the modern-day vernacular came around in the 50s, talking about, you know, things that humans can do and making a computer think, see, touch. So today, what we think about as AI includes all of those things. There's a great quote by Andrew Ng, who is a Stanford professor, and he says: if a human can do a task in a couple of seconds, then likely an AI can do that task today. Where
that will be in five years is probably maybe a minute or two. So that means picking things up, recognizing objects in images, detecting, sensing, all those things. If I think back to when I was in grad school, there was no speech translation. It was a good old thesaurus and, you know, my Spanish dictionary to try and learn Spanish. And today all that is done for me, although, you know, maybe with the caveat that it's still not perfect.

Yeah. We talked a little bit about the evolution, and some of the steps that have come along the way, and what people thought AI was, and how it was redefined subsequently. So tell me a little bit about that.

Yes, that's one of the interesting things that's happened over the years: essentially, every time we say something is AI, we decide that it's not that, that it's actually going to be something else. And really, where this all started, like Court was saying, back in the 50s, people were doing something called rules-based AI.

As in, we have to explain everything there is about the world to a computer, and then we will have an artificially intelligent system. In fact, there were these professors at Stanford who thought, you know, we're going to spend a summer figuring out how to do this, and by the end of the semester we'll have essentially figured out artificial intelligence. But it turns out there's a phrase, "you know more than you can tell." It's incredibly hard to describe the world in any way that is comprehensive, outside of very specific, small tasks. So over time, what happened is that we have been trying to figure out ways to offload the determination of what the world is and how the world works from humans onto the AI. And that's one of the things that happened in the late 70s, early 80s: a push toward machine learning. We'll give the system kind of a rough outline of the world, we'll tell it sort of what's important, and then it will figure out how things work and figure out what those patterns are. That's still hard, and it still didn't work really well. So eventually, what happened is they realized that we could make a system, an artificial neural network (the technique actually goes back to the 40s, but it got reinvigorated), and the whole idea with that was that you don't have to tell it really anything. All you have to do is give it data, and some information about what that data is, and over time the AI will kind of use trial and error to make itself better. The downside with all that is that you have less understanding of what is going on.

With those rules-based AIs, you know what it's going to do. You made the rules.

Yeah, exactly. But now we didn't; we just kind of told it what direction to go. Here's a pile of data, please take a look at it and tell me what you're going to do with it.

Yeah, exactly. So as AI becomes more complex and more commonplace, what are some of the risks? We're talking about, you know, making AI safer,
smarter, more secure. What are some of the risks of handing over so much power to algorithms?

So, over the past couple of years, I've spent a lot of time applying a particular part of AI to science and energy missions at the Pacific Northwest National Laboratory. And what's interesting is that over that time, you begin to see all the great things it can do, and then we begin to say, okay, now we can use AI for climate science, climate modeling, for high energy physics. But now, what does it mean when we start to use AI for security? And I think, you know, we've talked about some really interesting examples. What does it mean, what is at risk, in an autonomous vehicle? If we have a car that's self-driving, that seems to open up a lot more risks, and discussion about risks and safety, than if you're talking about a high-energy physics experiment. And so a lot of this developed over the past few years, at least internally, from what we're working on, and then from looking outward to see what other people are working on as well. As for the risks, I know they involve security: how secure is my model? Can it be messed with, or hacked? Is it safe? Does it work the way I think it's going to work? And there are many, many other ways to think about the categories of risk.

I wanted to talk about self-driving cars especially, because I think they're one of the most high-profile examples of people seeing AI applied in a way that is, you know, very visible, very present. They're already rolling out in cities across the U.S. So tell me a little bit about what sorts of things you're concerned about and thinking about in an application like that, which could potentially put people's lives at risk.

Sure.
So when you think about an autonomous vehicle, there is AI all in it; that's the name of the thing. But when you break it down, it's made of a bunch of different AI-based systems that are all doing a specific task. They're figuring out what the drivable space is, they're figuring out what vehicles are around it, they're looking for pedestrians: everything

you do when you're driving. But one of the things we keep seeing in academic research is that those specific tasks often can be fooled. There are papers out there about how you can put stickers on a stop sign, for instance, that to us would just look like some random graffiti on a stop sign, but would cause the autonomous car to believe that it's now seeing a speed limit sign, and it potentially would ignore that direction to stop. This is happening more and more; there's an increasing number of techniques and methods out there that are potentially able to fool vehicles in that way.

Right. So, I mean, there are other concerns as well, in terms of understanding the way that these algorithms are arriving at certain decisions. So tell me a little bit about what you're thinking about in terms of understanding the sort of mechanisms by which they reach those decisions.

So, I mean, the way they reach their decisions often is by training on data. We've seen a lot of news stories, or I've seen a lot of news stories recently, just about bias in the data itself: how it's trained, what it's used for, how the data was collected. And all those things translate to the autonomous vehicle setting. What was the data that was collected? Was it lidar data, was it video data, was it stereo data? How was it collected, then how was the model trained, and what are the risks introduced by that and by the models that are built from it? And Kyle was just talking about this area of adversarial machine learning, where we can insert something to make, you know, the AI do something that it wasn't supposed to do. Well, you can mess with the AI itself, but you can also mess with the data. So what happens if you have an autonomous vehicle, and, you know, now there are all these risks associated with how the data was used, how it is protected, how the model was trained? Because you're right, it's a safety-critical application of AI. So how
do you have assurance that it's going to work the way you want it to? So, I think a lot of what Kyle and I think about at the lab is very much that assurance angle. Yes, we know that in the literature there are risks to data, we know that there are risks to models, we know that there are risks to how these things work. But it's also beginning to think, you know, more broadly: what are the large systems that could be affected by this, and what can we do to help? Speaking

of large systems, what are some of the other kind of big applications that the lab and others are looking at going forward, in terms of AI rolling out as a new sort of way of controlling things?

The one that I think the most about is the grid. The electric grid is made up of independent, connecting components, with electricity flowing across transmission lines. It's very much a critical piece of infrastructure, to get electricity to our hospitals, to our schools, to our streetlights, to everything. And it is driven by human operation today. There are human operators that, depending on the strain on the system, you know, follow guidance based on standard electrical engineering and the science associated with the grid for what actions they should take. So it's a very human-driven process right now, which means that it's more robust in some senses, but also at risk in others, because it's slower; maybe it can't react as quickly as one might like. So people are trying to use AI to help augment that process, to be able to say, okay, under strain on the grid, whether they call it an emergency grid contingency or, you know, just making sure if there's a situation, the AI itself will say, okay, these are the best ways to go about protecting the grid, turning stations on and off. And so that's a really exciting way AI could be used, you know, on the grid.

Now, these are really big systems, which raises some really big questions about, you know, how we're going to secure them, how we're going to protect them from outside interference. Where do you even start?

For me, one of the things I think is most important is that we start accepting that this is potentially a very big risk. We've seen things happen over and over and over with technology: we make something, then we rush to implement it, and then we realize it's vulnerable. We've
seen this with the internet, we've seen this with cars, when we realized cars didn't have any safety systems to protect the drivers. You know, just time and time again. So to me, one of the things I think is most important is that we're like, okay, we do want to implement this in these systems, we want to help make our grid better, we want to do better science, we want to do all these things. But at the same time, we should be making it a priority to do this safely and securely.

So, you know, Kyle, you've said to me that if you invent the ship, you invent the shipwreck. So, in terms of identifying what the shipwreck could be, what are some of the tools and tricks that you have, that scientists have, to start trying to address those risks going forward?

Yeah. So, my background is actually in cyber red teaming, and one of the organizations I worked for in the Air Force was what's called an aggressor. So you take the mindset of a creative, capable adversary, and you look at the full spectrum of, in our case at that time, a network, and figure out what are the various things that could potentially happen, what could we do. And, you know, we're doing it to make things more secure. One of the things I believe we should do with this is take a look across the range of how an AI system is developed, all the way from when

the data is collected, that Court was talking about, through its training process, and then through its deployment, and take a hard look at that. Not to stop it, but to help it be better.

Right, right. Talk a little bit, if you could, about work that's happening at the lab in terms of trying to address this and understand these concerns.

So, I think the area that we invest a lot in is this assured AI concept, and it's really beginning to divide the area into kind of categories of focus, I guess. The first direction is acknowledging it. There are some great reports out there; Microsoft has published a series of them describing what the risks are to their enterprise. And I think for what we're doing it's very much the same: where are the risks to our enterprise? So, kind of acknowledging it. Then security: that is, what is the security of the data, the models, the things that we are developing that are in critical applications? The other is, how safe are they? Can we ensure their robustness? What are ways that we can measure how a system will operate, or how it will work in the real world? And those are the things that we see in the literature. There are a ton of papers that are very academic in the sense of their experimentation. They're trying it out, they're saying, hey, is this going to work, is this not going to work? But what we are doing at the lab is asking, is this a problem in the real world? Is it a problem in the physical sense, like in fog, or whenever it's raining? Is a patch or a sticker on a stop sign really going to be a problem in all conditions? And trying to understand what the boundary is of what we need to think about. Paired with that is understanding how AI is actually integrated into systems, and what potential safety or security concerns arise from that. With autonomous vehicles, you have systems that are essentially special-made, that they were able to engineer specifically
to work with AI. But when you talk about systems like the grid, you know, it's implementing AI into older systems, and we need to understand in advance, you know, what are the implications of that, and how do we do this smartly?

So, you folks are asking these questions. Is anyone else asking these questions?

There's actually an increasing number of people. You know, the Microsoft report is one that I was personally so excited to see; the statement they made about how important security was to their enterprise was great. Another really good one was OpenAI, and they've been kind of at the forefront of leading discussions about what it means to release AI into some type of use in society, and making sure that we're thinking about what we're doing, and, you know, not rushing into it.

So, we're at a big scientific conference, one of the biggest, and obviously there's a lot of excitement around using AI in science. Are there specific concerns that you have when it comes to adding more AI, and potentially more uncertainty, into research findings, because of, you know, AI's complexity and it being kind of a black box?

So, I think that the answer is not more concern, but just that more awareness and education needs to happen. We're going to be using it; it's coming, whether we like it or not. And so, whatever form it is in, we need to have a dialogue about the safety and security of it. We need to be able to describe it and characterize it and go forward. Meaning, if you're going to have an imaging system that has an AI that's going to detect, you know, a cancer, is there going to be a human in the loop as well, to be able to augment that diagnosis? Or is it just going to be a fully automated pathologist? So, no more radiologists
anymore? Right? I don't think we're there, and I don't think that's what we're saying. I think, yes, let's use it, but let's have AI make us better and smarter, with, you know, more effective human-machine teams as we go along.

Right. Do you ever feel like a buzzkill, going around while everybody is so excited about AI, and you're the one saying, wait, hold on, let's think about this for a minute?

I think it's easy to sometimes. But then you kind of stop and think about what you're doing, and I'm not trying to stop this; we're trying to make it better. And once you can help people see that, and kind of get the vision, that yeah, we're going to keep using this, and we're going to do it even better, it's easy to step away from that. And I definitely give the analogy of penicillin. Penicillin was this thing that was accidentally discovered, and people used it, but they had no idea of the science behind it, the theory behind it, microorganisms,

you know, anything of that sort. AI is kind of in the dark ages right now, in that same way. In the future we'll have a theory about how it works, but right now we know some things work, and we're going to try and use them to be functional and effective. And it's working really, really well, and making things a lot better in many cases.

And it really is important to understand how it's actually working, especially at, you know, some place like the Department of Energy and the National Labs, when the stakes are high with a lot of these applications.

Right, absolutely. I think the Department of Energy has invested a lot in high performance computing and in scientific applications over the years, from atmospheric science to nuclear energy and everything in between, that really involves complex high performance computing simulation that is of scientific value, energy resilience value, and security value. So the next step is AI that follows from that, right? And that's one of the reasons why the DOE does care about this: because AI is going to be supporting all the scientific missions and energy missions that go along with it. So the next question is, what are we doing about that as the DOE? And yes, it's very much: we will use AI in all we do, but we're also going to come at it with, how do we make sure that we're using it in as safe and robust and resilient a way as possible, to ensure that we have the best use of the technology?

What does the future look like to you? Are you optimistic about being able to take AI, use it to its maximum benefit, and also keep that risk at an acceptable level?

I am, actually, which, coming from a cyber background, is maybe surprising, that I'm optimistic. But, you know, especially in the adversarial machine learning community, there's just a whole growing number of researchers who are, like, very excited about, potentially,
how do these systems work, where could they go wrong, and how could we make them better? And there's just so much excitement in that community, and so much energy toward making progress in it, that I really think we can make good steps.

Court, what about you?

So, for those that haven't seen Black Mirror, this series, hopefully I'm allowed to say it on here, it's a great way to see the opposite end of what could happen, a dystopian future. And I definitely don't think that will ever happen. I think it's fictional, and it's up to us as scientists and engineers and advisors and leaders in the field to make sure that doesn't happen. And I think there are enough people that really care, and are creative, and can foresee a state where maybe it wouldn't be as positive for us, that are working on it, like Kyle said.

That's one of the key things: I think by having this discussion, and starting to do this work, potentially we forestall a reality where we do have insecure AI, where we do have unsafe AI. And, you know, if that turns out to be something that never would have happened to begin with, that's fine, because we've still done all the work, we've still had all the conversations that have made it a priority.

Yeah, a step in the right direction.

Exactly, yeah.

Cool. Well, thank you both very much for joining me today, I really appreciate it.

Yeah, thanks so much, it's been great being here. Cheers.

So yes, thank you to my guests, Court Corley and Kyle Bingman. That's it for this episode of Direct Current. Thank you to AAAS for having us here on the Sci Mic stage, presented by This Study Shows. You can find Direct Current at energy.gov/podcasts, or wherever you get your podcasts. Follow us on Twitter @ENERGY. I've been your host, Matt Dozier. Thank you so much for listening.

2020-04-28
