Autonomous Systems: Self-Driving Cars, Aircraft, and More | Intel Technology


(bright music) - [Voice-Over] Welcome to "What That Means" with Camille, where we take the confusion out of tech jargon and encourage more meaningful conversation about cybersecurity. Here is your host, Camille Morhardt. - Hi and welcome to "What That Means", part of "Cyber Security Inside". Today, we're gonna do safety in autonomous systems.

And I have with me Mykel Kochenderfer. He's a professor at Stanford University in aeronautics and astronautics, as well as computer science, and he heads up Stanford's Intelligent Systems Lab, which is part of their Institute for Human-Centered AI. He's also co-director of the Stanford Center for AI Safety. Mykel actually focuses on decision making in autonomous systems where human safety is directly affected, including unmanned aircraft and autonomous driving. So, we're gonna focus on that.

Welcome, Mykel. - Thanks for the invitation. - Well, I wanna start by, can you just give everybody a quick definition of what is an autonomous system? - An autonomous system is just a system that takes inputs from the real world as perceived through some sensor systems, and it makes decisions and tells the actuation system what to do. So, it takes as input observations of the world and outputs actions or decisions.

- Or recommendations. - Or recommendations. There's a whole spectrum of autonomous systems, some that have to be fully autonomous. So, many cybersecurity agents will have to be fully autonomous because they have to make decisions faster than a human can.

On the other hand, the AI may be used as a decision support system that provides recommendations to a human to actually execute. - Tell us how you go about building one of these autonomous systems. - So, to build an autonomous system, you need to choose what kinds of sensor systems to use. Do you use camera systems? Do you use radar? So, you need to understand what your sensing modalities are, as well as their error characteristics.

Then, you need to develop a perceptual system that will process those sensory inputs to arrive at a good understanding of what's going on in the world. So, you need to be able to infer where there might be other vehicles, where there might be pedestrians, and so forth. Then, the decision making system needs to handle all of the different kinds of scenarios it might encounter, like maybe a pedestrian walking into the road or another aircraft crossing into your flight path. And then, those decisions get translated into some actuation if it's a physical system.

So, it may be control signals that go to the aileron of an aircraft, or for a car, it may be to speed up or apply the brakes. - Okay. So, basically, you're gonna sense what's happening. You're going to perceive based on the sensors the same way - Mm-hmm. - essentially humans take in data. - Yeah. - I see something now,

what do I think that means (chuckles) based on what I'm seeing. - Yeah. - And then, I'm gonna take some action, even if it's just a recommendation to a pilot or a driver, or I'm going to actually just apply the brakes immediately because I can apply them faster than a human can. And otherwise, we're gonna hit somebody. - The decisions that you make are going to affect the world, and then that's going to affect what you're going to be perceiving at the next timestep. And so, this is known as the control loop. - Oh, that's really interesting.
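A minimal sketch of that sense-perceive-decide-act loop, in Python. The sensor, perception, policy, and actuator objects here are hypothetical placeholders rather than any particular system discussed in the conversation:

```python
import time

class ControlLoop:
    """Sense, perceive, decide, act -- then repeat at a fixed rate."""

    def __init__(self, sensor, perception, policy, actuator, rate_hz=1.0):
        self.sensor = sensor          # e.g., a camera, radar, or LiDAR wrapper (hypothetical)
        self.perception = perception  # turns raw measurements into a state estimate
        self.policy = policy          # maps the state estimate to an action or recommendation
        self.actuator = actuator      # applies the action, or displays it to a human
        self.period = 1.0 / rate_hz   # e.g., 1 Hz for aircraft collision avoidance

    def step(self):
        measurement = self.sensor.read()               # sense the world
        belief = self.perception.update(measurement)   # infer what's going on
        action = self.policy.decide(belief)            # decide what to do
        self.actuator.apply(action)                    # act, which changes what we sense next

    def run(self):
        while True:
            start = time.monotonic()
            self.step()
            # Sleep off the rest of the period to hold the decision rate.
            time.sleep(max(0.0, self.period - (time.monotonic() - start)))
```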

So, how frequently, you're gonna tell me it depends. (both laugh) How frequently are you taking in new sensor signals? Is that a constant thing or... - Yeah, so it depends, of course, on the application. (Camille chuckles) For aircraft collision avoidance systems, this is typically done at 1 Hz, or one decision per second. In other situations, like for autonomous cars, it may need to make a new decision every 10 milliseconds.

If you're trying to land a SpaceX rocket, that may have to be even faster. - What makes building these systems difficult? - [Mykel] It's uncertainty. I'll give you some examples. The first major category is just uncertainty about the state of the world. So, one application that we've been working on is wildfire fighting.

That requires an understanding of the current state of the fire. Typically, fire chiefs only have an imperfect understanding. They can gain more understanding by using more sensors, doing overflights with helicopters or drones or whatever. In autonomous driving, you may have uncertainty about where there might be pedestrians.

So, there could be noise in the LiDAR sensors, or there could be occlusions, like another vehicle may be in front of us, blocking our view of the pedestrian. So, that first category is lots of potential uncertainty in the state of the system. We also have uncertainty in how the environment will evolve. We don't know exactly whether the pilot will continue straight or turn left or turn right. We don't know if the fire is going to propagate to the east at a particular rate, but we might just have a probability distribution over what might happen.

And it's important to take into account the full spectrum of possibilities here to produce robust decisions. - If your car that has an autonomous system knows that the car coming at it, which is now gonna hit it head on, is not using a system, is that a factor in how it acts, or does it just escalate the risk and uncertainty but still behave the same way? - So, generally, you can do better if you know the behavior of the other agents. So, if both vehicles are equipped with the same system, you can make better predictions about what might happen. Whereas if you have an autonomous vehicle encountering another human-driven vehicle, a human might be drunk or distracted or whatever. And so, there might be a lot more uncertainty about where the vehicle will be over the next few seconds. They might suddenly swerve or whatever.
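One generic way to fold that kind of uncertainty into a decision is to sample many possible futures for the unpredictable driver and score each candidate maneuver against all of them. The drift model, the costs, and the maneuver set below are invented purely for illustration, not taken from any system mentioned here:

```python
import random

def sample_other_car_offset(horizon_s, drift_std=1.5):
    """Hypothetical model: the other car's lateral position drifts randomly over time."""
    return random.gauss(0.0, drift_std * horizon_s)

def collision_risk(my_offset, horizon_s=2.0, n_samples=1000):
    """Monte Carlo estimate of the chance the other car ends up within 1 m of us."""
    hits = sum(
        abs(sample_other_car_offset(horizon_s) - my_offset) < 1.0
        for _ in range(n_samples)
    )
    return hits / n_samples

# Candidate lateral maneuvers (meters from lane center), plus a small penalty
# for deviating from the lane; pick the one with the lowest combined cost.
candidates = {"stay": 0.0, "nudge_left": -1.0, "nudge_right": 1.0}
best = min(candidates,
           key=lambda name: collision_risk(candidates[name]) + 0.05 * abs(candidates[name]))
print("chosen maneuver:", best)
```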

Whereas if you knew that the other system followed the same rules you do, then you can potentially make much better decisions. - And that seems logical to me. - Mm-hmm. - If you had told me the opposite, I would've been surprised. - Yeah. (chuckles) - So, now, I'm just wondering, what is the implication then for the rollout of these kinds of systems? - It really depends. In some kinds of applications, you just won't have perfect 100% penetration with your particular technology.

So, for autonomous driving, I think we just have to build our cars to be robust to other humans for the foreseeable future. There will be human drivers on the roads for a very, very long time. And we just need to build systems that are robust to that. - And why wouldn't you account for absolutely everything? Is that just performance of the system? - You can't really account for absolutely everything that can happen. So, for example, we wouldn't be able to drive on a two-lane road, because it's possible that the oncoming car might just swerve into us at the last moment. And there is absolutely nothing that we can do about it.

And so, we have to decide what's within scope for our particular system and what's outside of the scope. - And how do you decide that? - It requires discussions between the engineers and the various stakeholders. So, the folks who will be using or selling the system, as well as the regulators.

- Who really needs to be involved when you're designing these kinds of autonomous systems, or systems that are providing recommendations? - Having as broad an array of stakeholders together in the same room as possible, that's what you would want. You want to engage the regulators as early as possible so that they can build up an intuition about how you're going about validating the system. You want to engage the end user as much as possible to ensure that there's an alignment between what they're expecting from the system and what the engineers have designed the system to optimize. - The end user is just, like, a passenger? - If you're building an aircraft system, you'd want to engage actually both the pilots and maybe also the passengers, depending on what the system is.

So, for example, you'd want to understand what the comfort level is for the passengers. You don't want to create a system that pulls a half G or a full G on passengers. So, getting that balance right is very, very important, and it ties into some pretty key engineering trade-offs.

So, by engaging with passengers, you would have a sense of what kinds of maneuvers are appropriate and within scope for your system. That's just one limited example. - Maybe the primary consideration is safety, but then, all things okay on that front, it's gonna conserve fuel. But then, that might mean like a sharp nosedive. (laughs) So, then, you have the passenger saying, no, no, please let's use a little bit more fuel, so that I can be comfortable.

- [Mykel] The alert rate is also pretty key for a hazard alerting system. If it's alerting all the time, then the operator will not pay attention to it. As another example of engaging humans, the end users, on what's acceptable, this comes up quite a bit in autonomous driving.

You wanna understand what's a comfortable level of deceleration. Maybe that will depend upon whether it's just a normal maneuver or whether it's safety critical. You may have different thresholds as to what deceleration rate is acceptable for the passengers of the vehicle. - Why does it take so much longer than we initially thought? - A lot of people underestimated the difficulty of both building a very robust system, as well as validating that it's robust and will behave as expected when deployed in the real world. And the reason for that is it's just very difficult to anticipate all of the different edge cases that you're going to experience in the world. So, some of the early crashes involving Tesla Autopilot and other systems, they encountered situations that would've been very, very difficult for a human designer to anticipate.

Sometimes, it's referred to as a very long tail. - Mm. - If you think about the distribution over possible situations, there are a lot of low probability events that you're still going to encounter if you have a broad deployment over an extended period of time. - Can you talk about incorporating, I'll say, a subject matter expert, a human, early on in the process of training AI or AI self-learning? What is that... I've been hearing lately that there's a lot of benefit to incorporating human knowledge, as opposed to just providing data and letting the model run.

Can you talk about how you use that or incorporate that? - [Mykel] Humans are extremely important in a number of different aspects, but two come immediately to mind. The first one being sanity-checking our models. So, for the development of this aircraft collision avoidance system, we needed to build a model of the airspace that captured the trajectories of aircraft as they come within close proximity to each other. To validate that, very early on, we generated many, many synthetic encounters from our model, then compared them to real data and tried to have a human expert guess which one was synthetic and which was real. That was a major milestone, when we were able to convince human experts that the model of the environment was at least in the right ballpark. We used a whole bunch of other quantitative metrics for assessing how representative the model is of the data.
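One simple quantitative check of that flavor is a two-sample test comparing some feature of the synthetic encounters against the same feature in recorded data. The feature, the numbers, and the statistic below are illustrative assumptions, not the actual metrics used for the collision avoidance work:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-ins for a feature of interest, e.g., minimum horizontal separation (ft)
# in recorded encounters vs. encounters sampled from the airspace model.
real_min_separation = rng.lognormal(mean=7.0, sigma=0.40, size=5000)
synthetic_min_separation = rng.lognormal(mean=7.05, sigma=0.42, size=5000)

# Kolmogorov-Smirnov statistic: near 0 means the two empirical distributions
# agree closely; larger values flag aspects of the model worth another look.
statistic, p_value = stats.ks_2samp(real_min_separation, synthetic_min_separation)
print(f"KS statistic: {statistic:.3f}, p-value: {p_value:.3g}")
```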

Humans are also very important in specifying the objective of the system. What is it that we want to achieve? What are the appropriate trade-offs? - Right. - Sometimes, it feels a little bit strange to talk about a trade-off between safety and operational performance, but you have to make that trade-off in order to have a system that actually works and is acceptable when deployed in the real world. You wouldn't want to build an autonomous car that came to a full stop as soon as it encountered another car. So, getting that balance right is something that a panel of humans can help inform.
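In practice, that trade-off often shows up as explicit weights in whatever objective the planner optimizes. The weights and numbers below are invented purely to show the shape of such an objective; real values would come out of exactly the stakeholder discussions described here:

```python
# Hypothetical weighted objective for a driving policy: heavily penalize
# collision risk, but also charge for delay and hard braking so the car
# doesn't "solve" safety by refusing to move at all.
W_COLLISION = 1000.0   # cost per unit of estimated collision probability
W_DELAY = 1.0          # cost per second of added travel time
W_DISCOMFORT = 5.0     # cost per m/s^2 of deceleration beyond a comfort limit

def trajectory_cost(collision_prob, delay_s, peak_decel):
    discomfort = max(0.0, peak_decel - 2.0)  # assume ~2 m/s^2 still feels comfortable
    return (W_COLLISION * collision_prob
            + W_DELAY * delay_s
            + W_DISCOMFORT * discomfort)

# A cautious trajectory vs. an assertive one; the planner picks the cheaper total.
print(trajectory_cost(collision_prob=1e-4, delay_s=8.0, peak_decel=1.5))  # cautious
print(trajectory_cost(collision_prob=1e-3, delay_s=2.0, peak_decel=3.5))  # assertive
```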

- Or it refuses to leave the driveway. It's like, nope, (chuckles) you prioritized safety, so I'm not gonna go at all. (laughs) - We also have to really understand that for these safety critical systems, the goal is not to drive the probability of failure to zero. If someone tells you that they did that, then they're lying to you or their models don't really capture the full spectrum of what might actually happen. And the reason for that just goes back to the fact that sensors are imperfect. With some probability, those sensors will fail.

And also, when you have other agents in the environment with you, it's often impossible to perfectly predict what they will be doing. - Tell me a little bit about some of the spectrum of research that your lab is looking at. - We sit in the aerospace department, and so we have looked at aviation applications involving air traffic control and air-to-air collision avoidance and drones. We've looked at space technologies, at how you produce robust plans for satellite sensing. We're also interested in wildfire fighting.

For example, we looked at how you would intelligently use drones to monitor the evolution of a wildfire. And then, on top of that, how do you appropriately allocate resources to fight that fire? We've also applied our methods to scientific discovery. So, right next to Stanford is SLAC, the Linear Accelerator Center. One of my PhD students has been collaborating with them on using our techniques to control an X-ray machine for examining a specimen.

You have control over how you move the X-ray beam, what aperture to use, and so forth, over time. And you wanna make these decisions to maximize scientific value. I've also had a student working on exoskeletons and human assistive devices. We've also developed an autonomous cane, a cane that has a LiDAR sensor and camera on board, that can help steer someone who is blind around obstacles to get them efficiently and safely to their destination. There's a tremendous interest, of course, in sustainability and climate change, in order to get to net zero.

So, if we want to have a net zero emission of carbon, carbon sequestration has to be part of the equation. And so, we've been working with others at Stanford, with expertise in the earth sciences on how do you safely sequester carbon. So, safely sequestering carbon requires making inferences about what's happening in the subsurface.

And you want to sequester the carbon in a way that it will (chuckles) stay there for a hundred or more years. If the carbon comes up, then it goes back into the atmosphere. But also, since it's carbon dioxide, it can lead to suffocation. And so, this is something that we need to be able to do extremely reliably.

- What's one of the biggest arguments that's out there right now among people who are designing autonomous systems? What are they disagreeing about? - [Mykel] There are disagreements along every part of the chain, (chuckles) on the sensor systems, so what sensors should be used on autonomous vehicles. And of course, there are lots of engineering discussions about the trade-offs between cost and error characteristics and so forth. There's a lot of discussion and disagreement about how much of a role neural networks should play in safety critical systems. For some kinds of processing, like image processing and natural language processing, it has to be a neural network. There are no other known technologies that can do what neural networks can for object recognition or speech recognition.

But there's also a temptation to use neural networks for making control decisions. So, after the image processing and so forth, how do you turn that situational awareness into a decision? Should that process use neural networks? So, we as humans, we have our biological neural networks, and we've built up confidence that our biological neural networks are adequate for us to fly aircraft and drive, at least to some extent. But maybe more interpretable methods would be better for the decision making systems. - So, the downside of the neural network isn't the decision making quality of it, it's that it's not as explainable or interpretable by people? - Yeah, explainability and interpretability are a major challenge when using representations like neural networks. Though, a lot of research labs are very interested in figuring out how to make neural networks a bit more interpretable.

Sometimes, what they do is they produce what's called a surrogate model. So, they model the decisions that the neural network makes but in a representation that might be a little bit easier for humans to understand, like a decision tree or something like that. - Right. - And so, they can look at the decision tree to get a rough idea as to what the neural network is doing. It's not a perfect representation. That's why it's a surrogate model, but at least, it can give enough of an intuition that we may be able to have a warm feeling in our hearts that the system we deploy will behave sensibly.
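A bare-bones sketch of that surrogate-model idea: query the trained policy on many situations and fit a shallow decision tree to its decisions, so a human can read off rough rules. The simple braking rule standing in for the neural network, and the two features, are hypothetical; the sketch assumes scikit-learn is available:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical black-box policy standing in for a trained network:
# brake if a pedestrian is close and closing fast.
def policy(features):
    distance, closing_speed = features
    return int(distance < 12.0 and closing_speed > 1.0)  # 1 = brake, 0 = continue

# Sample many situations, record what the policy decides, and fit a shallow
# decision tree to those decisions as an interpretable surrogate.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 50, 5000),    # distance to pedestrian (m)
                     rng.uniform(-2, 5, 5000)])   # closing speed (m/s)
y = np.array([policy(x) for x in X])

surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(surrogate, feature_names=["distance_m", "closing_speed_mps"]))
```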

- Is there anything that I haven't asked you that I should ask you, that people should know about autonomous systems? - Two major things will contribute to the safe deployment of these autonomous systems. One is modeling and simulation. That's going to be key. You can't flight test, - Mm. - you can't drive test everything. Flight tests and drive tests are useful for validating the implementation and collecting data on sensor error characteristics and so forth.

But you don't want to be testing your safety critical system in the real world and have that be part of your design process. As much as possible, you want to do the design of these safety critical systems in simulation. And in order to do that well, your models have to be trustworthy and you need to validate those models. The second point that I want to make is we always want to gradually deploy our systems and develop our confidence in the system. If we're a delivery drone company, we'll want to do small restricted tests in less populated areas and build up an understanding of the failure modes before deploying in San Francisco or New York City. You always want to take baby steps when developing these systems. - Right.

Introduce ice next week. - Yeah, you wanna do that (Camille and Mykel chuckle) very, very gradually. - Mykel Kochenderfer, professor at Stanford University and head of the Intelligent Systems Lab there within Human-Centered, no, the Institute for Human-Centered Artificial Intelligence. Thank you so much for speaking with me today. I appreciate it. - Thanks so much, Camille.

(upbeat music) - [Voice-Over] Never miss an episode of "What That Means" with Camille by following us here on YouTube. You can also find episodes wherever you get your podcasts. - [Announcer] The views and opinions expressed are those of the guests and author, and do not necessarily reflect the official policy or position of Intel Corporation. (bright music)

2022-08-17
