Wen Hua Chen - Control Engineering in Future Highly Automated Society


Thank you for having me here. I have really enjoyed my visit and have had a lot of opportunity to talk with many different people. I just want to share some of my views, my own thinking and also discussions with many others, about the future of control engineering. Part of this talk was given to our government research council; they invited me to talk about my view on the future of control engineering and what we should do, working with other people. So this is actually part of that talk.

Some people ask me where Loughborough is. It is quite in the middle of England, about one hour and fifteen minutes by train from London, so not very far away. It is a small market town where the university is based. This video gives you some flavour of the areas I am working in; basically we do a lot of different things, for example using a thermal camera for infrastructure inspection. This is just background about what we are doing; it is not everything, and not all the work is done by me, because I have a large group behind me. That was our campus, which is quite green.

So let us talk about high levels of automation, what this means and why we are interested. High levels of automation can mean different things: UAVs, autonomous driving, agriculture, healthcare robots, and many other things, and society and the public have a strong interest in those areas. This is a video produced by one of my previous students about using this kind of technology for provisioning auto parts; it is just an example of where autonomous and digital technology can be used to help society with many tasks.

Automation has already significantly increased our productivity, but to increase productivity further, what can we do? We want to produce more using less manpower, and autonomy is one of the major solutions for that. By doing this you also open many new business opportunities, such as new products and services. There are some data and examples. If you talk about road transport, many people may already have heard these figures: 1.25 million people are killed on the road every year and 20 to 50 million people are injured. If we are able to use this technology to reduce accidents, it will hugely benefit society. Another driver is the ageing society, which means two things: one is a shortage of labour, which is why you need to increase productivity; the other is the need to look after people, hence healthcare robots and other things.

Now, why do we need control theory? We are doing control, so we need to say why. People can very easily argue that without new control theory we already have UAVs operating, we have autonomous vehicles being tested in many different places, we have robots already deployed; why do we need it, since without it we can still do the job? So this is a big argument: how do we convince people that control still has a strong place here?
If we look at it, there are very many different types of autonomous systems, like UAVs, drones, or robots, and they may have different functions. But if we try to abstract them, what are the common things behind them? You can see that some key elements of a typical control system are always there. On one side you have a physical system, or a virtual system if you do automated trading in the share market; you have sensing; you have perception, trying to understand what is happening; you have decision making, or a control function; and then you apply your actions to the system. You also have the operating environment, and sometimes you also care about a human operator. So you have all those key elements. We have a lot of research in each of those areas, like perception, decision making and many others, but we should not forget that these things work together: they interact with each other, they talk to each other, they are paired with each other. How do we understand, when they come to work together, their performance and their contribution to the overall risk or error?

In our view, these key functions connect with each other through feedback. Why is feedback so essential? Think about it: in any scenario, whether autonomous driving or a robot, without sensing to check the outcome of your action and compare it with what you want to achieve, you could not do anything. You need feedback to compare what you observe with what you expected. Also, in this kind of system the model is not perfect; you cannot have a one hundred percent accurate model or perfect information, and feedback is the best way to cope with uncertainty and disturbance. That is why feedback is so essential here. But the problem now is that we do not have a good theory to answer the question: if we have AI-enabled functions in this loop, how are they going to interact with the others? So this is what I am going to talk about: the control theory we need and what can happen without it.
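To make the point about feedback concrete, here is a minimal toy sketch, entirely my own and not from the talk: a scalar system with an unknown constant disturbance, driven once by an open-loop plan computed from the nominal model and once by a simple feedback law that keeps comparing the measured outcome with the target.

```python
# Toy illustration (my own example, not from the talk): why feedback copes with
# an imperfect model and an unknown disturbance, while an open-loop plan does not.
# Assumed plant: x_{k+1} = x_k + u_k + d, with d unknown to the controller.
d = 0.4                  # unknown constant disturbance
target = 1.0             # what we want to achieve
x_ol, x_fb = 0.0, 0.0    # states under open-loop and feedback control
integ = 0.0              # integral of the feedback error
Kp, Ki = 0.5, 0.2        # simple PI gains

for k in range(50):
    # Open loop: plan from the nominal model only (which assumes d = 0),
    # i.e. one step to the target, then do nothing.
    u_ol = (target - x_ol) if k == 0 else 0.0
    x_ol = x_ol + u_ol + d

    # Feedback: compare the measured outcome with what we expected and correct.
    error = target - x_fb
    integ += error
    u_fb = Kp * error + Ki * integ   # integral action removes the offset caused by d
    x_fb = x_fb + u_fb + d

print(f"open loop after 50 steps: x = {x_ol:.2f}  (the unmodelled d accumulates)")
print(f"feedback  after 50 steps: x = {x_fb:.2f}  (settles at the target despite d)")
```

The open-loop plan is computed from the model alone, so the unmodelled disturbance keeps accumulating; the feedback loop corrects for whatever it observes, which is exactly the point made above about coping with imperfect models and disturbances.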
To explain why we need a control theory, we need to go back to history. A lot of people know the story of James Watt and the steam engine. One of his major contributions was the flyball governor, which regulates the speed of the engine: when the engine speed gets faster the balls fly up, which closes the valve a little, so less steam enters, and this regulates the engine speed. Before James Watt there were many different types of steam engine developed in various places, but this one was quite successful because steam engines were used to drive textile machines and other machinery, and if the speed is not even it causes a lot of damage and hurts product quality. However, a "hunting" oscillation was observed: in some operating conditions there was a continuous, persistent oscillation, and no one understood why. You can argue that James Watt developed maybe one of the first control systems without knowing what was really happening. Later, James Clerk Maxwell started looking into this phenomenon and found that it was instability caused by the feedback, and he reduced the problem to fourth-order dynamics to understand what really happened. So a theory to explain what happened in James Watt's steam engine only arrived about half a century afterwards.

This is what has often been argued in the control field: many times you have a control system before you have a theory to understand it. Then, because we have the theory, we can use it to guide our design, we can improve our control design, and we get more and more powerful control systems. Also, because of the theory, we are able to treat mechanical, electrical, and chemical engineering systems within the same framework.

But if we look at what we have now, it is much the same as two centuries ago. People working on UAVs basically put the different components together and flight test the UAV; if it works, good; if not, they change some key components and try again. People working on autonomous driving repeat a similar process. We do not have a fundamental theory to underpin the whole design process, and each area has specialists working on their own problem with their own solutions. So what we really want to do is this: can we develop the next generation of control theory to enable these high levels of automation, in the same way the current control theory underpins the control systems we have in industry today? That is why we think we need to work in this area. But the problem is, as I highlighted, it is truly difficult, because these systems have many very complicated building blocks, which may be enabled by AI, decision theory, data science, and many other advanced functions. How can we do that?

Let me use another example to illustrate why understanding the interplay between the different components is so important, and also quite often ignored by people operating in this area. People who work in the model predictive control area may feel quite familiar with this example, but when I give it to many people outside that community, I find they are quite surprised. Think about a typical scenario with two levels: at the low level you have vehicle control trying to follow a reference, and at the high level you do trajectory planning, planning a path for the low level to follow. We can draw the typical control diagram: you have a goal, what you want to do or where you want to drive to; you do the path planning and provide a reference for the low-level control to follow. To simplify, let us put the two levels together and, for example, use optimisation to find the optimal path. Suppose the goal is very simple: we want to drive a robot or vehicle from any starting place back home, like a homing function. The cost function is also very simple, x squared plus y squared, which is just the squared distance to the origin, where x and y are the two states, the position. We want to minimise this to find the best control action that brings the robot or vehicle back home as quickly as possible. Suppose we have some simplified dynamics for the vehicle. Then let us see what happens.
We solve the optimisation and apply the control action, then we optimise again and apply the next action, and this is what happens: the vehicle does not come to the origin, it moves away from it. Many people do not understand why, because at every step we are making the next-step distance to the origin as small as possible. Some people challenge this and say that perhaps for this particular system no controller is able to drive the vehicle or robot to the origin, but that is not true: you can design some very simple controllers that drive it to the origin. From a control point of view this is maybe not new: it is the interaction between the optimisation loop and the system dynamics. Every time we optimise the cost function we are too short-sighted, and the action we apply interferes with the physical dynamics; this coupling is what makes it happen. It is a typical feedback problem.

This is just one example to demonstrate the point, because the scheme I am describing applies to many, many different systems. In chemical engineering you also have high-level planning and real-time optimisation with low-level control following the plan; there are many other examples in robotics and in vehicles where people do this without realising the possible danger. I should also say that these things do not happen all the time; they happen rarely, but they do happen. So the question is, how can we stop it?

The lesson from this very simple example is that there is coupling between the different components. The high level does not just take estimation or decision information from below; when you apply a control action it interacts with the environment, which changes your perception, which in turn changes your actions. There is a continual feedback loop. And this is a very simple system: there are no issues with the optimisation, the goal is very simple, and a solution is easily defined. Think about a very complicated system with a complicated goal and sensor noise: what is going to happen then? I am particularly interested in this example because, as I mentioned, a lot of functions like decision making and perception are based on AI, and a lot of AI algorithms are based on optimisation; almost all supervised machine learning boils down to solving an optimisation problem. That means an optimisation process is hidden behind all those functions. If we do not understand what this optimisation process will do when it is connected to the dynamics, it can damage the whole system. If we do it correctly, we can guide the people working on the AI side to develop AI algorithms with the properties or attributes we need in order to ensure the stability and performance of the whole system. That is the key reason I am interested in this simple example: essentially, the optimisation and the physical system form a feedback loop.
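Here is a minimal numerical sketch of the effect described above. The two-state linear model and all the numbers are my own invention, not the vehicle model from the talk, but it shows the same behaviour: minimising only the next-step distance to the origin drives the state away from the origin, even though a far-sighted feedback design for the same system stabilises it easily.

```python
# A minimal sketch (my own numbers, not the system from the talk) of how a
# "plan greedily, then act" loop can fight the physical dynamics.
# Assumed plant: x_{k+1} = A x_k + B u_k, greedy cost = ||x_{k+1}||^2,
# i.e. "get as close to home (the origin) as possible at the next step".
import numpy as np

A = np.array([[1.5, 1.0],
              [0.0, 0.5]])
B = np.array([[0.0],
              [1.0]])

def greedy_action(x):
    # Minimise ||A x + B u||^2 over u (one-step-ahead, unconstrained):
    # u* = -(B^T B)^{-1} B^T A x
    return -np.linalg.solve(B.T @ B, B.T @ A @ x)

def lqr_gain(A, B, Q, R, iters=500):
    # Infinite-horizon discrete LQR via Riccati iteration (a far-sighted design).
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = lqr_gain(A, B, Q=np.eye(2), R=np.eye(1))

x0 = np.array([[1.0], [1.0]])
x_greedy, x_lqr = x0.copy(), x0.copy()
for k in range(20):
    x_greedy = A @ x_greedy + B @ greedy_action(x_greedy)
    x_lqr    = A @ x_lqr    - B @ (K @ x_lqr)

print("||x|| after 20 steps, greedy one-step optimisation:", float(np.linalg.norm(x_greedy)))
print("||x|| after 20 steps, far-sighted LQR feedback:    ", float(np.linalg.norm(x_lqr)))
# The greedy loop zeroes the second state immediately, which looks locally
# optimal, but it leaves the unstable first state uncontrolled, so the
# "vehicle" moves away from home even though a stabilising controller exists.
```

A longer optimisation horizon, or a suitable terminal cost of the kind used in MPC stability theory, is the standard way to repair this kind of short-sightedness.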
There are certainly many solutions; in particular, those of us working in the model predictive control area have known about this. Looking at it now with a fresh point of view: twenty years ago I was working on the stability of MPC, and now I have come back to look into this issue again, and we have derived some new theory for it. There are other methods as well that can mitigate the risk. The challenge now is: can we bring these ideas to the machine learning community, so that if an algorithm is developed by machine learning, we know how to change the cost function or add constraints to the learning and optimisation process in order to ensure the stability and safety of the whole system? This is a big question, and I will not talk about it in the short time available.

At a high level, the idea I have in mind is that we need to shift control design from, traditionally, specifying how to perform a task by giving a reference or command to the control system, towards specifying what we want to achieve and then letting the system figure out the best way to do it. By doing this we can develop what we call goal-oriented behaviour, which is a feature of high levels of autonomy. I will not go into detail on this either.

Another thing we do in this framework is called dual control for exploration and exploitation. The point is this: when we have a perception, we make a decision based on that perception; however, we also need to understand that when we take any action on the physical system, it interacts with the environment, and the environment changes our belief, changes our perception. So there is a coupling between them; it is not just one direction. Perception changes decisions, and decisions also change perception. It is called dual control because it is concerned not only with how to produce decisions that achieve the goal, but also with how decision making helps us understand the environment, or the uncertainty about the environment.

Let me use self-optimisation as an example to illustrate this idea. The self-optimisation setting is quite simple: if I have an autonomous system that I want to operate in an unknown environment, then no matter what happens, I always want to maintain the best possible performance, whether that is productivity, efficiency or something else. There are many examples. In wave energy, or renewable energy in general, you want to harvest as much power as possible no matter what ocean conditions you have. The same goes for a solar farm: as the weather conditions change, you always want to maximise the yield. The same is also true in autonomous driving: for autonomous emergency braking you want the shortest possible stopping distance, so you want maximum friction no matter what road surface you have. There are many examples like this, but how do we do it? There is a little bit of maths here, but that is fine since most people here have a control background. The idea is quite simple: we want to drive our system to the optimal operating condition, call it x-star. If x-star were known, this would be a classical control problem: regulate the system to the optimum. The problem is that we do not know where the optimum is, because it changes with the operating environment.
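Before the dual-control formulation, it may help to see the simplest classical way of handling an unknown optimum. This is a standard perturb-and-observe, extremum-seeking style baseline, not the method described in the talk, and the power curve below is invented for illustration: the controller never learns a model, it just nudges the operating point and keeps whatever direction increases the measured power.

```python
# A standard perturb-and-observe style baseline for an unknown optimum
# (illustrative only; the power curve and numbers are invented, and this is
# not the dual-control approach described in the talk).
def power(v, v_star=0.7):
    # Power curve unknown to the controller, with its peak at v_star.
    return 1.0 - (v - v_star) ** 2

v, step = 0.2, 0.02        # initial operating point and perturbation size
p_prev = power(v)
direction = +1.0

for k in range(200):
    v += direction * step  # perturb the operating point
    p = power(v)           # observe the resulting power
    if p < p_prev:         # if power dropped, reverse the search direction
        direction = -direction
    p_prev = p

print(f"converged operating point ~ {v:.2f} (true optimum 0.70)")
```

A scheme like this exploits continuously but only explores through its fixed perturbation; the dual-control formulation described next tries to handle the exploration side more systematically, by explicitly quantifying how much each candidate action would reduce the uncertainty.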
In that situation, the best thing we can do is collect all the data from the beginning until now, every input and output, and treat it as our information state; based on that data, we estimate, or learn, where the optimal operating condition is, conditioned on all the data collected so far. That is the typical approach. What we do now is go one step further: we condition not only on all the data we have so far, but also on the data in the future. Here I add just one step ahead; you can use multiple steps, but one step is enough to explain the concept. We ask: if I were to apply this control action, what measurement would I get, and with this measurement how much would my belief, my learned estimate of the parameters, change? This can quickly be derived into two terms. If I believe the optimal operating condition lies somewhere in a region, because I do not know exactly where it is, I can choose the centre, the mean, as my nominal estimate of the optimum. The first term then says I want to drive my system to the nominal estimate of the optimum. The second term is the variance of this estimate: its size quantifies the level of uncertainty. If the spread is large, the estimate is not reliable; if it is much smaller, the estimate is much more reliable. So the total cost function consists of two terms: one related to the control objective, moving the system to the optimal operating condition, and one associated with the uncertainty, the information gain. When we optimise this cost function over the control action, we get a good balance between exploitation and exploration. At the beginning the uncertainty term dominates, because the level of uncertainty is high, so the main purpose is not to drive the system to the currently believed optimal condition but to reduce the uncertainty of this belief by exploring the environment more. As time goes on and the uncertainty shrinks, the first term dominates, and you drive your system to the believed, or estimated, optimal condition. That is the concept of the dual control we work on.

The overall diagram looks like this: you have a perception loop trying to understand what is happening and estimating from the data, and you feed that into your decision making. But one of the most important things is that inside the decision making we also need to do inference, or prediction: for each potential control action, we want to understand its influence on the belief, on the uncertainty of our estimate. So you have this loop.
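One compact way to write the two-term cost just described, as a sketch in my own notation rather than necessarily the exact formulation used in the talk: let $s$ be the unknown optimum (or source location), let $p_{k+1}(u_k)$ be the predicted position under a candidate action, assumed deterministic given the action, and let $\hat{s}_{k+1|k+1}$ and $P_{k+1|k+1}$ be the mean and covariance of the belief about $s$ after the hypothetical future measurement. Then

$$
J(u_k) \;=\; \mathbb{E}\big[\,\|p_{k+1}(u_k) - s\|^2 \,\big|\, \mathcal{I}_{k+1}\big]
\;=\; \|p_{k+1}(u_k) - \hat{s}_{k+1|k+1}\|^2 \;+\; \operatorname{tr}\big(P_{k+1|k+1}\big),
$$

where in practice one also averages over the yet-unknown future measurement contained in $\mathcal{I}_{k+1}$. The first term is the exploitation part, driving the system towards the currently believed optimum; the second is the exploration part, since it depends on how much the measurement produced by the candidate action is expected to shrink the belief covariance.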
I will use an example to show how this works in practice: a system we developed for the autonomous search of airborne releases. There are many applications: the release could be accidental or deliberate, or it could occur naturally. Polar bears use smell to find food, and moths follow the scent released by females in order to find a mate, so there are lots of examples in nature. On the application side there are applications in environmental protection and in finding chemical releases such as methane; one of my students is still working on that. So there are many applications, and we used the principle we just discussed to develop a system to solve this problem.

Essentially, you can cast this as a control problem. If this is the source, and your robot starts at any place in this area, you want to drive the robot to the source; that is the control objective. But the problem is that you do not know where the source is, and you do not have a reference trajectory to follow. So what do you do? At every step you take a measurement; based on the measurements you learn the environment and ask how likely it is that the source is at each location; then you decide where to move in order to maximise the chance of finding the source; you move, take data again, and update your belief. This is the typical coupling between learning, or perception, and control, and they interact with each other. This is what we did using the dual control concept, and it works very well.

You can see some examples: we put the algorithm on a robot and ran it. All those particles are the initial belief of where the source could be, and the robot has chemical sensors on board. Every run the trajectory is different, because the plume is random and the wind is random. This is not only about an unknown source but also about an unknown environment, because the dispersion is heavily influenced by the wind direction and speed, by the particle properties and by the temperature. That means you need to develop a control system which is able to cope with an unknown target and an unknown environment. We also did this on UAVs; as I said, we have done a lot of work on UAVs. We did not just stop at the theory: after developing it we implemented it, and we have also done some industrial-scale tests of this kind of system on chemical problems. The UAV flies, tries to follow the plume, and maps the area to see where the chemicals are; it can also be used, for example, after accidents involving chemical releases.

We also applied the same principle to renewable energy. As I mentioned, for a solar farm the idea is to maximise the energy generated. The optimal operating condition, essentially the voltage from the inverter to the grid, changes with the temperature and the solar irradiance, so we need to design a system such that, no matter how the weather conditions change, you always generate the maximum power; this is called maximum power point tracking, and we are able to track it very well. Not surprisingly, the idea here has a crucial link with reinforcement learning and the wider machine learning area, and there are some empirical comparisons between them; I do not think I have time to talk about this today.
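Going back to the search example, here is a minimal sketch of the measure, update-belief, move loop described above. Everything here is my own toy construction: the grid belief, the inverse-square "sensor" model, the step size, and the Monte Carlo approximation of the exploration term are all invented for illustration, and this is not the dispersion model or the algorithm used in the talk.

```python
# Minimal sketch of the measure / update-belief / move loop (toy model, my own
# construction, not the algorithm from the talk).
import numpy as np

rng = np.random.default_rng(0)
N = 25                                           # grid of candidate source locations
xs, ys = np.meshgrid(np.arange(N), np.arange(N))
cand = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

true_source = np.array([18.0, 6.0])              # unknown to the searcher
sigma = 0.3                                      # measurement noise level

def reading(pos, src):
    # Invented sensor model: the signal decays with distance from the source.
    return 1.0 / (1.0 + np.sum((pos - src) ** 2, axis=-1))

def spread_after(nxt, belief, n_samples=5):
    # Monte Carlo estimate of the belief spread AFTER a hypothetical measurement
    # taken at 'nxt' -- the exploration term, i.e. the conditioning on future data.
    idx = rng.choice(len(cand), size=n_samples, p=belief)
    total = 0.0
    for i in idx:
        z = reading(nxt, cand[i]) + sigma * rng.standard_normal()
        b = belief * np.exp(-0.5 * ((z - reading(nxt, cand)) / sigma) ** 2)
        b /= b.sum()
        m = b @ cand
        total += b @ np.sum((cand - m) ** 2, axis=1)
    return total / n_samples

belief = np.full(len(cand), 1.0 / len(cand))     # uniform prior over the grid
pos = np.array([2.0, 2.0])                       # searcher start position

for step in range(60):
    # 1. Take a measurement at the current position and update the belief.
    z = reading(pos, true_source) + sigma * rng.standard_normal()
    belief *= np.exp(-0.5 * ((z - reading(pos, cand)) / sigma) ** 2)
    belief /= belief.sum()

    # 2. Pick the move that balances "go to the believed source" (exploitation)
    #    against "reduce the uncertainty of that belief" (exploration).
    mean = belief @ cand
    best_move, best_cost = pos, np.inf
    for move in np.array([[2, 0], [-2, 0], [0, 2], [0, -2], [0, 0]], dtype=float):
        nxt = np.clip(pos + move, 0, N - 1)
        cost = np.sum((nxt - mean) ** 2) + spread_after(nxt, belief)
        if cost < best_cost:
            best_move, best_cost = nxt, cost
    pos = best_move

print("final searcher position:", pos)
print("believed source location:", np.round(belief @ cand, 1), " true:", true_source)
```

The exploration term here is only a crude Monte Carlo average over a handful of hypothetical sources; it is meant to show where the conditioning on future data enters the decision, not to reproduce the proper dispersion model and Bayesian estimator used in the real system.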
If anyone is interested we can talk after the meeting. Basically, what I am trying to say is that we have developed a complementary approach to reinforcement learning, and we feel our approach is more suitable for control, for real, physical systems, rather than just simulation. Let me simply use the autonomous search as an example to give you some idea of the difference between the two approaches. Here is the dispersion and here is the source. If we want to derive an autonomous search strategy using reinforcement learning, we need this kind of scenario set up, then we run it many, many times and learn the optimal policy at each location. But in real life, when something happens you do not have this environment available to train in many, many times before you are able to deploy your robot in the field. Secondly, even if you train an optimal policy for this environment, at another time the wind direction and speed are different, the particle properties or the temperature are different, and the learned policy is no longer optimal; how can you deal with that? In our approach we do not need all that information: we just put the robot in the environment and it automatically tries to find the source based on its sensing and learning capability. That is just the high-level picture; there is more detail in my published work.

A more interesting thing is this; I am not sure whether anyone here or online knows about active inference in neuroscience. I have talked with colleagues here in the robotics and cognition group, and they are also using active inference. It is based on what is called the free energy principle in neuroscience; Karl Friston's free energy theory is very popular as a way to understand human and animal intelligence. It talks about three things interconnected with each other: perception, learning, and action; that is his ambition. What we found is that his theory is very close to the dual control we have been talking about. I have had conversations with him; he invited me to give a talk to his neuroscience group, and I had never before talked to people in a medical school, and he also invited me to give a talk at a conference on the human brain. We both believe we have converged to quite a close place in trying to understand intelligence. If anyone is interested, I did write a paper about the link between the two, and before publishing it I sent the paper to him and asked him to review it, to make sure I fully understood his theory and the relationship between the two. He read it and agreed with what I said. The ideas are quite similar. So now you can see we are able to do something very interesting, quite similar to what happens in nature.

Coming back to the talk, the key thing I want to say is that without a good understanding of the interaction between the key components involved in high levels of automation, we are going to have big trouble in the future. A lot of these events are quite rare, but they will happen, just like the example I showed you: most of the time, combining optimisation with control and the dynamics is fine, but there is a chance you will have big trouble. There are typical examples; for instance, many people may have heard of
the stock market flash crash in the United States, where AI-enabled automatic trading caused the trouble, and there are other examples. Maybe my message is that if we do not do this carefully, you will see these things more often. We need a good understanding before we push too hard towards higher levels of automation, and at the moment we lack the theory to understand what really happens. This is also reflected in some recent reports: no good can come from autonomy without proper assurance. If we do not have solid analysis and design tools for this, and we just push things like healthcare robots or autonomous driving algorithms too quickly, we will create a society full of risk. That is the basic message: we need theory to underpin this. Certainly a lot of people are working on the verification and validation side, but we need to make sure, at least from the design stage, that the system is reliable and safe.

So, to wrap up what I have said: I feel control still has a strong place in the future of high levels of automation, but the problem is that our current control theory is not up to the job. I know a lot of people working in this area are pushing it forward; for example, as the tasks get more complicated, people may have to use temporal logic or other formal languages to describe the task. Overall, I believe we need some new formulations, new theory, and new tools to help us understand these systems and to come up with a solid design process that ensures safety. It is a long way to go, I feel, but that is my main message. Thank you.
