Control Systems and the Quest for Autonomy - MATLAB & Simulink

    Control Systems and the Quest for Autonomy

    From the series: MathWorks Research Summit

    Panos J. Antsaklis, Department of Electrical Engineering, University of Notre Dame

    Autonomous vehicles have certainly captured everyone's imagination. The promise of reducing or even eliminating accidents via autonomy is very appealing. The quest for autonomy has been a pervasive theme in engineered systems throughout the centuries. Adding advanced sensing and decision making to traditional control systems is one way to increase a system's level of autonomy. Control systems are seen as a cornerstone of autonomous dynamic systems. A functional hierarchical architecture describing the functions needed in an autonomous space vehicle is presented. These concepts were developed while the author was visiting research faculty at NASA’s JPL.

    When people refer to autonomous systems, they often mean different things. It is important to be more precise and agree upon a common definition: Autonomy is the ability of a system to achieve a set of goals under uncertainty in the system and its environment. For the same set of goals, the larger the uncertainties the system can handle, the higher the degree of autonomy. The lower the needed external intervention by humans or other systems to achieve the goals under the uncertainties, the higher the degree of autonomy. So, the level of autonomy depends on both the measure of the set of goals that are being accomplished and the measure of the set of uncertainties present. Specifically, L = {measure of the set of goals} x {measure of the set of uncertainties}, where L is the level of autonomy. This definition allows the comparison of the autonomy levels of different systems.
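    As an illustration, this definition can be sketched in code. The scalar measures and the two systems below are hypothetical; a real application would need domain-specific measures, such as entropy for the uncertainty set.

    ```python
    def autonomy_level(goals_measure: float, uncertainty_measure: float) -> float:
        """L = {measure of the set of goals} x {measure of the set of uncertainties}."""
        return goals_measure * uncertainty_measure

    # Two hypothetical systems pursuing the same set of goals (measure = 1.0):
    # a static feedback controller tolerating a small uncertainty set, and an
    # adaptive controller tolerating a larger one.
    L_static = autonomy_level(goals_measure=1.0, uncertainty_measure=0.2)
    L_adaptive = autonomy_level(goals_measure=1.0, uncertainty_measure=0.6)

    # Same goals, larger tolerated uncertainty: higher level of autonomy.
    assert L_adaptive > L_static
    ```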

    Note that since every autonomous system has a set of goals to be achieved under a set of uncertainties, and a control mechanism to achieve them, clearly, every autonomous system is a control system.

    Adding advanced functionality to controllers is essential to achieving higher levels of autonomy. Learning is very important to autonomy. Online learning control for autonomy should rely on prior knowledge, use simplified models, take full advantage of smart data, and incorporate active learning. The following is a list of directions that need to be emphasized more if online learning control is to become successful in generating correct, improved controllers in a reasonable amount of time:

    • Use simplified models with fewer parameters as needed for goals.
    • Use all available prior knowledge for the model and use smart initial conditions.
    • Use smart data, loaded with pertinent information, and incorporate active learning to go after what you need.
    • Use trusted data; trust but verify.
    • Time and resources are limited; do your best under limited resources and, if needed, stop early, before reaching the optimum.
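    The last two directions can be sketched together: start from prior knowledge (a warm start) and return a "good enough" controller before the optimum when the budget runs out. The cost function, gain, and thresholds below are made-up illustrations, not from the talk.

    ```python
    def tune_gain(cost, grad, theta0, step=0.1, budget=50, good_enough=1e-2):
        """Gradient descent that stops early under a limited iteration budget."""
        theta = theta0  # prior knowledge: e.g., yesterday's controller gain
        for _ in range(budget):
            if cost(theta) <= good_enough:  # stop before reaching the optimum
                break
            theta -= step * grad(theta)
        return theta

    # Hypothetical quadratic cost with its minimum at theta = 3.0.
    cost = lambda th: (th - 3.0) ** 2
    grad = lambda th: 2.0 * (th - 3.0)

    theta = tune_gain(cost, grad, theta0=2.5)  # warm start near the optimum
    assert cost(theta) <= 1e-2                 # good enough, maybe not optimal
    ```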

    Published: 17 Mar 2023

    And Panos is a very senior person. He has a fantastic perspective on the field of control and so we asked him if he could give us a bit of a feel of what his observations are and where he thinks we might be going. So, Panos, please.

    Thank you. Thank you very much. I hope you didn't raise the expectations because that's going to make my job much more difficult. I will be brief. Actually, when a speaker says, I will be brief this is usually bad news.

    [LAUGHTER]

    But seriously, my presentation doesn't have an equation. And I just want to tell you some ideas, some things which I think are important when we try to build autonomous systems. So I see building autonomous systems using control systems, external control systems, as the basic foundation of that. And I see autonomy, the quest for autonomy, as a pervasive thing over the centuries, really, starting from very simple systems. And actually, some people were very serious about it and you have things like for example, The Quest Beyond the Grail because it's like the Holy Grail trying to build autonomous systems.

    So the question is, are we there yet? That depends. For example, if autonomy to you means that then you can declare victory and go home, but I think it's more than that. We have to look into it a little bit more. But first, let me give you a little bit of history behind it.

    From water clocks to autonomous systems, I call it. This, for example, is one of the first water clocks, from about 300 BC. It was the first documented case of feedback control.

    And of course, you go to the governor, flyball governor there for the steam engines and so forth and you reach the point where this is a functional architecture of an autonomous spacecraft. This was done during a summer I spent at Jet Propulsion Laboratory about 30 years ago. And you see some characteristics, for example, the bottom is the regular control as we know it with all the methods of optimization, feedback, stabilization and so forth.

    The very top are the big decisions and everything else was in between. That's why you have three levels. Actually, you can have many more levels. And if you do a little bit more, be a little bit more specific, you see at the top there is the executive where big decisions are made, for example, a repair satellite type of thing. And this can be translated down the road as you go based on the capabilities of the system at the time, on the availability of the robot arm to pull the satellite in and so forth and so on, until it reaches down to the very basic control level.

    What I want to point out is, at different levels you have learning there. Learning is very important and I will return to this in a minute. Also you have things like FDI, failure detection and identification, something which has not really been brought forward in autonomous cars, for example, autonomous vehicles. And you have planning. And my interest in that was also what triggered a lot of research on my part, because the top level, for example, is described by mathematical models like finite automata or Petri nets. The bottom is differential equations, or difference equations. And the middle is like hybrid systems, where I have spent a lot of time.

    You see, at the top level you have the interface with other machines or the human operator, which gives the macro command. And as I said, these are translated down. And at the bottom level, the execution level, you have the level where you can do the control design optimization. The one we've been doing for a long time now.

    What are some of the characteristics there? You have a successive delegation of tasks from the top to the bottom. The higher levels are concerned with slower aspects of the operation. And, as you will see in the next slide, the lower levels have shorter time horizons, while the top level has longer time horizons. The top level, for example, could be like in a company: buy a new company, invest in so and so. The time scale there could be years, say, or months or days, while at the bottom level it is milliseconds or seconds. OK?

    And also you have things like the principle of increasing intelligence with decreasing precision as you go up: increasing intelligence in the higher levels. This is an idea which is very popular with the management at different companies. So-- But what you have to do is to start talking about degrees of autonomy. And at the time, about 30 years ago, we were looking at intelligent control, and we decided that autonomy really is the issue there, the goal, and the intelligent methods are different methods to achieve it.

    And we also had a task force in the Control Society on defining intelligent control. The report was issued in the mid-'90s. One interesting thing: the architecture I showed you, the functional architecture, is a hierarchical architecture. The question is, do we need a hierarchical architecture? The point is that maybe not, but we use hierarchies to deal with complexity, and this seems to be the way to go, at least at the beginning.

    Now, if you step back a little bit, you say, so what is autonomy? First, let's mention the following thing: every autonomous system always has a set of goals to achieve. You cannot say a system is autonomous in general. That is as vague as saying something is optimal. Optimal with respect to what?

    OK? So since an autonomous system always has a set of goals to achieve and a methodology to achieve them, to me, every autonomous system is a control system in a generalized sense. And I see that you have the initial control systems, you have the functionalities as you saw in the hierarchical structure there, and you see more and more things.

    But this is nothing new; in the steel industry, for example, you have the rolling mills for the steel sheets, and the set points are defined by control considerations. But above that, they are decided based on economic decisions. So you have different levels in the complete control system. This is nothing new; it has been around for a while.

    Autonomy means being able to do things by yourself; it is the ability of the system to achieve a set of goals. Let me give an example. Any feedback system, for example, can be considered to have a certain degree of autonomy. Why? If the goal is to stabilize the system and you can do that with feedback control, the system is autonomous with respect to this set of goals and with respect to a certain set of uncertainties. You can cope with so much.

    Now, if the uncertainties are greater, then perhaps you use adaptive control. So adaptive control appears to have higher autonomy than when you use static control. Now, if you think in those terms, you can come up with a very simple conceptual definition and some equations regarding autonomy.

    You need equations to compare levels of autonomy. So we say a system is autonomous with respect to a set of goals and under a set of uncertainties. You always have to have that. That is, if you have two systems with the same goals but one can cope with higher levels of uncertainty, obviously that system has higher autonomy.

    This is an equation. If you take some measure of the goals there and a measure of the uncertainties and you multiply them, you get the level of autonomy, how autonomous you are. So L is a measure of the level of autonomy of the system. Now, how do you measure these things? Well, it's not the easiest thing. For uncertainty, for example, you can use entropy, or any other kind of measure.

    So sometimes, instead of the disturbances or uncertainties, you can use how much intervention you need to achieve the goals. So in a way, a system able to achieve the goals under high uncertainty corresponds to a system able to achieve the goals with very little external intervention. So there is some relation between them. These are very useful things to keep in mind.

    And when you talk about autonomous vehicles, people have declared, well, decided, that there are five levels of autonomy, and if you're talking about UAVs it's 10 or 11. Each one of these cases can be expressed using the very simple expression I suggested. So this is an idea which I hope will be useful when you compare autonomous systems.

    Let me now switch to learning. Learning is something which is very popular nowadays. I think it is because people think they're going to get solutions out of this. That's the reason actually. Yesterday and the day before, we had a workshop at MIT on learning for dynamics and control. 400 people showed up. So I understand many of them are students around the area and so forth, but still, it shows the great, great interest in this area. This is not new in controls.

    If you go back to the '60s, for example, there is this book by Tsypkin, Adaptive and Learning Systems, and at that time, of course, learning was not close to AI. It became part of AI much more recently. And as you see in the slide, we had some special issues on learning and so forth. We had panel discussions on this, and primarily, at the time, I was working on neural networks.

    But if you step back and think, what do we need? What are the goals of learning? First, you have to have identifiable goals, because there are many things you may want to learn. You learn about the plant. You learn about the environment. You learn about the controller. You learn about new goals you may have.

    And the very important thing is that, see here, I'm focusing on learning for control, online. And here is something very important: you have to get it right the first time, because by the time you learn it and apply it, it might be too late. You cannot say, oops, I didn't learn it right. You may kill someone. So this is a very hard constraint, I think, when you learn online.

    So what are some thoughts about learning? Everybody's talking about big data. Well, we don't have the luxury to deal with big data when you are online; you can do that offline. So I prefer to talk about small data and, actually, smart data, because there is data and there is data. I remember we had a project about sensors, and we had set up some sensors outside the building, and one sensor would report the temperature every 10 milliseconds or so. The same temperature. It was not only annoying, it was harmful, because it blocked the network and nobody else could communicate.

    So that's why I say there is data and there is data. That's why I'm talking about smart data, data which contain new information. This is not easy to do and this is related to passive learning and active learning.
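    The sensor story suggests a simple sketch of smart data: transmit a reading only when it carries new information, here approximated as a change beyond a threshold (a send-on-delta scheme; the threshold and readings below are made-up examples, not from the talk).

    ```python
    def smart_filter(readings, delta=0.5):
        """Keep only readings that differ from the last sent value by more than delta."""
        sent, last = [], None
        for r in readings:
            if last is None or abs(r - last) > delta:
                sent.append(r)
                last = r
        return sent

    # A sensor reporting essentially the same outdoor temperature over and over:
    raw = [20.1, 20.1, 20.1, 20.2, 20.1, 21.0, 21.1, 21.0, 22.3]
    assert smart_filter(raw) == [20.1, 21.0, 22.3]  # far fewer messages, same information
    ```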

    Right now, we say, OK, we'll be sitting here looking at the data and making sense out of it, but many times you have the flexibility to be active about it. That is, provide a special signal. When you drive a car, for example, and you hear a noise from the front right wheel, you keep listening, and if nothing happens, then you do some active testing to see if it is the right wheel, the tire, or what. So this is something we should be thinking about, and you have to be able to learn from small data.

    And this is another important issue here: the convergence rates. I mean, with these stakes, you cannot wait for something to take forever. So what is the key here? Use prior information, of course. We don't start from scratch. In optimization, it is very important to have good initial conditions, because no matter what algorithm you use, if the initial conditions are close to the solution, you've got it. OK?

    So we have to find ways to use prior information and to select a good initial condition, a nice place for a warm start, as some people call it. Also, you have to talk about simplification of models. In control, the first thing we learn is that you use different models depending on what you want to do.
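    As a sketch of why the warm start matters, here is the same gradient descent on a hypothetical quadratic cost with its minimum at theta = 3.0, started from scratch versus from prior knowledge. The cost and starting points are illustrative only.

    ```python
    def iterations_to_converge(theta, step=0.1, tol=1e-3, max_iter=10_000):
        """Count gradient steps until the cost (theta - 3)^2 drops below tol."""
        for k in range(max_iter):
            if (theta - 3.0) ** 2 < tol:
                return k
            theta -= step * 2.0 * (theta - 3.0)  # gradient of (theta - 3)^2
        return max_iter

    cold = iterations_to_converge(theta=100.0)  # no prior information
    warm = iterations_to_converge(theta=3.2)    # yesterday's solution as a warm start
    assert warm < cold  # prior knowledge buys convergence speed
    ```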

    Of course it's great to have a complete model, a simulation model which is very accurate but to control a system, for example, to stabilize it, you don't need that. So can we take that into consideration when we do learning in control? I think we should because this is the way to simplify the problem and be able to do that in a reasonable time.

    Another issue is that you get the data, no matter how you do it, and then you run the learning algorithm. Do you have to wait until it converges? This may take too long. So how about if we start thinking about stopping at some point; maybe it wouldn't be the optimum, but we would do better than we did before.

    As we do, for example, in some feasible-point optimization algorithms, where every iterate satisfies the constraints and at each step the objective function improves; then you can stop at some point and at least you are not infeasible. There is a solution; maybe it's not the best. So you have to think in those terms as well.
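    A minimal sketch of this feasible-point idea, using projected gradient descent on a made-up problem: every iterate is projected back into the constraint set (a simple box here), so stopping early still yields a usable, feasible answer.

    ```python
    def projected_descent(grad, theta, lo, hi, step=0.1, iters=5):
        """Projected gradient descent: every iterate satisfies lo <= theta <= hi."""
        for _ in range(iters):
            theta = theta - step * grad(theta)
            theta = max(lo, min(hi, theta))  # project back into the feasible box
        return theta

    # Minimize (theta - 10)^2 subject to 0 <= theta <= 4. The constrained
    # optimum is theta = 4, but even after a few steps the iterate is feasible.
    grad = lambda th: 2.0 * (th - 10.0)
    theta = projected_descent(grad, theta=1.0, lo=0.0, hi=4.0)
    assert 0.0 <= theta <= 4.0  # stopped early, still feasible
    ```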

    So the emphasis is on simplified models, as needed for the goals, and on prior knowledge. Also, many people are talking about CPS. CPS, presumably, are very complex, networked systems and so forth. Typically, we make the assumption that we have enough information about the mathematical model. In CPS I doubt this is going to be true, because even if you know the subsystem dynamics early on, things will change. So somehow you have to take into consideration that you cannot use very accurate models.

    In our case, for example, we try to use things related to passivity, which is an inequality condition, and passivity indices, which are a measure of how passive you are. And we have tried to do data-driven adaptive control using that. So that if something changes, I'm not going to recalculate the whole system; I'm simply going to find these two parameters, the passivity indices, and based on them I can do some limited things, but definitely stability and so forth. So this is another idea we have to keep in mind.
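    A heavily hedged sketch of estimating a passivity index from input-output data; this is an illustration of the general idea, not the author's method. For an input-strictly-passive system satisfying sum(u*y) >= nu * sum(u*u), the smallest observed ratio sum(u*y)/sum(u*u) over recorded trajectories gives a data-driven estimate of nu. The "system" below is a made-up static gain y = 2*u.

    ```python
    def input_passivity_index(trajectories):
        """Smallest ratio sum(u*y)/sum(u*u) over recorded (u, y) trajectories."""
        ratios = []
        for u, y in trajectories:
            num = sum(ui * yi for ui, yi in zip(u, y))
            den = sum(ui * ui for ui in u)
            ratios.append(num / den)
        return min(ratios)

    # Data from a hypothetical static system y = 2*u (passive, index 2):
    data = [([1.0, -0.5, 2.0], [2.0, -1.0, 4.0]),
            ([0.5, 1.5], [1.0, 3.0])]
    nu = input_passivity_index(data)
    assert abs(nu - 2.0) < 1e-9
    ```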

    So this is the last slide. This is about the learning. Emphasis on simplified models; we have to keep that in mind. Prior knowledge, whenever it is available; somehow we have to use it systematically.

    Do your best under limited resources; that is, stop the algorithm maybe before it reaches the optimum. Active learning: don't sit there passively and wait for things to happen. Trust the data; as somebody said, trust but verify. Trusted data is very important. You see, for example, what happened recently with the 737 MAX airplane.

    So small data: you have to be able to do things with a limited amount of data. These are related, and there are also all kinds of other issues in learning, like the ability to improve learning of one skill based on how much you learn about another skill. Controls, as some people said earlier, is changing, which is true. It's more data driven, which is true.

    So we have to think in new ways. That's why I am mentioning all these observations. And the bottom line is: learn to make control great again, and I hope this is going to happen. OK. Thank you.

    [APPLAUSE]