How to use LQR for setpoint tracking?
101 views (last 30 days)
Pedro Carvalho
on 1 May 2024
Initially I was using LQR to regulate the error dynamics, i.e., I computed the gains for de/dt = (A - BK)e, but this basically results in a PI controller, since the control law (with integral action) is u = -K*e + ki*z. I have seen many sources teaching how to do it by augmenting the state vector with the integral of the error, expanding the matrices to A = [A 0; C 0], etc., but I still can't understand how that works. I am working on a first-order cruise control problem. From my observations the integral action is doing all the tracking, and the -Kx term is only getting in the way, trying to regulate the state x to zero. Here is my code:
% Parameters
X_u = 0;
X_uu = 22.7841;
m = 5037.7;
% Equilibrium point x0
x0 = 20*1.852/3.6; % Longitudinal linear velocity in m/s
u0 = X_u*x0 + X_uu*x0^2 % Thrust in N
% Linearize system around x0
A = -(X_u/m + 2*X_uu/m*x0);
B = 1/m;
C = 1;
D = 0;
% System order
n = size(A,1);
% Open loop system
sys_ol = ss(A,B,C,D);
openloopPoles = eig(A)
% Augmented system with the integral of the error
A_hat = [A zeros(n,1);...
-C 0 ];
B_hat = [B; 0];
Br = [zeros(n,1); 1];
C_hat = [1 zeros(1,n)];
D_hat = 0;
% Q R matrices
Q = 1000*(C'*C);
R = 0.5e-3;
% Feedback gain
K_hat = lqr(A_hat, B_hat,Q,R)
K = K_hat(1)
ki = -K_hat(2)
% Scaling matrix
%Nbar = rscale(sys_ol,K)
% Closed loop system
AA = [A - B*K B*ki;-C 0];
BB = Br;
CC = [C 0];
DD = 0;
sys_cl = ss(AA, BB, CC, DD);
% Time vector
t = 0:0.1:30;
% Control input
u = u0*ones(size(t));
% Reference setpoint
r = 25*1.852/3.6*ones(size(t));
% Initial states
x0_hat = [x0,0]
% Simulate the response of the system
[y,t,x_hat] = lsim(sys_cl,r,t,x0_hat);
figure
plot(t, y*3.6/1.852, 'k', 'LineWidth', 1.5,'Color','k')
xlabel('Time (seconds)')
ylabel('Speed (knots)')
title('Closed loop response with integrator')
grid on
% Control effort (Thrust)
u_effort = -K*x_hat(:,1) + ki*x_hat(:,2);
figure
plot(t, u_effort,'Color','k')
xlabel('Time (s)')
ylabel('Thrust (N)')
title('Control effort')
0 comments
Accepted Answer
Sam Chak
on 1 May 2024
Edited: Sam Chak
on 1 May 2024
Your code appears to be error-free. However, the control action you implemented differs from the error-based PI control scheme that was mentioned.
To comprehend why integral control can track the setpoint, it is important to visualize that the plant is a 1st-order transfer function, Gp(s) = b/(s + a), with a = (X_u + 2*X_uu*x0)/m and b = 1/m, where the plant lacks an integrator (referred to as a Type-0 system).
The state-feedback control term u = -K*x will shape and enhance the transient behavior, resulting in Gp'(s) = b/(s + a + b*K). Nevertheless, a steady-state error will persist since the DC gain Gp'(0) = b/(a + b*K) ≠ 1.
To eliminate the steady-state error, introducing an integrator in the cascade compensation path is necessary, transforming it into a Type-1 system. The open-loop transfer function becomes L(s) = (ki/s)*b/(s + a + b*K), and the closed-loop system can then be expressed as Gcl(s) = b*ki/(s^2 + (a + b*K)*s + b*ki). Consequently, the steady-state error is eliminated since the DC gain Gcl(0) = 1.
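To see these DC gains with numbers, here is a minimal sketch using the same plant values as above; the gains K and ki below are arbitrary illustrative values, not the LQR gains computed later:
% Minimal numeric check of the Type-0 vs Type-1 argument above.
% K and ki here are arbitrary illustrative gains only.
a  = (0 + 2*22.7841*(20*1.852/3.6))/5037.7;  % a = -(A) = (X_u + 2*X_uu*x0)/m
b  = 1/5037.7;                               % b = B = 1/m
K  = 1000;                                   % some stabilizing state-feedback gain
ki = 50;                                     % some positive integral gain
s  = tf('s');
Gp  = b/(s + a);                              % Type-0 plant, Gp(0) = b/a
Gsf = b/(s + a + b*K);                        % state feedback only, Gsf(0) = b/(a + b*K)
Gcl = (b*ki)/(s^2 + (a + b*K)*s + b*ki);      % with cascaded integrator, closed loop
[dcgain(Gp) dcgain(Gsf) dcgain(Gcl)]          % only the last one equals 1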
Update: In the code, the initial value of the Integrator output (2nd state variable, z0) should be set to a non-zero value. This is necessary because the initial velocity (x0) is non-zero. The initial value of the Integrator output can therefore be calculated by solving the control law -K*x0 + ki*z0 = u0 for z0 and accounting for the initial error (r0 - x0), i.e., z0 = (u0 + K*x0)/ki + (r0 - x0), as in the code below.
% Parameters
X_u = 0;
X_uu = 22.7841;
m = 5037.7;
% Equilibrium point x0
x0 = 20*1.852/3.6; % Longitudinal linear velocity in m/s
u0 = X_u*x0 + X_uu*x0^2; % Thrust in N
% Linearize system around x0
A = -(X_u/m + 2*X_uu/m*x0)
B = 1/m
C = 1;
D = 0;
% System order
n = size(A,1);
% Open loop system
sys_ol = ss(A,B,C,D);
Gp = tf(sys_ol)
openloopPoles = eig(A);
% Augmented system with the integral of the error
A_hat = [A zeros(n,1);...
-C 0 ];
B_hat = [B;
0];
Br = [zeros(n,1); 1];
C_hat = [1 zeros(1,n)];
D_hat = 0;
% Q R matrices
Q = 1000*(C'*C);
R = 0.5e-3;
% Feedback gain
K_hat = lqr(A_hat, B_hat,Q,R)
K = K_hat(1);
ki = -K_hat(2);
% Scaling matrix
% Nbar = rscale(sys_ol,K)
% Closed loop system
AA = [A-B*K, B*ki;
-C, 0];
BB = Br;
CC = [C 0];
DD = 0;
sys_cl = ss(AA, BB, CC, DD);
A-B*K
B*ki
Gcl = tf(sys_cl)
% steady-state response
ssr = dcgain(Gcl)
% Time vector
t = 0:0.1:30;
% Control input
u = u0*ones(size(t));
% Reference setpoint
r0 = 25*1.852/3.6;
r = r0*ones(size(t));
% Initial states
z0 = (u0 + K*x0)/ki + (r0 - x0);
x0_hat = [x0, z0];
% Simulate the response of the system
[y, t, x_hat] = lsim(sys_cl, r, t, x0_hat);
figure
plot(t, y*3.6/1.852, 'k', 'LineWidth', 1.5, 'Color', '#265EF5')
xlabel('Time (seconds)')
ylabel('Speed (knots)')
title('Closed loop response with integrator')
grid on
% Control effort (Thrust)
u_effort = -K*x_hat(:,1) + ki*x_hat(:,2);
figure
plot(t, u_effort, 'LineWidth', 1.5, 'Color', '#F15EF5'), grid on
xlabel('Time (s)')
ylabel('Thrust (N)')
title('Control effort')
4 comments
khalid
on 26 Aug 2024
Edited: khalid
on 26 Aug 2024
Hi guys! Could anyone please tell me why, when building the augmented A and B, you chose zeros(n,1)? Why that specific size? I know the theory says Aaug = [A 0; -C 0], I just don't understand how it ends up that size. Also, how do I know which part is my Kx and which is my Ki? My LQR gain is a 2x5 matrix.
I have 5 states in my system, but I only want to track 2 and regulate the other 3. How do I do this? How do I choose the integrator? In your case you had only 1 state, judging by your A matrix. Please help in this regard.
More Answers (1)
Joshua Levin Kurniawan
on 1 May 2024
Hello Pedro. Regarding the LQR controller, it has a unique property. Without an integral action, the controller only tries to drive the system to a stable steady-state condition (i.e., the output does not necessarily reach the reference, y = r); in other words, we only regulate the system to have stable behaviour. However, this is different when we add the integral action, where the error e is suppressed to zero.
Let's say that we want to track a specific state, which we model as the output y for convenience. Then, in the standard state-space form,
dx/dt = A*x + B*u, y = C*x.
Then, the control function can be defined as u = -Kx*x + Ki*z, where z is the integral of the error e = r - y. Since we want to suppress the error e to zero, we naturally add its integral to the state vector, giving the augmented state x_aug = [x; z]. Remember that
dz/dt = e = r - C*x.
Hence, the augmented matrices can be defined as
A_aug = [A 0; -C 0], B_aug = [B; 0].
Therefore, by applying the LQR method to the augmented system described above, you obtain the optimal full-state feedback gain matrix, including the integral action term. A rough sketch is given below.
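For reference, a minimal sketch of this augmented design in MATLAB; the plant matrices below are placeholders for a generic 2-state, 1-input, 1-output system (not from any particular model), and the weights are arbitrary:
% Sketch of the augmented (LQI-style) LQR design described above.
% A, B, C are placeholder values for a generic stable 2-state plant.
A = [0 1; -2 -3];
B = [0; 1];
C = [1 0];
n = size(A,1);                       % number of plant states
p = size(C,1);                       % number of tracked outputs / integrators
% Augmented dynamics: d/dt [x; z] = [A 0; -C 0]*[x; z] + [B; 0]*u + [0; I]*r
A_aug = [A, zeros(n,p); -C, zeros(p,p)];
B_aug = [B; zeros(p, size(B,2))];
Q = blkdiag(eye(n), 100*eye(p));     % weights on plant states and integrator states
R = 1;
K_aug = lqr(A_aug, B_aug, Q, R);
Kx = K_aug(:, 1:n)                   % acts on the plant states x
Ki = -K_aug(:, n+1:end)              % acts on the integrator states z (u = -Kx*x + Ki*z)
The same pattern scales to larger systems: with n plant states and p tracked outputs, A_aug is (n+p)-by-(n+p), and the last p columns of K_aug form the integral gain.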
1 comment
Amirah Algethami
on 27 Jun 2024
Hi @Joshua Levin Kurniawan, thanks for the comment, it is helpful. Do you have a MATLAB code example for that, please?