How to create a state-space model with a constant term and do feedback.

23 views (last 30 days)
I have the discrete-time system:
x(k+1) = [A]*x(k) + [B]*u(k) + c
y(k) = [C]*x(k)
I have read online that I can merge the constant term into B by doing [B 1]*[u1(k) ; u2(k)] with u2(k) = c (I have no control over this input), i.e. the system becomes
x(k+1) = [A]*x(k) + [B 1]*[u1(k) ; c]
y(k) = [C]*x(k).
However, when I do state feedback, how can I ensure that u2(k) = c?
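A minimal open-loop sketch of that augmented model (the numbers are placeholders), with u2 held at c for every sample:
% Open-loop sketch of the augmented-input model (placeholder numbers).
A = 0.9;  B = 0.01;  C = 1;  c = 0.005;  Ts = 1;
sysaug = ss(A, [B 1], C, 0, Ts);   % inputs: [u1 ; u2], u2 carries the constant
N  = 100;
t  = (0:N-1)'*Ts;
u1 = zeros(N,1);                   % actual control input
u2 = c*ones(N,1);                  % "input" that is always equal to c
y  = lsim(sysaug, [u1 u2], t);     % u2 is pinned to c for the whole simulation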
3 comments
Marcus
Marcus on 15 Oct 2024
See my recent response to Aquatris. Just represent the closed-loop dynamics with ss(A-B*K,B*G,C,D,Ts), due to the control law u_k = G*r_k - K*x_k.
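As a minimal sketch with placeholder values, substituting u_k = G*r_k - K*x_k into x(k+1) = A*x(k) + B*u(k) gives x(k+1) = (A-B*K)*x(k) + B*G*r(k):
% Closed loop implied by u(k) = G*r(k) - K*x(k)  (placeholder numbers).
A = 0.9;  B = 0.01;  C = 1;  D = 0;  Ts = 1;
K = place(A, B, 0.5);                       % state-feedback gain for the chosen pole
G = 1/(C/(eye(size(A)) - (A - B*K))*B);     % steady-state tracking gain (scalar case)
syscl = ss(A - B*K, B*G, C, D, Ts);         % closed loop driven by the reference r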
Sam Chak
Sam Chak on 16 Oct 2024
You appear to have used the continuous-time linear system to place the discrete-time closed-loop pole inside the unit circle (the stable region for a discrete-time linear system), which is incorrect.


Accepted Answer

Marcus
Marcus on 18 Oct 2024
The solution that I found to work for my system is to represent
x(k+1) = [A]*x(k) + [B]*u(k) + c
y(k) = [C]*x(k)
as the following uncontrollable augmented system
[x(k+1) ; z(k+1)] = [A I ; 0 I]*[x(k) ; z(k)] + [B ; 0]*u(k)
y(k) = [C I]*[x(k) ; z(k)]
with initial conditions x(0) = x0 and z(0) = c.
State feedback and steady-state tracking involve placing the poles using A and B of the original system, not the poles of the uncontrollable augmented system.
i.e. the closed-loop system is:
Acl = [A-B*K eye(size(A));
       zeros(size(A)) eye(size(A))]
Bcl = [B; zeros(size(B))]
Ccl = [C eye(size(A))]
Dcl = 0
sys_cl = ss(Acl,Bcl,Ccl,Dcl,Ts)
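A minimal sketch of how this can be exercised (placeholder numbers); the constant c enters only through the initial condition of the extra state, which just replays c every step:
% Pole placement on the original (A,B), then the augmented closed loop.
A = 0.9;  B = 0.01;  C = 1;  c = 0.005;  Ts = 1;
K = place(A, B, 0.5);                 % pole of the original system only
n = size(A,1);
Acl = [A-B*K eye(n); zeros(n) eye(n)];
Bcl = [B; zeros(n,1)];
Ccl = [C eye(n)];
sys_cl = ss(Acl, Bcl, Ccl, 0, Ts);
t  = (0:49)'*Ts;
r  = zeros(numel(t),1);
x0 = [0; c];                          % extra state starts (and stays) at c
y  = lsim(sys_cl, r, t, x0);          % offset appears even with zero reference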

More Answers (2)

Aquatris
Aquatris on 15 Oct 2024
You can define which inputs and outputs are connected to your controller. Check out the feedback function page and look at the 'Specify Input and Output Connections in a Feedback Loop' section.
First try to figure it out yourself, because it is a nice exercise in understanding the documentation and how to search for things. If you get stuck, post what you have done and we can guide you.
3 comments
Aquatris
Aquatris on 15 Oct 2024
Edited: Aquatris on 15 Oct 2024
It is a general representation. State feedback essentially means your observation matrix C is an identity matrix of size n-by-n, where n is the number of states.
So in addition to your actual output, you can create another output for your feedback controller that carries the state information, something like the sketch below (placeholder numbers; Caug and y_states are just illustrative names):
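% Sketch: expose the states as an extra output of the plant (placeholder numbers).
A = 0.9;  B = 0.01;  C = 1;  Ts = 1;
n = size(A,1);
Caug  = [C; eye(n)];                 % outputs: [y ; y_states]
plant = ss(A, [B 1], Caug, 0, Ts);   % inputs: [u1 ; u2], u2 carries the constant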
Then you need to connect y_states as an input to your feedback controller, and connect u1 to the output of your feedback controller.
Marcus
Marcus on 15 Oct 2024
Edited: Marcus on 15 Oct 2024
I seem to still be having trouble. This is what I've done so far.
A = [0.999979515574799];
B = [0.00001070633417008353561593759356585 1];
C = [1; 1];
D = [0];
c = 0.0046089956808409359920175596414538;
Ts = 1;
sys1 = ss(A,B,C,D,Ts)
sys1 has 2 inputs / 2 outputs / 1 state
I want to use the control law u_k = G*r_k - K*x_k, where G is for steady-state tracking and K is my gain matrix.
K = place(A,B,0.5);
G = inv(C*inv(1-(A-B*K))*B);
I am not sure how to represent this controller in state-space form. I want to connect y_states to u1 with feedback(sys1, sys2, [1], [2], -1), but I am not sure how to formulate sys2. How should I do this?
Typically, if c = 0, I would use ss(A-B*K,B*G,C,D) to represent my closed-loop dynamics, but this already incorporates the feedback.
Also, nowhere have I specified that u2 = c. Is this done by setting my reference input to c (i.e. r2_k = c)?
Edit: It seems that G does not exist since a left inverse doesn't exist. Anyway, how would this be done with u = r - K*x?
Thanks



Pavl M.
Pavl M. on 8 Nov 2024
Edited: Pavl M. on 8 Nov 2024
There are two simple, precise ways to do it:
Just make your
Baug = [B 1]
and regard your c = u2(k) as a constant control input to your plant (environment), with u1(k) as the control from the controller (agent) you are designing.
then
sys1 = ss(A,Baug,C,D,Ts)
K = place(A,Baug,0.5);
G = inv(C*inv(1-(A-Baug*K))*Baug);
cldyn = ss(A-Baug*K,Baug*G,C,D,Ts)
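Before the second route, a quick sanity check on this first one (assuming the matrices above): since the design is discrete-time, every closed-loop pole must lie inside the unit circle.
% Discrete-time stability check on the first route.
pcl = eig(A - Baug*K);
assert(all(abs(pcl) < 1), 'Closed-loop poles are not inside the unit circle.')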
Or treat the c as a noise (disturbance); then don't augment your B, leave it as B (I worked on it), and use a disturbance-rejection PID or LQR controller instead.
Defining the state cost, control cost and cross control-state cost matrices as per your problem objectives and econometrics,
use lqgtrack or lqg or kalman:
% (gain, A_size_row, aug_level, n_states and n_outputs below are placeholders
%  that must be defined for your system before these lines will run)
%State cost matrix:
Q1 = gain*eye(A_size_row+aug_level+1);
Q2 = gain*eye(A_size_row+aug_level);
%Control cost matrix:
%symmetric positive semi-definite state cost matrix, and R (k × k) is a symmetric positive definite control cost matrix.
R = eye(n_states,n_states); %eye(size(A));
%Cross Control-state cost:
N = 1; %eye(A_size_row,length(C));
S = eye(n_states,n_outputs);
E = eye(n_states,n_states);
[~,p] = chol(R)
CM = [Q2;N;N';R]
## -- Function File: [G, X, L] = dlqr (SYS, Q, R)
## -- Function File: [G, X, L] = dlqr (SYS, Q, R, S)
## -- Function File: [G, X, L] = dlqr (A, B, Q, R)
## -- Function File: [G, X, L] = dlqr (A, B, Q, R, S)
## -- Function File: [G, X, L] = dlqr (A, B, Q, R, [], E)
## -- Function File: [G, X, L] = dlqr (A, B, Q, R, S, E)
## Linear-quadratic regulator for discrete-time systems.
##
## *Inputs*
## SYS
## Continuous or discrete-time LTI model (p-by-m, n states).
## A
## State transition matrix of discrete-time system (n-by-n).
## B
## Input matrix of discrete-time system (n-by-m).
## Q
## State weighting matrix (n-by-n).
## R
## Input weighting matrix (m-by-m).
## S
## Optional cross term matrix (n-by-m). If S is not specified, a
## zero matrix is assumed.
## E
## Optional descriptor matrix (n-by-n). If E is not specified,
## an identity matrix is assumed.
##
## *Outputs*
## G
## State feedback matrix (m-by-n).
## X
## Unique stabilizing solution of the discrete-time Riccati
## equation (n-by-n).
## L
## Closed-loop poles (n-by-1).
##
## *Equations*
## x[k+1] = A x[k] + B u[k], x[0] = x0
##
## inf
## J(x0) = SUM (x' Q x + u' R u + 2 x' S u)
## k=0
##
## L = eig (A - B*G)
[K1,XG1,L1] = lqi(sys,Q1,R)
[G, X, L] = dlqr (A, B, Q2, R,S)
[Fr,Pr,Er]=lqrpid(sys,Q,R)
%dare provides similar to dlqr results
%QXU =
%QWV =
%QI =
%https://www.mathworks.com/help/control/getstart/design-an-lqg-servo-controller.html
%reg = lqg(sys,QXU,QWV,QI)
K_i = -min(X3)
K_x = -X3
%Closed-loop state matrix:
H = A - B*G;
p = 0.5;
%pole placement control:
Ka = acker(A, B, p)
A2 = A - B*Ka
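As a self-contained sketch of the discrete-time LQR route on the scalar plant from this thread (the weights Q and R are placeholders to be tuned):
% Discrete-time LQR for the scalar plant (placeholder weights).
A = 0.999979515574799;
B = 0.00001070633417008353561593759356585;
Q = 1;                            % state weighting
R = 0.01;                         % control weighting
[Klq, S, pcl] = dlqr(A, B, Q, R); % gain, Riccati solution, closed-loop poles
Acl_lqr = A - B*Klq;              % closed-loop state matrix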
Hope this clears up the full solution to your conundrum. For the complete solution, help me financially if you want and I will do it for you. I have been developing 3/4 of the disturbance rejection; contact me for the exact code if you are interested.

Version

R2023b
