I want to convert the PI controller in this example to a PID controller. Can my network be changed to something like this?
嘻嘻
on 18 Oct 2023
Commented: 轩 on 4 Jan 2024
Hello all,
The following example shows PI controller design using the TD3 reinforcement learning algorithm. I want to convert the PI controller in this example to a PID controller, so can my actor network be changed to something like the following?
actorNetwork = [
    featureInputLayer(numObservations3,'Normalization','none','Name','state')
    fullyConnectedPILayer(randi([-50,50],1,3),'Action')];   % three weights: P, I, and D gains
1 Comment
轩
on 4 Jan 2024
Hello, I am currently trying the same method. Could you leave contact information (an email or WeChat) so we can discuss?
Accepted Answer
Emmanouil Tzorakoleftherakis
on 23 Oct 2023
That should work: you are essentially adding one more weight to the PI layer for the D term. Potential issues may arise when you provide the error derivative as an observation (it may be hard to approximate due to the trial-and-error nature of RL), but give it a try.
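For reference, here is a minimal sketch of what the three-weight actor could look like, assuming the observation vector is extended to [error; integral of error; derivative of error] and assuming the fullyConnectedPILayer supporting file from the example accepts a 1-by-3 weight vector; the spec sizes and the rlContinuousDeterministicActor call are illustrative additions, not code from the original example:

% Minimal sketch: three observations (e, its integral, and its derivative)
% feed a single layer whose three weights act as the P, I, and D gains.
numObservations3 = 3;                          % [e; int(e); de/dt]
obsInfo = rlNumericSpec([numObservations3 1]);
actInfo = rlNumericSpec([1 1]);

actorNetwork = [
    featureInputLayer(numObservations3,'Normalization','none','Name','state')
    fullyConnectedPILayer(randi([-50,50],1,3),'Action')];

% Create the deterministic actor for the TD3 agent (older releases use
% rlDeterministicActorRepresentation instead).
actor = rlContinuousDeterministicActor(actorNetwork,obsInfo,actInfo);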
2 Comments
Emmanouil Tzorakoleftherakis
on 23 Oct 2023
It looks like it's just a couple of values close to zero. RL solves an optimization problem under the hood, so the better the initial guess, the more likely you are to get a good solution.
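As a sketch of that point about the initial guess, the layer could be seeded with baseline gains (for example from a previous hand-tuned or pidtune design) instead of random integers; the gain values below are hypothetical:

% Hypothetical baseline gains used as the initial guess for the RL optimization;
% starting near a known-reasonable controller tends to help the search.
Kp0 = 2;  Ki0 = 0.5;  Kd0 = 0.1;

actorNetwork = [
    featureInputLayer(3,'Normalization','none','Name','state')
    fullyConnectedPILayer([Kp0 Ki0 Kd0],'Action')];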
More Answers (0)