How to reduce noise from a SAC RL agent?

Oleg Makarov on 11 Oct 2023
Edited: Emmanouil Tzorakoleftherakis on 13 Oct 2023
Good afternoon!
I trained a SAC agent to balance a pendulum (a normal one, not inverted) that is part of a crane system. The results are adequate, but the actions from the agent include not only the mean but also the standard-deviation noise the agent used during training. As a result, the response of my pendulum is not stable enough. The agent acts with noise, but I need only the mean part of these actions, without the noise. How can I do that? Is there any way to exclude the standard deviation from the actions after training?
In the conclusion of the example, the response of the ball is very smooth, without any shaking. How did they do that?

Answers (2)

Emmanouil Tzorakoleftherakis on 11 Oct 2023
Edited: Emmanouil Tzorakoleftherakis on 13 Oct 2023
Hi,
There is an agent option that does exactly what you want and makes the agent output the mean action values after training:
sacagent.UseExplorationPolicy = false;
You also need to keep in mind that there is no guarantee that your action signal will be smooth, even if you only use the mean value; that is inherent to how the SAC algorithm works. You may need to train further to reduce this variance, or you could add a penalty term to the reward to make the action signal smoother. Another option is DDPG, which is deterministic in nature and generates smoother actions.
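To illustrate the reward-penalty idea, here is a minimal sketch of a reward that penalizes changes in the action between steps; the signal names and weights below are assumptions for illustration, not values from the original crane model:

```matlab
% Sketch of a reward with an action-smoothness penalty (assumed weights).
% theta : pendulum angle error at the current step
% u     : current action, uPrev : action at the previous step
function r = pendulumReward(theta, u, uPrev)
    wAngle = 1.0;   % weight on the tracking error (assumed)
    wRate  = 0.05;  % weight on the action-rate penalty (assumed)
    r = -wAngle*theta^2 - wRate*(u - uPrev)^2;
end
```

The (u - uPrev)^2 term makes rapid back-and-forth control moves costly, so the agent is pushed toward smoother action signals during training; tune the weights to your own model.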
The example you shared in your question only shows a plot of the ball position; the underlying SAC actions could still be noisy or discontinuous.
Hope this helps

Oleg Makarov on 13 Oct 2023
Edited: Oleg Makarov on 13 Oct 2023
Thank you for the answer. I tried the UseExplorationPolicy = 0 option, and nothing changed. The agent still applies noisy actions within the action limits prescribed during agent creation (-20, 20). Please see the attached figure.
1 comment
Emmanouil Tzorakoleftherakis on 13 Oct 2023
A couple of suggestions:
Once you set UseExplorationPolicy = 0, try running getAction(agent, obs) a couple of times with the same observation input to make sure you get the same action output each time. If you do, you are indeed no longer sampling your actions.
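For instance, a quick determinism check might look like this; the observation vector below is a placeholder, so substitute an observation that matches your own environment's observation specification:

```matlab
% Check that the policy is deterministic: same observation -> same action.
sacagent.UseExplorationPolicy = false;  % use the mean (greedy) policy
obs = {[0.1; 0; 0.05; 0]};              % placeholder observation (cell array, as getAction expects)
a1 = getAction(sacagent, obs);
a2 = getAction(sacagent, obs);
isequal(a1, a2)                         % true once actions are no longer sampled
```

If the two actions differ, the agent is still sampling from the policy distribution and the exploration setting has not taken effect.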
As I mentioned in my answer above, there is no guarantee that the action signal will be smooth even when you only use the mean value; that is inherent to SAC. Training further, penalizing action changes in the reward, or switching to a deterministic algorithm such as DDPG are the options to consider. Also note that the example only plots the ball position; the underlying SAC actions could still be noisy or discontinuous.


