
The reward gets stuck on a single value during training or randomly fluctuates (Reinforcement Learning)

28 views (last 30 days)
I am training a reinforcement learning system, and the reward plot shows stretches where the reward gets stuck on a single value and does not change. This doesn't look normal, especially when compared with the shipped examples (Biped Robot, etc.). I suspect that some rlDDPGAgentOptions settings are responsible, but I have tried changing every setting I could find, and even after several thousand episodes the system does not learn. What could cause the reward plot to behave this way during training?

Accepted Answer

Ari Biswas on 5 May 2020
It could mean that the training is stuck in a local minimum. You can try a few things:
1. Change the OU noise options to favor more exploration so that the robot can explore more states and get new rewards.
2. Design a different reward function that is not so dependent on sparse rewards (see the sketch at the end of this answer). From the graph (the flat lines) it looks like you have a sparse reward for a state that the agent is continuously visiting.
In most cases, designing a better reward function will improve training. That being said, 350 episodes might be too early to expect good results. I would let it run for at least a few thousand episodes before concluding that something needs to change.
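To illustrate point 2, here is a minimal sketch of what a shaped (dense) reward could look like inside a custom environment step function. The names shapedReward, distanceToGoal, action and isDone are hypothetical placeholders for this example, not taken from the original model:

% Sketch of a shaped reward for a custom environment step function.
% distanceToGoal, action and isDone are placeholder names for this example.
function reward = shapedReward(distanceToGoal, action, isDone)
    reward = -0.1*distanceToGoal ...   % dense term: feedback on every step
             - 0.01*sum(action.^2);    % small penalty on control effort
    if isDone && distanceToGoal < 0.05
        reward = reward + 10;          % keep the sparse success bonus, but it is no longer the only signal
    end
end

The dense distance and effort terms give the agent feedback on every step, so learning does not depend solely on reaching the sparse success bonus.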
4 comments
Abd Al-Rahman Al-Remal on 12 June 2021
Hi,
When you say to change the noise options to favour more exploration, how would this be implemented? That is, which parameters should be changed, and in what manner?
My case is slightly different from the OP's, however: my agent stays at the same reward value consistently (though I've never run it for more than 100 episodes or so).
Many thanks!
Ari Biswas on 13 June 2021
Edited: Ari Biswas on 13 June 2021
For a DDPG agent you can tune the StandardDeviation and StandardDeviationDecayRate parameters. Please see the documentation for instructions.
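As a minimal sketch (not from the thread itself), these options could be set on the agent options object before creating the agent. The property names below are those used from R2021a onward (earlier releases expose Variance and VarianceDecayRate instead), the numeric values are only illustrative, and Ts, actor and critic are assumed to already exist:

% Favor more exploration by keeping the OU noise larger for longer.
% A common rule of thumb is a standard deviation of roughly 1%-10% of the action range.
agentOpts = rlDDPGAgentOptions('SampleTime', Ts);          % Ts = environment sample time
agentOpts.NoiseOptions.StandardDeviation = 0.3;            % larger -> more exploration
agentOpts.NoiseOptions.StandardDeviationDecayRate = 1e-5;  % smaller -> noise decays more slowly
agent = rlDDPGAgent(actor, critic, agentOpts);             % actor/critic defined elsewhere

A larger StandardDeviation together with a smaller StandardDeviationDecayRate keeps the exploration noise significant for longer, which can help the agent move away from a flat-reward region.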


More Answers (0)
