Reinforcement Learning Toolbox - Change Action Space

(I'm using a DQN agent in a custom template environment.)
Is there a way to change the action space from which the action is chosen, based on the current state, during an episode?
For example, suppose I have an agent that moves around a room by choosing a direction of motion. When it reaches an edge of the room, I would like it to no longer be able to choose the direction that would take it out of the room, thus reducing the action space.
Basically, I want to reduce the action space to handle illegal moves.

Accepted Answer

Emmanouil Tzorakoleftherakis on 23 Jul 2019
Hi Federico,
Unfortunately, the action space is fixed once created. To reduce the number of times an action is selected, you could penalize it in the reward signal when certain criteria are met, for example as sketched below.
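A minimal sketch of that penalty approach, written as the step function of a function-based custom environment (the kind used with rlFunctionEnv); the 1-D room geometry, the position bounds, and the penalty magnitude are illustrative assumptions, not values from your setup:

    function [NextObs, Reward, IsDone, LoggedSignals] = myStepFunction(Action, LoggedSignals)
        % Hypothetical 1-D room spanning positions 0 to 10; Action is -1 or +1.
        newPos = LoggedSignals.State + Action;
        Reward = -1;                          % small per-step cost
        if newPos < 0 || newPos > 10
            Reward = Reward - 100;            % heavy penalty for attempting an illegal move
            newPos = LoggedSignals.State;     % keep the agent inside the room
        end
        LoggedSignals.State = newPos;
        NextObs = newPos;
        IsDone = false;                       % termination logic omitted for brevity
    end

The illegal action stays available to the agent, but the large negative reward should discourage it over the course of training.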
I hope this helps.
2 comments
Federico Sello on 23 Jul 2019
Hi Emmanouil,
thank you for your answer. Unfortunately, every attempt so far to give a negative reward to the action I don't want the agent to take has failed: after some initial time, the agent still chooses to perform that action. I don't really know how to explain this behaviour. I've tried changing the agent options, the training options, the reward function, and the neural network architecture, but nothing worked. I suppose I should ask a separate question about that; anyway, thanks again for the info.
Emmanouil Tzorakoleftherakis on 24 Jul 2019
In general, DQN tends to choose optimistically estimated values more frequently due to maximization bias. Some additional things that may help:
1) Make sure you are using double DQN (check the DQN agent options) to reduce overestimation.
2) Play with the exploration settings. After exploration has decayed considerably, the agent tends to choose whatever is best according to its current value estimates, which may not have converged to the true values. Decreasing the decay rate may help.
Both settings are sketched below.
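A minimal sketch of both settings using rlDQNAgentOptions (R2019a syntax); the specific epsilon values are illustrative assumptions, not recommendations:

    agentOpts = rlDQNAgentOptions;
    agentOpts.UseDoubleDQN = true;                           % 1) double DQN to reduce overestimation
    agentOpts.EpsilonGreedyExploration.Epsilon = 1;          % start fully exploratory
    agentOpts.EpsilonGreedyExploration.EpsilonMin = 0.05;    % keep a floor on exploration
    agentOpts.EpsilonGreedyExploration.EpsilonDecay = 1e-4;  % 2) smaller decay = longer exploration

Pass agentOpts to rlDQNAgent when constructing the agent.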



Version

R2019a
