Deep Reinforcement Learning Reward Function for Reference Tracking

2 views (last 30 days)
PB75 on 28 Apr 2021
Hi All,
I would like some advice on designing a reward function for training an RL agent to achieve good reference tracking.
My environment is a PMLSM, which I have now simplified to a simple second-order system so that I can debug the RL code and tune the parameters in the reward function.
I have been using the PMSM TD3 agent example, the model ‘mcb_pmsm_foc_sim_RL’ with the script ‘TrainTD3AgentForPMSMControlExample’.
My second-order system uses one action (Vq) and three observations (iq, iq_Error and iq_Error_Int), with a simple reward function (sketched below). Training the agent for 2000 episodes did get iq to follow iq_ref; however, the output behaviour was underdamped and oscillatory.
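By “simple reward function” I mean something of roughly this quadratic error-penalty shape (an illustrative sketch only; the function name and weights are placeholders, not my actual tuned values):

function r = simpleTrackingReward(iq_Error, Vq)
    Q = 10;    % weight on squared tracking error (placeholder value)
    R = 0.01;  % weight on control effort (placeholder value)
    % Penalise squared tracking error and control effort at each step
    r = -(Q*iq_Error^2 + R*Vq^2);
end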
[Plots: iq_ref vs. iq actual; action Vq; reward]
Can this underdamped step response be tuned via the reward function, or by tuning any of the TD3 agent hyperparameters?
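For example, would adding penalty terms on the action rate and error rate help damp the oscillation? A hypothetical shaping I am considering (names, weights and structure are my own assumptions, untested):

function r = dampedTrackingReward(iq_Error, iq_Error_prev, Vq, Vq_prev, Ts)
    Q  = 10;    % tracking-error weight (placeholder)
    Rd = 0.1;   % action-rate weight (placeholder)
    Sd = 1;     % error-rate weight (placeholder)
    dVq = (Vq - Vq_prev)/Ts;              % rate of change of the action
    dE  = (iq_Error - iq_Error_prev)/Ts;  % rate of change of the error
    % Penalising dVq and dE should discourage rapid reversals,
    % i.e. an underdamped, oscillatory response
    r = -(Q*iq_Error^2 + Rd*dVq^2 + Sd*dE^2);
end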
Also, any pointers to literature on alternative reward functions for good reference tracking, both for iq current control and for speed/position control of a PMSM?
My research so far has only highlighted a Gaussian-shaped reward of the form r = a·exp(−e²/(2σ²)), where a is a constant, σ is a standard deviation and e is the tracking error. Any suggestions or comments?
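For reference, a minimal MATLAB sketch of that Gaussian reward (e is the tracking error; a and sigma would need tuning for the signal range):

function r = gaussianReward(e, a, sigma)
    % Peaks at r = a when the tracking error is zero and decays
    % smoothly to zero as |e| grows beyond a few sigma
    r = a*exp(-e^2/(2*sigma^2));
end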
Please comment or point me to any literature on using a TD3 RL agent for PMSM reference tracking, and on reference-tracking reward functions.
Many thanks
Patrick

Answers (0)
