Reinforcement learning / Experience buffer / Simulink

3 views (last 30 days)
hieu nguyen on 5 May 2023
Commented: hieu nguyen on 6 May 2023
I am trying to create an experience buffer for my DDPG algorithm in Simulink. However, I can't find any way or block to create an experience buffer that stores (state, action, reward, next state) tuples in Simulink.
I have tried to create an experience buffer with rl.util.ExperienceBuffer inside a MATLAB Function block; here are my code and the error.
I hope you can help me deal with this problem. Thank you very much!

Answers (1)

Emmanouil Tzorakoleftherakis on 5 May 2023
Why do you want to create your own buffer? If you are using the built-in DDPG agent, the buffer is created automatically for you. In any case, in R2022a we added a feature that allows you to create your own experience buffer (see here). You can potentially use this to manually modify the experience buffer of a built-in agent.
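The feature referred to here is presumably the replay memory object (rlReplayMemory) in Reinforcement Learning Toolbox. The snippet below is a minimal sketch of creating such a buffer and storing/sampling experiences at the MATLAB level; the observation and action dimensions and the capacity are placeholder assumptions, and you should check your release's documentation for availability and for the exact format of the structure returned by sample.

obsInfo = rlNumericSpec([4 1]);                     % placeholder observation spec
actInfo = rlNumericSpec([1 1]);                     % placeholder action spec
buffer  = rlReplayMemory(obsInfo, actInfo, 10000);  % buffer holding up to 10000 experiences

% Append (state, action, reward, next state, done) tuples
for k = 1:64
    exp.Observation     = {rand(4,1)};
    exp.Action          = {rand(1,1)};
    exp.Reward          = rand;
    exp.NextObservation = {rand(4,1)};
    exp.IsDone          = 0;
    append(buffer, exp);
end

% Sample a mini-batch of stored experiences (see the sample documentation
% for the exact layout of the returned data)
miniBatch = sample(buffer, 32);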
  1 comment
hieu nguyen on 6 May 2023
Actually, I want to create my own DDPG agent with my own neural network structure and my own optimizer algorithm in Simulink. I have tried to do that with the built-in DDPG agent and with a custom agent, but it is quite difficult, so I decided to build everything myself in Simulink with MATLAB Function blocks. Now, the only problem I have is creating an experience buffer to store and batch data.
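For a fully hand-built agent in Simulink, one option is to implement the buffer directly in a MATLAB Function block using persistent variables. The following is a minimal sketch of a fixed-size circular buffer with uniform random sampling; the function name, signal dimensions, capacity, and mini-batch size are all assumptions to adapt to your model, and the done flag is omitted since only (state, action, reward, next state) is stored here.

function [batchObs, batchAct, batchRew, batchNextObs, isReady] = ...
    experienceBuffer(obs, act, rew, nextObs, storeFlag, sampleFlag)
% Hypothetical MATLAB Function block implementing a circular experience buffer.

OBS_DIM  = 4;     % assumed observation dimension
ACT_DIM  = 1;     % assumed action dimension
CAPACITY = 1000;  % maximum number of stored experiences
BATCH    = 32;    % mini-batch size

persistent bufObs bufAct bufRew bufNext writeIdx count
if isempty(bufObs)
    bufObs   = zeros(OBS_DIM, CAPACITY);
    bufAct   = zeros(ACT_DIM, CAPACITY);
    bufRew   = zeros(1, CAPACITY);
    bufNext  = zeros(OBS_DIM, CAPACITY);
    writeIdx = 0;
    count    = 0;
end

% Store the incoming experience tuple when requested, overwriting the
% oldest entry once the buffer is full
if storeFlag
    writeIdx = mod(writeIdx, CAPACITY) + 1;
    bufObs(:, writeIdx)  = obs;
    bufAct(:, writeIdx)  = act;
    bufRew(writeIdx)     = rew;
    bufNext(:, writeIdx) = nextObs;
    count = min(count + 1, CAPACITY);
end

% Sample a random mini-batch (with replacement) once enough data is stored
isReady      = count >= BATCH;
batchObs     = zeros(OBS_DIM, BATCH);
batchAct     = zeros(ACT_DIM, BATCH);
batchRew     = zeros(1, BATCH);
batchNextObs = zeros(OBS_DIM, BATCH);
if sampleFlag && isReady
    idx = randi(count, 1, BATCH);
    batchObs     = bufObs(:, idx);
    batchAct     = bufAct(:, idx);
    batchRew     = bufRew(idx);
    batchNextObs = bufNext(:, idx);
end

The block then outputs a mini-batch on demand (for example, triggered once per training step), which your separate actor/critic update logic can consume.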
