How to extract the trained actor network from a trained agent in a MATLAB environment? (Reinforcement Learning Toolbox)
wujianfa93
on 2 Jun 2020
Answered: Anh Tran
on 5 Jun 2020
Once the agent has been successfully trained with DDPG in a MATLAB environment, the MathWorks tutorial says to verify it by running the following code:
simOptions = rlSimulationOptions('MaxSteps',50);
experience = sim(env,agent,simOptions);
Unfortunately, this is not flexible enough for my program. I would like to extract the trained actor network from the trained agent so that I can obtain actions by feeding the observation vector directly to the actor network at each sampling step of my robot program, for more complex tasks. However, I can't seem to find the trained actor network among the variables in the workspace.
Is there a way to extract the trained actor network? If so, how to call the extracted actor network (e.g., what are the I/O formats of the network)?
Accepted Answer
Anh Tran
on 5 Jun 2020
You can extract the actor (i.e., the policy) from the trained agent with getActor. Then you can use the actor to predict the best action for a given observation with getAction.
% get the actor representation from the trained agent
actor = getActor(agent);
% the actor predicts an action given an observation
action = getAction(actor, observation);
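On the I/O format question: in the Reinforcement Learning Toolbox, getAction typically expects the observation wrapped in a cell array (one cell per observation channel) and may return the action wrapped the same way, though the exact container varies by release. A minimal sketch, assuming a trained DDPG agent named agent and a hypothetical 4-element observation vector (match the size to your environment's obsInfo.Dimension):

```matlab
% extract the trained actor (assumes 'agent' is a trained rlDDPGAgent)
actor = getActor(agent);

% hypothetical 4x1 observation vector -- use the dimensions
% defined by your environment's observation specification
obs = [0.1; -0.2; 0.0; 0.5];

% wrap the observation in a cell array, one cell per observation channel
actionOut = getAction(actor, {obs});

% depending on the release, the action may come back in a cell array
if iscell(actionOut)
    action = actionOut{1};   % numeric action vector for the robot controller
else
    action = actionOut;
end
```

You can then call this inside your own control loop at every sampling step instead of using sim.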