How to load a pre-trained agent model and run a simulation with it?
I want to run a simulation using a pre-trained model (agent).
How should I do this? And is there a way to check whether the trained model is usable (e.g., a display like the training manager)?
Is it possible to extract observation data from the model?
agentOptions = rlSACAgentOptions;
agentOptions.SampleTime = Ts;
agentOptions.DiscountFactor = 0.5;
agentOptions.TargetSmoothFactor = 1e-3;
agentOptions.ExperienceBufferLength = 1e6;
agentOptions.MiniBatchSize = 1024;
agentOptions.EntropyWeightOptions.TargetEntropy = -1;
agent = rlSACAgent(actor,[critic1 critic2],agentOptions); % What should I do here?
maxepisodes = 15000;
maxsteps = 1e6;
trainingOptions = rlTrainingOptions(...
'MaxEpisodes',maxepisodes,...
'MaxStepsPerEpisode',maxsteps,...
'StopOnError','on',...
'Verbose',true,...
'Plots','training-progress',...
'StopTrainingCriteria','AverageReward',...
'StopTrainingValue',Inf,...
'ScoreAveragingWindowLength',10);
%% Is this the correct way?
load("K35_cal2_joint1_60.mat","agent") % loads the trained agent into the workspace
trainingStats = train(agent,env,trainingOptions);
Answers (1)
Abhaya
6 Dec 2024
Hi Ryunosuke,
To load and simulate a pre-trained reinforcement learning agent, please follow the steps given below.
- Load the pre-trained agent data:
load('K35_cal2_joint1_60.mat', 'agent');
- Extract the actor and critic networks from the pre-trained agent using the 'getActor' and 'getCritic' functions:
actor = getActor(agent);
critics = getCritic(agent);
- Create the agent with the loaded networks and configuration:
agentOptions = rlSACAgentOptions;
% Configure agentOptions as required
agent = rlSACAgent(actor, critics, agentOptions);
- Simulate the agent using the 'rlSimulationOptions' and 'sim' functions. 'rlSimulationOptions' creates options for simulating a reinforcement learning agent within an environment:
simOptions = rlSimulationOptions('MaxSteps', 500); % Specify the maximum steps per simulation
res = sim(agent, env, simOptions);
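Before running 'sim', it can also help to confirm that the MAT-file actually contains a usable agent, which addresses the "can I use the trained model?" part of the question. A minimal sketch, assuming the file stores the trained agent in a variable named 'agent':
% List the variables stored in the MAT-file to see what it contains
whos('-file','K35_cal2_joint1_60.mat')
% Load and confirm the loaded variable is a SAC agent object
data = load('K35_cal2_joint1_60.mat');
assert(isa(data.agent,'rlSACAgent'), ...
    'Expected an rlSACAgent in the MAT-file');
If the assertion passes, the agent can be passed directly to 'sim' or 'train' without rebuilding it.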
Additionally, you can query the agent's observation specification with the 'getObservationInfo' function.
obsInfo = getObservationInfo(agent);
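Note that 'getObservationInfo' returns the observation specification (names, dimensions, limits), not the data itself. The observation data recorded during an episode is available in the output of 'sim' from the step above. A sketch, assuming the default logging behavior of 'sim':
% obsInfo describes each observation channel: name, dimension, limits
disp(obsInfo.Dimension)
% The data itself is logged by sim; res.Observation holds the
% timeseries of every observation channel recorded during the episode
obsData = res.Observation;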
For more information, please refer to the MATLAB documentation for the 'getObservationInfo' function and the 'rlSimulationOptions' function.
- https://www.mathworks.com/help/reinforcement-learning/ref/rl.env.basicgridworld.getobservationinfo.html
- https://www.mathworks.com/help/reinforcement-learning/ref/rl.option.rlsimulationoptions.html
You may also find the following MATLAB community discussion helpful.
Hope this solves the query.