Why does my critic agent work with DQN but not with Q-learning or SARSA agents in the exact same environment?

After calling the train function with rlQAgent, I get this error:
Error using rl.policy.AbstractPolicy/step (line 242)
Invalid input argument type or size such as observation, reward, isdone or loggedSignals.
Error in rl.env.MATLABEnvironment/simLoop (line 258)
action = step(policy,observation,reward,isdone);
Error in rl.env.MATLABEnvironment/simWithPolicyImpl (line 106)
[expcell{simCount},epinfo,siminfos{simCount}] = simLoop(env,policy,opts,simCount,usePCT);
Error in rl.env.AbstractEnv/simWithPolicy (line 83)
[experiences,varargout{1:(nargout-1)}] = simWithPolicyImpl(this,policy,opts,varargin{:});
Error in rl.task.SeriesTrainTask/runImpl (line 33)
[varargout{1},varargout{2}] = simWithPolicy(this.Env,this.Agent,simOpts);
Error in rl.task.Task/run (line 21)
[varargout{1:nargout}] = runImpl(this);
Error in rl.task.TaskSpec/internal_run (line 166)
[varargout{1:nargout}] = run(task);
Error in rl.task.TaskSpec/runDirect (line 170)
[this.Outputs{1:getNumOutputs(this)}] = internal_run(this);
Error in rl.task.TaskSpec/runScalarTask (line 194)
runDirect(this);
Error in rl.task.TaskSpec/run (line 69)
runScalarTask(task);
Error in rl.train.SeriesTrainer/run (line 24)
run(seriestaskspec);
Error in rl.train.TrainingManager/train (line 424)
run(trainer);
Error in rl.train.TrainingManager/run (line 215)
train(this);
Error in rl.agent.AbstractAgent/train (line 77)
TrainingStatistics = run(trainMgr);
Error in QNN_train_emg_RLmat_ToolboxV1 (line 132)
trainStats = train(agent,env,opt);
Caused by:
Error using rl.representation.rlAbstractRepresentation/validateInputData (line 509)
Input data must be a cell array of compatible dimensions with observation and action info specifications.
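The "Caused by" message suggests that the observation/action data handed to the critic does not match the info specifications. As a minimal sketch (the sizes below are placeholders, not my real setup), this is roughly the shape agreement the toolbox checks:

obsInfo = rlNumericSpec([4 1]);       % environment must return a 4x1 double
actInfo = rlFiniteSetSpec([1 2 3]);   % every action must come from this set

% In a custom environment's step function, the returned observation must
% match the size declared in obsInfo exactly:
%   Observation = rand(4,1);   % OK: matches [4 1]
%   Observation = rand(1,4);   % mismatched size fails validateInputData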
%-----------------------------------------------------------------------------------------------------------------------------
There is an error in the function AbstractPolicy.m, at the line that calls the step implementation with the generated experience:
line 237:  action = stepImpl(this,exp);   % the error occurs here
I do not have this issue when using agent = rlDQNAgent(NeuralN);
I only have it when using agent = rlQAgent(NeuralN); or agent = rlSARSAAgent(NeuralN);
It seems that the ExperienceBuffer instance is missing from the variable called "this" when I use agent = rlQAgent(NeuralN); instead of agent = rlDQNAgent(NeuralN);
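For context, here is a simplified version of how the critic and agents are built (layer sizes are placeholders, obsInfo/actInfo are the specs sketched above, and NeuralN in my calls above is this kind of rlQValueRepresentation):

layers = [
    featureInputLayer(4,'Name','state')
    fullyConnectedLayer(24)
    reluLayer
    fullyConnectedLayer(3,'Name','qvals')];   % one Q-value output per action

critic = rlQValueRepresentation(layerGraph(layers),obsInfo,actInfo, ...
    'Observation',{'state'});

agent = rlDQNAgent(critic);     % trains without errors
agent = rlQAgent(critic);       % throws the error above
agent = rlSARSAAgent(critic);   % throws the same error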
Any suggestions?

Answers (0)

Version

R2021a
