Unable to run 'rlwatertank' example in R2020a

8 views (last 30 days)
Nima Mathwork on 17 Apr 2020
Answered: Stefalo Acha on 18 Feb 2023
Hello everyone
I was trying to run this example.
While I successfully ran it in R2019b, I could not successfully train the agent for this example in R2020a.
I also tried other available examples in the documentation; however, the learning plots in the Reinforcement Learning Episode Manager differed from the plots shown in the documentation.
I should mention that I followed the exact steps in the documentation and did not change the value of any parameter.
Is this some sort of bug in the Reinforcement Learning Toolbox in the R2020a release?
  2 comments
Radovan Vuletic on 8 Feb 2021
I have the same problem with R2020b:
>> open_system('rlwatertank')
No system or file called 'rlwatertank' found. 'rlwatertank' is used in:
Create Simulink Environment and Train Agent
Create Simulink Environment Using Agent in Workspace
Validate Simulink Environment
Water Tank Reinforcement Learning Environment Model.
Any official statement on this?
Basically I am interested in PMSM control with RL, but that example doesn't work either.
Mariam Kashkash on 21 Mar 2021
I also have the same problem; the rlwatertank model does not open. Did you find any solution?
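For anyone hitting the same "No system or file called 'rlwatertank'" error: the model ships as a supporting file of the documentation example rather than as a toolbox function, so it is only found after the example files have been copied into a working folder. Below is a minimal sketch of one way to do that; the identifier passed to openExample is an assumption and may differ by release (the "Open Script" button on the documentation page resolves the correct one).
% Check whether the example model is already on the MATLAB path
which rlwatertank
% If nothing is found, copy the example files (including rlwatertank.slx)
% into a working folder. NOTE: the example identifier below is an
% assumption; the "Open Script" button on the "Create Simulink Environment
% and Train Agent" page gives the correct one for your release.
openExample('rl/WatertankReinforcementLearningEnvironmentExample')
% With the supporting files in place, the model opens as expected
open_system('rlwatertank')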


Accepted Answer

Emmanouil Tzorakoleftherakis on 21 Apr 2020
Hi Nima,
This is the plot I got when running the water tank example in R2020a:
While this is not exactly the same as the one shown in the documentation, training still converges.
A couple of reasons why the visual is not the same as in R2019b:
  1. We recently started using automatic differentiation under the hood, and while gradient values are still close, there are small numerical differences that lead to a different optimization path.
  2. Each release also brings further optimization improvements like the one above, which affect training results.
Note that training should still converge. Hope this helps.
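One way to confirm convergence on your side, as a minimal sketch that assumes agent, env, and trainOpts are defined exactly as in the documented water tank example:
rng(0)                                      % fix the seed; results are reproducible within a release
trainingStats = train(agent,env,trainOpts);
plot(trainingStats.EpisodeReward)           % per-episode reward
hold on
plot(trainingStats.AverageReward)           % moving average used by the stopping criterion
legend('Episode reward','Average reward')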
  1 comment
Reinforcement Learning on 21 Mar 2021
@Emmanouil Tzorakoleftherakis Why won't the specifications from the water tank model apply to other transfer functions? I keep getting zero for any system other than the water tank model (see my last two questions).


More Answers (1)

Stefalo Acha on 18 Feb 2023
Hello everyone
I was trying to run the code below in R2022b but keep getting the error "No system or file called 'rlAreaCoverage'" at line 15 of the script, mdl = "rlAreaCoverage";
Please, I would appreciate any help. Thanks.
% Define the obstacle locations within the grid using a matrix of indices
clc;
clear all;
close all;
obsMat = [4 3; 5 3; 6 3; 7 3; 8 3; 9 3; 5 11; 6 11; 7 11; 8 11; 5 12; 6 12; 7 12; 8 12];
% Initialize the drones' starting positions
sA0 = [2 2];
sB0 = [11 4];
sC0 = [3 12];
s0 = [sA0; sB0; sC0];
%specify sample time
Ts = 0.1;
Tf = 100;
maxsteps = ceil(Tf/Ts);
mdl = "rlAreaCoverage";
open_system(mdl)
% Define observation and action specifications
obsSize = [12 12 4];
oinfo = rlNumericSpec(obsSize);
oinfo.Name = 'observations';
numAct = 5; % five discrete actions, as in the documented area coverage example
ainfo = rlFiniteSetSpec({1,2,3,4,5});
ainfo.Name = 'action';
blks = mdl + ["/Agent A (Red)","/Agent B (Green)","/Agent C (Blue)"];
env = rlSimulinkEnv(mdl,blks,{oinfo,oinfo,oinfo},{ainfo,ainfo,ainfo});
env.ResetFcn = @(in) resetMap(in, obsMat);
rng(0)
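% Build one actor-critic pair per agent (agents A, B, and C)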
for idx = 1:3
% Create actor deep neural network.
actorNetWork = [
imageInputLayer(obsSize,'Normalization','none','Name','observations')
convolution2dLayer(8,16,'Name','conv1','Stride',1,'Padding',1,'WeightsInitializer','he')
reluLayer('Name','relu1')
convolution2dLayer(4,8,'Name','conv2','Stride',1,'Padding','same','WeightsInitializer','he')
reluLayer('Name','relu2')
fullyConnectedLayer(256,'Name','fc1','WeightsInitializer','he')
reluLayer('Name','relu3')
fullyConnectedLayer(128,'Name','fc2','WeightsInitializer','he')
reluLayer('Name','relu4')
fullyConnectedLayer(64,'Name','fc3','WeightsInitializer','he')
reluLayer('Name','relu5')
fullyConnectedLayer(numAct,'Name','output')
softmaxLayer('Name','action')];
actorNetWork = dlnetwork(actorNetWork);
% Create critic deep neural network.
criticNetwork = [
imageInputLayer(obsSize,'Normalization','none','Name','observations')
convolution2dLayer(8,16,'Name','conv1','Stride',1,'Padding',1,'WeightsInitializer','he')
reluLayer('Name','relu1')
convolution2dLayer(4,8,'Name','conv2','Stride',1,'Padding','same','WeightsInitializer','he')
reluLayer('Name','relu2')
fullyConnectedLayer(256,'Name','fc1','WeightsInitializer','he')
reluLayer('Name','relu3')
fullyConnectedLayer(128,'Name','fc2','WeightsInitializer','he')
reluLayer('Name','relu4')
fullyConnectedLayer(64,'Name','fc3','WeightsInitializer','he')
reluLayer('Name','relu5')
fullyConnectedLayer(1,'Name','output')];
criticNetwork = dlnetwork(criticNetwork);
% create actor and critic
actor(idx) = rlDiscreteCategoricalActor(actorNetWork,oinfo,ainfo); %#ok<*SAGROW>
critic(idx) = rlValueFunction(criticNetwork,oinfo);
end
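% Optimizer options shared by the actors and critics of all three agents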
actorOpts = rlOptimizerOptions('LearnRate',1e-4,'GradientThreshold',1);
criticOpts = rlOptimizerOptions('LearnRate',1e-4,'GradientThreshold',1);
opt = rlPPOAgentOptions(...
'ActorOptimizerOptions',actorOpts,...
'CriticOptimizerOptions',criticOpts,...
'ExperienceHorizon',128,...
'ClipFactor',0.2,...
'EntropyLossWeight',0.01,...
'MiniBatchSize',64,...
'NumEpoch',3,...
'AdvantageEstimateMethod','gae',...
'GAEFactor',0.95,...
'SampleTime',Ts,...
'DiscountFactor',0.995);
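% Create one PPO agent for each Simulink agent block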
agentA = rlPPOAgent(actor(1),critic(1),opt);
agentB = rlPPOAgent(actor(2),critic(2),opt);
agentC = rlPPOAgent(actor(3),critic(3),opt);
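% Multi-agent training options: decentralized learning, stop once the average reward reaches 80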
trainOpts = rlMultiAgentTrainingOptions(...
"AgentGroups","auto",...
"LearningStrategy","decentralized",...
'MaxEpisodes',1000,...
'MaxStepsPerEpisode',maxsteps,...
'Plots','training-progress',...
'ScoreAveragingWindowLength',100,...
'StopTrainingCriteria','AverageReward',...
'StopTrainingValue',80);
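% Set doTraining to true to train from scratch; otherwise load pretrained agents from the MAT-file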
doTraining = false;
if doTraining
result = train([agentA,agentB,agentC],env,trainOpts);
else
load('rlAreaCoverageAgents.mat');
end
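As with the water tank model above, the "No system or file called 'rlAreaCoverage'" error usually means the example's supporting files (rlAreaCoverage.slx, resetMap.m, rlAreaCoverageAgents.mat) have not been copied into the working folder; they belong to the documentation example, not to the toolbox path. A hedged sketch of a guard that could go before open_system; the example identifier is an assumption, and the "Open Script" button on the "Train Multiple Agents for Area Coverage" page gives the correct one for your release.
% Fetch the example's supporting files if the model is not on the path.
% NOTE: the identifier passed to openExample is an assumption and may vary by release.
if isempty(which('rlAreaCoverage'))
    openExample('rl/TrainMultipleAgentsForAreaCoverageExample')
end
open_system('rlAreaCoverage')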

Release: R2020a
