Problems encountered in MATLAB reinforcement learning

22 views (last 30 days)
邓龙京 on 26 Apr 2024
Commented: 邓龙京 on 4 Nov 2024 at 8:40
Hi, I encountered some problems when building a model with reinforcement learning; the errors are as follows:
Error using rl.train.seriesTrainer/run
An error occurred while running the simulation for model 'D1DRL' with the following RL agent blocks:
D1DRL/RL Agent
Error in rl.train.TrainingManager/train (line 479)
run(trainer);
Error in rl.train.TrainingManager/run (line 233)
train(this);
Error in rl.agent.AbstractAgent/train (line 136)
trainingResult = run(trainMgr,checkpoint);
Caused by:
Error using rl.env.internal.reportSimulinkSimError
Subsystem block has no parameter named 'Value'
My code is:
mdl = 'D1DRL';
open_system(mdl);

% Observation and action specifications
obsInfo = rlNumericSpec([3 1]);
actInfo = rlFiniteSetSpec(linspace(-30,30,100));

% Create the Simulink environment; use a separate variable for the block
% path so it is not shadowed by the agent object created later
agentBlk = [mdl '/RL Agent'];
env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);
env.ResetFcn = @(in)localResetFcn(in);
rng(0)

% Critic network: one Q-value output per discrete action
layers = [
    featureInputLayer(obsInfo.Dimension(1))
    fullyConnectedLayer(200)
    reluLayer
    fullyConnectedLayer(200)
    reluLayer
    fullyConnectedLayer(length(actInfo.Elements))];
net = dlnetwork(layers);
critic = rlVectorQValueFunction(net,obsInfo,actInfo);

criticOpts = rlOptimizerOptions(LearnRate=1e-3,GradientThreshold=1);
agentOptions = rlDQNAgentOptions( ...
    SampleTime=0.05, ...
    CriticOptimizerOptions=criticOpts, ...
    MiniBatchSize=256, ...
    UseDoubleDQN=false);
agent = rlDQNAgent(critic,agentOptions);

trainOpts = rlTrainingOptions( ...
    "MaxEpisodes",200, ...
    "MaxStepsPerEpisode",200, ...
    "StopTrainingCriteria","AverageReward", ...
    "StopTrainingValue",400, ...
    "SaveAgentCriteria","AverageReward", ...
    "SaveAgentValue",400);
trainingStats = train(agent,env,trainOpts);

simOptions = rlSimulationOptions(MaxSteps=600);
sim(env,agent,simOptions)

function in = localResetFcn(in)
% Draw a random setpoint around 10, rejected until it lies in (0, 50)
p = 3*randn + 10;
while p <= 0 || p >= 50
    p = 3*randn + 10;
end
in = setBlockParameter(in,"D1DRL/Env/VSG控制/VSG有功控制-一次调频",Value=num2str(p));
end

Answers (1)

Satwik on 28 Aug 2024
Hi,
The error you are encountering likely stems from the incorrect use of the ‘setBlockParameter’ function. Here are the key points to address the issue:
1. Incorrect Use of ‘setBlockParameter’:
The function ‘s = setBlockParameter(s, blkpath, blkparam, paramval)’ sets the value of the block parameter ‘blkparam’ to ‘paramval’ for the block at ‘blkpath’ on a ‘Simulink.SimulationInput’ or ‘Simulation’ object ‘s’. It seems that in your case the ‘in’ argument used in ‘localResetFcn’ is not a ‘Simulink.SimulationInput’ or ‘Simulation’ object. Here is an example of how to correctly define a ‘Simulink.SimulationInput’ object and modify a parameter value:
mdl = "modelName";
% Create a SimulationInput object for this model
simin = Simulink.SimulationInput(mdl);
% Modify block parameter value
simin = setBlockParameter(simin, "modelName/Gain", "Gain", "7");
% Simulate the model
out = sim(simin);
For more details, refer to the documentation for ‘setBlockParameter’.
2. Simulink Model Consistency:
Ensure that the block paths and parameter names you use match those defined in your Simulink model exactly; verify them directly in the model to avoid discrepancies. You can determine the path of a block by selecting it in your Simulink model and entering ‘gcb’ in the MATLAB Command Window, which returns the path of the currently selected block; see the sketch below. For more information, refer to the documentation for ‘gcb’.
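For instance, after selecting the target block in the Simulink Editor, the following minimal sketch captures its path and lists the dialog parameters the block actually exposes, which lets you confirm whether a ‘Value’ parameter exists:
% Select the desired block in the model, then run:
blkPath = gcb                                      % full path of the selected block
params = get_param(blkPath, 'DialogParameters')    % parameters this block accepts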
By following these steps, you should be able to resolve the error and successfully modify block parameters in your Simulink model.
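For reference, since the error says the Subsystem block has no parameter named ‘Value’, the path in your reset function most likely points at the subsystem itself rather than the block inside it that stores the setpoint. Here is a minimal sketch of an adjusted ‘localResetFcn’, assuming the value lives in a Constant block named ‘Constant’ inside that subsystem (that inner block name is a placeholder; verify it with ‘gcb’):
function in = localResetFcn(in)
% 'in' is the Simulink.SimulationInput object supplied by the training loop
p = 3*randn + 10;
while p <= 0 || p >= 50
    p = 3*randn + 10;
end
% Target the Constant block inside the subsystem, not the subsystem itself;
% the trailing 'Constant' is an assumed block name - check it in your model
blk = "D1DRL/Env/VSG控制/VSG有功控制-一次调频/Constant";
in = setBlockParameter(in, blk, "Value", num2str(p));
end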
1 comment
邓龙京 on 4 Nov 2024 at 8:40
Thanks for your help, but I still have a question about setBlockParameter:
Error message:
Error using rl.train.seriesTrainer/run
An error occurred while running the simulation for model 'rlVSG' with the following RL agent blocks:
rlVSG/Subsystem2/RL Agent
Error in rl.train.TrainingManager/train (line 516)
run(trainer);
Error in rl.train.TrainingManager/run (line 253)
train(this);
Error in rl.agent.AbstractAgent/train (line 187)
trainingResult = run(trainMgr,checkpoint);
Caused by:
Error using rl.env.internal.reportSimulinkSimError
There was an error evaluating the reset function. See the help for the ResetFcn property for instructions on creating a valid reset function.
Error using rl.env.internal.reportSimulinkSimError
Model commands are not supported in accelerator mode.
The program runs when I delete the initialization section. Why is this happening?
