"Error using horzcat" when using the Reinforcement Learning Toolbox to create an environment object for a Simulink model
Something went wrong when I tried to use the Reinforcement Learning Toolbox to create an environment object for a Simulink model.
First I used the following syntax to load the action and observation specifications into the workspace:
open_system('rlSimplePendulumModel')
env = rlPredefinedEnv('SimplePendulumModel-Discrete');
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
Then I created a DQN agent following the syntax in the Reinforcement Learning Toolbox documentation, deleted the env object from the workspace, and saved rlSimplePendulumModel.slx to the current folder.
I really want to know how to create an environment object for the Simulink model, because when I executed this syntax:
env = rlSimulinkEnv(rlSimplePendulumModel,[rlSimplePendulumModel '/RL Agent'],obsInfo,actInfo);
It returned an error message saying: Error using horzcat. Dimensions of arrays being concatenated are not consistent.
I couldn't figure out what mistake I had made. Should I consider this a bug in the Reinforcement Learning Toolbox?
I also tried other predefined environments provided by the toolbox and got the same error.
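For context (this reasoning is not from the thread): horzcat raises this particular message when the arrays being joined do not all have the same number of rows. `[mdl '/RL Agent']` only works when `mdl` is a one-row char vector; if `mdl` is anything else, the square-bracket concatenation can fail before rlSimulinkEnv is even called. A minimal sketch with made-up values:

```matlab
% A char vector has one row, so joining it with another 1-row char works:
mdl = 'rlSimplePendulumModel';   % 1x21 char
blk = [mdl '/RL Agent'];         % OK: 1x30 char, 'rlSimplePendulumModel/RL Agent'

% But if the left operand has a different number of rows, horzcat fails:
bad = ['ab'; 'cd'];              % 2x2 char array
% [bad '/RL Agent']              % Error using horzcat: Dimensions of arrays
%                                % being concatenated are not consistent.
```

This is why the error points at concatenation rather than at the environment constructor itself.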
0 comments
Answers (1)
Harsha Priya Daggubati
on 18 Mar 2020
Hi,
Can you try saving your model name in a variable and using that variable instead of the bare model name? I guess the error comes from the concatenation being done here:
[rlSimplePendulumModel '/RL Agent']
Try using it this way. Note that open_system does not return the model name, so store the name in a char variable first:
mdl = 'rlSimplePendulumModel';
open_system(mdl)
env = rlSimulinkEnv(mdl,[mdl '/RL Agent'],obsInfo,actInfo);
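As a variation (not from the original answer): in releases that support string scalars, the + operator concatenates strings of any length and sidesteps the char-row pitfall entirely. This sketch assumes your release of rlSimulinkEnv accepts string inputs:

```matlab
mdl = "rlSimplePendulumModel";   % string scalar, not a char vector
blk = mdl + "/RL Agent";         % string concatenation, no horzcat involved
open_system(mdl)
env = rlSimulinkEnv(mdl, blk, obsInfo, actInfo);
```

Either way, the key point is the same: the first two arguments must name the model and the agent block as text, not as an unquoted identifier or a handle.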
Hope this helps!
0 comments