Problems with the rlNumericSpec action space and how to modify it in a custom reinforcement learning environment
I have defined the action space for the custom RL environment class using the code below:
function this = MyEnvironment()
numObs = 79;
ObservationInfo = rlNumericSpec([numObs 1]);
numAct = 3;
ActionInfo = rlNumericSpec([numAct 1], LowerLimit=-1, UpperLimit=1);
this = this@rl.env.MATLABEnvironment(ObservationInfo, ActionInfo);
end
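For reference, the same spec objects can be created and inspected at the command line before wiring them into the class; this is a minimal sketch, and note that the documented name-value options are LowerLimit and UpperLimit (capital L):

```matlab
% Observation: 79x1 continuous vector (unbounded)
ObservationInfo = rlNumericSpec([79 1]);

% Action: 3x1 continuous vector bounded to [-1, 1]
ActionInfo = rlNumericSpec([3 1], LowerLimit=-1, UpperLimit=1);

% Inspect the resulting spec properties
disp(ActionInfo.Dimension)    % [3 1]
disp(ActionInfo.LowerLimit)   % -1
disp(ActionInfo.UpperLimit)   % 1
```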
This clearly states that the action space is a 3x1 vector in a continuous space with the range [-1, 1]. Inside the step function, I use this code:
[delVx, delVy, delVz] = getdelV(Action);
where getdelV() is defined in the class as a helper method:
function [delVx, delVy, delVz] = getdelv(this, action)
Vmag = action(1)*this.Max_Vmag;
theta = action(2)*pi/2 + pi;
phi = action(3)*pi + pi;
delVx = Vmag*sin(theta)*cos(phi);
delVy = Vmag*sin(theta)*sin(phi);
delVz = Vmag*cos(theta);
end
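As an aside on MATLAB method-call syntax, which is a likely source of the error below: a non-static method of a classdef must receive the object as its first argument, via either dot notation or function notation, and method names are case-sensitive. A minimal sketch, using a hypothetical class name and a simplified method body:

```matlab
classdef DemoEnv
    properties
        Max_Vmag = 5;   % hypothetical maximum speed, for illustration only
    end
    methods
        function [dx, dy, dz] = getdelV(this, action)
            % The name here must match the call site exactly (case-sensitive)
            Vmag = action(1)*this.Max_Vmag;
            dx = Vmag; dy = 0; dz = 0;   % simplified body for illustration
        end
        function demo(this)
            a = [0.5; 0; 0];
            [dx, dy, dz] = this.getdelV(a);    % dot notation
            [dx, dy, dz] = getdelV(this, a);   % equivalent function notation
            % [dx, dy, dz] = getdelV(a);  would fail: without the object,
            % MATLAB looks for a free function getdelV(double), producing
            % "Undefined function ... for input arguments of type 'double'".
        end
    end
end
```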
However, every time I validate the environment, the following error appears:
>> validateEnvironment(test)
Error using rl.env.MATLABEnvironment/validateEnvironment (line 72)
Unable to evaluate step function.
Caused by:
Undefined function 'getdelV' for input arguments of type 'double'.
It seems the input argument is a double instead. I am confused, since the action should be a multidimensional vector. So I want to ask: what is the nature of 'Action' in a reinforcement learning environment? Is there an example of a custom environment with a continuous action space I could use as a reference? (The MATLAB example only uses the simple pole with a discrete action space.)
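For context on what 'Action' contains: with an rlNumericSpec([3 1]) action space, the value passed into step is simply a numeric (double) 3x1 column vector within the specified limits. A sketch of drawing a sample from the spec to see its shape, assuming usample from Reinforcement Learning Toolbox is available in your release:

```matlab
actInfo = rlNumericSpec([3 1], LowerLimit=-1, UpperLimit=1);
act = usample(actInfo);   % random action satisfying the spec
size(act)                 % expected: 3x1
class(act)                % expected: 'double'
```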
Any help is appreciated.