Design an actor critic network for non-image inputs

ali farid on 6 Jan 2024
I have a robot with 3 inputs: the wind, the current location, and the current action. I use these three inputs to predict the best action in an unknown environment using Reinforcement Learning Toolbox. My actions are continuous, so I am using rlContinuousGaussianActor. Is there an example of how to define the actor network when the inputs are not images? For example, if we have an observation vector of size [3 2] and an action vector of size [1 2], how can I define the actor and critic networks?

Answers (1)

Emmanouil Tzorakoleftherakis on 9 Jan 2024
I may be missing something, but why don't you frame your observations as a [4 1] vector? That way it would be consistent with how most examples are set up in Reinforcement Learning Toolbox.
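For vector observations you can use a featureInputLayer instead of an imageInputLayer, and connect fully connected layers from there. The following is a minimal sketch (layer sizes and names such as 'body', 'mean', and 'std' are illustrative choices, not from the original thread), assuming a [4 1] observation and a [2 1] continuous action:

```matlab
% Specifications for a 4-element observation and 2-element continuous action
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([2 1], 'LowerLimit', -1, 'UpperLimit', 1);

% Actor: common body, then separate mean and standard-deviation heads
commonPath = [
    featureInputLayer(obsInfo.Dimension(1), 'Name', 'obs')
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(64, 'Name', 'body')
    ];
meanPath = fullyConnectedLayer(actInfo.Dimension(1), 'Name', 'mean');
stdPath = [
    fullyConnectedLayer(actInfo.Dimension(1), 'Name', 'stdFC')
    softplusLayer('Name', 'std')   % keeps the standard deviation positive
    ];

actorNet = layerGraph(commonPath);
actorNet = addLayers(actorNet, meanPath);
actorNet = addLayers(actorNet, stdPath);
actorNet = connectLayers(actorNet, 'body', 'mean');
actorNet = connectLayers(actorNet, 'body', 'stdFC');
actorNet = dlnetwork(actorNet);

actor = rlContinuousGaussianActor(actorNet, obsInfo, actInfo, ...
    'ObservationInputNames', 'obs', ...
    'ActionMeanOutputNames', 'mean', ...
    'ActionStandardDeviationOutputNames', 'std');

% Critic: Q-value function taking both observation and action as inputs
obsPath = [featureInputLayer(obsInfo.Dimension(1), 'Name', 'obsIn')
           fullyConnectedLayer(32, 'Name', 'obsFC')];
actPath = [featureInputLayer(actInfo.Dimension(1), 'Name', 'actIn')
           fullyConnectedLayer(32, 'Name', 'actFC')];
qPath = [additionLayer(2, 'Name', 'add')
         reluLayer
         fullyConnectedLayer(1, 'Name', 'qvalue')];

criticNet = layerGraph(obsPath);
criticNet = addLayers(criticNet, actPath);
criticNet = addLayers(criticNet, qPath);
criticNet = connectLayers(criticNet, 'obsFC', 'add/in1');
criticNet = connectLayers(criticNet, 'actFC', 'add/in2');
criticNet = dlnetwork(criticNet);

critic = rlQValueFunction(criticNet, obsInfo, actInfo, ...
    'ObservationInputNames', 'obsIn', 'ActionInputNames', 'actIn');
```

Note that there are no convolutional layers anywhere: for flat vector inputs, fully connected layers are all that is needed.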
  2 comments
ali farid on 10 Jan 2024
As you suggested, I used an example from Reinforcement Learning Toolbox that has a 12x12, 4-channel observation, and changed obsSize from [12 12 4] to [4 1], but I see the following error:
Invalid network.
Error in dlnetwork (line 218)
net = initialize(net, dlX{:});
Caused by:
Layer 'conv_1': The size of the convolution dimensions of the padded input data must be larger than or equal to the filter size. For networks with sequence input, this check depends on the MinLength property of the
sequence input layer. To ensure that this check is accurate, set MinLength to the shortest sequence length of your training data.
Emmanouil Tzorakoleftherakis on 10 Jan 2024
That's expected, because you are apparently using a convolutional layer, which is not required here. I would either follow an example that is closer to the nature of your problem or, even better, use the default agent capability, which automatically generates an initial architecture for the policy that you can then modify as needed. Since you are using rlContinuousGaussianActor, I would try the SAC agent, so take a look at this section.
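The default-agent route mentioned above can be sketched as follows: passing only the observation and action specifications (plus optional initialization options) to the agent constructor makes the toolbox generate the actor and critic networks for you. The specs and the hidden-unit count below are illustrative assumptions:

```matlab
% Let the toolbox build default SAC actor/critic networks for vector I/O
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([2 1], 'LowerLimit', -1, 'UpperLimit', 1);

initOpts = rlAgentInitializationOptions('NumHiddenUnit', 128);
agent = rlSACAgent(obsInfo, actInfo, initOpts);

% Inspect the generated networks and modify them if needed
actorNet  = getModel(getActor(agent));
criticNet = getModel(getCritic(agent));
```

Because the specs describe a flat vector, the generated networks use fully connected layers rather than convolutions, which avoids the 'conv_1' error above.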

Version

R2022a
