How can I extract a trained RL Agent's network's weights and biases?
23 views (last 30 days)
My network is:
% State (observation) path
statePath = [
    imageInputLayer([numObservations 1 1], 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(NumNeuron, 'Name', 'CriticStateFC1')
    reluLayer('Name', 'CriticRelu1')
    fullyConnectedLayer(NumNeuron, 'Name', 'CriticStateFC2')];
% Action path
actionPath = [
    imageInputLayer([1 1 1], 'Normalization', 'none', 'Name', 'action')
    fullyConnectedLayer(NumNeuron, 'Name', 'CriticActionFC1')
    reluLayer('Name', 'CriticActionRelu1')
    fullyConnectedLayer(NumNeuron, 'Name', 'CriticActionFC2')];
% Common path: merge the two branches and output a scalar Q-value
commonPath = [
    additionLayer(2, 'Name', 'add')
    reluLayer('Name', 'CriticCommonRelu')
    fullyConnectedLayer(1, 'Name', 'output')];
% Assemble the layer graph and connect both branches to the addition layer
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork, actionPath);
criticNetwork = addLayers(criticNetwork, commonPath);
criticNetwork = connectLayers(criticNetwork, 'CriticStateFC2', 'add/in1');
criticNetwork = connectLayers(criticNetwork, 'CriticActionFC2', 'add/in2');
% Set some options for the critic
criticOpts = rlRepresentationOptions('LearnRate', learning_rate, ...
    'GradientThreshold', 1);
% Create the critic based on the network approximator
critic = rlQValueRepresentation(criticNetwork, obsInfo, actInfo, ...
    'Observation', {'state'}, 'Action', {'action'}, criticOpts);
agent = rlDQNAgent(critic, agentOpts);
trainingStats = train(agent, env, trainOpts);
After training, I'd like to get the network's trained weights and biases.
0 comments
Accepted Answer
Anh Tran on 27 Mar 2020
Edited: Anh Tran on 27 Mar 2020
You can get the parameters from the trained critic representation of a DQN agent. In MATLAB R2020a, see the getCritic and getLearnableParameters functions (the function names have changed slightly since R2019b). You can follow similar steps to get the actor's parameters from an actor-based agent, such as DDPG or PPO, using getActor.
critic = getCritic(agent);
criticParams = getLearnableParameters(critic);
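For example, here is a minimal sketch of inspecting what comes back, assuming the R2020a behavior where getLearnableParameters returns a cell array holding one weight matrix and one bias vector per learnable layer:
% List the size of each learnable array (weights and biases in layer order)
for k = 1:numel(criticParams)
    fprintf('Parameter %d has size %s\n', k, mat2str(size(criticParams{k})));
end
% For an actor-based agent (e.g. DDPG), the analogous calls would be
% actor = getActor(agent); actorParams = getLearnableParameters(actor);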
6 comments
Francisco Serra on 14 Dec 2023
轩 on 5 Jan 2024
@Francisco Serra I have the same need. I found a crude workaround: save the agent after each episode, then call getLearnableParameters on each saved agent to print its parameters.
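A minimal sketch of that workaround, assuming the SaveAgentCriteria, SaveAgentValue, and SaveAgentDirectory options of rlTrainingOptions, and that each saved MAT-file stores the agent in a variable named saved_agent (the toolbox's default for saved agents):
% Save the agent to disk after every episode during training
trainOpts = rlTrainingOptions( ...
    'SaveAgentCriteria', 'EpisodeCount', ...
    'SaveAgentValue', 1, ...
    'SaveAgentDirectory', 'savedAgents');
trainingStats = train(agent, env, trainOpts);
% Then extract the critic parameters from each saved agent
files = dir(fullfile('savedAgents', 'Agent*.mat'));
for k = 1:numel(files)
    s = load(fullfile(files(k).folder, files(k).name));
    params = getLearnableParameters(getCritic(s.saved_agent));
    disp(params)  % weights and biases after this episode
end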
More Answers (0)