After training with MAPPO, validation shows that every saved Agent .mat file produces the same episode reward, and performance is very poor
I trained with MAPPO reinforcement learning for 500 episodes, and judging from the reward curve the training converged. The training environment is somewhat more complex than the validation environment, but during validation I found that the episode rewards produced by all of the saved agents are very poor, and the agents saved at every episode all give the same result. It seems the networks were never actually trained to convergence, or were not saved properly. My training options are written as follows:
trainOpts = rlMultiAgentTrainingOptions( ...
    "AgentGroups","auto", ...                 % or e.g. {[1,2]} to put both agents in one group
    "LearningStrategy","decentralized", ...
    "MaxEpisodes",500, ...
    "MaxStepsPerEpisode",Tf/Ts, ...
    "ScoreAveragingWindowLength",10, ...
    "StopTrainingCriteria","AverageReward", ...
    "StopTrainingValue",99999990, ...         % effectively: never stop training early
    "Verbose",true, ...                       % print training progress in the command line
    "SaveAgentCriteria","AverageReward", ...
    "SaveAgentValue",-Inf);                   % save the agents after every episode
Answers (1)
Sarthak
on 21 Aug 2023
Hi 郭欣,
As per my understanding, there could be a few potential reasons for the poor performance you're observing:
- Since you mention that the training environment is somewhat more complex than the validation environment, it is important to keep the two environments comparable in complexity so that the trained agents can generalize.
- To improve the effectiveness of MAPPO, you may want to increase the amount of training data so that it covers a wider range of scenarios.
- If not done already, you may also want to normalize observations and rewards to stabilize value learning (see the sketch after this list).
- Analyzing and refining the reward function might also be necessary.
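On the normalization point, one common approach is to rescale observations and rewards inside the environment step function. The sketch below assumes a function-based environment (rlFunctionEnv); rawStep, ObsMean, ObsStd, and RewardScale are hypothetical names, and you would estimate the statistics from rollouts of your own environment:
% Minimal normalization sketch for an rlFunctionEnv step function.
% rawStep is a hypothetical helper wrapping your existing dynamics.
function [nextObs,reward,isDone,loggedSignals] = normalizedStep(action,loggedSignals)
[rawObs,rawReward,isDone,loggedSignals] = rawStep(action,loggedSignals);
nextObs = (rawObs - loggedSignals.ObsMean) ./ max(loggedSignals.ObsStd,1e-6);  % z-score the observations
reward  = rawReward / loggedSignals.RewardScale;                               % keep rewards on the order of 1
end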
However, it is difficult to pinpoint the exact reason for the poor results without looking at your implementation. A thorough debugging pass that tracks training progress and inspects the saved agents should help you identify the issue.
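Since every saved Agent .mat file produces the same reward, one quick check is whether the saved network weights actually change between snapshots. A minimal sketch, assuming the training manager saved files such as Agent100.mat and Agent500.mat into a savedAgents folder, each containing a variable saved_agent holding the agent array, and that learnable parameters come back as a cell array of dlarrays:
s1 = load("savedAgents/Agent100.mat");   % episode numbers here are placeholders
s2 = load("savedAgents/Agent500.mat");
p1 = getLearnableParameters(getActor(s1.saved_agent(1)));  % actor weights of agent 1
p2 = getLearnableParameters(getActor(s2.saved_agent(1)));
% Largest absolute weight difference across all learnable parameters; a value
% near zero means both snapshots hold the same network, i.e. the agents were
% not actually being updated or saved correctly.
d = max(cellfun(@(a,b) max(abs(extractdata(a(:)) - extractdata(b(:)))), p1, p2));
fprintf("Max |w1 - w2| between snapshots: %g\n", d);
If the weights do change between snapshots, the problem is more likely in the validation setup than in training or saving.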
You can also refer to the Reinforcement Learning Toolbox documentation on training agents for a better understanding of how to set up and monitor training.
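As a further check on the validation side, you can re-simulate any saved agent directly in the validation environment and compute the episode reward yourself. A minimal sketch, assuming env is your validation environment and saved_agent is the agent array loaded above:
simOpts = rlSimulationOptions("MaxSteps",Tf/Ts);
experience = sim(env, s2.saved_agent, simOpts);   % one experience structure per agent
epReward1 = sum(experience(1).Reward.Data);       % total episode reward of agent 1
fprintf("Episode reward, agent 1: %g\n", epReward1);
If this reward also comes out identical for every snapshot, that points back to the saving or training step rather than to your validation script.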