- If you use parallel computing and not the GPU: set StepsUntilDataIsSent to a higher value (e.g. 132, 256, etc.). This creates a bigger batch each training step (similar to the NumStepsToLookAhead property of rlACAgentOptions when parallel computing is not used).
- If you use the GPU and not parallel computing: set the NumStepsToLookAhead property to a higher value.
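The two options above can be sketched as follows. This is only an illustration; the value 256 is an example, and the actor, critic, and agent construction from the question are assumed to be in place.

```matlab
% Option 1: parallel CPU workers, no GPU -
% send a larger batch from each worker per training step.
trainOpts = rlTrainingOptions;
trainOpts.UseParallel = true;
trainOpts.ParallelizationOptions.Mode = "sync";
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = 256; % larger batch

% Option 2: GPU, no parallel workers -
% accumulate more steps per update via the agent options instead.
agentOpts = rlACAgentOptions('NumStepsToLookAhead', 256);
```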
How can I use the GPU for the actor and critic while environment simulation happens on multiple cores during RL training?
I am new to GPU computing.
I am using the Reinforcement Learning Toolbox, particularly rlACAgent.
Training runs normally on multiple cores on this system, but because the actor and critic networks are large, training becomes slow. When I use the GPU for the actor and critic networks and start training, only the first N episodes (N = number of cores in the pool) run properly. Beyond that, all episodes settle at zero reward (not exactly zero, but some negative number close to zero).
Is there a way to use both the GPU and the CPU for RL training: the GPU for the networks and the CPU for the environment (Simulink) simulation?
Thanks in advance.
CPU - Intel Xeon Gold 5220
RAM - 128 GB
GPU - NVIDIA RTX 2080
device = 'gpu';
actorOpts = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',2,'UseDevice',device);
criticOpts = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',2,'UseDevice',device);
agent = rlACAgent(actor,critic,agentOpts); % actor, critic, agentOpts defined earlier
trainOpts = rlTrainingOptions;
trainOpts.UseParallel = true;
trainOpts.ParallelizationOptions.Mode = "sync";
trainOpts.ParallelizationOptions.DataToSendFromWorkers = "gradients"; % for A3C
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = 20;
trainOpts.ParallelizationOptions.WorkerRandomSeeds = -1;
trainOpts.StopOnError = 'off';
trainingStats = train(agent,env,trainOpts);
Anh Tran on 27 Mar 2020
GPU training performance with parallel computing will continue to improve in future releases. For now, I would recommend the following options to improve training speed: