Multiple GPUs perform slower than a single GPU when training a semantic segmentation network

I have at my disposal two NVIDIA Tesla V100 16 GB GPUs to train a deep neural network model for semantic segmentation. I am training the Inception-ResNet-v2 network with the DeepLab v3+ architecture, and I am using a randomPatchExtractionDatastore to feed the network with training data. When I set the 'ExecutionEnvironment' option to 'multi-gpu', the processing time per iteration is higher than with 'gpu', that is, a single GPU. I am working on Windows 10 with MATLAB R2019b. What should I do to use the full potential of both GPUs for training? Below is an example of my code:
patchSize = 512;
imageSize = [patchSize patchSize 3];
numClasses = 6;
lgraph = deeplabv3plusLayers(imageSize, numClasses, 'inceptionresnetv2','DownsamplingFactor',16);
MaxEpochs=10;
PatchesPerImage=1500;
MiniBatchSize=20;
options = trainingOptions('sgdm', ...
'ExecutionEnvironment','gpu',...   % iterations slow down when set to 'multi-gpu'
'LearnRateSchedule','piecewise',...
'LearnRateDropPeriod',3,...
'LearnRateDropFactor',0.2,...
'Momentum',0.9, ...
'InitialLearnRate',0.03, ...
'L2Regularization',0.001, ...
'MaxEpochs',MaxEpochs, ...
'MiniBatchSize',MiniBatchSize, ...
'Shuffle','every-epoch', ...
'CheckpointPath', tempdir, ...
'VerboseFrequency',2,...
'Plots','training-progress',...
'ValidationPatience', 4);
imageAugmenter = imageDataAugmenter( ...
'RandRotation',[-20,20], ...
'RandXTranslation',[-10 10], ...
'RandYTranslation',[-10 10]);
% Random patch extraction datastore
PatchSize = [patchSize patchSize];
% imds and pxds are the image datastore and pixel label datastore, defined earlier
dsTrain = randomPatchExtractionDatastore(imds,pxds,PatchSize, ...
    'PatchesPerImage',PatchesPerImage,'DataAugmentation',imageAugmenter);
[net, ~] = trainNetwork(dsTrain,lgraph,options);

Accepted Answer

Joss Knight on 9 Mar 2020
On Windows, due to GPU communication issues on that platform, it is difficult to get any benefit from multi-GPU training. This will be improved in a future release. Try the following (a sketch is given below the list):
  • Maximize the patches per image and the MiniBatchSize
  • Increase the learn rate to match the number of GPUs
If moving to Linux is an option for you, that is definitely the way to go.
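A minimal sketch of the two suggestions above, reusing the values from the question; the linear scaling with the GPU count is illustrative, not a tuned recipe:
numGPUs = gpuDeviceCount;            % number of GPUs visible to MATLAB
MiniBatchSize = 20*numGPUs;          % grow the global mini-batch with the GPU count
InitialLearnRate = 0.03*numGPUs;     % scale the learn rate to match the larger batch
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment','multi-gpu', ...
    'MiniBatchSize',MiniBatchSize, ...
    'InitialLearnRate',InitialLearnRate);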
  3 comments
Joss Knight on 23 Mar 2021
Yes, this situation has been much improved since R2020a, although you still cannot quite match the performance you get on Linux.
WSL works well and is an excellent solution, but you do need to update to Windows Insider. Follow the instructions here.
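Once WSL is set up, a quick sanity check that MATLAB can see both GPUs:
gpuDeviceCount    % should report 2 on a two-GPU machine
gpuDevice(1)      % properties of the first GPU
gpuDevice(2)      % properties of the second GPU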
Preetham Manjunatha on 30 Jul 2022
I still notice the same issue in a classification problem using 2 GPUs (RTX 2080 Ti, 11 GB VRAM each). It is super slow (about 8 seconds per iteration) on Ubuntu 22.04 with MATLAB R2022a. It has taken 1228 minutes for 11,837 iterations, or 19 epochs, with a batch size of 256 images and about 159,500 training images. At this speed I expect the training to finish after a week or so!
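As a quick sanity check, those totals average out to roughly 6 seconds per iteration:
secondsPerIter = 1228*60/11837;      % ~6.2 s per iteration on average
itersPerEpoch  = floor(159500/256);  % 623 iterations per epoch
totalIters     = itersPerEpoch*19    % 11837, matching the reported count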


More Answers (1)

junnet on 24 Mar 2021
Thanks! That is great news. Also thanks for the link.
Now if only I could score a pair or trio of RTX 3060s somewhere, anywhere ...
