Training a deep learning network on GPU is significantly slower than using CPU
11 views (last 30 days)
I am training a CNN-LSTM regression network in MATLAB R2023b.
layers = [...
sequenceInputLayer([1440 1 1],'Name','input')
flattenLayer('Name','flatten')
convolution1dLayer(5,2,'Padding','same','WeightsInitializer','he','Name','conv','DilationFactor',1)
batchNormalizationLayer('Name','bn')
reluLayer('Name','relu')
lstmLayer(24,'OutputMode',"last",'Name','bil4','RecurrentWeightsInitializer','He','InputWeightsInitializer','He')
dropoutLayer(0.1,'Name','drop3')
fullyConnectedLayer(1,'Name','fc')
regressionLayer('Name','output') ];
MaxEpochs = 200;
learningrate = 0.001;   % an earlier value of 0.0061 was overridden by this assignment
MiniBatchSize = 64;
solver = 'adam';
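mydevice = 'gpu';       % execution device, 'gpu' or 'cpu'; added here for completeness, value taken from the description below (not shown in the original snippet)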
options = trainingOptions(solver, ...
'MaxEpochs',MaxEpochs, ...
'GradientThreshold',1, ...
'InitialLearnRate',learningrate, ...
'LearnRateSchedule',"piecewise", ...
'LearnRateDropPeriod',96, ...
'LearnRateDropFactor',0.25, ...
'MiniBatchSize',MiniBatchSize,...
'Verbose',false, ...
'Shuffle',"every-epoch",...
'ExecutionEnvironment',mydevice,...
'Plots','training-progress');
[net,info] = trainNetwork(XrTrain,YrTrain,layers,options);
My GPU is an NVIDIA RTX A4500. However, when I train on the GPU (mydevice = 'gpu'), training is significantly slower than on the CPU: only 15223 iterations complete in 5 minutes on the GPU, while 30372 complete on the CPU in the same time.
The training results are actually good in both cases, whether I use the GPU or the CPU; the only difference is the time consumed. GPU usage in Windows Task Manager stays around 80% to 100% during training.
Could anyone please help with this problem? Are there special settings that need to be applied when training a DL network on a GPU?
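For a reproducible comparison, the gap can also be measured directly with a short run on each device. This is only a sketch (not from the original post); it assumes XrTrain, YrTrain, layers, and MiniBatchSize from the snippet above are in the workspace, and the 5-epoch run length is arbitrary:
% Sketch: time a short training run on each execution environment with otherwise
% identical options. Assumes the variables defined in the snippet above exist.
for dev = ["cpu" "gpu"]
    opts = trainingOptions('adam', ...
        'MaxEpochs',5, ...                   % short run, timing only
        'MiniBatchSize',MiniBatchSize, ...
        'ExecutionEnvironment',dev, ...
        'Shuffle',"every-epoch", ...
        'Verbose',false);
    tic
    trainNetwork(XrTrain,YrTrain,layers,opts);
    fprintf('%s: %.1f s for 5 epochs\n', dev, toc);
end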
0 comments
Answers (1)
Shreeya
16 Dec 2023
An individual GPU core is usually slower than a CPU core, but the advantage of a GPU is that thousands of threads can run simultaneously. Transferring data to GPU memory is expensive, so a GPU only delivers faster computation when:
- the batch size is large, and
- the network is large and requires a large number of matrix multiplications.
Otherwise, the transfer overhead outweighs the benefit of parallelization, which makes GPU training slower than CPU training.
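As a rough illustration (a sketch only, not a guaranteed fix: the value 512 is just an example, the options reuse MaxEpochs and learningrate from the snippet in the question, and a larger batch may require re-tuning the learning rate), giving the GPU more work per iteration via a larger mini-batch is the usual first adjustment for a small network like this one:
% Sketch only: a larger mini-batch gives the GPU more parallel work per iteration,
% which typically improves utilization for small networks. MiniBatchSize = 512 is an
% assumed example value, not a recommendation from the original answer.
options = trainingOptions('adam', ...
    'MaxEpochs',MaxEpochs, ...
    'InitialLearnRate',learningrate, ...
    'GradientThreshold',1, ...
    'MiniBatchSize',512, ...                 % was 64 in the question
    'Shuffle',"every-epoch", ...
    'ExecutionEnvironment','gpu', ...
    'Verbose',false, ...
    'Plots','training-progress');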
You can refer to the answer below to learn more about this:
0 comments