I need to fully utilize my GPU during network training!

15 views (last 30 days)
Abolfazl Nejatian
Abolfazl Nejatian on 13 March 2023
Commented: Abolfazl Nejatian on 18 March 2023
Greetings,
I am training a hybrid neural network that combines GoogLeNet and an LSTM. My training set contains approximately 2.5 million images, totaling about 18 GB on disk. During training I cannot make full use of my GPU (the GPU load stays mostly under 70%). I am considering the imageDatastore function as a potential solution.
My code is appended below for reference; I would appreciate any guidance or recommendations you might offer.
Sincerely,
Abolfazl
reset(gpuDevice(1));

% ---------- Load Data
allImages = imageDatastore(fullfile(parentDir,dataDir), ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
rng default
[imgsTrain,imgsValidation] = splitEachLabel(allImages,0.80,'randomized');

% ---------- Hybrid GoogLeNet with LSTM layer
net = googlenet;
lgraph = layerGraph(net);

% Modify GoogLeNet network parameters
newDropoutLayer = dropoutLayer(0.2,'Name','new_Dropout');
lgraph = replaceLayer(lgraph,'pool5-drop_7x7_s1',newDropoutLayer);

numClasses = numel(categories(allImages.Labels));
newConnectedLayer = fullyConnectedLayer(numClasses,'Name','new_fc', ...
    'WeightLearnRateFactor',5,'BiasLearnRateFactor',5);
LSTM_layer = [ flattenLayer('Name','flatten')
               lstmLayer(350,'OutputMode','last','Name','lstm')
               newConnectedLayer ];
lgraph = replaceLayer(lgraph,'loss3-classifier',LSTM_layer);

newClassLayer = classificationLayer('Name','new_classoutput');
lgraph = replaceLayer(lgraph,'output',newClassLayer);

% ---------- Network Training
% Set training options and train GoogLeNet
optimizer = 'adam';
options = trainingOptions(optimizer, ...
    'MiniBatchSize',1024, ...
    'MaxEpochs',15, ...
    'InitialLearnRate',1e-4, ...
    'ValidationData',imgsValidation, ...
    'ValidationFrequency',10, ...
    'Verbose',true, ...
    'Shuffle','every-epoch', ...
    'ExecutionEnvironment','gpu', ...
    'BatchNormalizationStatistics','moving', ...
    'Plots','training-progress');
rng default
trainedGN = trainNetwork(imgsTrain,lgraph,options);

Answers (3)

Ben
Ben on 14 March 2023
To use more of the GPU resource per iteration you can increase the minibatch size.
I'll note that the LSTM layer you are adding is not receiving sequence data, so it will simply run one iteration of the LSTM cell, which is not standard usage.

Abolfazl Nejatian
Abolfazl Nejatian on 14 March 2023
Regarding the mini-batch size: the current value is already the maximum my GPU memory allows. Any attempt to increase it further results in an "out of memory" error.
When you say "sequence data", could you please explain what you mean by that?
regards,
Abolfazl
  2 comments
Ben
Ben on 14 March 2023
>Any attempt to further increase the minibatch size would result in a "out of memory" error
In that case there is no way to utilize more GPU resources through trainNetwork.
>When you say 'sequence data', could you please explain what you mean by that?
LSTMs are typically used on sequence data such as time series. In practice they iterate the LSTM "cell" operation over the sequence, e.g. along the dimension of a dlarray labeled T (time). An LSTM is used this way because it carries a "state": at time step N it can hold information about time steps 1, 2, ..., N-1. In your example, however, there is no sequence data, so the LSTM "cell" operates only once.
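To make this concrete, here is a minimal sketch (with made-up sizes) of the low-level `lstm` dlarray function iterating over a T dimension, in contrast to the single-step case:

```matlab
% Hypothetical sizes for illustration only.
numFeatures = 8; numHidden = 16; seqLen = 20; batch = 4;

% A "CBT"-formatted dlarray has channel, batch, and time dimensions.
X  = dlarray(rand(numFeatures, batch, seqLen, 'single'), 'CBT');
H0 = dlarray(zeros(numHidden, batch, 'single'));
C0 = dlarray(zeros(numHidden, batch, 'single'));
W  = dlarray(rand(4*numHidden, numFeatures, 'single'));  % input weights
R  = dlarray(rand(4*numHidden, numHidden,  'single'));   % recurrent weights
b  = dlarray(rand(4*numHidden, 1, 'single'));            % bias

% lstm iterates the cell over all 20 time steps, carrying the hidden and
% cell state forward from one step to the next.
[Y, Hfinal, Cfinal] = lstm(X, H0, C0, W, R, b);

% If seqLen were 1 (as in the network above, where each image produces a
% single "time step"), the cell would run exactly once and the recurrent
% state would never be reused.
```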
Abolfazl Nejatian
Abolfazl Nejatian on 15 March 2023
Thank you for your information.
GPU utilization did not improve significantly even when a pure GoogLeNet model was trained, so the LSTM component is not solely responsible for the problem.



Joss Knight
Joss Knight on 15 March 2023
It's hard to be sure from the information you provide, but it looks like the file system is your bottleneck: if the next 1024 images cannot be loaded before the previous iteration finishes, the GPU has to wait.
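One way to reduce that I/O stall is to prefetch mini-batches on background workers so the GPU is not waiting on disk reads. A sketch (assuming Parallel Computing Toolbox is available, and reusing `imgsTrain` from the question's code):

```matlab
% Resize images on the fly to the network's input size while reading.
augimds = augmentedImageDatastore([224 224 3], imgsTrain);

options = trainingOptions('adam', ...
    'MiniBatchSize', 1024, ...
    'DispatchInBackground', true, ...  % read/preprocess next batch on
    ...                                % parallel workers while the GPU
    ...                                % trains on the current one
    'ExecutionEnvironment', 'gpu', ...
    'Shuffle', 'every-epoch');
```

Beyond that, moving the dataset from a network share or spinning disk to a fast local SSD is often the single biggest win for this kind of bottleneck.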
  5 comments
Abolfazl Nejatian
Abolfazl Nejatian on 18 March 2023
I have included the profiler output from a smaller dataset trained for only one epoch.
Total run time: 1129.042 s
Here are the profiler results:
https://drive.google.com/file/d/1BnrvjNgejWcxoZ5gBGc1aQKJkPOjpHcU/view?usp=share_link
