Why would the file size of a deep learning gradient become much bigger after saving as a .mat file?
SC on 2 Dec 2019
Answered: Joss Knight on 3 Dec 2019
Hi,
I have a variable gradients that holds the gradients of a deep learning model. The output of "whos gradients" shows it needs only 5742 bytes (i.e. 5.7 kB) to store. However, when I save it as a .mat file, the file is 13320098 bytes (i.e. 13.3 MB; see the line "file_size=file.bytes" below), more than 2000 times larger. Why does this happen, and how can I save the file at a size close to 5.7 kB?
Thanks!
My code:
%% Define Network Architecture
layers = [
    imageInputLayer([1 1 100],'Normalization','none','Name','in')
    transposedConv2dLayer([4 4],8*64,'Name','tconv1')
    batchNormalizationLayer('Name','bn1')
    reluLayer('Name','relu1')
    transposedConv2dLayer([4 4],4*64,'Stride',2,'Cropping',1,'Name','tconv2')
    batchNormalizationLayer('Name','bn2')
    reluLayer('Name','relu2')
    transposedConv2dLayer([4 4],2*64,'Stride',2,'Cropping',1,'Name','tconv3')
    batchNormalizationLayer('Name','bn3')
    reluLayer('Name','relu3')
    transposedConv2dLayer([4 4],64,'Stride',2,'Cropping',1,'Name','tconv4')
    batchNormalizationLayer('Name','bn4')
    reluLayer('Name','relu4')
    transposedConv2dLayer([4 4],1,'Stride',2,'Cropping',1,'Name','tconv5')
    tanhLayer('Name','tanh')];
MyLGraph = layerGraph(layers);
myDLnet = dlnetwork(MyLGraph);

%% Compute Gradients and Compare In-Memory vs. On-Disk Size
[dlZ, Y] = get_dlZ_Y();
gradients = dlfeval(@modelGradients, myDLnet, dlZ, Y);
whos gradients                     % in-memory size as reported by MATLAB
save("gradients.mat","gradients");
file = dir("gradients.mat");
file_size = file.bytes             % size of the saved MAT-file in bytes
function gradients = modelGradients(myModel, modelInput, CorrectLabels)
    CorrectLabels_transpose = transpose(CorrectLabels);
    [modelOutput,state] = forward(myModel,modelInput); % state is not used here
    % Average each output map down to a 1-by-100 row of scores
    modelOutput_mean = reshape(mean(mean(modelOutput)),1,100);
    loss = -sum(sum(CorrectLabels_transpose.*log(sigmoid(modelOutput_mean/100))));
    gradients = dlgradient(loss, myModel.Learnables);
end
function [dlZ, Y] = get_dlZ_Y()
    rng(123); % seed for reproducibility
    Z = randn(1,1,100,100,'single');
    Y = randn(1,100,'single');
    % Convert the mini-batch of data to dlarray and specify the dimension
    % labels 'SSCB' (spatial, spatial, channel, batch).
    dlZ = dlarray(Z, 'SSCB');
    executionEnvironment = "auto";
    % If training on a GPU, then convert data to gpuArray.
    if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
        dlZ = gpuArray(dlZ);
    end
end
1 comment
Walter Roberson on 2 Dec 2019
I wonder if you are configured to default to -v7.3 files for save()? There is a preference for that.
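A quick way to take the preference out of the picture is to request a version explicitly (a diagnostic sketch; "-v7" and "-v7.3" are standard save version flags, and "gradients73.mat" is just an illustrative filename):
save("gradients.mat","gradients","-v7");     % classic compressed v7 format
save("gradients73.mat","gradients","-v7.3"); % HDF5-based v7.3 format, often larger on disk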
Accepted Answer
Joss Knight on 3 Dec 2019
The difference is that whos is unable to account for the fact that the data is all stored on the GPU, and is only showing CPU memory. Add the following
gradients = dlupdate(@gather, gradients);
and you will see parity between the numbers.
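For example (gradientsCPU is just an illustrative name for the gathered copy):
gradientsCPU = dlupdate(@gather, gradients); % bring every gradient back to host memory
whos gradients gradientsCPU                  % GPU-backed table vs. full-size host copy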
The answer is: you can't save 3.6 million 32-bit numbers in 5.7 kilobytes, no matter what magic you employ!
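As a rough sanity check (a sketch, assuming gradients is the Learnables-style table returned by dlgradient, with the gradient arrays in its Value column):
numParams = sum(cellfun(@numel, gradients.Value)) % about 3.6 million elements
rawBytes = 4*numParams                            % single precision: ~14 MB, close to the
                                                  % observed 13.3 MB (MAT-files compress a little)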
0 comments
More Answers (0)