minibatchqueue or arrayDatastore drops my data precision from double to single

Edit: I found that minibatchqueue has a name-value argument, OutputCast, which defaults to 'single'. Assigning 'double' to it keeps the data in double precision. However, I have not checked whether the GPU training process itself uses single precision, as mentioned by @Walter Roberson.
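A minimal sketch of that change (assuming the same dsTrain, miniBatchSize, and preprocessMiniBatch as in the code further down):

```
% Override the default OutputCast="single" so next() returns double data
mbqTest = minibatchqueue(dsTrain,1, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat="SSCB", ...
    OutputCast="double", ...
    PartialMiniBatch="discard");
```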
I get XTrain from MNIST via processImagesMNIST and put it on the GPU, so its type is gpuArray dlarray.
Then I use this code to create mini-batches:
```
miniBatchSize = 128;
dsTrain = arrayDatastore(XTrain,IterationDimension=4);
% numOutputs = 1;
mbqTest = minibatchqueue(dsTrain,1, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat="SSCB", ...
    PartialMiniBatch="discard");
% numObservationsTrain = size(XTrain,4);
% numIterationsPerEpoch = ceil(numObservationsTrain / miniBatchSize);
% numIterations = numEpochs * numIterationsPerEpoch;

%% test batch order
i = 0;
while hasdata(mbqTest)
    i = i + 1;
    x = next(mbqTest);
    if ~hasdata(mbqTest)
        disp(i)   % total number of mini-batches produced
    end
end
```
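The preprocessMiniBatch helper is not included in the post; a typical version for this kind of image data (an assumed implementation, not the original) simply concatenates the observations along the fourth dimension and does not itself change the numeric class:

```
function X = preprocessMiniBatch(dataX)
    % dataX is a cell array of observations read from the datastore;
    % concatenate them along the batch (4th) dimension.
    % cat preserves the input class, so any double -> single conversion
    % happens later, in minibatchqueue's OutputCast step.
    X = cat(4,dataX{:});
end
```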
I find that x is a single gpuArray dlarray, while XTrain is a double gpuArray dlarray.
Which part lowers the precision, and how can I avoid it?
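For reference, one way to check the underlying numeric class of these arrays (underlyingType works on both dlarray and gpuArray):

```
% Inspect the numeric class wrapped inside the dlarray/gpuArray objects
disp(underlyingType(XTrain))   % 'double' before the minibatchqueue
disp(underlyingType(x))        % 'single' after next(mbqTest) with the default OutputCast
```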
Answers (1)
Walter Roberson, 28 September 2022
GPU training does not support double precision. If you look at the available training options, precision cannot be selected.