minibatchqueue or arrayDatastore drops my data precision from double to single

I get XTrain from MNIST with processImagesMNIST and put it on the GPU, so its type is gpuArray dlarray.
Then I use this code to create mini-batches:
```
miniBatchSize = 128;

% Datastore that iterates over the 4th (observation) dimension of XTrain
dsTrain = arrayDatastore(XTrain, IterationDimension=4);

% numOutputs = 1;
mbqTest = minibatchqueue(dsTrain, 1, ...
    MiniBatchSize    = miniBatchSize, ...
    MiniBatchFcn     = @preprocessMiniBatch, ...
    MiniBatchFormat  = "SSCB", ...
    PartialMiniBatch = "discard");

% numObservationsTrain = size(XTrain,4);
% numIterationsPerEpoch = ceil(numObservationsTrain / miniBatchSize);
% numIterations = numEpochs * numIterationsPerEpoch;

%% Test batch order: count the mini-batches returned by the queue
i = 0;
while hasdata(mbqTest)
    i = i + 1;
    x = next(mbqTest);
    if ~hasdata(mbqTest)
        disp(i)
    end
end
```
After running this, x is a single gpuArray dlarray, while XTrain is a double gpuArray dlarray.
Which part of this pipeline lowers the precision, and how can I avoid it?
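For reference, a minimal sketch of how the two types can be compared, assuming XTrain is a double gpuArray dlarray as described above (underlyingType reports the numeric class inside dlarray/gpuArray wrappers):

```
% Compare the underlying numeric type before and after the queue
underlyingType(XTrain)   % 'double' -- type of the raw training data

reset(mbqTest);          % rewind the queue consumed by the loop above
x = next(mbqTest);
underlyingType(x)        % 'single' -- type of the batched output
```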
1 comment
Runcong Kuang on 29 Sep 2022
I found that minibatchqueue has a name-value argument, OutputCast, which defaults to 'single', so I set it to 'double'. I haven't yet checked whether the GPU training process still uses single precision, as mentioned by @Walter Roberson.
Thanks for your input, @Walter Roberson.
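A minimal sketch of what this comment describes, reusing the datastore and helper function from the question (OutputCast is applied after MiniBatchFcn and defaults to "single"):

```
mbqTest = minibatchqueue(dsTrain, 1, ...
    MiniBatchSize    = miniBatchSize, ...
    MiniBatchFcn     = @preprocessMiniBatch, ...
    MiniBatchFormat  = "SSCB", ...
    OutputCast       = "double", ...   % default is "single"
    PartialMiniBatch = "discard");

x = next(mbqTest);
underlyingType(x)   % should now report 'double'
```

With more than one queue output, OutputCast accepts one value per output, so the cast can be set independently for each.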


Answers (1)

Walter Roberson on 28 Sep 2022
GPU training does not support double precision. If you look at the available training options, precision cannot be selected.
