Training NN with single precision data on GPU

11 views (last 30 days)
Cameron Lee on 16 Jan 2020
Commented: Cameron Lee on 21 Feb 2020
I am trying to use fitnet to train a network on my GPU using single-precision input data (X and T). However, this always returns an error, which starts with:
"Error using nnGPUOp.bg (line 134) Variable 'perfs1' changed type. Consider renaming variable on left hand side of assignment."
This only seems to be a problem when using single-precision data AND the GPU. When I train using double-precision on GPU, it works fine, and when I use single- or double-precision data on the CPU, it also works fine.
Anyone found a way around this?
  3 comments
Cameron Lee on 31 Jan 2020
Edited: Cameron Lee on 31 Jan 2020
Hi Raunak... Thanks for addressing this issue. Here is some code. Obviously I don't use random x and t variables, but nonetheless, this throws the same error. Notice that if you leave x and t as double precision, it works fine. Further, if run on the CPU rather than the GPU, it also works fine with either single- or double-precision x and t variables (but takes quite a bit longer). Ideally, I want this to work on the GPU with single-precision data, as my Titan RTX GPUs are best equipped to process that data type. I am using MATLAB Version 9.7.0.1261785 (R2019b) Update 3 and all the updated toolboxes.
neurons = 10;
xvars = rand(700000,6);
yvar = rand(700000,1);
% CHANGING THEM TO SINGLE-PRECISION DATA TYPE DOES NOT WORK
% (THROWS ERROR: "Error using nnGPUOp.bg (line 134)
% Variable 'perfs1' changed type. Consider renaming variable on left hand side
% of assignment.")
x = single(xvars');
t = single(yvar');
% LEAVING THEM AS DOUBLE-PRECISION DATA TYPE WORKS FINE
% x = xvars';
% t = yvar';
trainFcn = 'trainscg';
net = fitnet(neurons,trainFcn);
net.input.processFcns = {'removeconstantrows','mapminmax'};
net.output.processFcns = {'removeconstantrows','mapminmax'};
net.trainParam.showWindow = 0;
net.divideFcn = 'dividerand';        % Divide data randomly
net.divideMode = 'sample';           % Divide up every sample
net.divideParam.trainRatio = 60/100;
net.divideParam.valRatio = 20/100;
net.divideParam.testRatio = 20/100;
net.trainParam.max_fail = 10;
net.performFcn = 'mse';              % Mean squared error
net.trainParam.epochs = 100;
[net,tr] = train(net,x,t,'useGPU','yes');
y = net(x)';
Cameron Lee on 18 Feb 2020
Has anyone solved this issue?


Accepted Answer

Raunak Gupta on 19 Feb 2020
Edited: Raunak Gupta on 19 Feb 2020
Hi,
Single-precision GPU training can only be done in the 'nnGPU' calculation mode. By default, train uses 'nnGPUOp', which doesn't support single-precision GPU training.
As a workaround, you can do single-precision GPU training in either of the two ways below (a combined end-to-end sketch follows the list):
  • You can use the nndata2gpu function:
% Here x,t are original double precision data
net = configure(net,x,t);
sx = nndata2gpu(x,'single');
st = nndata2gpu(t,'single');
[net,tr] = train(net,sx,st,'useGPU','yes');
  • You can specify single precision GPU training:
% Here x,t are single precision data
[net,tr] = train(net,x,t,nnGPU('precision','single'));
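For reference, here is a minimal end-to-end sketch of the first workaround (an illustration, assuming x is the 6-by-N and t the 1-by-N double-precision data from the example above); gpu2nndata gathers the GPU-formatted output back into an ordinary matrix:
net = fitnet(10,'trainscg');
net = configure(net,x,t);       % fix the network sizes before converting data
sx = nndata2gpu(x,'single');    % move the data to the GPU in single precision
st = nndata2gpu(t,'single');
[net,tr] = train(net,sx,st,'useGPU','yes');
sy = net(sx);                   % evaluate on the GPU-formatted data
y = gpu2nndata(sy);             % gather the result back to a regular matrix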
Hope it helps.
  3 comments
Raunak Gupta on 20 Feb 2020
Hi Cameron,
The speed-up will not happen because using single precision instead of double precision decreases the memory used by the GPU, which does not by itself translate into speed. Instead, if you have more available memory, increasing the batch size (in the case of deep neural networks such as CNNs) may speed up the code.
Cameron Lee on 21 Feb 2020
Hi Raunak,
I appreciate the suggestion, but this still does not make sense to me. I understand and agree that single-precision data requires the GPU to use less memory, but doesn't it also mean that each individual calculation should run faster, given that each operation needs less precision (and less memory)? In terms of TFLOPS, according to the specs from Nvidia, my GPUs should perform MUCH faster (about 30x) on single-precision data than on double-precision data. Indeed, using gpuBench (https://www.mathworks.com/matlabcentral/fileexchange/34080-gpubench), my GPU performs anywhere from 8x (Backslash test) to 28x (MTimes test) faster with single-precision data than with double-precision data. The only explanation I have is that all of the training is STILL being done in double precision (and evidence for this might be that y (from the final line of my example code above) and net.IW are still output as double-precision data types even after using your solutions). This seems like a pretty important drawback to using MATLAB for shallow networks.
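For anyone hitting the same wall, two quick checks (a sketch, not authoritative): inspect the class of the trained quantities, and compare raw single- vs. double-precision throughput with gputimeit from Parallel Computing Toolbox:
class(y)            % 'double' here suggests training/evaluation stayed in double
class(net.IW{1,1})  % class of the trained input weight matrix
A1 = rand(4096,'single','gpuArray');   % single-precision test matrix on the GPU
A2 = rand(4096,'double','gpuArray');   % double-precision test matrix on the GPU
t1 = gputimeit(@() A1*A1);
t2 = gputimeit(@() A2*A2);
fprintf('single: %.4f s, double: %.4f s, ratio: %.1fx\n', t1, t2, t2/t1);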


More Answers (0)
