Time series prediction using neural networks (narnet / narxnet); is it at all possible to train a network faster on a GPU than on the CPU?

I have run gpuBench (https://www.mathworks.com/matlabcentral/fileexchange/34080-gpubench) on my GeForce GTX 570; in that benchmark the GPU is substantially faster than the CPU (roughly 4x to 16x across the tests below).
gpuBench results (GFLOPS):

                     double precision             single precision
                  MTimes  Backslash    FFT     MTimes  Backslash     FFT
GeForce GTX 570   175.11     103.31  60.14     954.68     287.08  187.61
Host PC            41.02      35.08   7.13      82.34      66.76   11.61
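For anyone who wants a quick sanity check of these numbers without running the full gpuBench suite, a minimal sketch (the matrix size and single precision are my own choices, not gpuBench's exact settings) looks like this:

% Rough MTimes throughput comparison, CPU vs GPU
N  = 4096;                          % matrix size (arbitrary)
A  = rand(N,'single');   B  = rand(N,'single');
Ag = gpuArray(A);        Bg = gpuArray(B);
tCPU = timeit(@() A*B);             % CPU wall time
tGPU = gputimeit(@() Ag*Bg);        % GPU wall time (waits for the GPU to finish)
flops = 2*N^3;                      % floating-point operations in an N-by-N matrix multiply
fprintf('CPU: %.1f GFLOPS   GPU: %.1f GFLOPS\n', flops/tCPU/1e9, flops/tGPU/1e9)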
But when I use the GPU to train a neural network, it takes much longer than training on the CPU: for the code below, the GPU run is about 20 times slower. I have tried different dataset sizes, different feedback delay lengths and different numbers of neurons, but the conclusion remains the same.
Any suggestions on how the GPU can be used more efficiently for this type of time series prediction?
Have you performed any time series prediction calculations where the GPU was faster than the CPU?
Also: do you know why MathWorks hasn't implemented the Jacobian training methods (e.g. trainlm and trainbr, which are the best ones) for GPU computing?
The code (paste and run; it is executable as-is):
clear all
close all
g = gpuDevice(1);
reset(g);
FD  = 1:50;                      % feedback delays
T   = solar_dataset;
T   = [T T T T T T T T T T];     % enlarge the dataset by repetition
neu = 10;                        % number of hidden neurons
%% GPU CODE
tic
rng('default');
for Ntrials = 1:3
    net = narnet(FD,neu);
    net.trainFcn = 'trainscg';
    % swap tansig for the cheaper elliotsig transfer function
    for i = 1:net.numLayers
        if strcmp(net.layers{i}.transferFcn,'tansig')
            net.layers{i}.transferFcn = 'elliotsig';
        end
    end
    [Xo,Xoi,Aoi,To] = preparets(net,{},{},T);
    net = configure(net,Xo,To);
    % move inputs, targets and delay states to the GPU as single precision
    Xo  = nndata2gpu(Xo,'single');
    To  = nndata2gpu(To,'single');
    Xoi = nndata2gpu(Xoi,'single');
    Aoi = nndata2gpu(Aoi,'single');
    net = train(net,Xo,To,Xoi,Aoi);
    Yo  = net(Xo,Xoi,Aoi);
    % bring the results back to cell-array form
    Yo  = gpu2nndata(Yo);
    To  = gpu2nndata(To);
    Eo  = gsubtract(To,Yo);
    clear net
    reset(g);
    Ntrials
end
toc
%% Remove some variables so both runs start from the same state
clearvars -except FD T neu
%% CPU CODE
tic
rng('default');
for Ntrials = 1:3
    net = narnet(FD,neu);
    net.trainFcn = 'trainscg';
    for i = 1:net.numLayers
        if strcmp(net.layers{i}.transferFcn,'tansig')
            net.layers{i}.transferFcn = 'elliotsig';
        end
    end
    [Xo,Xoi,Aoi,To] = preparets(net,{},{},T);
    net = configure(net,Xo,To);
    % no nndata2gpu conversion here: the data stays on the CPU
    net = train(net,Xo,To,Xoi,Aoi);
    Yo  = net(Xo,Xoi,Aoi);
    Eo  = gsubtract(To,Yo);
    clear net
    Ntrials
end
toc
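For what it's worth, the documented shortcut for GPU training is the 'useGPU' option of train, which handles the data transfer itself; a minimal sketch of that variant (same FD, T and neu as above; 'showResources' just reports which hardware was actually used) is:

% Same setup as above, but letting train manage the GPU transfer
net = narnet(FD,neu);
net.trainFcn = 'trainscg';        % GPU training is only supported for non-Jacobian methods
[Xo,Xoi,Aoi,To] = preparets(net,{},{},T);
net = configure(net,Xo,To);
net = train(net,Xo,To,Xoi,Aoi,'useGPU','yes','showResources','yes');
Yo  = net(Xo,Xoi,Aoi);

For a network this small, the per-iteration transfer and kernel-launch overhead may still dominate, so this is not guaranteed to be faster than the CPU run.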
  4 comments
Ricardo de Azevedo on 7 Jun 2018
Hi Staffan, I am looking for an answer to the same question and found this thread, which has been helpful:
https://www.mathworks.com/matlabcentral/answers/225961-gpu-training-of-neural-network-with-parallel-computing-toolbox-unreasonably-slow-what-am-i-missing
Did you find a solution to the problem?
Staffan on 7 Jun 2018
Hello Ricardo,
I actually opened a support case for this; the answers I got from MathWorks were nothing to cheer about. After those responses I started looking at TensorFlow, which I find much more appropriate for what I'd like to accomplish. I've worked with univariate time series and did not get the performance from narnet that I expected.
BR Staffan


Answers (0)
