How to speed up code using GPU?

khan
khan on 10 Apr 2015
Commented: Greg Heath on 20 Apr 2015
Hi all, I have a general question. I have a neural network whose input is 80x60x13x2000.
In the current setup I take one sample (80x60x13) at a time and process it through to the final output: after the first hidden layer it becomes 76x56x11x3, after the second 38x28x9x3, and after the third 34x24x7x3.
Can anybody tell me how to use the GPU at the first and third layers so that they run faster? Previously I converted all the data to gpuArray, but performance got worse.
Can anybody guide me on how to utilize it better?
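For context, converting data to gpuArray usually only pays off when whole batches are kept on the device, so that host-to-device transfers happen once rather than per sample. A minimal sketch of that pattern (the sizes match the question; the 5x5 averaging kernel and batch size are hypothetical, just to illustrate keeping the work on the GPU):

```matlab
% Hypothetical sketch: process a large batch on the GPU, transferring
% to and from device memory only once per batch (requires the
% Parallel Computing Toolbox).
X = rand(80, 60, 13, 2000, 'single');    % example input, single precision

Xg    = gpuArray(X);                     % ONE host-to-device transfer
batch = Xg(:, :, :, 1:500);              % slice a batch on the device

% Operations stay on the GPU as long as their inputs are gpuArrays;
% e.g. an illustrative 5x5 'valid' convolution per channel, which
% maps 80x60 down to 76x56 as in the first layer of the question:
k   = gpuArray.ones(5, 5, 'single') / 25;
out = zeros(76, 56, 13, 500, 'single', 'gpuArray');
for c = 1:13
    for n = 1:500
        out(:, :, c, n) = conv2(batch(:, :, c, n), k, 'valid');
    end
end

result = gather(out);                    % ONE device-to-host transfer at the end
```

The per-sample pattern in the question, by contrast, pays the transfer cost 2000 times, which is typically why adding gpuArray made things slower.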
With Best Regards
khan
  1 comment
Greg Heath
Greg Heath on 20 Apr 2015
Sizes of inputs, targets and outputs are 2-dimensional. I have no idea how your description relates to 2-D matrix signals and a hidden layer net topology.
Typically,
[ I N ] = size(input)
[ O N ] = size(target)
[ O N ] = size(output)
The corresponding node topology is
I-H-O for a single hidden layer
I-H1-H2-O for a double hidden layer
Please try to explain your problem in these terms.
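In that convention, the dimensions work out as follows (a hypothetical example; the feature, output, and hidden-node counts are made up for illustration):

```matlab
% Hypothetical example of the 2-D convention described above:
% N samples are stored as columns of 2-D matrices.
input  = rand(10, 2000);    % I = 10 input features, N = 2000 samples
target = rand(3, 2000);     % O = 3 output targets,  N = 2000 samples

[I, N] = size(input)        % I = 10, N = 2000
[O, N] = size(target)       % O = 3,  N = 2000

% A single-hidden-layer net with, say, H = 25 hidden nodes then has
% node topology I-H-O, i.e. 10-25-3.
```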


Answers (0)


