How do I reduce the memory footprint caused by a GPU array?
Michael
on 2 Dec 2013
Commented: Joss Knight
on 9 Dec 2013
Hello,
I am finding that creating a GPU array causes a huge spike in MATLAB's memory usage. On opening MATLAB, top reports:
20309 mcoughli 20 0 4620m 172m 66m S 0.0 0.0 0:02.85 MATLAB
i.e. a virtual size (VIRT) of approximately 4.6 GB. When I then create a gpuArray from the command line:
>> gpuArray(1);
The virtual size spikes dramatically:
20309 mcoughli 20 0 537g 605m 255m S 0.0 0.1 2:07.06 MATLAB
i.e. to approximately 537 GB of virtual address space, while resident memory (RES) only grows to about 605 MB.
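A minimal way to watch the two numbers side by side from within MATLAB (Linux only; this sketch relies on the undocumented feature('getpid') call):
% Print MATLAB's virtual and resident size from /proc (Linux only).
pid = feature('getpid');   % undocumented, returns MATLAB's own PID
vm  = @() system(sprintf('grep -E "VmSize|VmRSS" /proc/%d/status', pid));
vm();             % before any GPU call
g = gpuArray(1);  % the first gpuArray call loads the GPU support libraries
vm();             % VmSize jumps enormously; VmRSS grows far less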
Does anyone understand why this happens, and whether it can be prevented? It causes problems when I try to run on smaller compute nodes. Running ulimit -v beforehand helps to some extent, but it is harder to apply when launching parallel processes.
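For the parallel case, each worker is its own OS process with its own limits, so the same /proc check can be run per worker; a sketch, assuming an open pool from Parallel Computing Toolbox:
spmd   % runs the body on every worker in the current pool
    pid = feature('getpid');   % undocumented, returns the worker's PID
    system(sprintf('grep VmSize /proc/%d/status', pid));
end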
Thank you,
Michael
0 comments
Accepted Answer
Edric Ellis
on 3 Dec 2013
This is unfortunately likely to be due to loading all the GPU support libraries. These are quite large, and all get loaded when you first create a gpuArray. I'm afraid there's no workaround for this.
2 comments
Joss Knight
on 9 Dec 2013
Have a look at installdir/bin/arch and list the contents by size; you'll see some obvious GPU libraries near the top, e.g. npp, cublas, and cufft. To get good performance, GPU runtime code has to be highly specialized, which means there are multiple implementations for every use case; add to that the overhead of supporting multiple compute architectures.
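That check can be scripted from within MATLAB; a sketch, assuming a 64-bit Linux install where arch is glnxa64:
% List the ten largest shared libraries under matlabroot/bin/<arch>.
d = dir(fullfile(matlabroot, 'bin', 'glnxa64', '*.so*'));
[~, idx] = sort([d.bytes], 'descend');
for k = idx(1:min(10, numel(idx)))
    fprintf('%8.1f MB  %s\n', d(k).bytes / 2^20, d(k).name);
end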
More Answers (0)