Speed up 'dlgradient' with parallelism?

Evan Scope Crafts on 11 Apr 2021
Commented: Luis Hernandez on 14 Nov 2023
Hi all,
I am wondering if there is a way to speed up the 'dlgradient' function evaluation using parallelism or GPUs.

Answers (1)

Jon Cherrie on 12 Apr 2021
You can use a GPU for the dlgradient computation by using a gpuArray with dlarray.
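A minimal sketch of this pattern (the variable names and the toy loss function are illustrative, not from the original answer):

```matlab
% Put the data on the GPU first, then wrap it in dlarray so that
% dlgradient can trace operations on it.
X = gpuArray(single(rand(10, 100)));   % illustrative input batch
dlX = dlarray(X, "CB");                % label dimensions: channel x batch

% Evaluate the loss and its gradient. Because dlX is gpuArray-backed,
% both the forward and backward passes run on the GPU.
[loss, gradients] = dlfeval(@modelLoss, dlX);

function [loss, gradients] = modelLoss(dlX)
    loss = sum(dlX.^2, "all");         % toy loss standing in for a network
    gradients = dlgradient(loss, dlX); % must be called inside dlfeval
end
```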
In this example, minibatchqueue puts the data onto the GPU, so the GPU is used for the rest of the computation: both the "forward" pass and the "backward" (gradient) pass.
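The minibatchqueue approach can be sketched as follows (the datastore contents and mini-batch format are assumptions for illustration):

```matlab
% Build a queue that delivers each mini-batch as a dlarray already on
% the GPU (OutputEnvironment "gpu"), so subsequent dlfeval/dlgradient
% calls run there automatically.
ds = arrayDatastore(rand(28, 28, 1, 256, "single"), IterationDimension=4);
mbq = minibatchqueue(ds, ...
    MiniBatchSize=32, ...
    MiniBatchFormat="SSCB", ...        % spatial, spatial, channel, batch
    OutputEnvironment="gpu");

while hasdata(mbq)
    dlX = next(mbq);                   % gpuArray-backed dlarray
    % [loss, grad] = dlfeval(@modelLoss, dlX);  % gradient computed on GPU
end
```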
  1 comment
Luis Hernandez on 14 Nov 2023
Hello.
I've been trying to use the functions 'dlgradient' and 'dlfeval' with gpuArray inputs so that MATLAB will use my GPU. Unfortunately, they only work when I pass dlarray inputs.
What is the workaround for this? What is minibatchqueue doing that allows you to work with gpuArray?
Thanks!
-L


