CUDA_ERROR_UNKNOWN when using floats instead of double precision
Hello,
I have two identical .cu files: one where the variables are defined as float and another where they are defined as double.
The double-precision version works perfectly when run as a CUDA kernel, whereas the float version does not. I get an error when I gather() the output variables:
Error using gpuArray/gather
An unexpected error occurred during CUDA execution. The CUDA error was: CUDA_ERROR_UNKNOWN
From the documentation, it appears that feval() will automatically cast my input arrays to the correct type. However, I have also tried converting each input/output array to single precision with single(), and I still get a similar error.
Here are the two kernel prototypes:
__global__ void SegForceNBodyCUDA(double const *SoA,
double const a, double const MU, double const NU,
int const S,
double *f0x, double *f0y, double *f0z,
double *f1x, double *f1y, double *f1z);
or
__global__ void SegForceNBodyCUDA(float const *SoA,
float const a, float const MU, float const NU,
int const S,
float *f0x, float *f0y, float *f0z,
float *f1x, float *f1y, float *f1z);
Both .cu files compile correctly without errors/warnings.
Please advise.
Thank you,
Francesco
5 comments
Ben Tordoff
20 Sep 2013
Casting is unlikely to be the issue here - you see the error even when you have forced the types to be correct. CUDA_ERROR_UNKNOWN usually means that the card has crashed, and I've previously seen this when there is a bad memory access (e.g. indexing off the end of an array, or using an invalid pointer).
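As a purely illustrative sketch (a hypothetical kernel, not your code), the classic pattern is launching more threads than there are elements and then letting every thread write without a bounds check:
__global__ void scaleArray(float const *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    // Grids are usually rounded up to a whole number of blocks, so some
    // threads end up with i >= n. Without this guard they read and write
    // past the end of in/out, which often surfaces later as CUDA_ERROR_UNKNOWN.
    if (i < n) {
        out[i] = 2.0f * in[i];
    }
}
If a guard like that is missing, or the computed index itself is wrong, the crash often only shows up when the results are gathered back.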
Unfortunately that means that the problem may be somewhere in your kernel or in the way the output arrays are being handled in CUDAKernel. Without your code it's hard to identify which of those two it might be. If you could supply a minimal example that is sufficient to demonstrate the problem, I might be able to help work out where the problem lies. You can send it to me directly if you can't (or don't want to) post it here.
Finally, about the casting: if the input is plain MATLAB data, it is cast to the correct type and copied to the GPU. If the input is already on the GPU (i.e. it is a gpuArray), the type and complexity must exactly match and a specific error is thrown if they do not. Since you do not see this error, your inputs are being cast for you.
Cheers
Ben
Accepted Answer
Ben Tordoff
25 Sep 2013
Thanks for sending the code.
I’ve done some initial investigation and it looks like you have an illegal memory access somewhere. Here is what cuda-memcheck reports:
Running CUDA Single Precision, Optimised...
warning: Cuda API error detected: cuModuleGetGlobal_v2 returned (0x1f4)
warning: Cuda API error detected: cuModuleGetGlobal_v2 returned (0x1f4)
[Launch of CUDA Kernel 102 (SegForceNBodyCUDA<<<(4,1,1),(256,1,1)>>>) on Device 0]
Memcheck detected an illegal access to address (@local)0xfff830
Program received signal CUDA_EXCEPTION_1, Lane Illegal Address.
[Switching focus to CUDA kernel 102, grid 103, block (0,0,0), thread (5,0,0), device 0, sm 12, warp 2, lane 5]
0x0000000010052d98 in SegForceNBodyCUDA(float const*, float, float, float, int, float*, float*, float*, float*, float*, float*) ()
I couldn't see anything obviously wrong in the kernel, but it's quite a lot of code. The most likely culprit is reading/writing past the end of an input/output array. However, the fact that the illegal memory address includes "@local" may indicate a problem with how data is being passed around internally to the kernel (i.e. in thread-local memory). I don't think there is any problem with the way the kernel is being called by MATLAB.
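Just to illustrate what that can look like (a hypothetical sketch, not taken from your kernel), each thread below uses a small local array, and indexing it one element too far is exactly the kind of bug that cuda-memcheck flags as an illegal access to an "@local" address:
__global__ void gatherTriplets(float const *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float tmp[3];                     // thread-local (stack) storage
    for (int k = 0; k < 3; ++k) {     // a bound of k <= 3 here would write
        tmp[k] = in[3 * i + k];       // to tmp[3], one past the end
    }
    out[i] = tmp[0] + tmp[1] + tmp[2];
}
It may be worth checking any fixed-size local arrays in the kernel against the indices used to fill them.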
Best of luck debugging this - this type of problem can be tricky to isolate.
Ben
More Answers (0)