Arrayfun/gpuArray CUDA kernel needs to be able to remember previous steps
Background
- The problem can be separated into a large number of independent sub-problems.
- All sub-problems share the same matrix parameters.
- Each sub-problem needs to remember the indices it has visited so far.
- The goal is to process the sub-problems in parallel on the GPU.
Array indexing and memory allocation are not supported inside the arrayfun kernel. Is it possible to implement this kind of function?
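To make the requirement concrete, here is a hypothetical plain-CPU sketch of the pattern described above; the data, update rule, and final reduction are placeholders, not the actual algorithm. Each sub-problem keeps a per-problem array of visited indices, which is exactly the allocation and indexed assignment that a GPU arrayfun kernel does not allow.

A = rand(32, 1);                       % shared matrix parameters (assumed)
nProblems = 1000;                      % number of independent sub-problems
results = zeros(nProblems, 1);
for p = 1:nProblems
    idx = mod(p, 32) + 1;              % starting index for this sub-problem
    visited = zeros(1, 20);            % per-problem memory of visited indices
    for k = 1:20
        visited(k) = idx;              % indexed assignment: fine on the CPU
        idx = mod(idx*7 + 3, 32) + 1;  % toy deterministic update rule
    end
    results(p) = sum(A(visited));      % use the remembered indices
end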
Answers (1)
Joss Knight
on 29 Mar 2024
This is a bit too vague to answer. Without indexing, how can each sub-problem retrieve its subset of the data? If you just mean that indexed assignment is not allowed, then sure, you could perhaps write an arrayfun that solves some independent problem for a subset of an array, as long as all the operations are scalar and the output is scalar. Not if the sub-problems are completely different algorithms, though.
Anyway, sorry, but not enough information to help.
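To illustrate the scalar-only pattern the answer describes, here is a minimal sketch rather than the asker's actual algorithm: each sub-problem walks over positions 1..32 and remembers which ones it has visited by setting bits of a uint32 scalar, so no array allocation or indexed assignment happens inside the kernel. The starting values, step count, and update rule are assumptions made for the sake of the example.

function demoVisitedBitmask
    % One starting value per independent sub-problem (placeholder data).
    startIdx = gpuArray(uint32(1:1000));
    % arrayfun compiles walkOne into a single GPU kernel; the output is
    % one uint32 bitmask per sub-problem recording the visited positions.
    visitedMask = arrayfun(@walkOne, startIdx);
    disp(gather(visitedMask(1:5)));
end

function mask = walkOne(s)
    % Everything here is scalar, as GPU arrayfun requires.
    mask = uint32(0);
    idx  = mod(s, uint32(32)) + 1;                % current position, 1..32
    for k = 1:20                                  % fixed trip count, no allocation
        mask = bitor(mask, bitshift(uint32(1), double(idx) - 1));  % mark visited
        idx  = mod(idx*7 + 3, uint32(32)) + 1;    % toy deterministic update
    end
end

If more than 32 positions need to be remembered, the same idea extends to several uint32 masks or a uint64, but the per-problem state must remain a fixed, scalar-sized quantity for the kernel to compile.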
See Also
Categories
Find more on Matrix Indexing in Help Center and File Exchange.