Make a double loop run on the GPU
Hi
I have the following code that I want to run somehow on the GPU.
In short, I have a 2D array (an MRI image). Starting from a user-defined centre point, the image is divided into concentric radial "rings" of a specified width (e.g. 0.1 mm), and the total number of pixels and the total signal intensity (pixel value) in each ring are recorded in separate output arrays (called Profile and SumBinSignal). The code below does this using 2 for loops.
for X = Xmin:1:Xmax
    for Y = Ymin:1:Ymax
        if ~isnan(Image(Y,X))  % NaN pixels are just ignored
            Dist = Resolution * ( (X-CentX)^2 + (Y-CentY)^2 )^0.5;  % distance of point to centre, in mm
            TargetBin = ceil(Dist * BinDensity);                    % convert distance from mm to "bins/rings"
            if (TargetBin >= 1) && (TargetBin <= MaxTargetBin)
                Profile(TargetBin) = Profile(TargetBin) + 1;        % record number of pixels in bin/ring
                SumBinSignal(TargetBin) = SumBinSignal(TargetBin) + Image(Y,X);  % record total signal in bin/ring
            end
        end
    end
end
Now I know that in order to run on the GPU, I must either vectorize the calculations or use "arrayfun". But I really can't manage to do either. I managed to vectorize the calculations only partly, with no great speed advantage, and with the arrayfun approach I don't even know how to start.
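For reference, here is a minimal vectorized sketch of the kind of thing that usually works on a gpuArray. It is an untested assumption of how the pieces fit together, not a drop-in replacement: it assumes Image, Resolution, CentX, CentY, BinDensity, MaxTargetBin and the loop limits are defined as in the loop above, computes every pixel's bin index in one shot, and lets accumarray do the per-ring counting and summing (accumarray accepts gpuArray inputs in recent releases; if yours complains, gather the bin indices first).
Img    = gpuArray(Image(Ymin:Ymax, Xmin:Xmax));                 % region of interest on the GPU
[X, Y] = meshgrid(gpuArray(Xmin:Xmax), gpuArray(Ymin:Ymax));    % absolute pixel coordinates

Dist      = Resolution * sqrt((X - CentX).^2 + (Y - CentY).^2); % distance to centre, in mm, per pixel
TargetBin = ceil(Dist * BinDensity);                            % ring/bin index per pixel

valid = ~isnan(Img) & TargetBin >= 1 & TargetBin <= MaxTargetBin; % drop NaNs and out-of-range bins
bins  = TargetBin(valid);                                         % bin index of every kept pixel

Profile      = gather(accumarray(bins, ones(size(bins),'like',bins), [MaxTargetBin 1])); % pixel count per ring
SumBinSignal = gather(accumarray(bins, Img(valid),                   [MaxTargetBin 1])); % summed signal per ring
Note that the arrayfun route would only replace the element-wise part (the distance and the ceil), because a GPU arrayfun kernel cannot scatter-write into shared output bins; you would still need accumarray (or histcounts for the counts) for the accumulation, so the fully vectorized form above is usually the simpler way to go.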
Any help would be highly appreciated.
best regards, Rozh