3-D Brain Tumor Segmentation ERROR

Hi all,
I'm getting the following error running the 3-D Brain Tumor Segmentation example:
Error using trainNetwork (line 165)
Incorrect loss type returned by 'forwardLoss' in the output layer. Expected to be 'single', but instead was 'gpuArray'.
The output layer is dicePixelClassification3dLayer, so it seems like there's some kind of error in the definition of that custom layer. Converting the outputs of the layer's methods to single didn't help; same error.
If it's helpful, I'm using R2019a on Windows 10 with an i9-9900K and a 1080 Ti.
Any help?
Thanks

6 comments

Joss Knight on 24 Aug 2019
You were also getting the "GPU low on memory" warning, yes? This is an artifact of custom layers and low memory, and is best fixed by reducing the memory load (reduce the mini-batch size or the patches per image).
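For reference, the "patches per image" knob in the example lives in the patch-extraction datastore. A minimal sketch, assuming variable names (volds, pxds) and a patch size similar to the published example; adapt to your script:

```matlab
% Reduce 'PatchesPerImage' to lower GPU memory pressure per epoch.
% volds/pxds are the image and label volume datastores from the example;
% the patch size here is illustrative.
patchds = randomPatchExtractionDatastore(volds, pxds, [64 64 64], ...
    'PatchesPerImage', 4);   % example default is higher; smaller = less memory
```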
Roberto on 24 Aug 2019
Yes! I also get that warning. And changing the minibatch size from 8 to 2 fixed the problem.
But now I have this strange behaviour:
[Attached screenshot: U-Net.jpg (training progress plot)]
Just out of curiosity, isn't a 1080 Ti really enough to train U-Net on the BRATS dataset?
Thanks
Joss Knight on 25 Aug 2019
My experience with the brain segmentation example was that it is a good demonstration of how to get started with U-Net, but it does need some tuning of hyperparameters and data augmentation. Also remember that after reducing the mini-batch size, the network doesn't have much to go on when determining the accuracy of each iteration, especially given that this is a binary classification. Out of a handful of sample patches, sometimes (often?) they'll all be classified correctly even when the network hasn't successfully converged. You're better off focusing on the validation accuracy (which itself needs some tuning) and the loss. I think the high-looking loss is an artifact of the custom Dice classification layer.
This issue with the error will be fixed in a future release.
Joss Knight on 25 Aug 2019
Edited: Joss Knight on 25 Aug 2019
By the way, I believe you can also fix this by converting the output of the forwardLoss method of the dicePixelClassificationLayer from gpuArray back to ordinary arrays using gather. Something like
loss = gather(loss);
should do the trick.
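Applied inside the custom layer, that change would look roughly like this. A sketch only: the class and method names follow the custom classification layer pattern, but the Dice computation is elided and diceLoss is a hypothetical helper standing in for the example's actual loss code:

```matlab
classdef dicePixelClassification3dLayer < nnet.layer.ClassificationLayer
    methods
        function loss = forwardLoss(layer, Y, T)
            % ... compute the generalized Dice loss from predictions Y
            %     and targets T (as in the example's layer) ...
            loss = diceLoss(Y, T);   % hypothetical helper

            % trainNetwork expects a host-side 'single' scalar here, so
            % pull the result back from the GPU before returning it:
            loss = gather(loss);
        end
    end
end
```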
Roberto on 25 Aug 2019
Edited: Roberto on 25 Aug 2019
Unfortunately loss = gather(loss); doesn't work.
Also, reducing patches per image only fixes the low-on-memory warning, not the type error in the output layer.
Only reducing the mini-batch size seems to work, but the resulting training seems definitely not effective.
Thank you for your help. I hope that r2019b will fix this issue.
Joss Knight on 26 Aug 2019
The type problem and the low memory warning go together. If you're getting the low memory warning, you have a real problem that needs addressing, otherwise training performance will be seriously affected. The fix for the type error will allow training to continue, but you'll still need to deal with the low memory issue.


Accepted Answer

Raunak Gupta on 29 Aug 2019
Hi,
The issue here stems from low GPU memory. That said, the following error should not be thrown. I have heard that this is a known issue and that it is being investigated further.
'Conversion to single from gpuArray is not possible'
A possible workaround:
The attached patch will fix the error mentioned above, in the sense that the above error message will not be thrown if the training process errors out due to low GPU memory. Follow the steps below to apply the patch.
1) Save the attached zip file to your $MATLABROOT folder. The $MATLABROOT folder can be determined by running the matlabroot command.
2) Using MATLAB, navigate to your $MATLABROOT folder by executing cd(matlabroot) in the MATLAB Command Window.
3) Execute the following command to unzip the file:
unzip('crop2dLayerPatch_18a_04_03_2018.zip')
You might need to start MATLAB as an administrator for this step.
4) Exit MATLAB and restart.
5) Execute the following command:
rehash toolboxcache
The GPU was running out of memory because the mini-batch size was large and used up a lot of GPU memory. This value can be lowered by setting the "MiniBatchSize" parameter in the options passed to the "trainNetwork" function. I suggest you start with 2 and, if the problem doesn't recur, increase the size in powers of 2.
The above should remove the error that is coming in trainNetwork.
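A minimal sketch of that suggestion. The variable names patchds and lgraph stand in for the datastore and layer graph built earlier in the example script; other training options are elided and the values are illustrative:

```matlab
% Lower 'MiniBatchSize' to reduce GPU memory use; start at 2 and
% double it only while no low-memory warning appears.
options = trainingOptions('adam', ...
    'MiniBatchSize', 2, ...
    'MaxEpochs', 50);

net = trainNetwork(patchds, lgraph, options);  % patchds/lgraph as in the example
```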

5 comments

Roberto on 29 Aug 2019
Hi Raunak,
thank you for your very detailed explanation.
Unfortunately the fix does not seem to work.
After the procedure you proposed, when I reduce MiniBatchSize I get no errors, but the training is definitely not converging.
If I set MiniBatchSize = 8 I no longer get the "low on memory" warnings, but I still get:
Error using trainNetwork (line 165)
Incorrect loss type returned by 'forwardLoss' in the output layer. Expected to be 'single', but instead was 'gpuArray'.
Any other ideas?
Thanks
Raunak Gupta on 29 Aug 2019
Hi,
The error you are getting is caused by low GPU memory, but this "type" error message is a known issue and the team is working on it. Meanwhile, I would suggest increasing the available GPU memory if you want to use a larger MiniBatchSize.
Roberto on 29 Aug 2019
OK, thanks.
Any suggestions about the memory size needed? I'm asking because the 1080 Ti already has 11 GB of memory...
Raunak Gupta on 29 Aug 2019
Hi,
3-D volumes are very large, which is why the memory requirement is generally high. To use a larger MiniBatchSize you have to increase the GPU memory proportionally.
Hope this helps.
Shubham Baisthakur on 14 Nov 2023
Hello there,
Just wondering if this problem has been fixed? I am getting the same error with my custom layer defined for sequence-to-sequence LSTM regression, but I am not getting any "GPU low on memory" warning. I am working in MATLAB R2022b.
Thanks


More Answers (0)


Asked: 24 Aug 2019
Last comment: 14 Nov 2023
