Why do NaNs appear in the mini-batch loss and mini-batch RMSE when training a convolutional neural network for regression?
I used the same code steps as in the following example, but modified them for my own work:
https://www.mathworks.com/help/nnet/examples/train-a-convolutional-neural-network-for-regression.html
traindata=rtrain_csiq;
Y = rscore;
testdata=utest_csiq;
layers = [ ...
imageInputLayer([256 256 1])
convolution2dLayer(12,25)
reluLayer
fullyConnectedLayer(1)
regressionLayer];
options = trainingOptions('sgdm','InitialLearnRate',0.001,'MaxEpochs',15);
net = trainNetwork(traindata,Y,layers,options)
predictedTest = predict(net,testdata);
but the output is as follows:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/166105/image.png)
Please, how can I solve this? Thanks.
Answers (1)
Amy
on 31 Aug 2017
Hi Ismail,
Sometimes this can happen if your data includes many regressors and/or large regression response values. These lead to large losses that can overflow into NaNs.
Two possible solutions:
- Try a lower initial learning rate.
- Normalize the responses (the variable Y in your example) so that the maximum value is 1. You can use the normc function to do this.
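The two suggestions might look like this in code (a minimal sketch based on the question's variables; the simple max-scaling and the 0.0001 learning rate are illustrative assumptions, not values from the original post):

```matlab
% Scale the responses so the largest magnitude is 1 (assumption: plain
% max-scaling; normc, which normalizes columns to unit norm, is an alternative).
Yscaled = Y / max(abs(Y));

% Try a lower initial learning rate, e.g. 0.0001 instead of 0.001 (illustrative).
options = trainingOptions('sgdm', ...
    'InitialLearnRate',0.0001, ...
    'MaxEpochs',15);

net = trainNetwork(traindata,Yscaled,layers,options);

% Undo the scaling when interpreting the predictions:
predictedTest = predict(net,testdata) * max(abs(Y));
```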
2 comments
AlexanderTUE
on 4 Sept 2017
Hi Amy, hi Ismail,
I had a similar problem in the past. It seems that a single convolutional layer is not enough for image sizes this large. I used three convolutional layers with explicitly initialized weights. Please see the following Q&A: https://de.mathworks.com/matlabcentral/answers/337587-how-to-avoid-nan-in-the-mini-batch-loss-from-traning-convolutional-neural-network
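A deeper stack along these lines might look like the sketch below (filter sizes, filter counts, pooling layers, and the 0.01 weight scale are assumptions for illustration, not values from the linked thread):

```matlab
% Three convolutional layers with explicitly initialized small random weights.
% Weights are FilterSize-by-FilterSize-by-NumChannels-by-NumFilters.
conv1 = convolution2dLayer(5,16,'Padding',2);
conv1.Weights = 0.01*randn([5 5 1 16]);    % input has 1 channel

conv2 = convolution2dLayer(5,32,'Padding',2);
conv2.Weights = 0.01*randn([5 5 16 32]);   % 16 channels from conv1

conv3 = convolution2dLayer(5,64,'Padding',2);
conv3.Weights = 0.01*randn([5 5 32 64]);   % 32 channels from conv2

layers = [ ...
    imageInputLayer([256 256 1])
    conv1
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    conv2
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    conv3
    reluLayer
    fullyConnectedLayer(1)
    regressionLayer];
```

The pooling layers shrink the spatial dimensions between convolutions, which also keeps the final fully connected layer from becoming excessively large for 256×256 inputs.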
Alex