Neural network training terminated prematurely

tathagata chakraborti
tathagata chakraborti on 26 Sep 2011
Commented: Greg Heath on 25 Mar 2014
Hi, I am trying to learn the Neural Network Toolbox from the help section. I have a problem with the following code given in the tutorial.
p = [0 1 2 3 4 5 6 7 8];                             % inputs
t = [0 0.84 0.91 0.14 -0.77 -0.96 -0.28 0.66 0.99];  % targets
net = newff(p,t,10);             % feedforward net with 10 hidden neurons
y1 = sim(net,p)                  % output before training
net.trainParam.epochs = 50;      % maximum number of epochs
net.trainParam.goal = 0.01;      % performance (MSE) goal
net = train(net,p,t);
y2 = sim(net,p)                  % output after training
plot(p,t,'o',p,y1,'x',p,y2,'*')
The problem is that the output doesn't match the target when the same input is given. I have decreased the goal to 0.0001, with the result that training now stops before any of the stopping conditions are met (none of the bars in the nntraintool window is filled) while significant error remains. Isn't training supposed to give results identical to the target if the training data itself is given as input after training is complete? And why does training stop before any of the stopping conditions have been met? Thanks.
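One way to see why training halts before the epoch or goal bars fill is to capture the training record that train returns; with the default random data division, training often ends on a validation stop rather than on the epoch or goal limit. A minimal sketch (field names follow the documented training record, but check them against your toolbox version):

```matlab
% Sketch: capture the training record to inspect the stop reason.
[net,tr] = train(net,p,t);   % tr is the training record
tr.stop                      % why training stopped, e.g. 'Validation stop.'
tr.best_epoch                % epoch with the lowest validation error
```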

Accepted Answer

tathagata chakraborti
tathagata chakraborti on 26 Sep 2011
I think the fact that only 60% of the data are used for training led to the mismatch; changing the default 'dividerand' settings has resulted in a 100% match. :)
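For reference, the change described above can be sketched as follows; 'dividetrain' and the divideParam ratios are documented toolbox settings, though the exact defaults vary by version:

```matlab
% Sketch: use all of p/t for training instead of the default random
% train/validation/test split ('dividerand').
net = newff(p,t,10);
net.divideFcn = 'dividetrain';   % assign 100% of the data to training
% Alternatively, keep 'dividerand' but change the ratios:
% net.divideParam.trainRatio = 1;
% net.divideParam.valRatio   = 0;
% net.divideParam.testRatio  = 0;
net = train(net,p,t);
```

With no validation set, early stopping on validation error can no longer occur, so training runs until the epoch or goal limit is reached.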

More Answers (1)

Greg Heath
Greg Heath on 25 Mar 2014
Edited: Greg Heath on 25 Mar 2014
1. y2 ~= y1 because the network was trained in between.
2. In a repeat run, y3 ~= y1 and y4 ~= y2 because newff creates random initial weights. You have to initialize the RNG to the same initial state in order to duplicate a design.
3. Unfortunate choices of random initial weights are not uncommon. That is why I always design 10 nets for each candidate value of H, the number of hidden nodes.
4. Search on
greg Ntrials
for examples of my double-loop designs.
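The RNG seeding and multiple-trial ideas above can be sketched as follows (a simplified single-loop version, assuming a fixed H = 10; the variable names Ntrials, bestmse, and bestnet are illustrative):

```matlab
% Sketch: fix the RNG state for reproducibility, then try several random
% initializations and keep the net with the lowest training MSE.
rng(0)                          % same seed => same sequence of initial weights
Ntrials = 10;
bestmse = Inf;
for i = 1:Ntrials
    net = newff(p,t,10);        % new random initial weights each trial
    net = train(net,p,t);
    e = t - sim(net,p);         % errors on the training data
    if mse(e) < bestmse
        bestmse = mse(e);
        bestnet = net;
    end
end
```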
Hope this helps.
Greg
1 comment
Greg Heath
Greg Heath on 25 Mar 2014
The cause is probably an unfortunate set of random initial weights.
