Early stopping method in neural networks, digit recognition
I'd like to implement an early stopping algorithm for a neural network in order to improve the network's digit recognition. My example comes from the Coursera online course "Machine Learning" by Andrew Ng; here is a link to the code from the relevant exercise:
https://github.com/zhouxc/Stanford-Machine-Learning-Course/tree/master/Neural%20network%20learning/mlclass-ex4 (not my GitHub account)
The problem is that I cannot figure out how to modify fmincg.m so that at every epoch it compares the result with the output of the predict.m function, which is what I need to implement early stopping.
I also tried using MATLAB's Neural Network Toolbox:
p = xt;                          % inputs: 3000 x 400
t = yt;                          % targets: 3000 x 1
plot(p, t, 'o')
net = newff(p, t, 25);           % feed-forward net with 25 hidden neurons
y1 = sim(net, p);                % output of the untrained network
plot(p, t, 'o', p, y1, 'x')
net.trainParam.epochs = 50;
net.trainParam.goal = 0.01;
net = train(net, p, t);          % this call raises the error below
y2 = sim(net, p);
plot(p, t, 'o', p, y1, 'x', p, y2, '*')
where p is 3000x400 and t is 3000x1 (originally they had 5000 rows, but I trimmed them to 3000), and here the problem emerges:
"Error using ==> network.train at 145
Targets are incorrectly sized for network.
Matrix must have 400 columns."
Any idea how to deal with that?
Or is anybody able to give me a hint on how to modify fmincg.m to perform early stopping?
Thanks a lot in advance
DC
1 comment
Walter Roberson
on 7 Oct 2013
Is it possible you need to pass in p' rather than p? Is 3000 the number of features or the number of samples?
Accepted Answer
Greg Heath
on 8 Oct 2013
1. Walter is correct. What the error message is trying to say is that t must have the same number of columns as p. Therefore, both p and t must be transposed; i.e., the Neural Network Toolbox (NNTBX) convention (variables in rows, samples in columns) is the transpose of the Statistics Toolbox (STATSTBX) convention. See the sketch after this list.
2. >> help newff
newff Create a feed-forward backpropagation network.
Obsoleted in R2010b NNET 7.0. Last used in R2010a NNET 6.0.4.
The recommended function is feedforwardnet.
3. I recommend fitnet for regression/curve fitting and patternnet for classification/pattern recognition. Each calls feedforwardnet, so the latter never has to be called explicitly.
4. In these functions, early stopping on a validation subset is enabled by default.
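A minimal sketch of points 1, 3, and 4, assuming X is the 3000x400 image matrix and y the 3000x1 label vector (labels 1..10) from the exercise; the names X and y come from the exercise, everything else is illustrative:
% Transpose to the NNTBX convention: variables in rows, samples in columns
inputs  = X';                      % 400 x 3000
targets = full(ind2vec(y'));       % 10 x 3000 one-hot targets
net = patternnet(25);              % 25 hidden neurons; calls feedforwardnet
% Early stopping defaults (shown explicitly): data are split 70/15/15 and
% training stops after net.trainParam.max_fail (default 6) consecutive
% increases in validation error
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
[net, tr] = train(net, inputs, targets);
pred = vec2ind(net(inputs));       % predicted class labels, 1 x 3000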
Hope this helps.
Thank you for formally accepting my answer
Greg
0 comments
More Answers (3)
Greg Heath
on 8 Oct 2013
What does monitoring weights after each iteration have to do with early stopping?
The only way to monitor weights every epoch is to loop over single-epoch trials. The problem with doing this with the newer functions fitnet and feedforwardnet is that the default trainlm parameter mu is reinitialized every time train is called. Therefore, mu has to be saved after each epoch and used to reinitialize mu before train is called again. Designs are successful; however, they are not the same as if you had trained continuously.
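A rough sketch of that loop, assuming fitnet with its default trainlm, and p and t already in the toolbox orientation (columns = samples); the loop bound and variable names are illustrative:
net = fitnet(25);
net.trainParam.epochs = 1;         % one epoch per call to train
net.trainParam.showWindow = false;
mu = net.trainParam.mu;            % default initial mu
for epoch = 1:50
    net.trainParam.mu = mu;        % restore mu so training continues
    [net, tr] = train(net, p, t); % weights carry over inside net
    mu = tr.mu(end);               % tr.mu records mu per epoch for trainlm
    % inspect net.IW, net.LW, net.b, or check prediction error here
end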
I am not sure if patternnet's trainscg reinitializes any parameters every time train is called. I'll check. Meanwhile,
I'm pretty sure newfit and newff use trainlm. Does newpr use trainscg?
Hope this helps.
Thanks for formally accepting my answer
Greg
2 comments
Greg Heath
on 8 Oct 2013
You might want to check the help and/or doc pages of each training function to see whether any of them do not change parameters during training.
Greg Heath
on 8 Oct 2013
I don't see any nonconstant parameters in trainscg. Therefore, you have at least one solution.
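If so, the same single-epoch loop needs no parameter bookkeeping; a short sketch assuming patternnet with its default trainscg (names illustrative):
net = patternnet(25);
net.trainParam.epochs = 1;         % one epoch per call to train
net.trainParam.showWindow = false;
for epoch = 1:50
    net = train(net, p, t);        % no mu-style state to save with trainscg
    % inspect or record weights here
end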