nnstart Neural Net toolbox and validation ROC
Hi,
I have been training neural networks for classification with nnstart. I get perfect training results: 0% false positives (FP) and 100% true positives (TP). The test set performs noticeably worse but is still acceptable; I can usually get up to 60% TP at 40% FP (and sometimes 80% TP). However, the validation set ROC is usually very bad, i.e., random or worse. Can someone help me understand what this means? What does it mean for the validation ROC to be bad while the training ROC is perfect?
P.S. I use the nn classification tool with about 100-200 samples, split by default into 70% training, 15% validation, and 15% test, with the default parameters: scaled conjugate gradient back-propagation, cross-entropy minimization, a hidden layer of 1000 neurons with sigmoid activation, and a softmax output layer.
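For context, a minimal sketch of a setup along these lines (the names x and t are assumptions: x a features-by-samples matrix, t a classes-by-samples one-hot target matrix; the per-subset ROC curves are taken from the training record tr):
% Minimal sketch (assumed names: x is a features-by-samples matrix,
% t a classes-by-samples one-hot target matrix)
net = patternnet(10, 'trainscg');      % one hidden sigmoid layer; size is configurable
net.performFcn = 'crossentropy';       % cross-entropy (the patternnet default)
net.divideFcn  = 'dividerand';         % random split, as nnstart does by default
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

[net, tr] = train(net, x, t);
y = net(x);

% ROC curves per subset, using the indices stored in the training record
plotroc(t(:, tr.trainInd), y(:, tr.trainInd))   % training ROC
plotroc(t(:, tr.valInd),   y(:, tr.valInd))     % validation ROC
plotroc(t(:, tr.testInd),  y(:, tr.testInd))    % test ROC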
Answers (0)