Validation accuracy appears different from the graph in Deep Network Designer

Hi all!
I've been using Deep Network Designer and have noticed that the final reported validation accuracy and validation loss do not seem to match what I would read off the plot. For instance, in the image below I would have expected the final validation accuracy to be about 90% and the loss to be ~2. Instead, it reported 48% and a loss of nearly 8. Can anyone explain why this is happening and why the loss shoots up at the end?

Answers (1)

Jack Xiao on 21 Feb 2021
It is overfitting. Add more training data, add a dropout layer, or reduce the number of network layers.
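
As a rough illustration of the dropout suggestion, here is a minimal sketch of a small image-classification network with a dropoutLayer added and validation data passed to trainingOptions. The layer sizes and the variables XTrain, YTrain, XVal, YVal are placeholders, not taken from the original post; adapt them to the network exported from Deep Network Designer.

% Minimal sketch: add a dropoutLayer to a layer array and monitor validation data.
% XTrain/YTrain/XVal/YVal are assumed placeholder variables.
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    dropoutLayer(0.5)            % randomly zeroes 50% of activations during training
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam', ...
    'ValidationData', {XVal, YVal}, ...   % report validation accuracy/loss during training
    'ValidationFrequency', 30, ...
    'Plots', 'training-progress');

net = trainNetwork(XTrain, YTrain, layers, options);

If the validation loss still climbs while the training loss keeps falling, that gap is the overfitting Jack describes, and reducing layers or adding data are the other levers to try.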


Release

R2020b
