Training accuracy remains constant and loss keeps decreasing
Hi, I am using a CNN for image classification. From the simulation results, I observe that the mini-batch accuracy reaches 100% after a few epochs and the mini-batch loss keeps decreasing, as shown below:
| Epoch | Iterations | Time Elapsed | Mini-batch Accuracy | Mini-batch Loss | Base Learning Rate |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 |00:00:10 | 43.75% | 8.9676 | 1.0000e-04 |
| 1 | 9 | 00:00:34 | 43.75% | 2.3432 | 1.0000e-04 |
| 2 | 18 | 00:00:57 | 68.75% | 0.4471 | 1.0000e-04 |
| 3 | 27 | 00:01:19 | 56.25% | 0.8530 | 1.0000e-04 |
| 4 | 36 | 00:01:43 | 81.25% | 0.4184 | 1.0000e-04 |
| 5 | 45 | 00:02:08 | 93.75% | 0.3022 | 1.0000e-04 |
| 6 | 54 | 00:02:33 | 81.25% | 0.2594 | 1.0000e-04 |
| 7 | 63 | 00:02:59 | 87.50% | 0.5467 | 1.0000e-04 |
| 8 | 72 | 00:03:23 | 93.75% | 0.1394 | 1.0000e-04 |
| 9 | 81 | 00:03:50 | 100.00% | 0.0409 | 1.0000e-04 |
| 10 | 90 | 00:04:14 | 100.00% | 0.0920 | 1.0000e-04 |
| 11 | 99 | 00:04:38 | 100.00% | 0.0318 | 1.0000e-04 |
| 12 | 108 | 00:05:02 | 93.75% | 0.1280 | 1.0000e-04 |
| 13 | 117 | 00:05:26 | 93.75% | 0.1724 | 1.0000e-04 |
| 14 | 126 | 00:05:50 | 87.50% | 0.2529 | 1.0000e-04 |
| 15 | 135 | 00:06:14 | 100.00% | 0.0252 | 1.0000e-04 |
| 16 | 144 | 00:06:38 | 100.00% | 0.0362 | 1.0000e-04 |
| 17 | 153 | 00:07:04 | 100.00% | 0.0345 | 1.0000e-04 |
| 18 | 162 | 00:07:28 | 100.00% | 0.0400 | 1.0000e-04 |
| 19 | 171 | 00:07:53 | 100.00% | 0.0552 | 1.0000e-04 |
| 20 | 180 | 00:08:17 | 100.00% | 0.0704 | 1.0000e-04 |
Is it correct to get such results, or am I doing something wrong?
What is the reason behind these values?
Answers (1)
Divyam Gupta
16 Jun 2021
Hi,
I notice that your loss fluctuates considerably after the 6th epoch of training, while the accuracy stagnates for several epochs. There could be multiple reasons for this, including a high learning rate or outliers in the training data. You could try reducing the learning rate, or progressively scaling it down during training with the 'LearnRateSchedule' option of trainingOptions.
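As a minimal sketch of such a schedule (the SGDM solver, the names layers and imdsTrain, and the drop factor/period are placeholders for your own setup and values you would tune):

```matlab
% Sketch: halve the learning rate every 5 epochs via a 'piecewise' schedule.
% Solver, layer array, and datastore names stand in for your own setup.
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...
    'LearnRateSchedule', 'piecewise', ...  % scale the rate down during training
    'LearnRateDropFactor', 0.5, ...        % multiply the rate by 0.5 at each drop
    'LearnRateDropPeriod', 5, ...          % drop every 5 epochs
    'MaxEpochs', 20, ...
    'Verbose', true);

net = trainNetwork(imdsTrain, layers, options);
```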
Additionally, you could stop the training process early: if the accuracy doesn't change for a few epochs, training halts, which prevents further fluctuations. The following page shows how to achieve this with a custom output function: https://www.mathworks.com/help/deeplearning/ug/customize-output-during-deep-learning-training.html
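A sketch of such an output function, modeled loosely on the example on that page (the helper name stopIfNotImproving and the patience value are illustrative, and 'ValidationData' must be set in trainingOptions for info.ValidationLoss to be populated):

```matlab
function stop = stopIfNotImproving(info, patience)
% Illustrative output function: stop when the validation loss has not
% improved for 'patience' consecutive validation evaluations. Requires
% 'ValidationData' in trainingOptions; otherwise info.ValidationLoss
% stays empty and training never stops early.
persistent bestLoss numFails
stop = false;
if info.State == "start"
    bestLoss = Inf;
    numFails = 0;
elseif ~isempty(info.ValidationLoss)
    if info.ValidationLoss < bestLoss
        bestLoss = info.ValidationLoss;
        numFails = 0;
    else
        numFails = numFails + 1;
        stop = numFails >= patience;  % returning true halts training
    end
end
end
```

You would pass it via the 'OutputFcn' option, e.g. trainingOptions(..., 'OutputFcn', @(info) stopIfNotImproving(info, 5)).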
Hope this helps.
2 comments
Philip Brown
21 Jun 2021
The mini-batch loss is the loss computed on just a single mini-batch (which might be a tiny fraction of your dataset), so it's not too surprising that the loss and accuracy fluctuate between mini-batches. Each mini-batch contains only a small number of images; in your table the accuracy moves in steps of 6.25%, i.e. 1/16, which suggests a mini-batch of 16 images, so a single misclassified image shifts the accuracy noticeably.
Given that you are getting 100% accuracy on your training set after a small number of iterations, it would be valuable to check whether your network overfits on new data it hasn't seen before. Take a look at the section "Specify Validation Data" on this doc page for more info.
I agree with Divyam's suggestion that you might want to stop training earlier once the loss stops decreasing. Take a look at the ValidationPatience option in trainingOptions for details, as in the sketch below.
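A hedged sketch of both ideas together, assuming an imageDatastore named imds and your existing layers array (both placeholders):

```matlab
% Sketch: hold out 20% of the images for validation and stop automatically
% when the validation loss stops improving. Names (imds, layers) are placeholders.
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.8, 'randomized');

options = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...
    'MaxEpochs', 20, ...
    'ValidationData', imdsVal, ...
    'ValidationFrequency', 9, ...   % about once per epoch, given the 9 iterations/epoch above
    'ValidationPatience', 5);       % stop after 5 validations without a lower loss

net = trainNetwork(imdsTrain, layers, options);
```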