
Mini batch size changing value during gradient descent

2 views (last 30 days)
Arthur CASSOU on 27 Jul 2022
Answered: Shubham on 27 Sep 2023
Hello everyone,
I am currently working on multimodal deep learning, with a neural network classifier receiving two time-dependent inputs: videos and a set of given features. Videos are 4-D arrays of size width × height × depth × frames, and features are 2-D arrays of size number of features × frames.
I've been trying to classify the inputs based on the examples given below, as well as on some of my previous work.
During training, I have come across a very singular situation. The value of the mini-batch size, which I had initially set to 16, was decreased to 9. This produced an error, as the layers were expecting batch sizes of 16 in the dlfeval() function.
I haven't found anything related to this problem here; I was wondering if any of you could offer advice or a solution.
Thank you for your help!

Answers (1)

Shubham
Shubham on 27 Sep 2023
I understand that while training the neural network you found that the mini-batch size, which was initially set to 16, was later decreased to 9.
Please check whether the total number of samples in your dataset is divisible by 16. If it is not, the final mini-batch of each epoch contains only the leftover samples, so its size is reduced automatically to accommodate them. The effective mini-batch size can also be constrained by available memory. Also look for any inconsistencies in data preprocessing or the network architecture.
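A partial final batch is the most common cause. As a quick check, with a hypothetical dataset size of 441 (an assumed figure for illustration, not from the original post), batches of 16 leave a remainder of 9:

```matlab
% Hypothetical example: 441 training samples, mini-batch size 16.
% 441 is not divisible by 16, so the final batch holds only the remainder.
numObservations = 441;   % assumed dataset size for illustration
miniBatchSize   = 16;

numFullBatches = floor(numObservations / miniBatchSize);  % 27 full batches
lastBatchSize  = mod(numObservations, miniBatchSize);     % 9 leftover samples

fprintf('%d full batches of %d, final batch of %d\n', ...
    numFullBatches, miniBatchSize, lastBatchSize);
```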
You may try using a mini-batch datastore to read the data in batches.
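If you are reading data with minibatchqueue (introduced in R2020b, the release tagged on this question), you can also tell it to discard the partial batch so that every batch reaching dlfeval has exactly 16 observations. A minimal sketch, assuming your training data is already in a datastore named ds:

```matlab
% Sketch: force full batches by discarding the leftover partial batch.
% "ds" is assumed to be an existing datastore holding the training data.
mbq = minibatchqueue(ds, ...
    'MiniBatchSize', 16, ...
    'PartialBatch', 'discard');   % drop the final batch of 9 instead of returning it

while hasdata(mbq)
    dlX = next(mbq);   % always a full batch of 16 observations
    % ... gradients = dlfeval(@modelGradients, net, dlX, ...) would go here
end
```

Note that discarding wastes the leftover samples in each epoch; if the datastore is shuffled every epoch, different samples are dropped each time, which mitigates the loss.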
Hope this helps!

Version

R2020b
