How can I train a neural network on Big Data (30k samples) for a fitting problem? How do I set the mini-batch size?
1 view (last 30 days)
Show older comments
Hi
I have an input vector of 518 numbers and an output of 20 numbers, with 30 thousand samples. I find that Bayesian regularization gives good performance, but training on this many samples is too slow.
Is there any way to solve this problem?
I guess using the Deep Learning Toolbox and setting a mini-batch could help, but I do not know how to do this.
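For reference, the shallow-network fitting workflow described above might look like the following sketch (the hidden layer size and the random data are illustrative; the post does not specify them):

```matlab
% Sketch of the current approach: shallow fitting network trained with
% Bayesian regularization ('trainbr'), which runs full-batch and is
% slow on 30k samples.
x = rand(518, 30000);   % inputs: 518 features x 30k samples (placeholder data)
t = rand(20, 30000);    % targets: 20 outputs per sample (placeholder data)

net = fitnet(50, 'trainbr');   % 50 hidden units is an assumed value
net = train(net, x, t);        % full-batch training; slow at this scale
```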
0 comments
Answers (1)
Prateek Rai
on 29 Jul 2021
To my understanding, you are using Bayesian regularization and want to set a mini-batch size to speed up training. You can set the mini-batch size with the 'MiniBatchSize' name-value pair argument of the 'trainingOptions' function in Deep Learning Toolbox. You can also set the maximum number of epochs and the data-shuffling behavior with the 'MaxEpochs' and 'Shuffle' name-value pair arguments.
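A minimal sketch of that mini-batch workflow in Deep Learning Toolbox follows; the layer sizes, solver, and option values are illustrative assumptions, not taken from the original post:

```matlab
% Sketch: regression network for 518 inputs / 20 outputs, trained in
% mini-batches via trainingOptions (assumed architecture and settings).
XTrain = rand(30000, 518);   % 30k samples x 518 features (placeholder data)
YTrain = rand(30000, 20);    % 20 responses per sample (placeholder data)

layers = [
    featureInputLayer(518)
    fullyConnectedLayer(100)   % hidden layer size is an assumed value
    reluLayer
    fullyConnectedLayer(20)
    regressionLayer];

options = trainingOptions('adam', ...
    'MiniBatchSize', 256, ...       % samples processed per iteration
    'MaxEpochs', 30, ...            % passes over the full data set
    'Shuffle', 'every-epoch', ...   % reshuffle data each epoch
    'Plots', 'training-progress');

net = trainNetwork(XTrain, YTrain, layers, options);
```

With mini-batches of 256, each iteration touches only a fraction of the 30k samples, which is what makes training tractable at this scale.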
Please refer to the trainingOptions MathWorks documentation page for more on the trainingOptions function. You can also refer to the Deep Learning Using Bayesian Optimization MathWorks documentation page to learn more about applying Bayesian optimization to deep learning.
0 comments