How to import weights and biases in nntool

6 views (last 30 days)
Niculai Traian on 25 Sep 2018
Answered: Aditya on 3 Sep 2025
Somebody told me that I can create a new network architecture and reload my pretrained weights and biases into it using nntool. My problem is that I don't know where to find the weights and biases, or how to export/import them into the new network. I want to do this because I don't want to retrain my network from scratch just to add more classes to it.

Answers (1)

Aditya on 3 Sep 2025
Hi Niculai,
To reuse pretrained weights and biases in a new network architecture with nntool, the general workflow is:
1. Locate the weights and biases in your current model. They are stored in the model file, for example a .h5 file for Keras or a .pt file for PyTorch, and can be extracted with the framework's API.
2. To add more classes, modify the network by replacing the last layer with a new one whose size matches the new number of classes.
3. Transfer the weights from the old model to the new one for every layer except the new output layer.
4. Export the modified model to a format that nntool can read, such as TFLite or ONNX.
5. Import the exported model into nntool for deployment or further analysis.
This way you avoid retraining the entire network from scratch and only need to fine-tune the new layer. A sketch of steps 1-4 is shown below.
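Here is a minimal sketch of steps 1-4, assuming PyTorch with torchvision and using a ResNet-18 as a stand-in for your pretrained network; the model, file name, and class count are illustrative only, not taken from your setup.

import torch
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 12  # hypothetical new number of classes

# 1. Load the pretrained network (ResNet-18 used here only as an example).
old_model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2. Build the new architecture and replace only the final classification layer.
new_model = models.resnet18(weights=None)
new_model.fc = nn.Linear(old_model.fc.in_features, NUM_NEW_CLASSES)

# 3. Copy every pretrained weight and bias except those of the old output layer.
pretrained = old_model.state_dict()
transferable = {k: v for k, v in pretrained.items() if not k.startswith("fc.")}
new_model.load_state_dict(transferable, strict=False)  # strict=False skips the new fc layer

# 4. Export the modified model to ONNX so it can be imported into nntool.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(new_model, dummy_input, "model_extra_classes.onnx")

After the export, only the new output layer has random weights, so you can fine-tune just that layer (or the last few layers) on your extended dataset instead of training everything from scratch.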
I hope this helps.

Release

R2018a
