How to handle soft weight constraints in a neural network
massoud pourmandi
on 4 Jul 2022
Let us assume that there is a feedforward neural network with two layers, and that the weights of each layer are constrained so that, within each layer, the weights are non-negative and sum to a constant value. You may wonder why we should have such assumptions. Answer: I have an optimization problem whose unknown variables can be mapped to a neural network, in which the weights represent my variables. Can anyone suggest a way to handle these constraints? For now, I have simply integrated these constraints into the cost function, though the way I did it is not working very well: I added the constraints to the main cost function using max. For example, for a constraint A(x) < x, I add max(A(x)/x - 1, 0) to the main cost function.
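For concreteness, a minimal sketch of the penalty approach described above; A, x, mainCost, and the penalty weight lambda are illustrative placeholders, not part of the original question:

A = @(x) 1.2*x;                 % example mapping that violates A(x) < x
x = 2;
mainCost = 0.5;                 % stand-in for the value of the original objective
lambda = 10;                    % penalty weight (an assumed tuning knob)
penalty = max(A(x)/x - 1, 0);   % zero when A(x) <= x, positive otherwise
totalCost = mainCost + lambda*penalty;   % penalized cost used for training

With a hinge penalty like this, the constraint is only encouraged rather than enforced exactly, which motivates the reparameterization suggested in the accepted answer below.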
0 comments
Accepted Answer
Matt J
on 4 Jul 2022
Edited: Matt J
on 4 Jul 2022
If you wish to train with standard unconstrained stochastic gradient descent algorithms, you will probably have to make a custom layer in which the score functions are calculated according to

$$ z = C \, \frac{\sum_i x_i \, e^{w_i}}{\sum_j e^{w_j}} $$

where the $w_i$ are the learnable parameters. This is equivalent to weighting the inputs $x_i$ with positive weights that sum to C.
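A minimal sketch of such a custom layer, assuming the Deep Learning Toolbox custom-layer interface; the class name, constructor arguments, and the handling of the batch dimension are illustrative choices, not part of the original answer:

classdef softmaxWeightLayer < nnet.layer.Layer
    properties
        C  % required sum of the effective weights
    end
    properties (Learnable)
        w  % unconstrained learnable parameters, one per input feature
    end
    methods
        function layer = softmaxWeightLayer(numFeatures, C, name)
            layer.Name = name;
            layer.C = C;
            layer.w = zeros(numFeatures, 1);  % softmax of zeros gives uniform weights C/numFeatures
        end
        function Z = predict(layer, X)
            % X is numFeatures-by-batchSize. The effective weights
            % v = C*exp(w)/sum(exp(w)) are positive and sum to C by
            % construction, so the constraint holds for any unconstrained
            % w found by the optimizer.
            e = exp(layer.w - max(layer.w));  % shifted for numerical stability
            v = layer.C * e / sum(e);
            Z = sum(v .* X, 1);               % weighted sum over features, 1-by-batchSize
        end
    end
end

Such a layer could then stand in for a fully connected layer in the network, and any unconstrained optimizer (SGDM, Adam, etc.) keeps the effective weights on the constraint set automatically.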
0 comments
More Answers (0)