Gradient descent for a custom function
I have four equations:
1) e = m - y
2) y = W_3 * h
3) h = z + W_2 * z + f
4) f = W_1 * x
I want to update W_1, W_2, and W_3 to minimize the cost function J = e^T e by gradient descent.
x is an input, y is the output, and m is the desired value for each sample in the dataset.
I would like to do
W_1 = W_1 - eta * grad_{W_1}(J)
W_2 = W_2 - eta * grad_{W_2}(J)
W_3 = W_3 - eta * grad_{W_3}(J)
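Applying the chain rule to equations 1)-4) gives grad_{W_3}(J) = -2*e*h^T, grad_{W_2}(J) = -2*(W_3^T*e)*z^T, and grad_{W_1}(J) = -2*(W_3^T*e)*x^T. Here is a minimal per-sample sketch of the loop I have in mind (the sizes, eta, and the iteration count are placeholder assumptions):
nx = 4; nz = 3; ny = 2;                      % illustrative dimensions
x = randn(nx,1); z = randn(nz,1); m = randn(ny,1);
W1 = randn(nz,nx); W2 = randn(nz,nz); W3 = randn(ny,nz);
eta = 1e-3; numIter = 1000;
for k = 1:numIter
    f = W1*x;               % eq. 4
    h = z + W2*z + f;       % eq. 3
    y = W3*h;               % eq. 2
    e = m - y;              % eq. 1
    gW3 = -2*e*h.';         % gradient of J w.r.t. W3
    gW2 = -2*(W3.'*e)*z.';  % gradient of J w.r.t. W2
    gW1 = -2*(W3.'*e)*x.';  % gradient of J w.r.t. W1
    W3 = W3 - eta*gW3;      % update all three after computing all gradients
    W2 = W2 - eta*gW2;
    W1 = W1 - eta*gW1;
end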
Going through the documentation, I found that you can train standard neural networks. But note that I have some custom functions, so I guess I would need a built-in optimization function instead.
Any ideas?
2 comments
Matt J on 24 Apr 2024
x is an input, y is the output and m is the desired value for each sample in the dataset
It looks like z is also an input; it is not given by any of the other equations.
Answers (2)
Matt J on 24 Apr 2024
Edited: Matt J on 24 Apr 2024
so I guess I would need a built-in optimization function instead.
No, not necessarily. Your equations can be implemented with fullyConnectedLayer and additionLayer objects.
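For example, here is a minimal sketch of such a layer graph (the layer names and sizes are illustrative assumptions; the biases are frozen at zero so that each fullyConnectedLayer computes a pure matrix product):
nx = 4; nz = 3; ny = 2;   % illustrative sizes
lgraph = layerGraph();
lgraph = addLayers(lgraph, featureInputLayer(nx,'Name','x'));
lgraph = addLayers(lgraph, featureInputLayer(nz,'Name','z'));
lgraph = addLayers(lgraph, fullyConnectedLayer(nz,'Name','W1', ...
    'BiasLearnRateFactor',0,'BiasInitializer','zeros'));    % f = W_1*x
lgraph = addLayers(lgraph, fullyConnectedLayer(nz,'Name','W2', ...
    'BiasLearnRateFactor',0,'BiasInitializer','zeros'));    % W_2*z
lgraph = addLayers(lgraph, additionLayer(3,'Name','h'));    % h = z + W_2*z + f
lgraph = addLayers(lgraph, fullyConnectedLayer(ny,'Name','W3', ...
    'BiasLearnRateFactor',0,'BiasInitializer','zeros'));    % y = W_3*h
lgraph = connectLayers(lgraph,'x','W1');
lgraph = connectLayers(lgraph,'z','W2');
lgraph = connectLayers(lgraph,'z','h/in1');   % identity branch carries the "+ z" term
lgraph = connectLayers(lgraph,'W2','h/in2');
lgraph = connectLayers(lgraph,'W1','h/in3');
lgraph = connectLayers(lgraph,'h','W3');
net = dlnetwork(lgraph);
% Train by minimizing mean-squared-error (proportional to e.'*e), e.g. with a
% custom training loop; the two inputs must be supplied together.
% Inference: yhat = predict(net, dlarray(x,'CB'), dlarray(z,'CB'));
% (input order follows net.InputNames)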
3 comments
Torsten on 24 Apr 2024
Moved: Torsten on 24 Apr 2024
e = m - y = m - W_3*h = m - W_3*(z + W_2*z + W_1*x)
Now if you collect the products into combined matrices
V1 = W_3 + W_3*W_2 and V2 = W_3*W_1
then e = m - V1*z - V2*x, and your problem (assuming a scalar output, so V1 and V2 are row vectors) becomes the linear least-squares problem
min: || [z.', x.'] * [V1.'; V2.'] - m ||_2
over the stacked samples, and you can use "lsqlin" to solve it. Note that this recovers only the combined matrices V1 and V2, not W_1, W_2, and W_3 individually.
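A minimal sketch of that solve (assuming a scalar output, with N samples stored columnwise in Z and X and the targets in m; the data here is a random placeholder):
N = 100; nz = 3; nx = 4;                 % illustrative sizes
Z = randn(nz,N); X = randn(nx,N); m = randn(N,1);
C = [Z.', X.'];                          % N-by-(nz+nx) design matrix
w = lsqlin(C, m, [], []);                % unconstrained linear least squares
% w = C \ m;                             % equivalent, without Optimization Toolbox
V1 = w(1:nz).';                          % estimate of W_3 + W_3*W_2
V2 = w(nz+1:end).';                      % estimate of W_3*W_1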