Update a non-learnable parameter in a custom deep learning layer
4 views (last 30 days)
Mathieu Chêne
on 14 Jan 2022
Commented: Mathieu Chêne
on 31 Jan 2022
Hello,
I am working on a deep learning project in which I use a custom layer. In this layer, I have a parameter, α, that depends on the weights. I want it to be updated whenever the weights are updated, so for me α is not a learnable parameter. Here is my predict function:
function Z = predict(layer, X)
    % Z = predict(layer, X) forwards the input data X through the layer
    % and outputs the result Z.
    B = layer.Bias;
    W = layer.Weights;
    numObs = size(X,2);   % number of observations (renamed so it does not shadow the built-in numel)

    % Initialize output
    Z = zeros(layer.OutputSize, numObs, "single");

    % Alpha coefficient calculation
    e = zeros(1, layer.InputSize);
    for j = 1:numel(layer.Graphe.neighbors(layer.TargetNode))
        for i = 1:numObs
            e(:,j) = mean(e(:,j) + W(:,layer.TargetIndex)*X(layer.TargetIndex,i) + W(:,j)*X(j,i), 1);
        end
    end
    e = leakyRELU(e, 0.2);
    A = softmax(e');
    A = A';
    layer.Alpha = A;      % attempt to store alpha on the layer

    % Weighted addition
    Z = (A.*W)*X + B;
end
Alpha is declared as a property here:
properties
    InputSize
    OutputSize
    TargetNode
    Graphe
    TargetIndex
    Alpha
end
However, when my training ends, net.Layers(3,1).Alpha gives me the initial value of α, and the same thing happens in the backward function.
How can I update α?
Thank you in advance for your help.
Mathieu
0 comments
Accepted Answer
Katja Mogalle
on 21 Jan 2022
The technical (and not too helpful) answer is that custom layers are not handle classes, and the predict function does not return the modified layer object, so the framework cannot pick up the updated layer.
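As a small illustration of these value semantics (layerCopy and newAlpha below are just example names, not part of your code):
layerCopy = net.Layers(3);     % Layer objects are value objects, so this is a copy
layerCopy.Alpha = newAlpha;    % only the local copy is modified
% net.Layers(3).Alpha still holds the original value
The "layer" variable inside predict is exactly such a copy, and the framework never receives it back.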
The question is, what do you plan to do with Alpha? From the sounds of it, you want to look at it after training? Or during training? Is it used anywhere in the training process?
One possible solution would be to declare a second output on the custom layer (see the NumOutputs and OutputNames properties on the custom layer documentation page) which returns the alpha value. If you use dlnetwork and a custom training loop, you can easily get the second (Alpha) output at any time during or after training without actually using it in the training process.
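Here is a minimal sketch of that idea, not a drop-in implementation: the class name alphaOutputLayer, the output names "out"/"alpha", and the placeholder score computation inside predict are assumptions for illustration; the only point is the two-output mechanics.
classdef alphaOutputLayer < nnet.layer.Layer
    % Sketch of a custom layer that exposes Alpha as a second output so it
    % can be inspected during or after training.

    properties
        InputSize
        OutputSize
    end

    properties (Learnable)
        Weights
        Bias
    end

    methods
        function layer = alphaOutputLayer(inputSize, outputSize, name)
            layer.Name        = name;
            layer.NumOutputs  = 2;                    % Z and Alpha
            layer.OutputNames = ["out" "alpha"];
            layer.InputSize   = inputSize;
            layer.OutputSize  = outputSize;
            layer.Weights     = 0.01*randn(outputSize, inputSize, "single");
            layer.Bias        = zeros(outputSize, 1, "single");
        end

        function [Z, A] = predict(layer, X)
            W = layer.Weights;
            B = layer.Bias;

            % Placeholder for the alpha computation: your neighbour loop,
            % leaky ReLU and softmax would go here instead.
            scores = mean(W,1) .* mean(X,2)';         % 1-by-InputSize raw scores
            A = exp(scores) ./ sum(exp(scores));      % softmax over the inputs

            % Weighted addition, as in your current layer
            Z = (A.*W)*X + B;
        end
    end
end
Because the "alpha" output is left unconnected, a dlnetwork treats it as a network output, and you can request it by "layerName/outputName" in a custom training loop (the layer name "attn" is again just an example):
[Z, alphaVal] = forward(net, dlX, 'Outputs', ["attn/out" "attn/alpha"]);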
I hope this helps you move forward with your project.
2 comments
More Answers (0)