Define Custom Loss Function for Tabular Data
36 views (last 30 days)
Hi, I am trying to implement a custom neural network from the attached paper, but I get the following error:
Value to differentiate is not traced. It must be a traced real dlarray scalar. Use dlgradient inside a function called by dlfeval to trace the variables.
I know that I cannot use extractdata inside the loss function, but I could not find another way to compute the loss.
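As I understand it, dlgradient can only differentiate a dlarray scalar that was produced entirely by traced dlarray operations inside a function called through dlfeval; a minimal example of the pattern that works:
x = dlarray(3);
[y,dydx] = dlfeval(@sqfun,x);
function [y,dydx] = sqfun(x)
y = x.^2; %y stays traced: every operation on x is a dlarray operation
dydx = dlgradient(y,x); %works; would fail if x had passed through extractdata
end
My problem is that quadprog only accepts numeric inputs, which is why I extract the data inside the loss function.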
The full code I am working with is:
%Generate sample data (1000 instances with 4 attributes)
Data = rand([1000,4]);
%Network layers hyperparameters
[nrow, ncol] = size(Data);
relu_scale = 0.1;
%Network architecture
layers = [
featureInputLayer(ncol,"Name","features")
fullyConnectedLayer(10,"Name","hidlayer01")
leakyReluLayer(relu_scale,"Name","leakyrelu")
fullyConnectedLayer(10,"Name","hidlaye02")
leakyReluLayer(relu_scale,"Name","leakyrelu_1")
fullyConnectedLayer(2,"Name","output")];
%Network initialization
net = dlnetwork(layers);
net = initialize(net);
%Clean up auxiliary variables
clear layers relu_scale;
%Training options for Adam Solver
numIterations = 1;
learningRate = 1e-4;
trailingAvg = [];
trailingAvgSq = [];
gradDecay = 0.9;
gradDecaySq = 0.99;
%Training options for SVDD
v = 0.10;
C = 1/(nrow*v);
lb = zeros(nrow,1);
ub = C*(lb+1);
%Convert data to dlarray
X = dlarray(Data','CB');
monitor = trainingProgressMonitor(Metrics = "Loss",XLabel = "Iteration");
iteration = 0;
%Custom training loop (each iteration uses the full data set X)
while iteration < numIterations && ~monitor.Stop
iteration = iteration + 1;
%Evaluate the model loss and gradients using dlfeval and the modelLoss function
[loss,gradients] = dlfeval(@modelLoss,net,X,C,lb,ub,nrow);
%Update the network parameters.
[net,trailingAvg,trailingAvgSq] = adamupdate(net,gradients, ...
trailingAvg,trailingAvgSq,iteration,learningRate,gradDecay,gradDecaySq);
%Update the training progress monitor.
recordMetrics(monitor,iteration,Loss=loss);
monitor.Progress = 100 * iteration/numIterations;
end
function [loss,gradients] = modelLoss(net,X,C,lb,ub,nrow)
%The modelLoss function calculates the CDSVDD loss and returns the loss and the gradients
%of the loss with respect to the network learnable parameters
%Get phiX
phiX = forward(net,X);
%Compute loss
loss = CDSVDDLoss(phiX,C,lb,ub,nrow);
% Calculate gradients of the loss with respect to the network learnable parameters
gradients = dlgradient(loss,net.Learnables);
end
function loss = CDSVDDLoss(phiX,C,lb,ub,nrow)
%Convert phiX to a plain double array (quadprog requires numeric inputs)
phiX = double(transpose(extractdata(phiX)));
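%NOTE: extractdata severs the automatic-differentiation trace here, so
%everything computed from this point on is an ordinary double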
%Compute Q matrix
Q = phiX*phiX';
%Solve QP problem
alpha = quadprog(2*Q,-diag(Q),[],[],ones(1,nrow),1,lb,ub);
%Set to zero alpha values lower than optimality tolerance
alpha(alpha < 1e-8) = 0;
%Compute sphere center
sphC = sum(alpha.*phiX,1);
%Compute distance of data points to sphere center
distC = pdist2(phiX,sphC);
%Compute sphere radius
sv = alpha > 0 & alpha < C;
sphR = mean(distC(sv));
%Compute the loss
loss = sphR + C*sum(max(distC-sphR,0));
loss = dlarray(loss);
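%NOTE: wrapping the numeric result back into a dlarray does not restore
%the trace, which is what triggers the "not traced" error in dlgradient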
end
I appreciate any help.
Regards
Ramiro
3 comments
Matt J
on 24 Nov 2024 at 19:44
You will probably have to implement your own gradient calculation.
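An alternative sketch (my own assumption, not verified against the paper): keep dlgradient, but treat the quadprog solution alpha as a constant with respect to the network parameters. The QP is solved on plain doubles, and the loss is then rebuilt from the traced phiX using only dlarray operations, so the gradient flows through phiX but not through the QP itself:
function loss = CDSVDDLoss(phiX,C,lb,ub,nrow)
%Step 1: solve the QP on plain doubles; alpha is held constant w.r.t. the
%network parameters, so extracting data here does not affect the loss trace
Z = double(extractdata(phiX))';
Q = Z*Z';
alpha = quadprog(2*Q,-diag(Q),[],[],ones(1,nrow),1,lb,ub);
alpha(alpha < 1e-8) = 0;
%Step 2: rebuild center, distances, radius, and loss from the traced phiX
%using only dlarray operations (no extractdata, no dlarray() wrapping)
P = stripdims(phiX)';
sphC = sum(alpha.*P,1);
distC = sqrt(sum((P - sphC).^2,2));
sv = alpha > 0 & alpha < C;
sphR = mean(distC(sv));
loss = sphR + C*sum(max(distC-sphR,0));
end
Because loss is now a traced dlarray scalar, dlgradient in modelLoss works without the dlarray(loss) wrapper. Note that this gradient ignores the dependence of alpha on phiX, so it is an approximation of the exact gradient Matt J mentions.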
Answers (0)