Custom Vnet layer validation failed error

I am trying to train a custom V-Net network, but it keeps giving me the following error:

Layer 'en3_prelu1': Layer validation failed. Error using
'predict' in Layer prelu3dLayer. The function threw an error
and could not be executed.
Array dimensions must match for binary array op.

This is my class definition:
classdef prelu3dLayer < nnet.layer.Layer
    % Custom 3-D PReLU layer.
    %
    % Copyright 2019 The MathWorks, Inc.

    properties (Learnable)
        % Layer learnable parameters

        % Scaling coefficient
        Alpha
    end

    methods
        function layer = prelu3dLayer(numChannels, name)
            % layer = prelu3dLayer(numChannels, name) creates a PReLU layer
            % with numChannels channels and specifies the layer name.

            % Set layer name.
            layer.Name = name;

            % Set layer description.
            layer.Description = "PReLU with " + num2str(numChannels) + " channels";

            % Initialize scaling coefficient.
            layer.Alpha = rand(numChannels);
        end

        function Z = predict(layer, X)
            % Z = predict(layer, X) forwards the input data X through the
            % layer and outputs the result Z.
            Z = max(0, X) + layer.Alpha .* min(0, X);
        end

        function [dLdX, dLdAlpha] = backward(layer, X, ~, dLdZ, memory)
            % [dLdX, dLdAlpha] = backward(layer, X, Z, dLdZ, memory)
            % backward propagates the derivative of the loss function
            % through the layer.
            %
            % Inputs:
            %   layer  - Layer to backward propagate through
            %   X      - Input data
            %   Z      - Output of layer forward function
            %   dLdZ   - Gradient propagated from the deeper layer
            %   memory - Memory value which can be used in backward
            %            propagation
            % Outputs:
            %   dLdX     - Derivative of the loss with respect to the
            %              input data
            %   dLdAlpha - Derivative of the loss with respect to the
            %              learnable parameter Alpha
            dLdX = layer.Alpha .* dLdZ;
            dLdX(X>0) = dLdZ(X>0);
            dLdAlpha = min(0,X) .* dLdZ;
            %dLdAlpha = sum(sum(dLdAlpha,1),2);
            % Sum over all observations in mini-batch.
            dLdAlpha = sum(dLdAlpha,5);
        end
    end
end
And these are the training options:

inputSize = [40 40 40];
numClasses = 2;
lgraph = createVnetBn(inputSize, numClasses);
figure;
lgraph.plot

maxEpochs = 250;
options = trainingOptions('adam', ...
    'MaxEpochs',maxEpochs, ...
    'InitialLearnRate',1e-3, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',5, ...
    'LearnRateDropFactor',0.97, ...
    'ValidationData',dsVal, ...
    'ValidationFrequency',400, ...
    'Plots','training-progress', ...
    'Verbose',false, ...
    'MiniBatchSize',miniBatchSize);

doTraining = true;
if doTraining
    modelDateTime = datestr(now,'dd-mmm-yyyy-HH-MM-SS');
    [net,info] = trainNetwork(dsTrain,lgraph,options);
    save(['trained3DUNet-' modelDateTime '-Epoch-' num2str(maxEpochs) '.mat'],'net');
end

Answers (1)

Amanjit Dulai on 11 Sep 2022

0 votes

It's tricky to tell without the code for createVnetBn, but I would guess the problem is this line in prelu3dLayer:
% Initialize scaling coefficient
layer.Alpha = rand(numChannels);
This will initialize layer.Alpha to a numChannels-by-numChannels matrix, which is not what you would normally want for a 3D input. For a 3D input, you probably want something like this:
% Initialize scaling coefficient
layer.Alpha = rand([1 1 1 numChannels]);
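As a quick sanity check (my own sketch, not code from the thread), you can reproduce the dimension mismatch by applying both initializations to a 3-D activation array of size H-by-W-by-D-by-C-by-N:

```matlab
% Hypothetical sanity check: compare the two Alpha initializations
% against a 3-D activation array X of size H-by-W-by-D-by-C-by-N.
numChannels = 16;
X = randn(40, 40, 40, numChannels, 2);   % spatial x channel x batch

alphaBad  = rand(numChannels);           % 16-by-16 matrix
alphaGood = rand([1 1 1 numChannels]);   % 1-by-1-by-1-by-16 array

% This broadcasts cleanly: [1 1 1 16] expands against [40 40 40 16 2].
Z = max(0, X) + alphaGood .* min(0, X);

% Uncommenting this line throws "Array dimensions must match for
% binary array op.", because a 16-by-16 Alpha cannot broadcast
% against the 40-by-40 spatial dimensions of X.
% Z = max(0, X) + alphaBad .* min(0, X);
```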

7 comments

Amira Youssef on 14 Sep 2022
It didn't solve the problem.
Amanjit Dulai on 14 Sep 2022
Is there any way to provide the code for createVnetBn?
Amira Youssef on 14 Sep 2022 (edited 14 Sep 2022)
The original code is here, but it uses 64x64x64 .mat files, while I am using 40x40x40.
Amanjit Dulai on 14 Sep 2022
I got a different error when running the code:
Error using rand
Size inputs must be integers.
Error in prelu3dLayer (line 25)
layer.Alpha = rand(numChannels);
The problem here is that V-nets and U-nets have a particular architecture where sizes are divided by 2 through the depth of the network (see the diagram on page 3: https://arxiv.org/pdf/1606.04797.pdf). For this reason, only some input sizes are valid for a V-net or U-net. Normally sizes that are powers of 2 (like 64) will work well. A size of 40x40x40 doesn't work in this instance because 40/(2*2*2*2) = 2.5.
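The size arithmetic above can be checked directly (a sketch of my own, not code from the thread): with nDown 2x downsampling stages, the input edge length must be divisible by 2^nDown to stay an integer at the bottleneck.

```matlab
% Check which input edge lengths survive nDown 2x downsampling stages.
nDown = 4;                 % downsampling stages in the original V-Net
edges = [64 40];
edges ./ 2^nDown           % 64 -> 4 (valid), 40 -> 2.5 (invalid)
mod(edges, 2^nDown) == 0   % logical check: [true false]
```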
You could try using unet3dLayers, which lets you control the depth of the network, and so will let you reduce the number of downsampling layers so an input size of 40x40x40 will work. Or, you can edit createVnetBn to remove the 5th downsampling stage and the 1st upsampling stage.
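A minimal sketch of the unet3dLayers route (my own example, assuming Computer Vision Toolbox and a single-channel input): with 'EncoderDepth' set to 3, the input only needs to be divisible by 2^3 = 8, so 40x40x40 is valid.

```matlab
% Sketch: build a 3-D U-Net with a reduced encoder depth so that a
% 40x40x40 single-channel input is valid (40 is divisible by 2^3 = 8).
inputSize = [40 40 40 1];
numClasses = 2;
lgraph = unet3dLayers(inputSize, numClasses, 'EncoderDepth', 3);
```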
Amira Youssef on 15 Sep 2022
I tried to fix createVnetBn, but it produces unconnected layers and doesn't run either. Do you have any idea what exactly should be edited?
Amanjit Dulai on 15 Sep 2022
I've attached an edited version of createVnetBn.m which should work for an input size of 40x40x40. I have removed one of the downsampling and upsampling stages.
Amira Youssef on 15 Sep 2022
Thank you so much sir, I really can't thank you enough for this



Version: R2020a

Asked: 10 Sep 2022
Commented: 15 Sep 2022
