U-Net segmentation gives a layer output size error after training starts
Mahmut Uçar
on 24 Jun 2020
Commented: sugreev bal on 2 Jun 2022
I am trying to segment COVID-affected areas in CT images using a U-Net. I created a mask image and added it to the original image in TIFF format. The createUnet function builds the network layers. I resized the images to a power of 2; the exact size is [512 512 4], and there are 4 classes. The code runs fine until training starts: training proceeds for a while and then throws the mismatch error below. I am using MATLAB R2019b.
Error using trainNetwork (line 170)
Invalid training data. The output size ([512 512 4]) of the last layer does not match the response size ([1 1 1]).
Error in Unet_segmen_guide_mismatch_error (line 39)
net = trainNetwork(imds,lgraph,options);
The code and the createUnet() function are below. I hope you can help me solve this problem...
DatasetPath = fullfile('E:\DEV\Covid\four_channel_CT');
imds = imageDatastore(DatasetPath, 'IncludeSubfolders', true, ...
    'LabelSource','foldernames', 'FileExtensions', {'.tiff'});
labelDir = fullfile(DatasetPath,'testImages');
classNames = ["Background","Lung","space","Covid"];
pixelLabelIds = 1:30;
inputTileSize = [512,512,4];
lgraph = createUnet();
% disp(lgraph.Layers)
initialLearningRate = 0.05;
maxEpochs = 150;
minibatchSize = 16;
l2reg = 0.0001;
options = trainingOptions('sgdm',...
'InitialLearnRate',initialLearningRate, ...
'Momentum',0.9,...
'L2Regularization',l2reg,...
'MaxEpochs',maxEpochs,...
'MiniBatchSize',minibatchSize,...
'LearnRateSchedule','piecewise',...
'Shuffle','every-epoch',...
'GradientThresholdMethod','l2norm',...
'GradientThreshold',0.05, ...
'Plots','training-progress', ...
'VerboseFrequency',20);
modelDateTime = datestr(now,'dd-mmm-yyyy-HH-MM-SS');
net = trainNetwork(imds,lgraph,options);
save(['CovidUnet-' modelDateTime '-Epoch-' num2str(maxEpochs) '.mat'],'net','options');
%--------------------------------------------------------------------------
function lgraph = createUnet()
% EDIT: modify these parameters for your application.
encoderDepth = 4;
initialEncoderNumChannels = 64;
inputTileSize = [512 512 4];
convFilterSize = [3 3];
inputNumChannels = 4;
numClasses = 4;
inputlayer = imageInputLayer(inputTileSize,'Name','ImageInputLayer');
[encoder, finalNumChannels] = iCreateEncoder(encoderDepth, convFilterSize, initialEncoderNumChannels, inputNumChannels);
firstConv = createAndInitializeConvLayer(convFilterSize, finalNumChannels, ...
2*finalNumChannels, 'Bridge-Conv-1');
firstReLU = reluLayer('Name','Bridge-ReLU-1');
secondConv = createAndInitializeConvLayer(convFilterSize, 2*finalNumChannels, ...
2*finalNumChannels, 'Bridge-Conv-2');
secondReLU = reluLayer('Name','Bridge-ReLU-2');
encoderDecoderBridge = [firstConv; firstReLU; secondConv; secondReLU];
dropOutLayer = dropoutLayer(0.5,'Name','Bridge-DropOut');
encoderDecoderBridge = [encoderDecoderBridge; dropOutLayer];
initialDecoderNumChannels = finalNumChannels;
upConvFilterSize = 2;
[decoder, finalDecoderNumChannels] = iCreateDecoder(encoderDepth, upConvFilterSize, convFilterSize, initialDecoderNumChannels);
layers = [inputlayer; encoder; encoderDecoderBridge; decoder];
finalConv = convolution2dLayer(1,numClasses,...
'BiasL2Factor',0,...
'Name','Final-ConvolutionLayer');
finalConv.Weights = randn(1,1,finalDecoderNumChannels,numClasses);
finalConv.Bias = zeros(1,1,numClasses);
smLayer = softmaxLayer('Name','Softmax-Layer');
pixelClassLayer = pixelClassificationLayer('Name','Segmentation-Layer');
layers = [layers; finalConv; smLayer; pixelClassLayer];
lgraph = layerGraph(layers);
for depth = 1:encoderDepth
startLayer = sprintf('Encoder-Stage-%d-ReLU-2',depth);
endLayer = sprintf('Decoder-Stage-%d-DepthConcatenation/in2',encoderDepth-depth + 1);
lgraph = connectLayers(lgraph,startLayer, endLayer);
end
end
%--------------------------------------------------------------------------
function [encoder, finalNumChannels] = iCreateEncoder(encoderDepth, convFilterSize, initialEncoderNumChannels, inputNumChannels)
encoder = [];
for stage = 1:encoderDepth
% Double the number of channels at each stage of the encoder.
encoderNumChannels = initialEncoderNumChannels * 2^(stage-1);
if stage == 1
firstConv = createAndInitializeConvLayer(convFilterSize, inputNumChannels, encoderNumChannels, ['Encoder-Stage-' num2str(stage) '-Conv-1']);
else
firstConv = createAndInitializeConvLayer(convFilterSize, encoderNumChannels/2, encoderNumChannels, ['Encoder-Stage-' num2str(stage) '-Conv-1']);
end
firstReLU = reluLayer('Name',['Encoder-Stage-' num2str(stage) '-ReLU-1']);
secondConv = createAndInitializeConvLayer(convFilterSize, encoderNumChannels, encoderNumChannels, ['Encoder-Stage-' num2str(stage) '-Conv-2']);
secondReLU = reluLayer('Name',['Encoder-Stage-' num2str(stage) '-ReLU-2']);
encoder = [encoder;firstConv; firstReLU; secondConv; secondReLU];
if stage == encoderDepth
dropOutLayer = dropoutLayer(0.5,'Name',['Encoder-Stage-' num2str(stage) '-DropOut']);
encoder = [encoder; dropOutLayer];
end
maxPoolLayer = maxPooling2dLayer(2, 'Stride', 2, 'Name',['Encoder-Stage-' num2str(stage) '-MaxPool']);
encoder = [encoder; maxPoolLayer];
end
finalNumChannels = encoderNumChannels;
end
%--------------------------------------------------------------------------
function [decoder, finalDecoderNumChannels] = iCreateDecoder(encoderDepth, upConvFilterSize, convFilterSize, initialDecoderNumChannels)
decoder = [];
for stage = 1:encoderDepth
% Halve the number of channels at each stage of the decoder.
decoderNumChannels = initialDecoderNumChannels / 2^(stage-1);
upConv = createAndInitializeUpConvLayer(upConvFilterSize, 2*decoderNumChannels, decoderNumChannels, ['Decoder-Stage-' num2str(stage) '-UpConv']);
upReLU = reluLayer('Name',['Decoder-Stage-' num2str(stage) '-UpReLU']);
% Input feature channels are concatenated with deconvolved features within the decoder.
depthConcatLayer = depthConcatenationLayer(2, 'Name', ['Decoder-Stage-' num2str(stage) '-DepthConcatenation']);
firstConv = createAndInitializeConvLayer(convFilterSize, 2*decoderNumChannels, decoderNumChannels, ['Decoder-Stage-' num2str(stage) '-Conv-1']);
firstReLU = reluLayer('Name',['Decoder-Stage-' num2str(stage) '-ReLU-1']);
secondConv = createAndInitializeConvLayer(convFilterSize, decoderNumChannels, decoderNumChannels, ['Decoder-Stage-' num2str(stage) '-Conv-2']);
secondReLU = reluLayer('Name',['Decoder-Stage-' num2str(stage) '-ReLU-2']);
decoder = [decoder; upConv; upReLU; depthConcatLayer; firstConv; firstReLU; secondConv; secondReLU];
end
finalDecoderNumChannels = decoderNumChannels;
end
%--------------------------------------------------------------------------
function convLayer = createAndInitializeConvLayer(convFilterSize, inputNumChannels, outputNumChannels, layerName)
convLayer = convolution2dLayer(convFilterSize,outputNumChannels,...
'Padding', 'same',...
'BiasL2Factor',0,...
'Name',layerName);
% He initialization is used
convLayer.Weights = sqrt(2/((convFilterSize(1)*convFilterSize(2))*inputNumChannels)) ...
* randn(convFilterSize(1),convFilterSize(2), inputNumChannels, outputNumChannels);
convLayer.Bias = zeros(1,1,outputNumChannels);
convLayer.BiasLearnRateFactor = 2;
end
%--------------------------------------------------------------------------
function upConvLayer = createAndInitializeUpConvLayer(UpconvFilterSize, inputNumChannels, outputNumChannels, layerName)
upConvLayer = transposedConv2dLayer(UpconvFilterSize, outputNumChannels,...
'Stride',2,...
'BiasL2Factor',0,...
'Name',layerName);
% The transposed conv filter size is a scalar
upConvLayer.Weights = sqrt(2/((UpconvFilterSize^2)*inputNumChannels)) ...
* randn(UpconvFilterSize,UpconvFilterSize,outputNumChannels,inputNumChannels);
upConvLayer.Bias = zeros(1,1,outputNumChannels);
upConvLayer.BiasLearnRateFactor = 2;
end
0 comments
Accepted Answer
Divya Gaddipati
on 23 Jul 2020
It seems your last conv layer output is of size [512 512 4], whereas it should have been [512 512 1].
You can run analyzeNetwork on the network you created (lgraph) to see the output size of each layer.
Refer to the examples in the following link for more information on using pixelClassificationLayer
Here's a link to the documentation of analyzeNetwork
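For reference, a [1 1 1] response size typically means trainNetwork received a plain imageDatastore whose folder-name labels are scalar categoricals, i.e. image classification responses rather than per-pixel labels. A semantic segmentation network needs a pixel label datastore paired with the images. A minimal sketch of that setup, assuming (these are assumptions, not the asker's confirmed layout) that the label images live in a 'testImages' subfolder and encode the four classes as pixel values 0-3:

```matlab
% Paths: reusing the folders from the question; adjust to your data layout.
dataDir  = fullfile('E:\DEV\Covid\four_channel_CT');
labelDir = fullfile(dataDir,'testImages');

classNames    = ["Background","Lung","space","Covid"];
pixelLabelIds = [0 1 2 3];   % must match the values stored in the label images

% Images and their per-pixel ground truth.
imds = imageDatastore(dataDir,'FileExtensions',{'.tiff'});
pxds = pixelLabelDatastore(labelDir,classNames,pixelLabelIds, ...
    'FileExtensions',{'.tiff'});

% Pair each image with its label image. In R2019b this is done with
% pixelLabelImageDatastore; in later releases combine(imds,pxds) also works.
trainingData = pixelLabelImageDatastore(imds,pxds);

% Train with per-pixel responses instead of folder-name labels.
net = trainNetwork(trainingData,lgraph,options);
```

With this pairing the responses have the same [512 512] spatial size as the network output, so the size check in trainNetwork passes.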
2 comments
Manjunath R V
on 17 May 2021
Hello Mahmut, if your code is working fine now, could you please share it with me?
More Answers (1)
bochra jaafar
on 9 Nov 2021
Hi Mahmut, I have the same problem. Can you please tell me how you solved it?
1 comment