Unrecognized method, property, or field 'Min' for class 'nnet.cnn.layer.ImageInputLayer'.
9 views (last 30 days)
sun rise on 11 Nov 2021
Commented: sun rise on 13 Nov 2021
clear;clc;close all
% Load the Image Dataset of Normal and Malignant WBC
%imds = imageDatastore('D:\Project\DB1\train','IncludeSubfolders',true,'LabelSource','foldernames');
%img = readimage(imds,1);
%size(img)
%%labelCount = countEachLabel(imds);
%Perform Cross-Validation using Hold-out method with a percentage split of 70% training and 30% testing
%[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
%%
net = inceptionv3;
inputSizeNet = net.Layers(1).InputSize;
%Convert the network to a dlnetwork object for feature extraction and remove the last four layers, leaving the "mixed10" layer as the last layer.
lgraph = layerGraph(net);
lgraph = removeLayers(lgraph,["avg_pool" "predictions" "predictions_softmax" "ClassificationLayer_predictions"]);
%View the input layer of the network. The Inception-v3 network uses symmetric-rescale normalization with a minimum value of 0 and a maximum value of 255.
lgraph.Layers(1)
%net1 = net;
%%
%Custom training does not support this normalization, so you must disable normalization in the network and perform the normalization in the custom training loop instead. Save the minimum and maximum values as doubles in variables named inputMin and inputMax, respectively, and replace the input layer with an image input layer without normalization.
inputMin = double(lgraph.Layers(1).Min);
inputMax = double(lgraph.Layers(1).Max);
layer = imageInputLayer(inputSizeNet,'Normalization','none','Name','input');
lgraph = replaceLayer(lgraph,'input_1',layer);
%Determine the output size of the network. Use the analyzeNetwork function to see the activation sizes of the last layer. To analyze the network for custom training loop workflows, set the TargetUsage option to 'dlnetwork'.
analyzeNetwork(lgraph,'TargetUsage','dlnetwork')
%Create a variable named outputSizeNet containing the network output size.
outputSizeNet = [8 8 2048];
%Convert the layer graph to a dlnetwork object and view the output layer. The output layer is the "mixed10" layer of the Inception-v3 network.
dlnet = dlnetwork(lgraph);
%Load the Image Dataset of Normal and Malignant WBC
imds = imageDatastore('D:\Project\DB1\train','IncludeSubfolders',true,'LabelSource','foldernames');
labelCount = countEachLabel(imds);
%Partition the data into training and validation sets. Hold out 5% of the observations for testing.
cvp = cvpartition(numel(imds.Files),'HoldOut',0.05);
idxTrain = training(cvp);
idxTest = test(cvp);
annotationsTrain = subset(imds,idxTrain);
annotationsTest = subset(imds,idxTest);
%Create an augmented image datastore containing the images corresponding to the captions. Set the output size to match the input size of the convolutional network. To keep the images synchronized with the captions, specify a table of file names for the datastore by reconstructing the file names using the image ID. To return grayscale images as 3-channel RGB images, set the 'ColorPreprocessing' option to 'gray2rgb'.
tblFilenames = table(annotationsTrain.Files);
augimdsTrain = augmentedImageDatastore(inputSizeNet,tblFilenames,'ColorPreprocessing','gray2rgb')
%%Perform Cross-Validation using Hold-out method with a percentage split of 70% training and 30% testing
%%[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
%Select the Test images and save in Y_test
Unrecognized method, property, or field 'Min' for class 'nnet.cnn.layer.ImageInputLayer'.
Error in cnnv3 (line 23)
inputMin = double(lgraph.Layers(1).Min);
0 comments
Accepted Answer
yanqi liu on 12 Nov 2021
Sir, the lines lgraph.Layers(1).Min and lgraph.Layers(1).Max are what cause the error: the ImageInputLayer class in your MATLAB release does not have Min and Max properties. Maybe use the following:
clear;clc;close all
% Load the Image Dataset of Normal and Malignant WBC
%imds = imageDatastore('D:\Project\DB1\train','IncludeSubfolders',true,'LabelSource','foldernames');
%img = readimage(imds,1);
%size(img)
%%labelCount = countEachLabel(imds);
%Perform Cross-Validation using Hold-out method with a percentage split of 70% training and 30% testing
%[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
%%
net = inceptionv3;
inputSizeNet = net.Layers(1).InputSize;
%Convert the network to a dlnetwork object for feature extraction and remove the last four layers, leaving the "mixed10" layer as the last layer.
lgraph = layerGraph(net);
lgraph = removeLayers(lgraph,["avg_pool" "predictions" "predictions_softmax" "ClassificationLayer_predictions"]);
%View the input layer of the network. The Inception-v3 network uses symmetric-rescale normalization with a minimum value of 0 and a maximum value of 255.
lgraph.Layers(1)
%net1 = net;
%%
%Custom training does not support this normalization, so you must disable normalization in the network and perform the normalization in the custom training loop instead. Save the minimum and maximum values as doubles in variables named inputMin and inputMax, respectively, and replace the input layer with an image input layer without normalization.
% inputMin = double(lgraph.Layers(1).Min);
% inputMax = double(lgraph.Layers(1).Max);
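% Alternatively, since the original Inception-v3 input layer uses
% rescale-symmetric normalization with minimum 0 and maximum 255 (as noted
% above), the two values could simply be hardcoded for later use:
inputMin = 0;
inputMax = 255;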
layer = imageInputLayer(inputSizeNet,'Normalization','none','Name','input');
lgraph = replaceLayer(lgraph,'input_1',layer);
%Determine the output size of the network. Use the analyzeNetwork function to see the activation sizes of the last layer. To analyze the network for custom training loop workflows, set the TargetUsage option to 'dlnetwork'.
%analyzeNetwork(lgraph,'TargetUsage','dlnetwork')
%Create a variable named outputSizeNet containing the network output size.
outputSizeNet = [8 8 2048];
%Convert the layer graph to a dlnetwork object and view the output layer. The output layer is the "mixed10" layer of the Inception-v3 network.
dlnet = dlnetwork(lgraph);
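% Optionally, inspect the last layer to confirm the output is the "mixed10" layer
dlnet.Layers(end)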
%Load the Image Dataset of Normal and Malignant WBC
% imds = imageDatastore('D:\Project\DB1\train','IncludeSubfolders',true,'LabelSource','foldernames');
imds = imageDatastore(fullfile(matlabroot,'toolbox','matlab'), ...
    'IncludeSubfolders',true,'FileExtensions','.tif','LabelSource','foldernames');
labelCount = countEachLabel(imds);
%Partition the data into training and validation sets. Hold out 5% of the observations for testing.
cvp = cvpartition(numel(imds.Files),'HoldOut',0.05);
idxTrain = training(cvp);
idxTest = test(cvp);
annotationsTrain = subset(imds,idxTrain);
annotationsTest = subset(imds,idxTest);
%Create an augmented image datastore containing the images corresponding to the captions. Set the output size to match the input size of the convolutional network. To keep the images synchronized with the captions, specify a table of file names for the datastore by reconstructing the file names using the image ID. To return grayscale images as 3-channel RGB images, set the 'ColorPreprocessing' option to 'gray2rgb'.
tblFilenames = table(annotationsTrain.Files);
augimdsTrain = augmentedImageDatastore(inputSizeNet,tblFilenames,'ColorPreprocessing','gray2rgb')
%%Perform Cross-Validation using Hold-out method with a percentage split of 70% training and 30% testing
%%[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');
%Select the Test images and save in Y_test
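Inside the custom training loop, the normalization that the removed input layer used to perform can then be applied to each mini-batch before the forward pass. A minimal sketch, assuming X is a numeric image mini-batch read from augimdsTrain and inputMin/inputMax hold the values saved above:
% Rescale pixel values from [inputMin, inputMax] to [-1, 1], reproducing the
% rescale-symmetric normalization of the original Inception-v3 input layer
X = rescale(X,-1,1,'InputMin',inputMin,'InputMax',inputMax);
% Wrap in a formatted dlarray (spatial, spatial, channel, batch) for the dlnetwork
dlX = dlarray(single(X),'SSCB');
features = forward(dlnet,dlX);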
More Answers (0)