How to combine image inputs from class 1 samples and feature inputs from class 2 samples in a DL network?
4 views (last 30 days)
I tried to implement the DL algorithm presented in the paper by Poojan Oza and Vishal M. Patel, “One-Class Convolutional Neural Network”, arXiv:1901.08688v1 [cs.CV], 24 Jan 2019. I used a concatenationLayer to combine the features extracted by convolutional layers from image inputs (class 1 samples) with feature inputs (class 2 samples). I ran this example program:
%OCCNN_Oza.m
clc; clear; close all;

% Set up network parameters
imageInputSize = [28,28,1];
filterSize = 3;
numFilters = 8;
numClasses = 10;
numFeatures = 50;

% Construct the network
layers = [
    imageInputLayer(imageInputSize,'Normalization','none','Name','images')
    convolution2dLayer(filterSize,numFilters,'Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(50,'Name','fc1')     % 1 x 1 x 50 x N
    squeezeLayer()                           % 50 x N
    % concatenationLayer(1,2,'Name','cat')
    concatenationLayer(2,2,'Name','cat')     % 50 x 2N???
    fullyConnectedLayer(numClasses,'Name','fc2')
    softmaxLayer('Name','softmax')
    classificationLayer];
lgraph = layerGraph(layers);
featInput = featureInputLayer(numFeatures,Name="features");   % 50 x N
lgraph = addLayers(lgraph,featInput);
lgraph = connectLayers(lgraph,"features","cat/in2");
figure; plot(lgraph)

% Simulate dummy datasets for training
numObservations = 100;
fakeImages = randn([imageInputSize,numObservations]);              % 28 x 28 x 1 x 100
imagesDS = arrayDatastore(fakeImages,IterationDimension=4);
fakeFeatures = randn([numObservations,numFeatures]);
featureDS = arrayDatastore(fakeFeatures.',IterationDimension=2);   % 50 x 100
fakeTargets = categorical(mod(1:2*numObservations,numClasses));    % 1 x 200
targetDS = arrayDatastore(fakeTargets,IterationDimension=2);
ds = combine(imagesDS,featureDS,targetDS);
opts = trainingOptions("adam","MaxEpochs",1);
net = trainNetwork(ds,lgraph,opts);
I got the following error messages:
Error using trainNetwork (line 184)
Invalid network.
Error in OCCNN_Oza (line 43)
net=trainNetwork(ds,lgraph,opts);
Caused by:
Layer 'cat': Input size mismatch. Size of input to this layer is different from the expected input size.
Inputs to this layer:
from layer 'layer' (size 50(C) × 1(B))
from layer 'features' (size 50(C) × 1(B))
0 comments
Answers (1)
Sugandhi
on 25 Apr 2023
Hi Ming-Jer Tsai,
I understand that you are getting errors while implementing the DL algorithm presented in the paper by Poojan Oza and Vishal M. Patel, “One-Class Convolutional Neural Network”, arXiv:1901.08688v1 [cs.CV], 24 Jan 2019.
Based on the error message, the input size of the concatenation layer 'cat' in your network does not match the expected input size. The problem is specifically with the second dimension (B, the batch dimension): concatenationLayer(2,2,...) asks for concatenation along dimension 2, but both inputs arrive at the layer per observation with size 50(C) × 1(B). In the paper you mentioned, the features extracted from the class 1 image inputs are concatenated with the class 2 feature inputs after an extra dimension is added to the latter (i.e., 1 x 1 x numFeatures x numSamples). In your implementation, however, the feature inputs are passed as a matrix of size numFeatures x numSamples.
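To see what the first argument of concatenationLayer controls, here is a minimal, self-contained sketch (the variable and layer names 'lg', 'featA', 'featB', 'fc', 'sm', 'out' are illustrative, not taken from your network). Two 50-element feature inputs in 'CB' format concatenate cleanly along dimension 1, the channel dimension, whereas asking for dimension 2 means concatenating along the batch dimension and produces a size error of the same kind as above.

% Minimal sketch: concatenating two 'CB'-formatted feature inputs.
% Dimension 1 is the channel (C) dimension; dimension 2 is the batch (B).
lg = layerGraph([
    featureInputLayer(50,'Name','featA')
    concatenationLayer(1,2,'Name','cat')        % output: 100(C) x 1(B)
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','sm')
    classificationLayer('Name','out')]);
lg = addLayers(lg,featureInputLayer(50,'Name','featB'));
lg = connectLayers(lg,'featB','cat/in2');
analyzeNetwork(lg)   % validates; with concatenationLayer(2,2,...) it does not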
In the code provided, the feature input layer 'featInput' declares numFeatures (= 50) features per observation; the number of samples N is supplied by the mini-batch, not by the layer. The 'IterationDimension' setting in 'arrayDatastore' only controls how the 50 × N matrix is sliced into observations, so each observation reaching the 'cat' layer from this branch has size 50(C) × 1(B), and that is the size the concatenation has to work with. This is what causes the mismatch at the 'cat' layer.
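As a quick illustration of what 'IterationDimension' actually does (a toy matrix 'X', not your data): it selects the dimension of the array that arrayDatastore slices into observations, which determines how many reads the datastore produces and the size of each read; it does not change the size declared by any layer.

% Sketch: IterationDimension picks the dimension treated as observations.
X = randn(50,100);                                 % numFeatures x numObservations
dsObs  = arrayDatastore(X,IterationDimension=2);   % 100 reads, each a 50x1 column
dsFeat = arrayDatastore(X,IterationDimension=1);   % 50 reads, each a 1x100 row
preview(dsObs)    % returns a cell holding one 50x1 observation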
To resolve this, make the per-observation size coming from the feature branch consistent with what the 'cat' layer receives from the image branch, and concatenate along a dimension in which the two can actually be stacked. Keep the feature datastore delivering one numFeatures-by-1 vector per read (note the transpose, since fakeFeatures is numObservations-by-numFeatures):
featureDS = arrayDatastore(fakeFeatures.',IterationDimension=2); % each read: numFeatures x 1
Then, since both inputs arrive at 'cat' as 50(C) × 1(B), concatenate along dimension 1 (the channel dimension), as in the line you already have commented out in your layer array:
concatenationLayer(1,2,'Name','cat') % output per observation: 2*numFeatures(C) x 1(B)
With the feature input layer 'features' and the 'cat' layer agreeing on the per-observation size in this way, the input size mismatch error should be resolved.
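For completeness, here is a minimal end-to-end sketch that trains without the size error using only built-in layers; it avoids the squeeze step by feeding the class 2 features as 1 x 1 x numFeatures arrays, so they arrive in the same 'SSCB' format as the output of 'fc1', and concatenates along dimension 3 (the channel dimension for 'SSCB' data). This is a sketch under stated assumptions, not the paper's exact training scheme: it assumes each training observation pairs one class 1 image with one class 2 feature vector, so all three datastores yield numObservations reads and there is one label per observation. All layer and variable names are illustrative.

% Sketch (assumptions as described above).
imageInputSize = [28,28,1]; numFeatures = 50; numClasses = 10;
layers = [
    imageInputLayer(imageInputSize,'Normalization','none','Name','images')
    convolution2dLayer(3,8,'Name','conv')
    reluLayer('Name','relu')
    fullyConnectedLayer(numFeatures,'Name','fc1')   % 1 x 1 x 50 per observation
    concatenationLayer(3,2,'Name','cat')            % 1 x 1 x 100 per observation
    fullyConnectedLayer(numClasses,'Name','fc2')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','output')];
lgraph = layerGraph(layers);
% Feed the class 2 features as 1 x 1 x numFeatures "images" so their format
% matches the fully connected output before concatenation.
lgraph = addLayers(lgraph, ...
    imageInputLayer([1 1 numFeatures],'Normalization','none','Name','features'));
lgraph = connectLayers(lgraph,'features','cat/in2');

numObservations = 100;
fakeImages   = randn([imageInputSize,numObservations]);
fakeFeatures = randn([1,1,numFeatures,numObservations]);        % 1 x 1 x 50 x 100
fakeTargets  = categorical(mod(1:numObservations,numClasses));  % one label per observation
imagesDS  = arrayDatastore(fakeImages,IterationDimension=4);
featureDS = arrayDatastore(fakeFeatures,IterationDimension=4);
targetDS  = arrayDatastore(fakeTargets,IterationDimension=2);
ds = combine(imagesDS,featureDS,targetDS);   % columns: images, features, targets
net = trainNetwork(ds,lgraph,trainingOptions("adam","MaxEpochs",1));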
0 comments