For a network with 1 inputs and 1 output, the datastore read function must return a cell array with 2 columns, but it returns a cell array with 1 column.

I read all the audio files into an audioDatastore, then applied a transform to get mel spectrograms.
Now when I try to train my network with the TransformedDatastore, it gives the error below:
"For a network with 1 inputs and 1 output, the datastore read function must return a cell array with 2 columns, but it
returns a cell array with 1 column."
Thanks

Accepted Answer

jibrahim
jibrahim on 25 Jan 2022
Hi Gudimalla,
The training process typically expects a pair from the datastore. For classification problems, it expects a mel spectrogram plus the expected label or class. For regression problems, it expects two signals (two spectrograms, two sequences, two sets of features, etc.) representing, for example, the input and the desired output.
This example might be helpful:
Similar to your case, it uses a transformed datastore to extract spectrograms. However, notice that the transformed datastore is combined with another datastore that holds the labels:
adsSpecTrain = transform(adsAugTrain, @(x)getSpeechSpectrogram(x,afe,params));
Use an arrayDatastore to hold the training labels.
labelsTrain = arrayDatastore(adsTrain.Labels);
Create a combined datastore that points to the mel-frequency spectrogram data and the corresponding labels.
tdsTrain = combine(adsSpecTrain,labelsTrain);
tdsTrain is then passed to trainNetwork.
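Putting the steps above together, a quick way to confirm the combined datastore has the shape trainNetwork expects is to call preview on it (a minimal sketch; getSpeechSpectrogram, afe, and params are helper names from the linked example and are assumed to exist in your workspace):

```matlab
% Transformed datastore yields predictors (mel spectrograms)...
adsSpecTrain = transform(adsAugTrain, @(x)getSpeechSpectrogram(x,afe,params));
% ...arrayDatastore yields the corresponding responses (labels)...
labelsTrain = arrayDatastore(adsTrain.Labels);
% ...and combine pairs them up, so each read returns {predictor, response}.
tdsTrain = combine(adsSpecTrain,labelsTrain);

% Sanity check: for a single-input, single-output network, each read
% should return a cell array with 2 columns.
sample = preview(tdsTrain);
size(sample)   % expected to be 1-by-2: {spectrogram, label}
```

If size(sample,2) is 1 rather than 2, the datastore is only supplying predictors, which is exactly what the error message in the question is complaining about.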
  1 comment
Gudimalla Pruthviraj
Gudimalla Pruthviraj on 27 Jan 2022
One more request, please:
Below are my CNN layers and training options. I have tried changing many options for better accuracy, but it only ever reaches 20%-30%. Could you please take a look and suggest what I should change?
NETWORK LAYERS CODE :
layers = [
    imageInputLayer(imageSize,"Name","imageinput")
    convolution2dLayer([3 3],32,"Name","conv_1","Padding",[1 1],"PaddingValue",0)
    leakyReluLayer(0.33,"Name","leakyrelu_1")
    convolution2dLayer([3 3],32,"Name","conv_2","Padding",[1 1],"PaddingValue",0)
    leakyReluLayer(0.33,"Name","leakyrelu_2")
    maxPooling2dLayer([3 3],"Name","maxpool_1","Padding","same")
    dropoutLayer(0.25,"Name","dropout_1")
    convolution2dLayer([3 3],64,"Name","conv_3","Padding",[1 1],"PaddingValue",0)
    leakyReluLayer(0.33,"Name","leakyrelu_3")
    convolution2dLayer([3 3],64,"Name","conv_4","Padding",[1 1],"PaddingValue",0)
    leakyReluLayer(0.33,"Name","leakyrelu_4")
    maxPooling2dLayer([3 3],"Name","maxpool_2","Padding","same")
    dropoutLayer(0.25,"Name","dropout_2")
    convolution2dLayer([3 3],128,"Name","conv_5","Padding",[1 1],"PaddingValue",0)
    leakyReluLayer(0.33,"Name","leakyrelu_5")
    convolution2dLayer([3 3],128,"Name","conv_6","Padding",[1 1],"PaddingValue",0)
    leakyReluLayer(0.33,"Name","leakyrelu_6")
    maxPooling2dLayer([3 3],"Name","maxpool_3")
    dropoutLayer(0.25,"Name","dropout_3")
    convolution2dLayer([3 3],256,"Name","conv_7","Padding",[1 1],"PaddingValue",0)
    leakyReluLayer(0.33,"Name","leakyrelu_7")
    convolution2dLayer([3 3],256,"Name","conv_8","Padding",[1 1],"PaddingValue",0)
    leakyReluLayer(0.33,"Name","leakyrelu_8")
    globalMaxPooling2dLayer("Name","gmpool")
    fullyConnectedLayer(3,"Name","fc")
    leakyReluLayer(0.33,"Name","leakyrelu_9")
    dropoutLayer(0.5,"Name","dropout_4")
    sigmoidLayer("Name","sigmoid")
    classificationLayer('Classes',categories(ADSValidationlabel))];
TRAINING OPTIONS CODE :
miniBatchSize = 50;
options = trainingOptions('adam', ...
    'InitialLearnRate',0.001, ...
    'MaxEpochs',2, ...
    'LearnRateSchedule',"piecewise", ...
    'LearnRateDropFactor',0.1, ...
    'LearnRateDropPeriod',10, ...
    'MiniBatchSize',miniBatchSize, ...
    'Shuffle','every-epoch', ...
    'Plots','training-progress', ...
    'Verbose',false, ...
    'ValidationData',{ADSValidation,ADSValidationlabel}, ...
    'ValidationFrequency',ceil(numel(adtrain.Files)/miniBatchSize));
OUTPUT:
Accuracy: 20%-30%
Thank You


More Answers (0)
