Error in MATLAB's included deep learning example

I am trying to run the MATLAB example
openExample('nnet/SeqToSeqClassificationUsing1DConvAndModelFunctionExample')
in R2019b, but when I change it to train the network on the GPU, the example shows me the error below. Please help me run it, or give me a workaround for training on the GPU.
Error using gpuArray/subsasgn
Attempt to grow array along ambiguous dimension.
Error in deep.internal.recording.operations.ParenAssignOp/forward (line 45)
x(op.Index{:}) = rhs;
Error in deep.internal.recording.RecordingArray/parenAssign (line 29)
x = recordBinary(x,rhs,op);
Error in dlarray/parenAssign (line 39)
objdata(varargin{:}) = rhsdata;
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 484)
loss(i) = crossentropy(dlY(:,i,idx),dlT(:,i,idx),'DataFormat','CBT');
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 469)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 284)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);
Thanks!

1 comment

Edric Ellis on 15 Oct 2019
Thanks for reporting this - I can reproduce the problem using R2019b here, I shall forward this to the development team...


Accepted Answer

Joss Knight on 15 Oct 2019
There is a bug in this example, which will be rectified. Thanks for reporting it. As a workaround, initialize the loss variable in the maskedCrossEntropyLoss function:
function loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps)
    numObservations = size(dlY,2);
    loss = zeros([1,1],'like',dlY); % Add this line
    for i = 1:numObservations
        idx = 1:numTimeSteps(i);
        loss(i) = crossentropy(dlY(:,i,idx),dlT(:,i,idx),'DataFormat','CBT');
    end
end
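For context: the 'like' syntax preallocates loss with the same class as dlY (a dlarray, GPU-backed when training on the GPU), so the loop assigns into an existing 1x1 array instead of growing one from nothing. A minimal sketch of the pattern, with illustrative variable names not taken from the example:

```matlab
% Illustrative sketch only: preallocating 'like' an existing dlarray.
dlY = dlarray(rand(4,3,'single'));   % stand-in for the network output

loss = zeros([1,1],'like',dlY);      % 1x1 dlarray, single, same device as dlY
for i = 1:3
    loss(i) = sum(dlY(:,i));         % grows along a well-defined (row) direction
end
```

Without the preallocation, the recorded assignment starts from an empty array on the GPU, and growing that by linear indexing is what raises the "ambiguous dimension" error.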

6 comments

Javier Bush on 17 Oct 2019
When I change the miniBatchSize to 2, I get the following error. I am trying to understand the example, so being able to change miniBatchSize to a different value would help.
Any workarounds?
Thanks!
Joss Knight on 17 Oct 2019
That's odd - I'll get back to you on that.
I appreciate your support. I just changed the miniBatchSize to 2 and I get the following error:
Index exceeds the number of array elements (1).
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 486)
idx = 1:numTimeSteps(i);
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 472)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 287)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);
Katja Mogalle on 25 Oct 2019
There are some small issues in the example script that prevent you from setting miniBatchSize > 1. The fix is pretty simple, though.
1) Replace the modelGradients function with the following:
function [gradients,loss] = modelGradients(dlX,T,parameters,hyperparameters,numTimeSteps)
    dlY = model(dlX,parameters,hyperparameters,true);
    dlY = softmax(dlY,'DataFormat','CBT');
    dlT = dlarray(T,'CBT');
    loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
    gradients = dlgradient(mean(loss),parameters); % this line was changed to compute the mean loss
end
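The mean is needed because loss is now a 1-by-miniBatchSize vector of per-observation losses, and dlgradient differentiates a scalar with respect to the parameters. A small standalone sketch of that requirement (not code from the example):

```matlab
% Illustrative only: dlgradient needs a scalar first argument, so a vector
% of per-observation losses must be reduced (e.g. with mean) first.
f = @(x) dlgradient(mean(x.^2), x);   % mean reduces the vector to a scalar
x = dlarray([1 2 3]);
grad = dlfeval(f, x);                 % d(mean(x.^2))/dx = 2*x/3
```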
2) Replace the transformSequences function with the following:
function [XTransformed, YTransformed, numTimeSteps] = transformSequences(X,Y)
    % Removed line which computed the numTimeSteps. We'll compute this later in the loop
    miniBatchSize = numel(X);
    numFeatures = size(X{1},1);
    sequenceLength = max(cellfun(@(sequence) size(sequence,2),X));
    classes = categories(Y{1});
    numClasses = numel(classes);
    sz = [numFeatures miniBatchSize sequenceLength];
    XTransformed = zeros(sz,'single');
    sz = [numClasses miniBatchSize sequenceLength];
    YTransformed = zeros(sz,'single');
    for i = 1:miniBatchSize
        predictors = X{i};
        numTimeSteps(i) = size(predictors,2); % This line now sets the time steps for the i-th observation
        % Create dummy labels.
        responses = zeros(numClasses, numTimeSteps(i), 'single'); % This line also uses the i-th observation's numTimeSteps
        for c = 1:numClasses
            responses(c,Y{i}==classes(c)) = 1;
        end
        % Left pad.
        XTransformed(:,i,:) = leftPad(predictors,sequenceLength);
        YTransformed(:,i,:) = leftPad(responses,sequenceLength);
    end
end
Note, however, that depending on your GPU you might run into out-of-memory issues even with a small miniBatchSize. I have a GeForce GTX 1080 and I already hit this issue with a miniBatchSize of 3.
We will work on updating the example to fix these issues as soon as possible. Apologies for the inconvenience!
Javier Bush on 26 Oct 2019
Thanks, I can change miniBatchSize now.
Zekun on 29 Dec 2019
Edited: Walter Roberson on 30 Dec 2019
I found another solution for:
"Error using gpuArray/subsasgn
Attempt to grow array along ambiguous dimension."
In dlarray/parenAssign.m (located at "\R2019b\toolbox\nnet\deep\@dlarray\parenAssign.m"), line 15 is:
obj = zeros(0, 0, 'like', rhs);
Replace line 15 with the following 2 lines:
szrhs = size(rhs);
obj = zeros(szrhs(1), szrhs(2), 'like', rhs);
Users cannot directly edit this file, so I backed it up and replaced it with a new file.


More Answers (2)

Linda Koletsou Soulti on 22 Oct 2019


Thank you for reporting the issue. The error you are getting is related to an attempt to grow a gpuArray using linear indexing assignment.
For more information please refer to the following bug report:
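The failure can be reproduced outside the example, since CPU and GPU arrays disagree on this kind of growth. A minimal sketch (the GPU part requires Parallel Computing Toolbox and a supported device; not code from the example):

```matlab
% Illustrative only: growing an empty array by linear-indexed assignment.
x = zeros(0,0);
x(3) = 1;                   % CPU: the empty array grows into a 1x3 row vector

g = gpuArray(zeros(0,0));
% g(3) = 1;                 % GPU: "Attempt to grow array along ambiguous dimension."
```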

1 comment

Linda,
I just changed the miniBatchSize to 2 in the same example and I get the following error; could you please help me with that? I think this is a bug, because miniBatchSize is offered as a parameter in the example but you cannot change it.
Index exceeds the number of array elements (1).
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>maskedCrossEntropyLoss (line 486)
idx = 1:numTimeSteps(i);
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample>modelGradients (line 472)
loss = maskedCrossEntropyLoss(dlY, dlT, numTimeSteps);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in SeqToSeqClassificationUsing1DConvAndModelFunctionExample (line 287)
[gradients, loss] = dlfeval(@modelGradients,dlX,Y,parameters,hyperparameters,numTimeSteps);

