Using weights from OL in CL training; how should the weight vector(s)/cell matrices be formatted when used as input in train() ?

1 view (last 30 days)
Dear reader,
I am trying to carry the weights obtained in OL over to the CL training. I can see that the amount of data contained in the weight sets .IW, .LW and .b changes when going from open loop to closed loop....still, the weight vector obtained from getwb() has the same amount of data in both OL and CL. Any ideas how to format the weight vector (designated EWc1 in the code below) before passing it to train()? Is there any way that preparets() (or a similar function) can handle this?
Code and error message:
close all
clear all
% format long
T = simplenar_dataset;
[I,N] = size(T);
d = 5;
FD = 1:d;
H = 10;
% open net number one, input for closed net number one and closed net number two
neto1 = narnet( FD, H );
neto1.divideFcn = 'divideblock';
[ Xo1, Xoi1, Aoi1, To1] = preparets( neto1, {}, {}, T );
to = cell2mat( To1 );
% zto = zscore(to,1);
varto1 = mean(var(to',1));
% minmaxto = minmax([ to ; zto ]);
rng( 'default' )
[neto1,tro,Yo1,Eo1,Aof1,Xof1] = train( neto1, Xo1, To1, Xoi1, Aoi1 );
[Yo1,Xof1,Aof] = neto1( Xo1, Xoi1, Aoi1 );
Eo1 = gsubtract( To1, Yo1 );
NMSEo1 = mse( Eo1 ) /varto1;
yo1 = cell2mat( Yo1 );
netc1 = closeloop(neto1);
EWo1=getwb(neto1);
EWc1=getwb(netc1);
isequal( EWo1, EWc1); % 1
netc1.divideFcn = 'divideblock';
[ Xc1, Xci1, Aci1, Tc1, EWc1 ] = preparets( netc1, {}, {}, T, EWo1 ); % 1.232667933023756e-08
isequal( EWo1, cell2mat(EWc1)); % 1 if EWo1 is included in preparets, 0 if EWo1 is NOT included in preparets
figure(1)
plot(1:length(EWo1),EWo1,1:length(cell2mat(EWc1)),cell2mat(EWc1))
isequal( Tc1, To1);
tc = to;
[netc1,troc1,Yc1,Ec1,Acf1,Xcf1] = train( netc1, Xc1, Tc1, Xci1, Aci1, EWc1);
% Here, in the training, I would like to insert EWc1 to continue working with the weights from
% the preparets call nine lines up. However, when adding EWc1 as the last
% input parameter I get the following error:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Error using nntraining.setup (line 17)
% Error weights EW{1,1} contains negative values.
% Error in network/train (line 292)
% [net,rawData,tr,err] = nntraining.setup(net,net.trainFcn,X,Xi,Ai,T,EW,~isGPUArray);
%Error in question160516 (line 50)
% [netc1,troc1,Yc1,Ec1,Acf1,Xcf1] = train( netc1, Xc1, Tc1, Xci1, Aci1, EWc1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% [netc1,troc1,Yc1,Ec1,Acf1,Xcf1] = train( netc1, Xc1, Tc1, Xci1, Aci1, EWc1);
EWc1=getwb(netc1);
disp('Weights IW') % Here I try to show the content of each weight set
o_iw=neto1.IW
c_iw=netc1.IW
disp('Weights LW')
o_lw=neto1.LW
c_lw=netc1.LW
disp('Weights b')
o_b=neto1.b
c_b=netc1.b
isequal( EWo1, EWc1); % 0
figure(2)
plot(1:length(EWo1),EWo1,1:length(EWc1),EWc1)
[Yc1,Xcf1,Acf1] = netc1( Xc1, Xci1, Aci1 );
Ec1 = gsubtract( Tc1, Yc1 );
yc = cell2mat( Yc1 );
NMSEc = mse(Ec1) /var(tc,1);
[Yc1_2,Xcf1_2,Acf1_2] = netc1( Xc1, Xci1, Aci1 );
Xc1_2 = cell(1,N);
[Yc1_2,Xcf1_2,Acf1_2] = netc1( Xc1_2, Xcf1_2, Acf1_2 );
yc1_2 = cell2mat(Yc1_2);
If you would like to run the code without getting the error, just remove EWc1 from the end of [netc1,troc1,Yc1,Ec1,Acf1,Xcf1] = train( netc1, Xc1, Tc1, Xci1, Aci1, EWc1);
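For completeness, here is a sketch of the same pipeline with EWc1 removed from the final train() call, under the assumption that closeloop() already carries the open-loop weights into the closed-loop net, so that no separate weight argument is needed:

```matlab
% Sketch: the pipeline above without EWc1 in the final train() call.
% Assumption: closeloop() transfers the open-loop weights into netc1,
% so train() starts from them automatically.
T = simplenar_dataset;
net = narnet( 1:5, 10 );                 % FD = 1:5, H = 10, as above
net.divideFcn = 'divideblock';
[ Xo, Xoi, Aoi, To ] = preparets( net, {}, {}, T );
rng( 'default' )
net = train( net, Xo, To, Xoi, Aoi );    % open-loop training
netc = closeloop( net );                 % weights reformatted, not lost
[ Xc, Xci, Aci, Tc ] = preparets( netc, {}, {}, T );
netc = train( netc, Xc, Tc, Xci, Aci );  % no EW argument: trains from the
                                         % open-loop weights as initial values
```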
Regards
Staffan

Accepted Answer

Greg Heath
Greg Heath on 18 May 2016
Edited: Greg Heath on 18 May 2016
% Using weights from OL in CL training; how should the
% weight vector(s)/cell matrices be formatted when used
% as input in train() ?
% 2 views (last 30 days)
% Asked by Staffan 17MAY2016
%
% I am trying to attach the weights obtained in OL in the CL
% training. I can see that the amount of data contained in the
% weight sets; .IW, .LW and .B are altered when going from
% open loop to closed loop....still, the weight vector obtained
% from getwb() have the same amount of data for both in
% OL and CL. Any ideas how to format the weight vector (in
% the code below the weight vector is designated EWc1)
% before inserting this to train()? Is there any way that
% preparets() (or a similar function) can handle this?
GEH0 = [ ' YOU HAVE CONFUSED NETWORK WEIGHT BIAS ' ...
    ' VECTORS, WB, FROM GETWB WITH ERROR ' ...
    ' WEIGHTS, EW, OF LENGTH N THAT ARE CHOSEN ' ...
    ' BY THE PROGRAMMER TO WEIGHT EACH TERM ' ...
    ' IN MEAN SQUARE ERROR ' ]
GEH1 = 'I REMOVED SOME ENDING SEMICOLONS BELOW TO CHECK RESULTS'
clc
% Code and error message:
close all
clear all
% format long
T = simplenar_dataset;
[ I, N ] = size(T) % [ 1 100 ]
d = 5
GEH2= ' WHY 5 ?'
FD = 1:d;
H = 10;
% open net number one, input for closed net number
% one and closed net number two
neto1 = narnet( FD, H );
neto1.divideFcn = 'divideblock';
[ Xo1, Xoi1, Aoi1, To1] = preparets( neto1, {}, {}, T );
to = cell2mat( To1 );
% zto = zscore(to,1);
varto1 = mean(var(to',1)) % 0.062747
% minmaxto = minmax([ to ; zto ]);
rng( 'default' )
% [neto1,tro,Yo1,Eo1,Aof1,Xof1] = train( neto1, Xo1, To1, Xoi1, Aoi1 );
GEH3 = ' ERROR1: SWITCH Aof1 and Xof1'
[neto1,tro,Yo1,Eo1,Xof1,Aof1] = train( neto1, Xo1, To1, Xoi1, Aoi1);
%[Yo1,Xof1,Aof] = neto1( Xo1, Xoi1, Aoi1 );
GEH4 = 'ERROR: Aof1 not Aof'
%Eo1 = gsubtract( To1, Yo1 );
GEH5 = ' COMMENT ABOVE 2 REDUNDANT STATEMENTS'
NMSEo1 = mse( Eo1 ) /varto1 %1.6546e-09
GEH6 = ' ALWAYS MAKE SURE NMSEo1 IS ADEQUATE BEFORE CL'
yo1 = cell2mat( Yo1 );
netc1 = closeloop(neto1);
EWo1=getwb(neto1);
EWc1=getwb(netc1);
isequal( EWo1, EWc1) % 1
GEH7 = [ 'INCORRECT NOTATION: EW IS RESERVED FOR MSE' ...
' ERROR WEIGHTS. USE WBo1 AND WBc1 FOR WEIGHT '...
' BIAS VECTORS ' ]
%netc1.divideFcn = 'divideblock';
GEH8 = 'ABOVE ASSIGNMENT IS UNNECESSARY'
[ Xc1, Xci1, Aci1, Tc1, EWc1 ] = preparets( netc1, {}, {}, T, EWo1 ); % 1.232667933023756e-08
GEH9 = 'ERROR: SEE GEH0'
GEH10 = 'WHAT IN THE WORLD IS 1.232667933023756e-08 ???'
% isequal( EWo1, cell2mat(EWc1)); % 1 if EWo1 is included in preparets, 0 if EWo1 is NOT included in preparets
% figure(1)
% plot(1:length(EWo1),EWo1,1:length(cell2mat(EWc1)),cell2mat(EWc1))
GEH11 = 'DELETE ABOVE 3 STATEMENTS'
isequal( Tc1, To1);
tc = to;
[netc1,troc1,Yc1,Ec1,Acf1,Xcf1] = train( netc1, Xc1, Tc1, Xci1, Aci1, EWc1);
GEH12 = 'ERRORS: 1: SWITCH Acf1 AND Xcf1 2: REMOVE EWc1'
GEH13 = 'I''LL STOP HERE'
HOPE THIS HELPS
Greg
  2 comments
Staffan
Staffan on 18 May 2016
Edited: Staffan on 18 May 2016
Greg,
You are correct; the main item here is that I thought EW was in some sense a representation of net.IW, net.LW and net.b.
I do not fully comprehend how neto.IW, neto.LW and neto.b are reformatted when closing the loop. neto1.LW is a cell matrix with the following dimensions:
[] []
[1x10 double] []
...and netc1.LW is a cell matrix with the following dimensions (regardless of whether training is carried out or not):
[] [10x5 double]
[1x10 double] []
I would like to assume that, since no input parameter(s) are used to train the network in the closed loop, the weight matrices should differ between open and closed loop....and this might very well be true, since netc1.IW does not contain any values. But how can it be that the amount of data in netc1.LW has increased (compared to neto1.LW)?
My final goal with this thread is to be able to understand how to start a closed loop with the weights from the open loop. This is currently my understanding of this question:
  • Item one: When an open loop is converted to a closed loop ( netc = closeloop(neto); ), the weights from the open loop are used in the closed loop. The weights from the open loop are automatically reformatted so they can be used with a closed loop. If the closed loop is trained, the weights from the open loop are used as initial conditions.
Is this true?
If I, for some reason, wouldn't like to use the weights from the open loop (or even create the open loop) I guess I could close the loop right after creating the net with narxnet or narnet. However, as you have written in earlier posts, training a closed network from scratch is time consuming; so as long as I can make sure that I obtain a small NMSEo I would like to think that it's better to start with training of the open loop and closing the loop.
A question on EW: do you know why this array contains 71 values (for the example above)? I would expect one value or 100 values...but not 71. (The documentation on train, http://se.mathworks.com/help/nnet/ref/train.html, does not go into detail on this particular subject.)
Regards
Staffan
Staffan
Staffan on 19 May 2016
Edited: Staffan on 19 May 2016
On behalf of Greg I would like to submit the following answer. It was not possible for Greg to submit the answer himself due to a technical issue. Greg, I've labeled your additions using bold text and have removed the bold text from my previous text.
////////////////////////
Greg,
You are correct; the main item here is that I thought EW was in some sense a representation of net.IW, net.LW and net.b. I do not fully comprehend how neto.IW, neto.LW and neto.b are reformatted when closing the loop. neto1.LW is a cell matrix with the following dimensions:
[] []
[1x10 double] []
...and netc1.LW is a cell matrix with the following dimensions (regardless of whether training is carried out or not):
[] [10x5 double]
[1x10 double] []
I would like to assume that, since no input parameter(s) are used to train the network in the closed loop, the weight matrices should differ between open and closed loop....and this might very well be true, since netc1.IW does not contain any values. But how can it be that the amount of data in netc1.LW has increased (compared to neto1.LW)?
% 2. When the loop is closed, the target-to-input weights in neto.IW are moved to the feedback output-to-input weights in netc.LW.
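The relocation described in point 2 can be checked directly by comparing the cell contents of the two nets from the code above (a sketch):

```matlab
% Sketch: verify that the open-loop target-to-input weights neto1.IW{1,1}
% reappear as the closed-loop feedback weights netc1.LW{1,2}.
size( neto1.IW{1,1} )   % 10x5 (H x d) in the open-loop net
size( netc1.LW{1,2} )   % 10x5 in the closed-loop net
% If closeloop only relocates the matrix, the values should match too:
isequal( neto1.IW{1,1}, netc1.LW{1,2} )
```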
My final goal with this thread is to be able to understand how to start a closed loop with the weights from the open loop. This is currently my understanding of this question:
  • Item one: When an open loop is converted to a closed loop ( netc = closeloop(neto); ), the weights from the open loop are used in the closed loop. The weights from the open loop are automatically reformatted so they can be used with a closed loop. If the closed loop is trained, the weights from the open loop are used as initial conditions.
Is this true?
% 3. Yes
If I, for some reason, wouldn't like to use the weights from the open loop (or even create the open loop) I guess I could close the loop right after creating the net with narxnet or narnet. However, as you have written in earlier posts, training a closed network from scratch is time consuming; so as long as I can make sure that I obtain a small NMSEo I would like to think that it's better to start with training of the open loop and closing the loop.
% 4. Yes
A question on EW: do you know why this array contains 71 values (for the example above)? I would expect one value or 100 values...but not 71. (The documentation on train, http://se.mathworks.com/help/nnet/ref/train.html, does not go into detail on this particular subject.)
% 5. I don't know. Maybe only training data are weighted. However, Ntrn = 70, not 71. Use the command TYPE to check the source code of TRAIN & TRAINLM
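One possible cross-check on the count (an observation, not from the documentation): 71 also happens to equal the total number of weight and bias elements of this architecture, which is exactly what getwb() returns, consistent with GEH0 above:

```matlab
% Sketch: 71 matches the weight/bias count for H = 10 hidden units and
% d = 5 feedback delays, suggesting the 71-element vector is the
% weight/bias vector from getwb(), not a set of error weights.
nIW = 10*5;      % delayed-feedback input weights, IW{1,1}: 10x5
nLW = 1*10;      % layer weights, LW{2,1}: 1x10
nb  = 10 + 1;    % biases, b{1}: 10x1 and b{2}: 1x1
nWB = nIW + nLW + nb        % 71
% Compare with neto1.numWeightElements and length(getwb(neto1)).
```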
%
% type train
%
% type trainlm
%
Hope this helps.
Greg
