Why do predicted outputs come out delayed in a NARX network?
FRANCISCO
on 26 Mar 2013
Commented: Greg Heath
on 19 Feb 2017
I applied my code to the simplenarx_dataset data. To do this I performed the following steps:
1 - I computed the autocorrelation and cross-correlation and looked at their significant peaks to see which lags give the most information: ID = 1, FD = 1.
2 - I found the number of hidden neurons: H = 5.
3 - I created the network and evaluated the details. The purpose of this post is not to evaluate those details but to understand why a delayed response appears when performing closeloop; still, I publish the details and code in case there is some other error. My code is as follows (I used 80 samples for training the network and 20 to check with closeloop):
p=p';
t=t';
p1=p(1:1,1:80);
p2=p(1:1,81:end);
t1=t(1,1:80);
t2=t(1,81:end);
inputSeries = tonndata(p1,true,false);
targetSeries = tonndata(t1,true,false);
inputDelays = 1:1;
feedbackDelays = 1:1;
hiddenLayerSize = 5;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
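% preparets shifts the sequences by the maximum delay and returns the initial input and layer delay states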
net.divideFcn='divideblock';
net.divideParam.trainRatio=0.70;
net.divideParam.valRatio=0.15;
net.divideParam.testRatio=0.15;
[I N]=size(p1);
[O N]=size(t1);
N=N-1;
Neq=N*O;
ID=1;
FD=1;
Nw = (ID*I+FD*O+1)*hiddenLayerSize+(hiddenLayerSize+1)*O;
Ntrneq = N -2*round(0.15*N);
Ndof=Ntrneq-Nw;
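% Nw: number of unknown weights; Ntrneq: number of training equations after the 0.70/0.15/0.15 split; Ndof = Ntrneq - Nw: remaining degrees of freedom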
ttotal=t1(1,1:N);
MSE00=mean(var(ttotal,1));
MSE00a=mean(var(ttotal,0));
t3=t(1,1:N);
[trainInd,valInd,testInd] = divideblock(t3,0.7,0.15,0.15);
MSEtrn00=mean(var(trainInd,1));
MSEtrn00a=mean(var(trainInd,0));
MSEval00=mean(var(valInd,1));
MSEtst00=mean(var(testInd,1));
net.trainParam.goal = 0.01*Ndof*MSEtrn00a/Ntrneq;
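% stop when the degree-of-freedom-adjusted training MSE reaches 1% of the naive reference MSE, i.e., R2trna >= 0.99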
[net,tr,Ys,Es,Xf,Af] = train(net,inputs,targets,inputStates,layerStates);
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
MSE = perform(net,targets,outputs);
MSEa=Neq*MSE/(Neq-Nw);
R2=1-MSE/MSE00;
R2a=1-MSEa/MSE00a;
MSEtrn=tr.perf(end);
MSEval=tr.vperf(end);
MSEtst=tr.tperf(end);
R2trn=1-MSEtrn/MSEtrn00;
R2trna=1-MSEtrn/MSEtrn00a;
R2val=1-MSEval/MSEval00;
R2tst=1-MSEtst/MSEtst00;
and my results are:
ID=1
FD=1
H=5
N=79
Ndof=34
Neq=79
Ntrneq=55
Nw=21
O=1
I=1
R2=0.8036
R2a=0.7347
R2trn=0.8763
R2trna=0.8786
R2val=0.7862
R2tst=0.7541
As I mentioned earlier, I will not focus much on accuracy here; I will do that later. The code I applied for closeloop was:
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
view(netc)
NumberOfPredictions = 15;
s=cell2mat(inputSeries);
t4=cell2mat(targetSeries);
a=s(1:1,79:80);
b=p2(1:1,1:15);
newInputSeries=[a b];
c=t4(1,80);
d=nan(1,16);
newTargetSet=[c d];
newInputSeries=tonndata(newInputSeries,true,false);
newTargetSet=tonndata(newTargetSet,true,false);
[xc,xic,aic,tc] = preparets(netc,newInputSeries,{},newTargetSet);
yPredicted = sim(netc,xc,xic,aic);
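% in the closed loop the network feeds its own outputs back through the feedback delays; the NaN targets only set how many steps to predict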
w=cell2mat(yPredicted);
plot(cell2mat(yPredicted),'DisplayName','cell2mat(yPredicted)','YDataSource','cell2mat(yPredicted)'); figure(gcf)
plot(t2,'r','DisplayName','targetsComprobacion')
hold on
plot(w,'b','DisplayName','salidasIteradas')
title({'ITERACCIONES'})
legend('show')
hold off
and the result was the chart at the link below:
In this picture the blue line (the predicted outputs) lags behind the red line (the real targets). I'd like to know what I can do to get the blue line in front of the red line, that is, to come out one step early. As I said, in this post I want to focus on why this happens and how I can fix it.
Thank you very much.
Accepted Answer
Greg Heath
on 29 Mar 2013
% 1. Selected ending semicolons are removed so that key values are displayed, which aids debugging
[P, T] = simplenarx_dataset;
whos
p = cell2mat(P);
t = cell2mat(T);
I = size(p,1)
O = size(t,1)
ID = 1:1
FD = 1:1
H = 5
NID = length(ID)
NFD = length(FD)
Nw = (NID*I+NFD*O+1)*H+(H+1)*O
% 2. Use NID and NFD for Nw in case delays are not single
% 3. No need to use tonndata because simplenarx_dataset is already in the cell-array form that preparets expects.
% 4. No need for (p1,t1) and (p2,t2). Delete both.
% 5. Input delays are suboptimal. Did you try to find the significant lags of the target/input cross-correlation function?
% 6. Feedback delays are suboptimal. Did you try to find the significant lags of the target autocorrelation function?
% 7. H is suboptimal. Was it chosen using the suboptimal delays? If so, please explain how.
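% A minimal sketch of the lag search behind points 5-7 (the procedure is
% described in my next answer below). It assumes the training target and
% input rows ttrn and xtrn (1 x Ntrn, illustrative names) have already
% been extracted, and it ignores the nncorr symmetry bug discussed there.
zt = zscore(ttrn,1);
zx = zscore(xtrn,1);
autocorrt = nncorr(zt,zt,Ntrn-1,'biased'); % lags -(Ntrn-1):(Ntrn-1)
crosscorrtx = nncorr(zt,zx,Ntrn-1,'biased');
sigthresh = 0; % 95% significance level estimated from random noise
for k = 1:100
    zn = zscore(randn(1,Ntrn),1);
    sortedabs = sort(abs(nncorr(zt,zn,Ntrn-1,'biased')));
    sigthresh = sigthresh + sortedabs(floor(0.95*(2*Ntrn-1)))/100;
end
sigFD = find(abs(autocorrt(Ntrn+1:end)) >= sigthresh) % candidate feedback lags (>= 1)
sigID = find(abs(crosscorrtx(Ntrn:end)) >= sigthresh) - 1 % candidate input lags (>= 0)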
rng(0)
net = narxnet(ID,FD,H);
[inputs,inputStates,layerStates,targets] = preparets(net,P,{},T);
whos P T inputs inputStates layerStates targets
%8. N=N-1: DELETE. NOT A GOOD IDEA TO USE A VARIABLE OR PARAMETER NAME ON BOTH SIDES OF AN EQUATION. BESIDES, PREPARETS OUTPUTS THE CORRECT DIMENSIONS
[ I N ] = size(inputs)
[ O N ] = size(targets)
% 9. No need for ttotal; it should be the same as targets. No need for Neq, MSE00, MSE00a and t4. Delete them.
net.divideFcn='divideblock';
[trainInd,valInd,testInd] = divideblock(N,0.7,0.15,0.15);
ttrn = cell2mat(targets(trainInd));
tval = cell2mat(targets(valInd));
ttst = cell2mat(targets(testInd));
Ntrn = length(trainInd)
Nval = length(valInd)
Ntst = length(testInd)
Ntrneq = prod(size(ttrn)) % Ntrn*O
Ndof = Ntrneq-Nw
%Naive Constant Output Model
ytrn00= mean(ttrn,2);
Nw00 = size(ytrn00,1) % = O estimated constants
Ndof00 = Ntrneq-Nw00
MSEtrn00 = sse(ttrn-ytrn00)/Ntrneq
MSEtrn00 = mean(var(ttrn,1)) % equivalent form
MSEtrn00a = sse(ttrn-ytrn00)/Ndof00
MSEtrn00a = mean(var(ttrn,0)) % equivalent form
%9. MSEval00 and MSEtst00 should be obtained from the Naive constant output model output
MSEval00 = mse(tval-ytrn00)
MSEtst00 = mse(ttst-ytrn00)
net.trainParam.goal = 0.01*Ndof*MSEtrn00a/Ntrneq; % R2trna >= 0.99
rng(0)
[net,tr,Ys,Es,Xf,Af] = train(net,inputs,targets,inputStates,layerStates);
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
MSE = perform(net,targets,outputs);
MSEa=Neq*MSE/(Neq-Nw)
R2=1-MSE/MSE00
R2a=1-MSEa/MSE00a
% 10. The DOF "a"djustment is only applied to the training data
% 11. Can delete the last 6 equations that refer to all of the data instead of the trn/val/tst division.
MSEtrn=tr.perf(end)
MSEtrna = Ntrneq*MSEtrn/Ndof
MSEval=tr.vperf(end)
MSEtst=tr.tperf(end)
% 12. Using "end" is only valid if training converges because of tr.min_grad (not valstop). Better to use tr.best_epoch.
R2trn=1-MSEtrn/MSEtrn00
R2trna=1-MSEtrna/MSEtrn00a
% 13. The original code had a misprint here: R2trna was computed from MSEtrn instead of MSEtrna.
R2val=1-MSEval/MSEval00
R2tst=1-MSEtst/MSEtst00
% 14. Unable to compare results because you did not initialize the RNG before its first use in the net creation command net = ..., where random initial weights (for H = 5 hidden nodes) were assigned. I will use rng(0) before the net creation.
I will do the closeloop part next.
Thank you for formally accepting my answer
Greg
More Answers (2)
Greg Heath
on 30 Mar 2013
Edited: Greg Heath
on 7 Apr 2013
1. I do not have autocorr or parcorr AND your plots are too fuzzy for me to understand.
2. Since training parameters should only depend on training data, I use ztrn = zscore(ttrn,1), autocorrt = nncorr(ztrn,ztrn,Ntrn-1,'biased') and find a 95% confidence level for significant auto AND cross-correlations with abs values >= 0.14 by repeating 100 times and averaging:
a. Crosscorrelate ztrn with n = zscore(randn(1,Ntrn),1)
b. Correct the nncorr symmetry bug by concatenating: crosscorrzn = [ crosscorrnz(1:Ntrn-1) crosscorrzn(Ntrn:end) ]
c. Sort the absolute values and find the significance threshold as the value at index floor( 0.95*(2*Ntrn-1) )
3. You used crosscorrxt instead of crosscorrtx. You found the max of crosscorrxt instead of all abs values >= the significance threshold for nonnegative lags (indices >= Ntrn) of crosscorrtx. THIS MAY ACCOUNT FOR YOUR ERRONEOUS SHIFT.
4. The ideal lags are ID = 1, FD = 1,2
5. To determine Hub (the upper bound on H) use Ntrneq, NID and NFD (see your equation for Nw).
6. I typically find Ntrials = 5 to be too small
7. You are using divideblock inside the inner loop. Since the data division doesn't change, use divideblock before the outer loop to obtain the indices and the data splits ttrn, tval, ttst, etc.
8. Inside the inner loop just assign the indices to net.divideParam.trainInd etc.
9. Continuously find and save the best net within the inner loop after you have R2val to compare with the current R2valmax. Then when you exit the outer loop you will already have the best net (and all of its statistics) to use for the closeloop stage. See the sketch after this list.
10. Test the netc on all of the data and overlay plot yc on tsc.
11. Train netc and compare trn/val/tst results with the openloop solution and the untrained closeloop solution.
12. Unfortunately, I have found that a good closeloop solution depends very critically on the choices of ID, FD and H. Just accepting the defaults is very suboptimal.
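For concreteness, here is a minimal sketch of the double loop in points 6-9. It assumes P and T are the original cell sequences and ID and FD are the final delay choices; Hmax, Ntrials and the pure-R2val selection are illustrative, not a fixed recipe.
rng(0)
net0 = narxnet(ID,FD,1); % template with the final delays, only used to shift the data
[Xs,Xi,Ai,Ts] = preparets(net0,P,{},T);
N = length(Ts);
[trainInd,valInd,testInd] = divideblock(N,0.7,0.15,0.15); % once, before the loops
ttrn = cell2mat(Ts(trainInd));
MSEval00 = mse(cell2mat(Ts(valInd)) - mean(ttrn,2)); % naive reference (O = 1 here)
Hmax = 10; Ntrials = 10; % illustrative sizes
R2valmax = -inf;
for h = 1:Hmax % outer loop: number of hidden nodes
    for i = 1:Ntrials % inner loop: random initial weights
        net = narxnet(ID,FD,h);
        net.divideFcn = 'divideind'; % reuse the fixed division
        net.divideParam.trainInd = trainInd;
        net.divideParam.valInd = valInd;
        net.divideParam.testInd = testInd;
        [net,tr] = train(net,Xs,Ts,Xi,Ai);
        R2val = 1 - tr.vperf(tr.best_epoch+1)/MSEval00;
        if R2val > R2valmax % keep the best net so far
            R2valmax = R2val;
            bestnet = net;
            besttr = tr;
        end
    end
end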
Hope this helps.
Thank you for formally accepting my answer
Greg
Greg Heath
on 7 Apr 2013
1. Your use of the words pump, rejections, cookies and futulos (not English) is not appropriate and somewhat confusing.
2. Once you have an openloop solution y, then netc = closeloop(net) followed by preparets and yc = netc(Xsc,Xic,Aic) should yield your closeloop output.
3. If yc and y differ substantially, you can either continue training the openloop configuration and repeat OR, directly train netc.
4. As long as ID > 0, you have a predictive net. So, I think you are misinterpreting your output.
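For example, a minimal sketch of point 2, assuming net is the trained openloop net and P, T are the original cell sequences:
netc = closeloop(net);
[Xsc,Xic,Aic,Tsc] = preparets(netc,P,{},T);
yc = netc(Xsc,Xic,Aic); % closed-loop output over all of the data
plot(cell2mat(Tsc),'r'), hold on
plot(cell2mat(yc),'b'), hold off % overlay yc on the targets (point 10 of my previous answer)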
2 comments
Constantin Silkin
on 19 Feb 2017
I'm using NARX and have just the same problem. The network doesn't actually carry out a prediction; instead it repeats a slightly changed previous value. Nevertheless, I could organize a prediction! For this purpose I prolong targetSeries and inputSeries: I just duplicate the last value of these series. In effect I give the network its own naive prediction, on the principle "tomorrow will be the same as today". The network processes this naive forecast according to its own knowledge and corrects it. The accuracy of such a prediction is about 70%. Am I right?
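A minimal sketch of that padding idea, with illustrative names (x and t are the original 1-by-N row vectors and netc is the trained closed-loop network):
xpad = tonndata([x x(end)],true,false); % repeat the last input: "tomorrow = today"
tpad = tonndata([t NaN],true,false); % the future target is unknown
[Xc,Xic,Aic] = preparets(netc,xpad,{},tpad);
ypad = netc(Xc,Xic,Aic);
ynext = ypad{end} % the network's corrected one-step forecast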