How can I avoid negative values in the output of a feedforward net?
Juan on 14 Oct 2014
Edited: Greg Heath on 17 Oct 2014
Hello
I'm creating a feedforward neural net to predict hourly values of solar radiation. However, at some hours at night, when it should output 0, the net produces a negative number instead.
The output is a vector of 15 elements (one value for each hour of the day), with values ranging from 0 to 1400.
Here is my code:
% Read the raw matrices first so their sizes can be reused below
x = xlsread('datosJP','inirr3');     % inputs, one sample per row
t = xlsread('datosJP','targirr3');   % targets, one sample per row
inputs  = tonndata(x,false,false);
targets = tonndata(t,false,false);
net = feedforwardnet([12,8],'trainlm');
% net.trainParam.lr = 0.05;          % not defined for trainlm (these are traingd/traingdm parameters)
% net.trainParam.mc = 0.1;
net.inputs{1}.processFcns  = {'removeconstantrows','mapminmax'};
net.outputs{3}.processFcns = {'removeconstantrows','mapminmax'};   % output is layer 3 for a [12,8] net
net.divideFcn = 'dividerand';
net.divideMode = 'time';
net.divideParam.trainRatio = 90/100;
net.divideParam.valRatio   = 10/100;
net.divideParam.testRatio  = 0/100;
net.performFcn = 'mse';
net = configure(net,inputs,targets);
a = 20*rand(12,size(x,2))-10;        % custom initial input weights, 12-by-(number of input variables)
net.IW{1} = a;
net = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs);
0 comments
Accepted Answer
Greg Heath on 14 Oct 2014
Edited: Greg Heath on 14 Oct 2014
1. If your targets are bounded for a physical or mathematical reason, the bounded transfer functions logsig or tansig can be used on the output layer. For documentation, use the help and doc commands. For example,
help logsig
doc logsig
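For example, here is a minimal sketch of that idea applied to the two-hidden-layer net from the question (so the output is layer 3, and inputs/targets are the cell arrays built there). The choice of tansig is an assumption on my part, made so that its bounded output range (-1,1) matches the default mapminmax normalization of the targets:
% Sketch: bound the output layer so the reverse mapminmax step cannot
% produce values below the target minimum (0 for this data).
net = feedforwardnet([12,8],'trainlm');
net.layers{3}.transferFcn = 'tansig';   % output confined to (-1,1), the default
                                        % mapminmax range used for the targets
net = train(net,inputs,targets);
outputs = net(inputs);                  % reverse normalization maps (-1,1) back
                                        % to [min(target), max(target)], so >= 0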
2. What sizes are your input and target matrices?
[ I N ] = size(inputs)
[ O N ] = size(targets)
3. If predictions are correlated with past inputs and past outputs, the best model could be a time-series network.
help narxnet
doc narxnet
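A minimal open-loop NARX sketch along those lines; the 1:2 delays and the hidden-layer size of 10 are placeholders rather than recommended values, and it assumes the spreadsheet rows are in hour-by-hour time order:
% Sketch: NARX time-series net; the delay lines let each prediction use
% recent inputs and recent target values.
x = xlsread('datosJP','inirr3');      % inputs, one row per hour
t = xlsread('datosJP','targirr3');    % targets, one row per hour
X = con2seq(x');                      % 1-by-N cell array of timesteps
T = con2seq(t');
net = narxnet(1:2,1:2,10);            % input delays, feedback delays, hidden size
[Xs,Xi,Ai,Ts] = preparets(net,X,{},T);% shift the data to fill the delay lines
net = train(net,Xs,Ts,Xi,Ai);
Y = net(Xs,Xi,Ai);
perf = perform(net,Ts,Y)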
4. If you stay with the static model, use the regression function FITNET, which calls FEEDFORWARDNET but adds the helpful regression plot. If you want to know the default values, just type, WITHOUT a SEMICOLON,
net = fitnet
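And a minimal fitnet usage sketch (assuming, as in the question's tonndata call, that the spreadsheet stores one sample per row, so the matrices are transposed to column samples here):
% Sketch: fitnet with defaults; plotregression shows the output-vs-target fit
% that the regression plot refers to.
x = xlsread('datosJP','inirr3')';     % I-by-N inputs (columns are samples)
t = xlsread('datosJP','targirr3')';   % O-by-N targets
net = fitnet(10);                     % 10 hidden nodes is the default size
[net,tr] = train(net,x,t);
y = net(x);
plotregression(t,y)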
5. It is confusing when you explicitly assign default values to parameters. For examples, look at the code in the help and doc examples.
The only parameters that I tend to change are net.divideFcn, net.trainParam.goal and net.trainParam.min_grad. Increasing the latter two can reduce training time without introducing significant errors.
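For instance, a sketch of changing only those three (reusing the matrix-form x and t from the previous sketch; the values are placeholders, not recommendations):
% Sketch: the three parameters mentioned above, everything else left at default.
net = fitnet(10);
net.divideFcn = 'divideblock';        % e.g. contiguous blocks instead of dividerand
net.trainParam.goal = 1e-3;           % stop once the MSE reaches this target
net.trainParam.min_grad = 1e-5;       % stop once the gradient falls below this
[net,tr] = train(net,x,t);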
6. Any good real-world design is going to require looking at tens or hundreds of candidates. My policy is to first use defaults, then change parameters to improve performance.
Except for the three parameters I mentioned above, it usually just comes down to finding the minimum number of hidden nodes that is sufficient and designing multiple candidates that differ only in their initial random weights.
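A sketch of that kind of search, varying the number of hidden nodes and the random initial weights (the ranges and the 10 trials per size are placeholders; x and t are the matrices from the earlier sketches):
% Sketch: train several candidates per hidden-layer size and keep the best one.
Hvec    = 2:2:20;                         % candidate hidden-layer sizes
Ntrials = 10;                             % random-weight restarts per size
bestPerf = Inf;
for H = Hvec
    for k = 1:Ntrials
        net = fitnet(H);
        net.trainParam.showWindow = false;  % keep the GUI quiet during the search
        [net,tr] = train(net,x,t);          % each call starts from new random weights
        perf = perform(net,t,net(x));
        if perf < bestPerf
            bestPerf = perf;
            bestNet  = net;
        end
    end
end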
Hope this helps
Thank you for formally accepting my answer
Greg
2 comments
Greg Heath on 17 Oct 2014
Edited: Greg Heath on 17 Oct 2014
Only 1 hidden layer (input-hidden-output) is sufficient.
fitnet differs from feedforwardnet only by a single output plot. However, I keep my problems straight by using fitnet only for regression and patternnet for classification; feedforwardnet never has to be used.
Use 'tansig' for the hidden layer and 'logsig' for the output. The default reverse normalization will yield the correct answer.
The default weight initialization is supposed to be optimal.
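Putting that together, a minimal sketch: one hidden layer, tansig hidden, logsig output. The hidden-layer size of 10 and the change of the output mapminmax range to [0,1] are assumptions on my part, the latter so that the normalized target range agrees with logsig's (0,1) output range:
% Sketch: single-hidden-layer fitnet with tansig hidden and logsig output.
net = fitnet(10);
net.layers{1}.transferFcn = 'tansig';      % hidden layer (also the default)
net.layers{2}.transferFcn = 'logsig';      % output layer, bounded to (0,1)
net.outputs{2}.processParams{2}.ymin = 0;  % assumes mapminmax is the 2nd processFcn;
net.outputs{2}.processParams{2}.ymax = 1;  %   match the logsig output range
[net,tr] = train(net,x,t);
y = net(x);                                % reverse normalization maps (0,1) back
                                           %   to [min(t), max(t)], so no negatives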
More Answers (0)