Changing the output layer transfer function of an ANN to anything other than 'purelin' gives a bad response
My network has 8 inputs, 1 output, and one hidden layer with 12 nodes. I have m = 53041 data points, and I'm training the network sequentially with interval = 577. So, the network is trained on data points 1 to i*577 and predicts data points (i*577)+1 to (i+1)*577.
interval = 577;                  % size of each training/prediction window
y_for = zeros(1, m);             % preallocate the forecast vector
for i = 1:fix(m/interval)
    % Train on all data seen so far (growing window)
    [net, tr] = train(net, x2(1:i*interval,:)', y2(1:i*interval)');
    if (i+1)*interval < m
        % Predict the next interval
        y_for(i*interval+1:(i+1)*interval) = net(x2(i*interval+1:(i+1)*interval,:)');
    else
        % Last chunk: predict up to the final sample
        y_for(i*interval+1:m) = net(x2(i*interval+1:m,:)');
    end
end
It works well (blue is the output of the network), but I get some values outside the physical domain of the output (0-100), as can be seen in Fig. 1.
Fig. 1
So, to restrict the output domain, I normalized the targets used to train the network to values in [0, 1] and then selected 'logsig' as the transfer function of the output layer. In theory the output should now be restricted to [0, 1], which I can rescale to [0, 100], and my network should improve. But when running it I get a very bad result (Fig. 2), and I found out that no matter which transfer function I choose, if it is different from 'purelin' I get a result like that.
net.layers{2}.transferFcn = 'logsig';
Fig. 2
These results do not make sense to me. Can you see my mistake?
Answers (1)
Krishna
30 May 2024
Hi Jonathan,
It appears that you're attempting to process sequence input with a feedforward neural network, which might not be the most effective approach. I would strongly recommend considering Recurrent Neural Networks (RNNs) instead, as they are better suited to sequential data and are likely to yield improved results. Additionally, rather than using a for-loop, use cell arrays to structure your data: arrange your sequence data in an 'n x a x b' format, where 'n' is the number of sequences, 'a' is the number of features per sequence, and 'b' is the length of each sequence. For instance, if you're dealing with 10 sequences, each with 4 features and length 100, your cell arrays would be of format 10 x 4 x 100 (see the sketch below).
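A minimal sketch of that layout, assuming the Deep Learning Toolbox sequence-to-sequence regression workflow (trainNetwork with an N-by-1 cell array whose cells are numFeatures-by-seqLength matrices); the layer sizes, training options, and random data here are illustrative only:
numSeq = 10; numFeatures = 4; seqLen = 100;
X = cell(numSeq, 1);                  % one cell per sequence
Y = cell(numSeq, 1);
for k = 1:numSeq
    X{k} = rand(numFeatures, seqLen); % placeholder 4 x 100 feature matrix
    Y{k} = rand(1, seqLen);           % placeholder 1 x 100 target sequence
end
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(12)                     % 12 hidden units, mirroring the MLP
    fullyConnectedLayer(1)
    regressionLayer];
options = trainingOptions('adam', 'MaxEpochs', 50, 'Verbose', false);
rnn = trainNetwork(X, Y, layers, options);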
Now, if you want to keep using a feedforward network, I found some mistakes in your code.
Firstly, it seems the data is not going into the network in the right windows:
[net, tr] = train(net, x2(1:i*interval,:)', y2(1:i*interval)');
Here the training data always starts from the first sample, so the window grows by 577 samples on every iteration. If you want to keep the sequence length fixed at 577, move the starting point of the window forward as well, as in the sketch below.
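For example, a fixed-length sliding window (a sketch reusing the variables from your code) would be:
% Fixed-length sliding window: each iteration trains on the most
% recent `interval` samples instead of everything seen so far
for i = 1:fix(m/interval)
    first = (i-1)*interval + 1;       % window start now moves with i
    last  = i*interval;
    [net, tr] = train(net, x2(first:last,:)', y2(first:last)');
end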
Next, if you have normalized the targets to [0, 1], there is little point in also using 'logsig': the network will learn to output values between 0 and 1 from the targets themselves.
After normalizing, if you want to convert the predictions back to the original scale, make sure you invert the scaling correctly.
If it is min-max scaling, you can use the formula
originalValue = (normalizedValue * (maxValue - minValue)) + minValue;
If it is z-scaling, use the standard deviation and mean of the original dataset to convert back to the original values (see the sketch below).
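A short sketch of both inverses, assuming minValue/maxValue (or mu/sigma) are computed from the original training targets y2:
% Min-max: forward scaling to [0, 1] and its inverse
minValue = min(y2);  maxValue = max(y2);
y2norm    = (y2 - minValue) / (maxValue - minValue);
yOriginal = y2norm * (maxValue - minValue) + minValue;

% Z-score: forward scaling and its inverse
mu = mean(y2);  sigma = std(y2);
y2z        = (y2 - mu) / sigma;
yOriginalZ = y2z * sigma + mu;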
From your figures, the performance without normalization appears superior to that after normalization. Therefore, it might be advantageous to train the network without normalization and, to manage the output range, implement a limiter that clips the network's output to the specified minimum and maximum (see the sketch below).
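A one-line limiter of that kind, using the y_for vector from your code:
% Clamp predictions to the physical range [0, 100]
y_for = min(max(y_for, 0), 100);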
You could explore these suggestions; they may improve your results.