Dynamic (Recurrent) Neural Network - Input Data Structure - Concurrent Inputs

Hi, I am using a dynamic neural network (in fact a recurrent neural network). My input data consists of several sequences (time series). The MATLAB Help says that if we have multiple sequences, we should use a cell array whose elements are matrices. For example, this is my input data to the network: {[I1] [I2] ... [In]}, where every Ii (1 <= i <= n) is a matrix of size 20x500. Here 20 is the input vector size of the network and, as the Help says, I1 contains time step 1 of all the sequences, I2 contains time step 2, and so on. Since the sequences are not of equal length, some entries have the value NaN (mostly toward the end of the cell array). Anyway, this is my network configuration: input-hidden-output layers, with the hidden layer recurrently connected to itself.
Here is the problem: when I train the network with the input set described above, it takes a very long time (after 2-3 hours, maybe one epoch or less!).
Where is the problem? Am I doing something wrong, or ...? Please answer. Thank you in advance!
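For reference, the concurrent-sequence layout described in the question can be built with the Neural Network Toolbox functions `con2seq` and `catsamples` (a minimal sketch with two toy sequences, not the poster's actual data):

```matlab
% Two hypothetical sequences of unequal length, each with 20-element
% input vectors per time step:
seqA = con2seq(rand(20, 500));   % sequence A: 500 time steps
seqB = con2seq(rand(20, 350));   % sequence B: 350 time steps

% catsamples combines them into one cell array; the 'pad' option fills
% the shorter sequence with NaN so both reach 500 time steps.  The
% result is a 1x500 cell array whose i-th element is a 20x2 matrix:
% column j holds time step i of sequence j.
X = catsamples(seqA, seqB, 'pad', NaN);
```

With 500 sequences instead of two, each cell element becomes a 20x500 matrix, which matches the structure the question describes.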

Answers (3)

Lucas García on 26 Aug 2011
What are your inputs? What is the architecture of your network?
By doing {[I1] [I2] ... [In]}, where each Ii is a 20x500 matrix, you are presenting 500 sequences to the network at each time step, where each sequence is nx20. That is a huge problem to solve...
I wonder whether, if you have 20 time series as inputs to the network, your input should instead be {[I1] [I2] ... [In]}, with each [Ii] being a 20x1 vector containing one element of each time series at time i.
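The single-sequence layout Lucas describes can be sketched with `con2seq`, which converts a 20xT matrix into a 1xT cell array of 20x1 column vectors (hypothetical data for illustration):

```matlab
T = 1000;
data = rand(20, T);    % assumed: 20 time series, T samples each
X = con2seq(data);     % 1xT cell array; X{i} is the 20x1 vector at time i
```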

wonderboy on 27 Aug 2011
Inputs: 20, hidden: 50, outputs: 16; the hidden layer is recurrent to itself.
If I had one long sequence, then you would be right and I should use 20x1 vectors inside the cell array. But since I have several sequences (about 500), and (as the Help says) to prevent discontinuity, I have to make each Ii of size 20x500. That is when the time problem starts!
I changed the training algorithm to gradient descent (traingd), and I think the time problem is solved.
However, here is my question: is my way of building the input data for the network right or wrong?

Lucas García on 27 Aug 2011
Then I guess you are solving a big problem here. Recurrent networks take much longer to train than static feedforward networks. In your case, the architecture of the network is not too big, but your input to the problem is. It is a good idea to try different training algorithms to see how the network behaves. For example, in the case of Levenberg-Marquardt backpropagation (trainlm), MATLAB has to invert a huge matrix. That algorithm is considered a good choice for medium-size problems; however, in your case the inversion has to be done 500 times per time step. As you have already done, I would try different algorithms: conjugate gradient methods are generally much faster (traincgb, etc.), and traingd is another option.
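Switching the training function is a one-line change on the network object. A hedged sketch, assuming a layer-recurrent network built with `layrecnet` and the 20-50-16 sizes mentioned in the thread:

```matlab
net = layrecnet(1, 50);       % 1-step feedback delay, 50 hidden neurons
net.trainFcn = 'traingd';     % gradient descent: low memory cost per step

% Alternatives discussed above:
% net.trainFcn = 'traincgb';  % Powell-Beale conjugate gradient
% net.trainFcn = 'trainlm';   % Levenberg-Marquardt: good on medium-size
%                             % problems, but builds and inverts a large
%                             % Jacobian-based matrix, which is what makes
%                             % it so slow on a problem this big
```

The input size (20) and output size (16) are inferred automatically when the network is configured or trained on the data.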

