
Neural network simulation in MATLAB

2 views (last 30 days)
sudhakar on 8 Jul 2014
Answered: Dr Oluleye Babatunde on 8 Jul 2014
I have created a small neural network and I am trying to program it by hand. I get different results when I simulate it myself compared to the trained network.
Please see the code below and help me.
%% The code
Input1 = [1 100 200 300];
Input2 = [400 500 600 700];
Input3 = [800 900 1000 2000];
Input4 = [2 102 203 3];
Output1 = [0 10 20 30 ];
Output2 = [40 50 60 70 ];
Output3 = [80 90 100 200 ];
I = [Input1 Input2 Input3];
O = [Output1 Output2 Output3];
% Create a feed-forward network for training on the inputs and outputs
net = feedforwardnet([10 5]);       % two hidden layers: 10 and 5 neurons
% Train the net
net.trainParam.epochs = 1000;
% net.trainParam.show = 10;
% net.trainParam.goal = 0.1;
net.divideParam.trainRatio = 0.7;   % training set [%]
net.divideParam.valRatio = 0.15;    % validation set [%]
net.divideParam.testRatio = 0.15;   % test set [%]
% net.efficiency.memoryReduction = 1000;
[net,tr] = train(net,I,O);          % train a neural network
view(net);
wb = formwb(net,net.b,net.iw,net.lw);
[b,iw,lw] = separatewb(net,wb);
y=net(Input1)
%% Validating the network using hand-written code
% Biases obtained after training (hidden layer 1, 10x1)
b1 = [ 2.57310486009111;   1.93785769426316;  -1.46320033175527; ...
      -0.842004107257358; -0.286323193468700;  0.286827247650549; ...
       0.831493499871035; -1.39671607955901;  -1.90380904621022; -2.45493214632677];
% Biases obtained after training (hidden layer 2, 5x1)
b2 = [1.83574067690576; 0.774732933645161; -0.0847258567572143; 0.938593059039810; 1.46771777356798];
% Biases obtained after training (output layer, 4x1)
b3 = [0.420197391190367; 0.115212450273526; -0.759763338095833; -0.250672959073144];
% Input weights obtained after training (input -> hidden layer 1, 10x4)
iW = [-1.38166200670922    0.957490912286695   0.515857183896969  -1.67802011516874; ...
      -0.607219159206796   1.21648695700533   -1.52373804094426   -1.42190529155021; ...
       0.959885562980659  -1.82824861935095    0.812406556958687   1.04963708328287; ...
       0.106290246812093  -1.29433065994092    1.70115025195897    1.23078410448700; ...
       1.31076122407300   -0.517144473527336  -2.08491227030795    0.284098191355632; ...
       0.830270776449073   1.58153232049612   -0.834366455134675   1.51390644460052; ...
       1.91950733164198    0.0871757775656829 -1.56946254953226    0.0784020062364014; ...
      -2.03145306741346    1.06527607936168    0.886361361008305   0.324013786621172; ...
      -1.25556535930711   -1.27864849126282   -1.24741299098989   -1.21464494025010; ...
      -0.368339680827727   0.851405223284353   2.34576432829584    0.0434109884245971];
% Layer weights obtained after training (hidden layer 1 -> hidden layer 2, 5x10)
lW1 = [-0.602609989946975  0.525733750947808  -0.203602085475660   0.222921297850082  -0.179900498642999 -0.482399972861968  0.727409811592458  0.692255327028914   0.343955316924856  -0.686890820583700; ...
       -0.795542517778123  0.307953699635952  -0.270964970640794  -0.435776642224120   0.924207180182342 -0.475816746204693 -0.170114502128735  0.833297282454862  -0.0518751068652391 -0.494048413548348; ...
       -0.378651557509865  0.381275940064419   0.879407533770447   0.654548914748321   0.841996207871969 -0.486494949311301  0.0926006882680324 0.389713200473236   0.438691192343393   0.250420511382062; ...
        0.401305604144410  0.807228415644053  -0.675581969078475   0.0269187271775576  0.156864083817956 -0.677534259590177 -0.355312921627937  0.449415578346035   0.546280498052905   0.429541160955270; ...
        0.386338268169618  0.0556388076959695  0.190289127159856   0.761310740378647  -0.610789120692214 -0.541220414173197  0.681280059126398 -0.0679901158816549 -0.779139708993143   0.654725685947907];
% Layer weights obtained after training (hidden layer 2 -> output layer, 4x5)
lW2 = [-0.997829228826340 -0.738244586915366  0.138812837427359   0.363607240326660  0.693990178780186; ...
       -0.644765560994143 -1.04877878583043   0.0449859320288510  0.543366970025662  0.521704189739841; ...
       -0.107487306272888  0.123220178980386 -0.850671952687010 -0.765452096422727  0.686182723836282; ...
       -0.129335421277810 -0.895531315080272 -0.574093351224515 -0.157806595675016 -0.211448632638790];
%% Input layer -> hidden layer 1 (10 neurons, logsig)
% Note: Input1 is a 1x4 row vector, so linear indexing Input1(k) is used here.
r1  = b1(1)  + Input1(1)*iW(1,1)  + Input1(2)*iW(1,2)  + Input1(3)*iW(1,3)  + Input1(4)*iW(1,4);
hidden1  = 1/(1+exp(-r1));
r2  = b1(2)  + Input1(1)*iW(2,1)  + Input1(2)*iW(2,2)  + Input1(3)*iW(2,3)  + Input1(4)*iW(2,4);
hidden2  = 1/(1+exp(-r2));
r3  = b1(3)  + Input1(1)*iW(3,1)  + Input1(2)*iW(3,2)  + Input1(3)*iW(3,3)  + Input1(4)*iW(3,4);
hidden3  = 1/(1+exp(-r3));
r4  = b1(4)  + Input1(1)*iW(4,1)  + Input1(2)*iW(4,2)  + Input1(3)*iW(4,3)  + Input1(4)*iW(4,4);
hidden4  = 1/(1+exp(-r4));
r5  = b1(5)  + Input1(1)*iW(5,1)  + Input1(2)*iW(5,2)  + Input1(3)*iW(5,3)  + Input1(4)*iW(5,4);
hidden5  = 1/(1+exp(-r5));
r6  = b1(6)  + Input1(1)*iW(6,1)  + Input1(2)*iW(6,2)  + Input1(3)*iW(6,3)  + Input1(4)*iW(6,4);
hidden6  = 1/(1+exp(-r6));
r7  = b1(7)  + Input1(1)*iW(7,1)  + Input1(2)*iW(7,2)  + Input1(3)*iW(7,3)  + Input1(4)*iW(7,4);
hidden7  = 1/(1+exp(-r7));
r8  = b1(8)  + Input1(1)*iW(8,1)  + Input1(2)*iW(8,2)  + Input1(3)*iW(8,3)  + Input1(4)*iW(8,4);
hidden8  = 1/(1+exp(-r8));
r9  = b1(9)  + Input1(1)*iW(9,1)  + Input1(2)*iW(9,2)  + Input1(3)*iW(9,3)  + Input1(4)*iW(9,4);
hidden9  = 1/(1+exp(-r9));
r10 = b1(10) + Input1(1)*iW(10,1) + Input1(2)*iW(10,2) + Input1(3)*iW(10,3) + Input1(4)*iW(10,4);
hidden10 = 1/(1+exp(-r10));
%% Hidden layer 1 -> hidden layer 2 (5 neurons, logsig)
r11n = nansum([b2(1), hidden1*lW1(1,1), hidden2*lW1(1,2), hidden3*lW1(1,3), hidden4*lW1(1,4), hidden5*lW1(1,5), hidden6*lW1(1,6), hidden7*lW1(1,7), hidden8*lW1(1,8), hidden9*lW1(1,9), hidden10*lW1(1,10)]);
hidden11 = 1/(1+exp(-r11n));
r12n = nansum([b2(2), hidden1*lW1(2,1), hidden2*lW1(2,2), hidden3*lW1(2,3), hidden4*lW1(2,4), hidden5*lW1(2,5), hidden6*lW1(2,6), hidden7*lW1(2,7), hidden8*lW1(2,8), hidden9*lW1(2,9), hidden10*lW1(2,10)]);
hidden12 = 1/(1+exp(-r12n));
r13n = nansum([b2(3), hidden1*lW1(3,1), hidden2*lW1(3,2), hidden3*lW1(3,3), hidden4*lW1(3,4), hidden5*lW1(3,5), hidden6*lW1(3,6), hidden7*lW1(3,7), hidden8*lW1(3,8), hidden9*lW1(3,9), hidden10*lW1(3,10)]);
hidden13 = 1/(1+exp(-r13n));
r14n = nansum([b2(4), hidden1*lW1(4,1), hidden2*lW1(4,2), hidden3*lW1(4,3), hidden4*lW1(4,4), hidden5*lW1(4,5), hidden6*lW1(4,6), hidden7*lW1(4,7), hidden8*lW1(4,8), hidden9*lW1(4,9), hidden10*lW1(4,10)]);
hidden14 = 1/(1+exp(-r14n));
r15n = nansum([b2(5), hidden1*lW1(5,1), hidden2*lW1(5,2), hidden3*lW1(5,3), hidden4*lW1(5,4), hidden5*lW1(5,5), hidden6*lW1(5,6), hidden7*lW1(5,7), hidden8*lW1(5,8), hidden9*lW1(5,9), hidden10*lW1(5,10)]);
hidden15 = 1/(1+exp(-r15n));
%% Hidden layer 2 -> output layer (4 linear outputs)
r16n = nansum([b3(1), hidden11*lW2(1,1), hidden12*lW2(1,2), hidden13*lW2(1,3), hidden14*lW2(1,4), hidden15*lW2(1,5)]);
r17n = nansum([b3(2), hidden11*lW2(2,1), hidden12*lW2(2,2), hidden13*lW2(2,3), hidden14*lW2(2,4), hidden15*lW2(2,5)]);
r18n = nansum([b3(3), hidden11*lW2(3,1), hidden12*lW2(3,2), hidden13*lW2(3,3), hidden14*lW2(3,4), hidden15*lW2(3,5)]);
r19n = nansum([b3(4), hidden11*lW2(4,1), hidden12*lW2(4,2), hidden13*lW2(4,3), hidden14*lW2(4,4), hidden15*lW2(4,5)]);
disp(r16n);
disp(r17n);
disp(r18n);
disp(r19n);
Please suggest the changes needed so that I get the same output from the trained network and from my hand-written program.

Accepted Answer

Greg Heath on 8 Jul 2014
Please format your post to have one executable command per line.
Use matrix commands instead of one command for every matrix component.
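For example, the whole hand simulation in your post collapses to a few matrix lines. This is only a sketch that reuses the variable names from your code and keeps the logsig activation you applied (the trained net does not use logsig by default; see below):
x  = Input1(:);            % one 4x1 input column, as your hand code assumes
h1 = logsig(iW*x   + b1);  % hidden layer 1, 10x1; logsig(z) = 1./(1+exp(-z))
h2 = logsig(lW1*h1 + b2);  % hidden layer 2, 5x1
y  = lW2*h2  + b3;         % linear output layer, 4x1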
One hidden layer is sufficient. You have 2.
Use fitnet for regression/curve fitting and patternnet for classification/pattern recognition. Both call feedforwardnet internally.
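For your data that would look something like the following sketch (the single hidden layer of 10 neurons is an assumption; tune it as needed):
net = fitnet(10);             % one hidden layer of 10 neurons
[net,tr] = train(net,I,O);
y = net(I);                   % compare against the targets O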
Are you taking into account all of the defaults (e.g., mapminmax, tansig, etc.)?
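If not, that is very likely the source of your mismatch. Here is a sketch of what the trained net actually computes with the default settings (the property names are the standard toolbox ones; x is assumed to hold one input column):
xs = x;
for k = 1:numel(net.inputs{1}.processFcns)       % defaults: removeconstantrows, mapminmax
    xs = feval(net.inputs{1}.processFcns{k},'apply',xs,net.inputs{1}.processSettings{k});
end
h1 = tansig(net.IW{1,1}*xs + net.b{1});          % hidden layers default to tansig, not logsig
h2 = tansig(net.LW{2,1}*h1 + net.b{2});
ys = net.LW{3,2}*h2 + net.b{3};                  % purelin output layer
y = ys;
for k = numel(net.outputs{3}.processFcns):-1:1   % undo the output processing
    y = feval(net.outputs{3}.processFcns{k},'reverse',y,net.outputs{3}.processSettings{k});
end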
Type, without the ending semicolon,
net = net
in order to see the default settings.
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (1)

Dr Oluleye Babatunde on 8 Jul 2014
You must identify the class information and the actual feature set in your dataset.
You will need this information to train the network, i.e.
net = train(net, features, classinfo)
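For example (a sketch with made-up data, just to show the required shapes; replace them with your real features and labels):
features  = rand(4,30);             % hypothetical: 4 features x 30 samples
labels    = randi(3,1,30);          % hypothetical: 3 class labels
classinfo = full(ind2vec(labels));  % 3x30 one-of-N (one-hot) targets
net = patternnet(10);
net = train(net,features,classinfo);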
