Is it possible to use Simulated Annealing or Genetic Algorithm for parameter estimation of a parametric System of ODEs?

Hi, I would like to know if it is possible to apply Genetic Algorithm (GA) or Simulated Annealing (SA) optimization methods to a parametric system of ODEs in order to estimate the best set of parameters for curve fitting to an experimental data set.
I've written code to do that using nlinfit and lsqcurvefit. With the first one I obtained good fits, but some parameters had negative values, so I switched to lsqcurvefit (with the trust-region-reflective algorithm) and lower bounds of 0. However, the system has a lot of local minima and is very sensitive to the initial values and to the upper bounds, so it always gives different results.
The ODE system consists of 5 equations and 6 parameters, and for lsqcurvefit I've used this line:
[parameters] = lsqcurvefit(@odesystem,parr0,time,experimentaldata,zeros(size(parr0)),[ ])
parr0 is the vector of the parameters' initial values; I've tried changing it several times, always obtaining different solutions.
For these reasons I've tried to replace lsqcurvefit with GA or SA, but it doesn't work because they don't accept the @odesystem function used in the lsqcurvefit call as a valid input.
In the lsqcurvefit call, @odesystem is a function that uses ode23s to solve the parametric system.
Maybe I have to rewrite the code to give GA or SA the function in a form those algorithms can use; do you have an idea how to solve this problem?
Do you think that GA or SA is not the right way to obtain the best set of parameters, given the system's complexity?
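(For reference, a minimal sketch of the kind of wrapper these solvers need, assuming @odesystem, time, experimentaldata, and parr0 are as described above: ga and simulannealbnd minimize a scalar fitness of the parameter vector alone, so the ODE solution has to be collapsed into a single residual norm.)
sse = @(p) norm(experimentaldata - odesystem(p, time));   % scalar objective built from the ODE solution
nP  = 6;                                                  % number of parameters
pGA = ga(sse, nP, [],[],[],[], zeros(nP,1), []);          % GA with lower bounds of 0
pSA = simulannealbnd(sse, parr0, zeros(nP,1), []);        % SA started from parr0 with the same bounds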
Thank you in advance.

Accepted Answer

Star Strider on 27 Feb 2019
It is definitely possible to use the ga function to estimate the parameters of a system of differential equations. I experimented with this on my own.
An example using ga (based on a previous compartmental kinetics modeling problem that used lsqcurvefit):
% NOTES:
%
% 1. The ‘theta’ (parameter) argument has to be first in your
% ‘kinetics’ function,
% 2. You need to return ALL the values from ‘DifEq’ since you are fitting
% all the values
function C=kinetics(theta,t)
c0=[1;0;0;0];                          % initial conditions for the four states
[T,Cv]=ode45(@DifEq,t,c0);             % integrate the system at the data times
%
    function dC=DifEq(t,c)
        dcdt=zeros(4,1);
        dcdt(1)=-theta(1).*c(1)-theta(2).*c(1);
        dcdt(2)= theta(1).*c(1)+theta(4).*c(3)-theta(3).*c(2)-theta(5).*c(2);
        dcdt(3)= theta(2).*c(1)+theta(3).*c(2)-theta(4).*c(3)+theta(6).*c(4);
        dcdt(4)= theta(5).*c(2)-theta(6).*c(4);
        dC=dcdt;
    end
C=Cv;
end
t=[0.1
0.2
0.4
0.6
0.8
1
1.5
2
3
4
5
6];
c=[0.902 0.06997 0.02463 0.00218
0.8072 0.1353 0.0482 0.008192
0.6757 0.2123 0.0864 0.0289
0.5569 0.2789 0.1063 0.06233
0.4297 0.3292 0.1476 0.09756
0.3774 0.3457 0.1485 0.1255
0.2149 0.3486 0.1821 0.2526
0.141 0.3254 0.194 0.3401
0.04921 0.2445 0.1742 0.5277
0.0178 0.1728 0.1732 0.6323
0.006431 0.1091 0.1137 0.7702
0.002595 0.08301 0.08224 0.835];
theta0=[1;1;1;1;1;1];
% [theta,Rsdnrm,Rsd,ExFlg,OptmInfo,Lmda,Jmat]=lsqcurvefit(@kinetics,theta0,t,c);
ftns = @(theta) norm(c-kinetics(theta,t));
PopSz = 500;
Parms = 6;
opts = optimoptions('ga', 'PopulationSize',PopSz, 'InitialPopulationMatrix',randi(1E+4,PopSz,Parms)*1E-3, 'MaxGenerations',2E3, 'PlotFcn','gaplotbestf');
t0 = clock;
fprintf('\nStart Time: %4d-%02d-%02d %02d:%02d:%07.4f\n', t0)
% [B,fval,exitflag,output,population,scores] = ga(ftns, 53, [],[],[],[],zeros(53,1),Inf(53,1),[],[],opts)
[theta,fval,exitflag,output] = ga(ftns, Parms, [],[],[],[],zeros(Parms,1),Inf(Parms,1),[],[],opts)
t1 = clock;
fprintf('\nStop Time: %4d-%02d-%02d %02d:%02d:%07.4f\n', t1)
GA_Time = etime(t1,t0)
fprintf('\nElapsed Time: %23.15E\t\t%02d:%02d:%02d.%03d\n', GA_Time, hmsdv(GA_Time))  % hmsdv is a user-defined time-formatting helper, not a built-in function
fprintf(1,'\tRate Constants:\n')
for k1 = 1:length(theta)
fprintf(1, '\t\tTheta(%d) = %8.5f\n', k1, theta(k1))
end
tv = linspace(min(t), max(t));
Cfit = kinetics(theta, tv);
figure(1)
hd = plot(t, c, 'p');
for k1 = 1:size(c,2)
CV(k1,:) = hd(k1).Color;
hd(k1).MarkerFaceColor = CV(k1,:);
end
hold on
hlp = plot(tv, Cfit);
for k1 = 1:size(c,2)
hlp(k1).Color = CV(k1,:);
end
hold off
grid
xlabel('Time')
ylabel('Concentration')
legend(hlp, 'C_1(t)', 'C_2(t)', 'C_3(t)', 'C_4(t)', 'Location','N')
Note the coding of the ‘ftns’ fitness function, and its relation to the objective function I originally wrote to use with lsqcurvefit. I wrote this as my own experiment, so it is not as thoroughly comment-documented as the code I usually post here. I will provide as much of an explanation as I can for anything that is not clear.
Genetic algorithms take a while to converge; however, in my experience they are generally successful if you give them the opportunity.
2 comments
alesmaz on 28 Feb 2019
First of all, I want to thank you for your quick and complete answer.
I've rewritten my code with your syntax in order to avoid any errors.
I'm posting the entire code here in case you want to check it.
function f=berndode(p,t)
z0=[150,0.26,269.05,0.15,0.11];
a=0.86;
b=0.04;
c=0.26;
d=(a*c)/(c-b);
ROM=(c-b)*d;
options=odeset('AbsTol',1e-6,'RelTol',1e-6);
[T,Z]=ode45(@bernd,t,z0,options);
    function dz = bernd(t,z)
        dzdt=zeros(5,1);
        dzdt(1)=-(ROM*z(1))/(p(3)+z(1))*z(3);
        dzdt(2)=(ROM*z(1))/(p(3)+z(1))-(d*(1-b/z(2)))*z(2);
        dzdt(3)=(d*(1-b/z(2)))*z(3);
        dzdt(4)=(p(1)-z(4))*(d*(1-b/z(2)))+((1-z(2)/c))*(p(6)*(1-(d*(1-b/z(2)))/a))-p(5)*(ROM*z(1))/(p(3)+z(1));
        dzdt(5)=(p(2)-z(5))*(d*(1-b/z(2)))-((1-z(2)/c))*(p(6)*(1-(d*(1-b/z(2)))/a))-p(4)*(ROM*z(1))/(p(3)+z(1));
        dz=dzdt;
    end
f=Z;
end
%
load pariniz.txt
load temposp.txt
load dativar1.mat
load dativar2.mat
load dativar3.mat
load dativar4.mat
load dativar5.mat
p0=pariniz;
datisp=[dativar1 dativar2 dativar3 dativar4 dativar5];
z0=[datisp(1,1),datisp(1,2),datisp(1,3),datisp(1,4),datisp(1,5)];
t=temposp;
fitns = @(p) norm(datisp-berndode(p,t));
PopSz =500;
P=6;
opts = optimoptions('ga','MaxGenerations',2E3, 'PlotFcn','gaplotbestf');
t0 = clock;
fprintf('\nStart Time: %4d-%02d-%02d %02d:%02d:%07.4f\n', t0)
% [B,fval,exitflag,output,population,scores] = ga(ftns, 53, [],[],[],[],zeros(53,1),Inf(53,1),[],[],opts)
[theta,fval,exitflag,output] = ga(fitns, P, [],[],[],[],zeros(P,1),[],[],[],opts)
The code works, but the problem is still present (GA didn't find the solution with ode45, exceeding MaxGenerations = 2E3), maybe due to the complexity of the system. I'm studying the mathematical theory behind these algorithms in order to improve the optimization (maybe another algorithm is more suitable for my problem).
I have another question, if you have time.
In my previous code I wrote a script that was able to find a solution (unfortunately not the best one) working with replicates. So, referring to your script, each variable in the "c" matrix would have 3 replicates.
How can I handle this in my case while keeping the structure of your script?
Thank you in advance.
Star Strider on 28 Feb 2019
As always, my pleasure.
Note that in my code, I constrained the parameters to be greater than zero, since the kinetic parameters are all positive by definition.
I doubt that the replicates would cause problems, provided your differential equations and data account for them (the same replicates for the independent and dependent variables, for example). The only effect they would have is to give the replicated points more ‘weight’, in the sense that they would have more influence on the fitness (and therefore on the estimated parameters) than if they were not replicated. If all the data were replicated, it would simply slow down the ga function.
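(A minimal sketch of one way to keep the fitness structure with replicated data; the matrix name 'datisp_rep' and the column ordering are assumptions, not from the original post: replicate the model output so it matches the replicated measurements before taking the norm.)
% 'datisp_rep' is assumed to hold the 5 state variables with 3 replicates each,
% arranged as three side-by-side copies of the 5 columns.
fitns_rep = @(p) norm(datisp_rep - repmat(berndode(p,t), 1, 3));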
If you are having problems getting ga to fit your data, I would use the original ‘opts’ structure in my code, where ‘Parms’ is the number of parameters you want to fit. I scaled the initial population as:
'InitialPopulationMatrix',randi(1E+4,PopSz,Parms)*1E-3
so scale it to provide appropriate initial parameter magnitudes. (Here they are all in the range of about 1; however, you can scale their magnitudes individually by multiplying them by a vector of magnitudes with the bsxfun function, most efficiently done in a separate line of code. I chose randi because it is simply easier to define the initial range of the parameters with it.) I designed it to begin with a large and very wide-ranging initial population, specifically in order to easily find the best parameter set in its parameter space. It does this, although at the expense of speed, since larger populations produce slower convergence.
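(A minimal sketch of that per-parameter scaling; the magnitude vector 'mags' below is hypothetical, not from the original post.)
mags = [1 1 100 0.1 0.1 1];                                      % hypothetical per-parameter magnitudes
InitPop = bsxfun(@times, randi(1E+4,PopSz,Parms)*1E-3, mags);    % scale each column of the initial population
opts = optimoptions('ga', 'PopulationSize',PopSz, 'InitialPopulationMatrix',InitPop, 'MaxGenerations',2E3, 'PlotFcn','gaplotbestf');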
I have occasionally had to run ga a few times with the same problem if it ended up going wildly off course the first time. A large initial population (I chose 500) usually prevents this. Also, if you are having problems getting a good fit of your model to your data, go over your model carefully to be certain you coded it correctly. It has been my personal experience that small errors can cause significant problems.


More Answers (2)

alesmaz on 18 Jun 2019
Hi, I'm pasting here the same question I posted yesterday, in case you have any idea, since you have experience in using SA for parameter estimation.
Following your suggestions, I've implemented code that estimates the best set of parameters for a parametric ODE system using Simulated Annealing instead of the nonlinear regression or curve-fitting functions (nlinfit, lsqcurvefit, etc.), which weren't able to solve my problem.
Since SA is an algorithm that finds the minimum of a function, I've used it to find the set of parameters that minimizes the Residual Sum of Squares.
I would like to know if there is a way to assess the goodness of this estimation, since for the regression/curve-fitting functions the Jacobian and the confidence intervals can be used for this purpose. Is there any way to calculate confidence intervals for this application of SA?
I can post my code if needed.
Thank you in advance for your answer.
1 comment
Star Strider on 18 Jun 2019
I would just use the estimated parameters as the initial parameter estimates for nlinfit, and then use nlparci or nlpredci (or fitnlm and its associated functions). The nlinfit (or fitnlm) function will likely not adjust the parameter estimates much (if at all), and will provide the confidence intervals (or the information necessary to calculate them).
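(A minimal sketch of that workflow, assuming 'p_sa' holds the SA estimates and that berndode, t, and datisp are as in the comment above; the names are assumptions.)
modelfun = @(p,t) reshape(berndode(p,t), [], 1);            % flatten the model output into one column
[p_hat,R,J,CovB] = nlinfit(t, datisp(:), modelfun, p_sa);   % refine from the SA estimates (usually changes little)
ci = nlparci(p_hat, R, 'covar', CovB)                       % 95% confidence intervals for the parameters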



crt56 ak on 8 Aug 2021 (edited 8 Aug 2021)
Hi Star, thanks for the useful code. I am a follower. I have a set of 8 differential equations with 2 parameters to estimate, say theta(1) and theta(2). I have only one set of experimental data c (for differential equation 5) and the corresponding time vector t. I am trying to use your ga code to estimate the parameters by changing C = Cv to C = Cv(:,5) and keeping the rest intact, but the fitness function produces errors: the code complains about not enough input arguments at the ga call. Is there any way I could modify my fitness function in such a case?
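(A minimal sketch of a single-output fitness of that kind, assuming a hypothetical 'kinetics8' function that integrates the 8 states and returns only the fifth column, with 'c' as the single measured series and 't' its time points.)
ftns = @(theta) norm(c - kinetics8(theta, t));              % scalar fitness of theta only
nP = 2;                                                     % two parameters to estimate
[theta,fval] = ga(ftns, nP, [],[],[],[], zeros(nP,1), []);  % lower bounds of 0, no upper bounds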
4 comments

