Find the zero of a function with as few function calls as possible
Hi there
I have a function that takes a very long time to calculate (it can be several hours), and I need to find where it becomes zero, with a tolerance on x of around 1e-5 and a tolerance on the function value of maybe 1e-7. Typical values are x between 1 and 6 and y between -0.01 and 0.01.
I can use fzero to do what I want (and with quite few function calls, I might add - 13 in one test, 8 when I reduced TolX), but I wonder if there exists a function that can do this with even fewer function calls than fzero? I think it will be well worth spending some time implementing this to save hours in the end, so it does not have to be a simple solution (although simplicity is preferred, of course!)
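For context, fzero's low call count comes from combining interpolation steps with a safeguarded sign-change bracket (a Brent/Dekker-style scheme). As a rough illustration of why such methods beat plain bisection, here is a minimal Python sketch of a bracketed secant (regula falsi) iteration with a bisection fallback; the toy function and all names below are made up stand-ins for the expensive computation, not the actual problem:

```python
import math

def bracketed_secant(f, a, b, xtol=1e-5, ftol=1e-7, max_calls=50):
    """Secant steps kept inside a sign-change bracket [a, b], with
    bisection as a fallback. Interpolation steps like these are why
    bracketing solvers can converge in very few function calls."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("root must be bracketed by [a, b]")
    calls = 2
    while calls < max_calls:
        x = b - fb * (b - a) / (fb - fa)   # secant step from the bracket
        if not (min(a, b) < x < max(a, b)):
            x = 0.5 * (a + b)              # fall back to bisection
        fx = f(x)
        calls += 1
        if abs(fx) < ftol or abs(b - a) < xtol:
            return x, calls
        if fa * fx < 0:                    # keep the sub-interval with the sign change
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x, calls

# Toy stand-in with the stated ranges: x in [1, 6], |y| <= 0.01
root, calls = bracketed_secant(lambda x: 0.01 * math.sin(x - 3.5), 1.0, 6.0)
```

On a smooth, well-bracketed function the call count here lands in the same single-digit range reported above, which suggests fzero is already close to the practical minimum for black-box root finding.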
I do not believe the function can be optimized significantly to save computation time - it requires a lot of calculations, and I have colleagues who do similar work in FORTRAN and C whose calculations also take hours.
Thanks in advance
Henrik
EDIT: changed iterations to function calls as pointed out by Matt and Mohammad. There is more clarification in my comments below.
9 comments
Star Strider
11 Oct 2014
Without knowing more about your function it’s impossible to say for sure, but two options I would suggest are polyfit and its friends (including roots), and possibly interp1.
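To make the polyfit/roots idea concrete: sample the expensive function at a handful of points, fit a cheap polynomial surrogate, and take the surrogate's root. A rough Python sketch, with np.polyfit/np.roots playing the role of the MATLAB functions and a made-up toy function standing in for the expensive one:

```python
import numpy as np

# Toy stand-in for the expensive function (the real one takes hours per call).
def f(x):
    return 0.01 * np.sin(x - 3.5)

# A handful of expensive samples over the stated range x in [1, 6],
# then a low-order polynomial surrogate fitted to them.
xs = np.linspace(1.0, 6.0, 5)       # 5 expensive calls
coeffs = np.polyfit(xs, f(xs), 3)   # cubic surrogate
roots = np.roots(coeffs)

# Keep only real roots that fall inside the sampled interval.
real = roots[np.isreal(roots)].real
candidates = real[(real >= 1.0) & (real <= 6.0)]
```

The sample count and polynomial order here are illustrative; the fixed number of samples is the appeal, but the surrogate root is only as accurate as the fit, so a refinement step (e.g. one fzero call on a narrow bracket around the candidate) would likely still be needed.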
Henrik
11 Oct 2014
Star Strider
11 Oct 2014
As I understand your Comment, you are iterating over parameters and then calculating eigenvalues at each value of the parameters.
You could consider this as a linear or nonlinear multiple regression problem with the parameters as the independent variable and the eigenvalues as the dependent variable, then solve the inverse of the regression for zero to give you the estimated values of the parameters that would produce one or more zero eigenvalues.
That requires several assumptions (continuity among them), the possibility of having to account for complex eigenvalues, and in the nonlinear situation an objective function that describes the eigenvalues as a function of the parameters. Those would determine your approach. I assume you have already plotted each of the eigenvalues as functions of the parameters, so you know their essential trends.
If you decide that a regression is appropriate, for the multiple linear regression I would use the Statistics Toolbox regress function, and for the nonlinear regression, either nlinfit or the Optimization Toolbox function lsqcurvefit.
I am only suggesting an approach to your problem. I cannot be more specific or provide example code, so I am not entering this as an Answer.
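The linear version of this regression-and-invert idea can be sketched in a few lines. This is a Python analogue (numpy's polyfit standing in for the Statistics Toolbox regress call) on synthetic data, since the real eigenvalue-vs-parameter relationship is not known here:

```python
import numpy as np

# Synthetic data: smallest eigenvalue observed at a few parameter values
# (a made-up linear trend standing in for the expensive computation).
p = np.array([1.0, 2.0, 4.0, 6.0])
lam = 0.004 * (p - 3.5)

# Linear regression lam ~ c1*p + c0 (the regress step), then invert
# the fitted line at lam = 0 to estimate the critical parameter.
c1, c0 = np.polyfit(p, lam, 1)
p_zero = -c0 / c1
```

For the nonlinear case the same inversion applies, just with a fitted model from nlinfit or lsqcurvefit in place of the straight line.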
Henrik
11 Oct 2014
Star Strider
11 Oct 2014
I apologise for oversimplifying a quite complicated procedure. The fzero function may be the best option available for your problem.
The only other options that I’m aware of are the other Optimization Toolbox functions (like fsolve) or Global Optimization Toolbox functions, but I can’t say that they would be faster or more efficient. I suggest you consider them if you haven’t. I still don’t understand your problem well enough to suggest a particular function.
I hope others here see this and can offer a more effective solution.
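For reference, the call pattern for a solver like fsolve is shown below via SciPy's fsolve (which wraps MINPACK's Powell hybrid method) on a toy stand-in function; unlike fzero it starts from a single initial guess rather than a bracket, and it assumes a smooth problem:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy stand-in for the expensive function (x in [1, 6], |y| <= 0.01).
def f(x):
    return 0.01 * np.sin(x - 3.5)

# fsolve needs a starting point rather than a bracket; xtol here
# mirrors the ~1e-5 tolerance on x from the question.
root, = fsolve(f, x0=3.0, xtol=1e-5)
```

Without a bracket there is no guarantee of staying inside [1, 6], which is one reason it may not save calls over fzero in practice.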
Henrik
11 Oct 2014
Star Strider
11 Oct 2014
My pleasure!
I don't think the best strategy is to try to cut down on the number of iterations. For any iterative method, that number will depend on the initial guess anyway and would be very hard to control. The better strategy would be to see if you can cut down on the amount of computation per iteration.
For example, I see no reason to be computing the matrix N explicitly. Getting the minimum eigenvalue of N to equal 0 is the same as getting the maximum eigenvalue of f to equal 1/x. So you shouldn't have to compute further than f.
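A numerical check of that equivalence, under the loud assumption (not stated explicitly in the thread) that N relates to f as N = I - x*F, so that eig(N) = 1 - x*eig(F) and min(eig(N)) = 0 exactly when max(eig(F)) = 1/x:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
F = (A + A.T) / 2                    # symmetric stand-in for f
x = 2.0
N = np.eye(50) - x * F               # assumed form of N (illustration only)

min_eig_N = np.linalg.eigvalsh(N).min()
max_eig_F = np.linalg.eigvalsh(F).max()
# For x > 0 the two quantities differ only by the affine map 1 - x*lambda,
# so min(eig(N)) and 1 - x*max(eig(F)) coincide.
```

If N really has this form, only the extreme eigenvalue of F is needed per call, which an iterative eigensolver (eigs in MATLAB) can deliver far cheaper than a full eigendecomposition.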
Also, M is a Kronecker product, so the multiplication you describe should be doable more efficiently without explicitly computing M.
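The Kronecker trick referred to here is the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ): the product with M = kron(A, B) can be formed from two small matrix multiplies without ever building M. A small numpy demonstration with made-up sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p, q = 4, 5, 6, 7
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
X = rng.standard_normal((q, n))

# Column-major ravel matches the column-stacking vec(.) convention.
explicit = np.kron(A, B) @ X.ravel(order='F')   # forms the big (mp)x(nq) matrix
implicit = (B @ X @ A.T).ravel(order='F')       # never forms it
```

The explicit form costs O(mp·nq) per multiply and that much memory; the implicit form is two dense multiplies on the small factors, which is the saving being suggested.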
These are just examples, of course. It doesn't look like the computations you've listed should take very long for the data sizes you've mentioned. My computer can do 250 eigendecompositions of a 256x256 matrix in about 11 seconds. So, presumably the computation bottlenecks are in steps you haven't mentioned. It might be worth expanding on those so that the community can explore ways to do them faster.
As for the choice of algorithm, it might be worth exploring the Global Optimization Toolbox as Star Strider suggests. I have my doubts about fsolve and other algorithms in the (non-global) Optimization Toolbox, however, since your problem does not look smooth.
Henrik
14 Oct 2014
Accepted Answer
More Answers (0)