fmincon: less optimal results with user-supplied analytic gradient?
Hi there, I have a question concerning the fmincon optimization procedure in general: I am minimizing a very complicated functional (unfortunately not necessarily convex or continuous) with fmincon, currently using the interior-point algorithm.
Until a few days ago I used finite differencing to estimate the gradient. Since I had derived an analytic expression for the functional, I went through the tedious work of deriving an analytic expression for the gradient as well, mainly to accelerate the optimization. The computation is indeed significantly faster, but the final function value is larger, i.e. less optimal, which I don't understand. I have already compared the finite-difference and analytic gradients, but that does not seem to be the problem (they agree to about 1e-6).
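For reference, the pattern I use is the standard one where the objective returns both the function value and the gradient (a simplified sketch with a placeholder objective, not my actual code; the option names are the R2013-era ones, which newer releases call SpecifyObjectiveGradient and CheckGradients):

% In myObjective.m: return the value and, when requested, the analytic gradient.
function [f, g] = myObjective(x)
    f = sum(x.^2);        % stand-in for the complicated functional
    if nargout > 1        % the solver asked for the gradient too
        g = 2*x;          % analytic gradient
    end
end

% Calling code:
opts = optimset('Algorithm','interior-point', ...
                'GradObj','on', ...          % use the user-supplied gradient
                'DerivativeCheck','on');     % one-time check against finite differences
x0 = ones(5,1);
[xopt, fval] = fmincon(@myObjective, x0, [],[],[],[], ...
                       zeros(5,1), 10*ones(5,1), [], opts);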
Do you know of any parameters I should adjust in particular to tune the algorithm towards using analytic gradients, or can you imagine where the problem might be? Thanks a lot for your help. And by the way: thanks a lot to the community in general. Although this is my first active participation, I have been using the forums extensively and successfully for years... :-)
Best Mathias
4 comments
Matt J
on 13 Oct 2013
Edited: Matt J on 13 Oct 2013
"the function only becomes badly conditioned (noncontinuous) at the border of the search space."
Not sure what you mean by the "search space". Even when constraints are specified, fmincon searches over all of R^n, which has no borders. Constraints are only satisfied asymptotically.
Anyway, we still need to see more information. How much do the G_opt values differ in each case? Why should I consider 0.0147 a large final value for the gradient? Maybe it started at 1e7. Also, you said the finite differencing gave "almost perfect results", yet in what you show above it stopped with a much larger max gradient of 2.130e+02.
Similarly, why should I consider the difference between 9.62976 and 10.15389 significant? Maybe the typical/starting value of the function was 1e20.
It would help to actually show your objective function and constraints, and the code you used to initialize and call fmincon.
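For instance, a scale-aware comparison along these lines (placeholder names, since we haven't seen your code) is more informative than quoting an absolute difference:

% Relative agreement between analytic and finite-difference gradients at x0
% (myObjective and x0 are placeholders for your own function and start point):
[f0, gA] = myObjective(x0);
h  = 1e-7;                              % forward-difference step
gF = zeros(size(x0));
for k = 1:numel(x0)
    e     = zeros(size(x0));
    e(k)  = h;
    gF(k) = (myObjective(x0 + e) - f0)/h;
end
relErr = norm(gA - gF)/max(1, norm(gF)) % judge agreement relative to scale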
Answers (1)
Matt J
on 13 Oct 2013
Edited: Matt J on 13 Oct 2013
My best appraisal, without seeing the details of your objective function, is that you have unbounded (or very large) second derivatives at or near your solution. That is the only thing that explains to me why the algorithm starts to take very tiny Newton-like steps even in regions where the gradients are still fairly large. Note that this happens in both the finite-difference and the analytic case; it doesn't appear that either version successfully finished the optimization.
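As a toy illustration (a made-up example, not your problem): an objective whose second derivative blows up at the minimizer forces tiny Newton-like steps while the gradient is still of moderate size:

% f(x) = |x|^1.1 has f'(x) = 1.1*|x|^0.1*sign(x), which decays very slowly,
% while f''(x) = 0.11*|x|^(-0.9) blows up near the minimizer x = 0, so the
% Newton step -f'/f'' shrinks like x itself long before the gradient is small.
f = @(x) abs(x).^1.1;
opts = optimset('Algorithm','interior-point','Display','iter');
x = fmincon(f, 1, [],[],[],[], -1, 1, [], opts);
% The iteration display typically shows rapidly shrinking steps against a
% first-order optimality measure that decreases only slowly.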
8 comments
Matt J
on 15 Oct 2013
A mathematical description of the problem then, showing what you're trying to minimize and why/where the discontinuities occur.