How is fmincon different from nonlinear least-squares solvers if the objective function for fmincon is written so that it returns the sum of squared errors?
23 views (last 30 days)
Nadia A
on 24 Nov 2016
Hi,
Please help me understand this; I haven't been able to find a simple explanation.
If I write the objective function for fmincon so that it returns the sum of squared errors, which is exactly the quantity that solvers like lsqnonlin and lsqcurvefit minimize, then what is the difference between these algorithms? Which algorithm is more efficient? Or are they all inherently the same?
Thanks for the help, Nadia
0 comments
Accepted Answer
Matt J
on 24 Nov 2016
Edited: Matt J
on 24 Nov 2016
You can find detailed algorithm descriptions for fmincon here and compare them with the algorithm descriptions for lsqnonlin here. There are substantial differences both in the minimization algorithms available and in how they work. fmincon, for example, doesn't offer a Levenberg-Marquardt option, unlike lsqnonlin. MathWorks also has a page with its recommendations for choosing a solver.
The simplest answer is that fmincon uses more elaborate algorithms than lsqnonlin and lsqcurvefit because fmincon must be able to handle nonlinear constraints, whereas lsqnonlin/lsqcurvefit do not support them. Therefore, if you apply fmincon to a simple bounded least-squares problem with no nonlinear constraints, it may spend more computation than you really need.
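To make the distinction concrete, here is a minimal sketch of the same bounded curve-fitting problem posed both ways. The exponential model, the synthetic data, and the parameter values are made up purely for illustration:

```matlab
% Hypothetical data and model (illustrative only)
xdata = (0:0.1:2)';
ydata = 2*exp(-1.5*xdata) + 0.01*randn(size(xdata));
model = @(p,x) p(1)*exp(-p(2)*x);   % p = [amplitude; decay rate]
p0 = [1; 1];                        % initial guess
lb = [0; 0];  ub = [10; 10];        % simple bound constraints

% lsqnonlin: return the residual VECTOR; the solver forms the sum of
% squares internally and exploits the least-squares structure.
res   = @(p) model(p,xdata) - ydata;
p_lsq = lsqnonlin(res, p0, lb, ub);

% fmincon: return the SCALAR sum of squares; the solver treats it as a
% generic nonlinear objective with no special structure.
sse   = @(p) sum((model(p,xdata) - ydata).^2);
p_fmc = fmincon(sse, p0, [], [], [], [], lb, ub);
```

Both calls should converge to essentially the same minimizer; what differs is the machinery each solver brings to bear on the way there.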
This is not a perfect rule, however. The lsqnonlin algorithms all use first-derivative information only, whereas some fmincon algorithms also let you supply second derivatives. This can speed up convergence, though how much will depend on the particular problem.
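As a sketch of the second-derivative point: fmincon's interior-point algorithm accepts an analytic Hessian through the HessianFcn option, something lsqnonlin does not offer. The quadratic objective below is invented for illustration only:

```matlab
% Illustrative objective with known gradient and Hessian
fun  = @(x) deal( (x(1)-3)^2 + 2*(x(2)+1)^2, ...  % objective value
                  [2*(x(1)-3); 4*(x(2)+1)] );     % gradient
hess = @(x,lambda) [2 0; 0 4];  % Hessian of the Lagrangian (constant here)

opts = optimoptions('fmincon', ...
    'Algorithm','interior-point', ...
    'SpecifyObjectiveGradient',true, ...
    'HessianFcn',hess);
x = fmincon(fun, [0;0], [], [], [], [], [-5;-5], [5;5], [], opts);
```

With exact second derivatives available, the interior-point algorithm can take Newton-quality steps rather than building up curvature information from gradients alone.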
The bottom line is that completely optimal algorithm selection requires experimentation...
0 comentarios
More Answers (0)