Is it better to trick Matlab into doing constrained optimisation?

Assume that I want to do some constrained optimisation. I am using fmincon with the SQP algorithm, but I am hoping to get intuition that is not algorithm-specific.
What I'm wondering is whether it is generally best to pass the constraints to the optimiser or to use "tricks" that achieve the same thing. Let me show an example of what I mean by a trick. Say I want x_1 + x_2 = 1 always; then I can simply pass only x_1 and at the beginning of the function set x_2 = 1 - x_1. Another example: if I want x_3 >= 0, I can simply pass y = log(x_3) and start my function by exponentiating, i.e. x_3 = exp(y).
The more I think about it, the more cases it seems one can handle with tricks like this, so what I'm curious to know is which strategy is generally better. Is it ever better to pass equality, bound or inequality constraints instead of tricking Matlab into always having them satisfied, as in the examples above?
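For concreteness, here is a minimal sketch of the two strategies side by side, using a made-up objective and assuming the Optimization Toolbox for fmincon/fminunc:

% Made-up objective, with x1 + x2 = 1 and x3 >= 0 as the constraints.
f = @(x) (x(1)-2)^2 + (x(2)+1)^2 + (x(3)-3)^2;

% Strategy 1: hand the constraints to fmincon directly.
Aeq = [1 1 0];  beq = 1;              % x1 + x2 = 1
lb  = [-Inf -Inf 0];                  % x3 >= 0
x0  = [0.5 0.5 1];
opts = optimoptions('fmincon','Algorithm','sqp');
xCon = fmincon(f, x0, [], [], Aeq, beq, lb, [], [], opts);

% Strategy 2: substitute the constraints away and solve unconstrained.
% Free variables z = [x1, y] with x2 = 1 - x1 and x3 = exp(y).
g    = @(z) f([z(1), 1 - z(1), exp(z(2))]);
z0   = [0.5, log(1)];
zOpt = fminunc(g, z0);
xSub = [zOpt(1), 1 - zOpt(1), exp(zOpt(2))];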

Accepted Answer

Walter Roberson on 20 Jun 2015
Recently I have been assisting some people to fit exponential functions, along the lines of y = A * exp(x * B) except with the exponential part a little more complicated. The "natural" transformation is log(y) = log(A) + x * B and then you have a linear least squares fit to find the slope and intercept of the straight line, finding B and log(A).
However, linear least squares is going to minimize the sum of squares of the differences in the space it is asked to work in, not in the original space. An additive change of delta in the log space corresponds to a multiplicative factor of exp(delta) in the original space, so if what you were hoping to do was to minimize the sum of squares of the differences in the original space, you have a problem once you make the transformation.
This is not to say that transformations are not valid and useful, but you need to pay attention to what you are minimizing after the transformation. This is particularly true if you are trying to do a global minimization. Does it make sense for the search area between 1E-20 and 1 to be given the same total weight as the search area between 1 and 1E20? Theory might say that given enough time the same results would be reached, but you probably aren't going to give enough time...
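As a rough illustration of the difference (the data and numbers below are made up purely for this example, and only base MATLAB functions are used):

% Synthetic data for y = A*exp(B*x) with multiplicative noise.
rng(0);
x = linspace(0, 5, 50);
y = 2.5 * exp(0.8 * x) .* (1 + 0.1*randn(size(x)));

% Fit 1: transform to log space and do a linear least squares fit.
% This minimizes sum((log(y) - (log(A) + B*x)).^2), i.e. residuals in log space.
p = polyfit(x, log(y), 1);
B_log = p(1);  A_log = exp(p(2));

% Fit 2: minimize the sum of squares in the original space directly.
sse = @(c) sum((y - c(1)*exp(c(2)*x)).^2);
c = fminsearch(sse, [A_log, B_log]);    % warm-start from the log-space fit
A_orig = c(1);  B_orig = c(2);

The two fits generally differ, because the log-space fit effectively weights relative errors equally while the direct fit weights absolute errors equally.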
  1 comment
Henrik Dam on 20 Jun 2015
Edited: Henrik Dam on 20 Jun 2015
Hi Walter! Thank you very much for your answer; that is a very good example, just what I hoped for, and it makes perfect sense. I did not really expect the simplicity of my premise to hold, but now I understand why.
Regarding the equality-constraint trick, is there any downside to it? I guess Matlab might simply make the same substitution as I do "under the hood", so that my efforts are at best futile.


More Answers (1)

Matt J on 20 Jun 2015
Edited: Matt J on 22 Jun 2015
Say I want x_1+x_2 = 1 always, then I can simply pass only x_1 and at the beginning of the function set x_2=1-x_1.
I would argue that this example is better, because you both eliminate a constraint (simplifying the optimization) and reduce the dimension of the problem. Reducing dimension usually speeds up convergence.
The other example, y = log(x), I did not understand; it's not a positivity-preserving transformation in any way that I can see. You could instead make the transformation x_3 = y^2. One danger in this is that the derivative with respect to y is always zero at y = 0, even though this may not be the location of a local minimum in x_3. A derivative-based optimizer could therefore get stuck there if you happen to initialize at y = 0.
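To sketch that stall (a toy objective, assuming the Optimization Toolbox for fminunc):

% Toy objective f(x3) = (x3 - 1)^2, whose minimum is at x3 = 1 > 0.
f = @(x3) (x3 - 1).^2;
g = @(y) f(y.^2);          % substitute x3 = y^2

% Chain rule: dg/dy = 2*y*f'(y^2), which is 0 at y = 0 regardless of f,
% so a gradient-based solver started exactly at y = 0 sees a stationary
% point and may stop there even though x3 = 0 is not the minimiser.
yStuck = fminunc(g, 0);    % likely returns 0
yOk    = fminunc(g, 0.5);  % a nonzero start recovers y^2 = 1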
  2 comments
Henrik Dam on 22 Jun 2015
Edited: Henrik Dam on 22 Jun 2015
Hi Matt. Probably my description was not clear. Assume I want to minimise f(x) letting x vary over the positive reals, with start value x0. Then set y0 = log(x0) and consider minimising the function f(exp(y)) starting at y0, but with y varying over the entire real line. Exponentiating the resulting y yields a valid positive solution for x. Voila, no constraints :) One can do similar tricks for bounds using e.g. the tangent function. (Note that Walter made an excellent argument against doing this, blindly at least.)
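A rough sketch of the tangent version, with made-up interval values and a toy objective (base MATLAB only):

% Map an unconstrained y onto a bounded interval (a, b) via atan/tan,
% so the solver never sees the bounds.
a = 0;  b = 5;
toBounded   = @(y) a + (b - a) * (atan(y)/pi + 0.5);    % R -> (a, b)
toUnbounded = @(x) tan(pi * ((x - a)/(b - a) - 0.5));   % (a, b) -> R

f  = @(x) (x - 2).^2;         % toy objective with its minimum inside (a, b)
g  = @(y) f(toBounded(y));    % unconstrained problem in y
y  = fminsearch(g, toUnbounded(1));   % start from x0 = 1
x  = toBounded(y)             % lands near 2, strictly inside the interval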
Regarding the dimension reduction, is there any reason for Matlab not simply to do this itself? I mean, I can use this trick for any linear equality where variables can be eliminated, so Matlab should be able to as well.
Matt J on 22 Jun 2015
Edited: Matt J on 22 Jun 2015
Exponentiating the resulting y yields a valid positive solution for x. Voila, no constraints
This is essentially the same as what I was saying, but I proposed a different positivity transform x=y^2 whereas you proposed x=exp(y). Both transforms force x to take on non-negative values while y is allowed to vary over the entire real line. Note, however, that an exponential transformation doesn't give a problem entirely equivalent to the constraints implemented by MATLAB solvers. MATLAB solvers implement closed constraints x>=0 whereas x=exp(y) only allows x to range over x>0. If the minimum of f(x) lies at x=0, your exponential transformation will force y to go to -Inf and you will get numerical problems.
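A tiny example of that boundary effect (toy objective, base MATLAB only):

% f(x) = x^2 has its constrained minimum exactly at x = 0.
f = @(x) x.^2;
g = @(y) f(exp(y));      % substitute x = exp(y), so x > 0 strictly

y = fminsearch(g, 0);    % y is pushed towards -Inf; the run ends on an
                         % iteration limit or once exp(y) underflows
x = exp(y)               % numerically close to 0, but never exactly 0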
As for what Walter was saying, I think it applies to a different issue than the one you are talking about. He is talking about the difference between the least squares problem that minimizes norm(y - g(x))^2 and the one that minimizes norm(log(y) - log(g(x)))^2. This is not a transformation of the domain of x like the one you are talking about.
Regarding the dimension reduction, is there any reason for Matlab to not simply do this itself? I mean, I can make this trick for any linear equality where variables can be eliminated
For situations as simple as a single equality constraint, the savings are too negligible to be worthwhile. You can generalize the dimension reduction to the case of N equalities, but the benefits are not always clear. You basically have to find an NxN nonsingular submatrix of Aeq in the equations Aeq*x = beq. The computational effort to do so may or may not be prohibitive for large N, and that can be difficult for MATLAB to determine automatically.
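Just to illustrate the mechanics, here is a rough sketch of that reduction with made-up Aeq, beq and objective (base MATLAB only):

% Given Aeq*x = beq with N independent rows, split x into N dependent
% variables xB and the remaining free variables xF, and solve
% xB = B \ (beq - F*xF), where B is a nonsingular N-by-N submatrix of Aeq.
Aeq = [1 1 0;            % x1 + x2      = 1
       0 1 1];           %      x2 + x3 = 2
beq = [1; 2];
B = Aeq(:, 1:2);         % columns of the dependent variables (invertible here)
F = Aeq(:, 3);           % columns of the free variable

f = @(x) (x(1)-2)^2 + x(2)^2 + (x(3)+1)^2;     % made-up objective

expand = @(xF) [B \ (beq - F*xF); xF];         % rebuild the full x from xF
g  = @(xF) f(expand(xF));                      % reduced, unconstrained problem
xF = fminsearch(g, 0);
x  = expand(xF);                               % satisfies Aeq*x = beq by construction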
