Optimization of an implicit function using fmincon
I am trying to carry out multi-objective constrained optimization using fmincon in MATLAB. However, one of my objective functions cannot be written in an explicit form in terms of the optimization variables. For example, I have an equation like:
x^a + x^b = y
and I want to minimize x given a range of values for a, b, and y respectively.
How can I do that?
Answers (1)
Walter Roberson
14 Feb 2021
You cannot do multi-objective optimization using fmincon(). fmincon() is restricted to single objectives.
Which of a, b, x, y are to be varied during a single call to fmincon()?
If a, b, and y are constant for this call, then use a nonlinear equality constraint:
ceq = x.^a + x.^b - y
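A minimal sketch of what that constraint function could look like, assuming the optimization vector carries x as its first element and that a, b, y are fixed for this call (the names here are illustrative, not from the thread):

```matlab
% Nonlinear constraint function for fmincon.
% Assumes v(1) is x; a, b, y are constants for this call.
function [c, ceq] = implicit_con(v, a, b, y)
    x = v(1);
    c = [];                    % no inequality constraints
    ceq = x.^a + x.^b - y;     % enforce x^a + x^b = y
end
```

It would be passed to fmincon as, e.g., `@(v) implicit_con(v, a, b, y)` in the nonlcon slot.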
15 comments
Harshit Agrawal
14 Feb 2021
Edited: Harshit Agrawal
14 Feb 2021
Walter Roberson
14 Feb 2021
If you can constrain x to be positive and real (mathematically, there are negative and complex values of x that would also solve the equation), then you can compute upper and lower bounds on x from the equation. Then add x as an additional optimization variable with those upper and lower bounds, and add ceq = x.^a + x.^b - y as a nonlinear equality constraint. Be sure to use an initial value that satisfies the constraint, or MATLAB can spend a long time trying to find something that works.
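A hedged sketch of that setup for the toy equation, with illustrative constants; the objective, the loose upper bound, and the fzero-based feasible start are assumptions, not part of the thread:

```matlab
a = 2; b = 3; y = 10;              % example constants for this single call
xlb = 0;                           % x constrained positive and real
xub = y;                           % loose illustrative bound: x^a <= y and a>=1 imply x <= y here
% Feasible starting point: solve x^a + x^b = y directly so the
% equality constraint holds at the initial value, as advised above.
x0 = fzero(@(x) x.^a + x.^b - y, [eps, y]);
fun = @(v) v(1);                   % placeholder objective: minimize x itself
nonlcon = @(v) deal([], v(1).^a + v(1).^b - y);   % c = [], ceq = x^a + x^b - y
[xopt, fval] = fmincon(fun, x0, [], [], [], [], xlb, xub, nonlcon);
```

The deal() trick is just a compact way to return the empty inequality vector and the equality residual from one anonymous function.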
Harshit Agrawal
14 Feb 2021
Edited: Harshit Agrawal
14 Feb 2021
Walter Roberson
14 Feb 2021
That could work.
Harshit Agrawal
14 Feb 2021
Walter Roberson
14 Feb 2021
Numeric analysis
x = eps(realmin)
a = 1
b = 1
y = 2*x
The only smaller non-negative x representable in floating point is x = 0. But 0 raised to a power is 0 unless the power is 0, and a and b are not permitted to be exactly 0. So with x = 0 the left side would be 0 + 0 = 0; however, y = 0 is not permitted.
We have thus shown that x = 0 exactly is not permitted. So let x be the smallest nonzero representable value. Can we make that work? Yes, as indicated above.
You can do better on y, though. Let x = eps(realmin), a = 1, and b enough greater than 1 that x^b rounds down to 0 (b = 1.001 is already ample): x^b < x because b > 1 and x < 1, and once x^b falls below half the smallest representable positive number it underflows to 0. Meanwhile x^1 = x, so x^a stays eps(realmin), and adding the 0 from x^b gives y = eps(realmin), which is within the permitted range. This is the smallest possible y, since y = 0 is not permitted and eps(realmin) is the next representable number after 0.
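The underflow argument above can be checked numerically; a small sketch, using b = 2 so the rounding to 0 is unambiguous:

```matlab
x = eps(realmin);      % smallest positive subnormal double, 2^-1074
a = 1; b = 2;          % b chosen large enough that x^b underflows
x^b                    % 0: the result is far below the subnormal range
x^a + x^b              % eps(realmin): the smallest attainable nonzero y
```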
Harshit Agrawal
2 Mar 2021
Edited: Harshit Agrawal
2 Mar 2021
Walter Roberson
2 Mar 2021
Numeric algorithms can never promise a global minimum, even given a really good starting point. There is too much round-off.
Walter Roberson
2 Mar 2021
Are there upper limits for the variables?
Harshit Agrawal
3 Mar 2021
Harshit Agrawal
4 Mar 2021
Walter Roberson
4 Mar 2021
You will need to do numeric analysis. It will probably take you a number of days to carry out, possibly weeks. The expression is not very tractable, so it will be difficult.
Remember that the only way to prove a global minimum is by theoretical analysis. The best that you can do with fmincon or related tools is to find a local minimum, and it might even be a great-looking local minimum, but numeric tools such as fmincon cannot rule out the possibility that the expression has a narrow, unstable global minimum (a balancing-a-pin-on-its-point sort of thing).
Harshit Agrawal
5 Mar 2021
Will a genetic algorithm help in finding the global minimum?
Walter Roberson
5 Mar 2021
Genetic algorithms explore more of the space, and can help get out of ruts. However:
- They never know (and cannot know) whether they have found a global minimum. All they can ever know is that they have stalled at finding better points -- but that could just mean that more iterations are needed.
- They are not at all efficient at finding the local minimum within whichever catch-basin they are in. They might be very close to a great minimum and never locate it.
- They can get misled and waste a lot of time. For example, if you have an asymptotic descent around a central dip, they might spend their time chasing the ever-lower value further and further from the center, only ending up in the center through chance mutations.
MathWorks claims that patternsearch() is typically more efficient than ga().
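For comparison, both solvers (in the Global Optimization Toolbox) accept the same bound-constrained problem shape; a hedged sketch, where the objective and bounds are placeholders for your own problem:

```matlab
fun = @(v) v(1);          % placeholder objective
lb = 0; ub = 10;          % placeholder bounds
nvars = 1;
% Genetic algorithm: stochastic population-based search
xga = ga(fun, nvars, [], [], [], [], lb, ub);
% Pattern search: deterministic poll-based search from a start point
xps = patternsearch(fun, (lb + ub)/2, [], [], [], [], lb, ub);
```

Either result can still be polished with a local fmincon run started from the returned point.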
Walter Roberson
5 Jan 2022
Did this approach work: posing the implicit function as an objective?