fmincon doesn't find a global minimum for a convex function
In my opinion, fmincon is a built-in MATLAB function for finding local minima. If the objective function is convex, there is only one basin, and any local minimum is also the global minimum. However, when starting from different initial points in my experiment, the algorithm converged to different minima. I wonder whether fmincon is guaranteed to converge to a global minimum for a convex problem. If not, are there other techniques I can use for convex optimization that are as fast as possible? Is running GlobalSearch a good idea for global optimization? Thanks.
Here is the code:
options = optimoptions('fmincon');
problem.options = options;
problem.solver = 'fmincon';
problem.objective = @(x) langBW(x, in_s, in_e, C1, a, p_ul);
problem.Aineq = ones(1,user_num);   % linear constraint: sum(x) <= BW2
problem.bineq = BW2;
problem.nonlcon = @(x) nonlConstr_bw(x,a,p_ul,T1,in_s,in_e);
problem.x0 = ones(user_num,1);
[b_ul,fval] = fmincon(problem);
langBW is the objective function, which is a convex function of x. The code of langBW is as follows:
function fmin = langBW(x, in_s, in_e, C1, a, p_ul)
if size(x,1) < size(x,2)
    x = x';                                 % ensure x is a column vector
end
b_ul = x;                                   % bandwidth allocated to each user
r_ul = b_ul .* log2(1 + a.*p_ul./b_ul);     % resulting uplink rate per user
fmin = sum((in_s+in_e).*p_ul./r_ul) + sum(C1);
end
nonlConstr_bw is the nonlinear constraint function. It is shown as follows:
function [c,ceq] = nonlConstr_bw(x,a,p_ul,T1,in_s,in_e)
user_num = size(p_ul,1);
if size(x,1) < size(x,2)
    x = x';                                 % ensure x is a column vector
end
b_ul = x;                                   % bandwidth allocated to each user
r_ul = b_ul .* log2(1 + a.*p_ul./b_ul);     % resulting uplink rate per user
c1 = max(in_s./r_ul) + in_e./r_ul - T1;     % each entry must be <= 0
c = c1;
ceq = zeros(user_num,1);                    % no real equality constraints (always satisfied)
end
Except for x, all other variables are supplied. The problem is that when I set a different `problem.x0`, for example `problem.x0 = ones(user_num,1);` versus `problem.x0 = 2*ones(user_num,1);`, the solution returned by `[b_ul,fval] = fmincon(problem);` is different. That is what I am confused about.
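If I were to try GlobalSearch on this problem, I assume the call would look roughly like the following (an untested sketch; GlobalSearch and createOptimProblem require the Global Optimization Toolbox, and the variable names are the ones defined above):
% Untested sketch: wrap the same objective and constraints for GlobalSearch
gs = GlobalSearch;
prob = createOptimProblem('fmincon', ...
    'objective', @(x) langBW(x, in_s, in_e, C1, a, p_ul), ...
    'x0', ones(user_num,1), ...
    'Aineq', ones(1,user_num), 'bineq', BW2, ...
    'nonlcon', @(x) nonlConstr_bw(x,a,p_ul,T1,in_s,in_e), ...
    'options', optimoptions('fmincon','Algorithm','sqp'));
[b_ul, fval] = run(gs, prob);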
2 comments
Walter Roberson
on 10 May 2018
Could you give an example of a function this is happening for?
Answers (2)
Aquatris
on 10 May 2018
fmincon is not limited to finding local minima, but it can get stuck in a local minimum due to the nature of its algorithms. You might want to try changing the algorithm used by fmincon ('interior-point', 'trust-region-reflective', 'sqp') or adjusting some of the optimization parameters if you do not get reasonable results. Evolutionary algorithms tend to be much better at finding the global minimum, so you might want to give them a try. MATLAB has a built-in genetic algorithm function if you have the required toolbox, and you can find community-supplied particle swarm optimization code.
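For example, switching the fmincon algorithm or trying ga on the problem struct from the question could look roughly like this (a sketch only; ga requires the Global Optimization Toolbox, and the variable names come from the question):
% Sketch: try a different fmincon algorithm on the same problem struct
problem.options = optimoptions('fmincon','Algorithm','sqp');  % or 'interior-point'
[b_ul,fval] = fmincon(problem);

% Sketch: genetic algorithm with the same objective and constraints
nvars = user_num;
[b_ul_ga,fval_ga] = ga(@(x) langBW(x, in_s, in_e, C1, a, p_ul), nvars, ...
    ones(1,user_num), BW2, [], [], [], [], ...
    @(x) nonlConstr_bw(x,a,p_ul,T1,in_s,in_e));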
Matt J
on 10 May 2018
Edited: Matt J
on 10 May 2018
Even if the function is convex, there may be multiple global minima and the one you arrive at would depend on the initial point.
Keep in mind as well that your selection of stopping parameters (TolX, TolFun, etc.) has an influence on where the algorithm quits. Even if your function has a unique minimum, the algorithm will try to converge to it but typically won't reach it exactly, and the final error will be x0-dependent.
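For instance, tightening the stopping tolerances could look like the following sketch (the values are only illustrative; StepTolerance, OptimalityTolerance, and ConstraintTolerance are the current names for TolX, TolFun, and TolCon):
% Sketch: tighter stopping criteria so runs from different x0 end closer to the same point
options = optimoptions('fmincon', ...
    'StepTolerance', 1e-12, ...
    'OptimalityTolerance', 1e-10, ...
    'ConstraintTolerance', 1e-10);
problem.options = options;
[b_ul,fval] = fmincon(problem);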
7 comments
Matt J
on 11 May 2018
Edited: Matt J
on 11 May 2018
We cannot run the code, because the problem constants (in_s, in_e, etc.) have not been provided.
I will mention, however, that your nonlinear constraints, while they may be convex, are not differentiable due to the max() operation. fmincon assumes differentiability. A differentiable equivalent to your constraint would be:
term1 = in_s./r_ul;
term2 = in_e./r_ul;
c = term1(:) + term2(:).' - T1;   % all pairwise sums <= T1, equivalent to the max() form
ceq = [];
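Folding this into a complete constraint function could look roughly like the following (a sketch; the name nonlConstr_bw_smooth is made up for illustration):
function [c,ceq] = nonlConstr_bw_smooth(x,a,p_ul,T1,in_s,in_e)
% Sketch: same constraint as before, with max() replaced by the pairwise form above
if size(x,1) < size(x,2)
    x = x';                               % ensure x is a column vector
end
b_ul = x;
r_ul = b_ul .* log2(1 + a.*p_ul./b_ul);   % uplink rate per user
term1 = in_s./r_ul;
term2 = in_e./r_ul;
c = term1(:) + term2(:).' - T1;           % user_num-by-user_num matrix of pairwise sums
c = c(:);                                 % fmincon treats each entry as a separate inequality <= 0
ceq = [];                                 % no equality constraints
end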