fmincon
Find minimum of constrained nonlinear multivariable function
Syntax
Description
Nonlinear programming solver.
Finds the minimum of a problem specified by

$$\min_x f(x) \ \text{ such that } \begin{cases} c(x) \le 0 \\ ceq(x) = 0 \\ A \cdot x \le b \\ Aeq \cdot x = beq \\ lb \le x \le ub, \end{cases}$$

where b and beq are vectors, A and Aeq are matrices, c(x) and ceq(x) are functions that return vectors, and f(x) is a function that returns a scalar. f(x), c(x), and ceq(x) can be nonlinear functions.
x, lb, and ub can be passed as vectors or matrices; see Matrix Arguments.
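For instance, here is a minimal sketch (a hypothetical problem, not from the examples on this page) with a matrix initial point; fmincon passes x to the objective in the same 2-by-2 shape:

fun = @(x) sum(sum(x.^2)) + x(1,1)*x(2,2);  % objective defined on a 2-by-2 matrix
x0 = eye(2);                                 % matrix initial point
lb = -ones(2);                               % matrix lower bounds
ub = ones(2);                                % matrix upper bounds
x = fmincon(fun,x0,[],[],[],[],lb,ub)        % returned x is also 2-by-2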
x = fmincon(fun,x0,A,b) starts at x0 and attempts to find a minimizer x of the function described in fun subject to the linear inequalities A*x ≤ b. x0 can be a scalar, vector, or matrix.
Note
Passing Extra Parameters explains how to pass extra parameters to the objective function and nonlinear constraint functions, if necessary.
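For instance, a minimal sketch (a hypothetical example, not from this page) that fixes a parameter a in the objective by capturing it in an anonymous function:

a = 3;                                % extra parameter, fixed before the solve
paramfun = @(x)(x(1)-a)^2 + x(2)^2;   % objective captures the current value of a
x = fmincon(paramfun,[0,0],[1,1],1)   % minimize subject to x(1) + x(2) <= 1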
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub. If no equalities exist, set Aeq = [] and beq = []. If x(i) is unbounded below, set lb(i) = -Inf, and if x(i) is unbounded above, set ub(i) = Inf.
Note
If the specified input bounds for a problem are inconsistent, fmincon throws
an error. In this case, output x is x0 and fval is [].
For the default 'interior-point' algorithm, fmincon sets
components of x0 that violate the bounds lb ≤ x ≤ ub, or are equal to a bound, to the interior
of the bound region. For the 'trust-region-reflective' algorithm, fmincon sets
violating components to the interior of the bound region. For other
algorithms, fmincon sets violating components
to the closest bound. Components that respect the bounds are not changed.
See Iterations Can Violate Constraints.
Examples
Find the minimum value of Rosenbrock's function when there is a linear inequality constraint.
Set the objective function fun to be Rosenbrock's function. Rosenbrock's function is well-known to be difficult to minimize. It has its minimum objective value of 0 at the point (1,1). For more information, see Constrained Nonlinear Problem Using Optimize Live Editor Task or Solver.
fun = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
Find the minimum value starting from the point [-1,2], constrained to have x(1) + 2*x(2) ≤ 1. Express this constraint in the form Ax <= b by taking A = [1,2] and b = 1. Notice that this constraint means that the solution will not be at the unconstrained solution (1,1), because at that point x(1) + 2*x(2) = 3 > 1.
x0 = [-1,2]; A = [1,2]; b = 1; x = fmincon(fun,x0,A,b)
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
x = 1×2
0.5022 0.2489
Find the minimum value of Rosenbrock's function when there are both a linear inequality constraint and a linear equality constraint.
Set the objective function fun to be Rosenbrock's function.
fun = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
Find the minimum value starting from the point [0.5,0], constrained to have x(1) + 2*x(2) ≤ 1 and 2*x(1) + x(2) = 1.
Express the linear inequality constraint in the form A*x <= b by taking A = [1,2] and b = 1. Express the linear equality constraint in the form Aeq*x = beq by taking Aeq = [2,1] and beq = 1.
x0 = [0.5,0]; A = [1,2]; b = 1; Aeq = [2,1]; beq = 1; x = fmincon(fun,x0,A,b,Aeq,beq)
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
x = 1×2
0.4149 0.1701
Find the minimum of an objective function in the presence of bound constraints.
The objective function is a simple algebraic function of two variables.
fun = @(x)1+x(1)/(1+x(2)) - 3*x(1)*x(2) + x(2)*(1+x(1));
Look in the region where x has positive values, x(1) ≤ 1, and x(2) ≤ 2.
lb = [0,0]; ub = [1,2];
The problem has no linear constraints, so set those arguments to [].
A = []; b = []; Aeq = []; beq = [];
Try an initial point in the middle of the region.
x0 = (lb + ub)/2;
Solve the problem.
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
x = 1×2
1.0000 2.0000
A different initial point can lead to a different solution.
x0 = x0/5; x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
x = 1×2
10⁻⁶ ×
0.4000 0.4000
To determine which solution is better, see Obtain the Objective Function Value.
Find the minimum of a function subject to nonlinear constraints
Find the point where Rosenbrock's function is minimized within a circle, also subject to bound constraints.
fun = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
Look within the region 0 ≤ x(1) ≤ 0.5, 0.2 ≤ x(2) ≤ 0.8.
lb = [0,0.2]; ub = [0.5,0.8];
Also look within the circle centered at [1/3,1/3] with radius 1/3. Use this code for the nonlinear constraint function.
function [c,ceq] = circlecon(x) c = (x(1)-1/3)^2 + (x(2)-1/3)^2 - (1/3)^2; ceq = []; end
There are no linear constraints, so set those arguments to [].
A = []; b = []; Aeq = []; beq = [];
Choose an initial point satisfying all the constraints.
x0 = [1/4,1/4];
Solve the problem.
nonlcon = @circlecon; x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
x = 1×2
0.5000 0.2500
Set options to view iterations as they occur and to use a different algorithm.
To observe the fmincon solution process, set the Display option to 'iter'. Also, try the 'sqp' algorithm, which is sometimes faster or more accurate than the default 'interior-point' algorithm.
options = optimoptions('fmincon','Display','iter','Algorithm','sqp');
Find the minimum of Rosenbrock's function on the unit disk, x(1)^2 + x(2)^2 ≤ 1. First create a function that represents the nonlinear constraint. Save this as a file named unitdisk.m on your MATLAB® path.
type unitdisk.m
function [c,ceq] = unitdisk(x) c = x(1)^2 + x(2)^2 - 1; ceq = [];
Create the remaining problem specifications. Then run fmincon.
fun = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2; A = []; b = []; Aeq = []; beq = []; lb = []; ub = []; nonlcon = @unitdisk; x0 = [0,0]; x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
Iter Func-count Fval Feasibility Step Length Norm of First-order
step optimality
0 3 1.000000e+00 0.000e+00 1.000e+00 0.000e+00 2.000e+00
1 12 8.913011e-01 0.000e+00 1.176e-01 2.353e-01 1.107e+01
2 22 8.047847e-01 0.000e+00 8.235e-02 1.900e-01 1.330e+01
3 28 4.197517e-01 0.000e+00 3.430e-01 1.217e-01 6.172e+00
4 31 2.733703e-01 0.000e+00 1.000e+00 5.254e-02 5.705e-01
5 34 2.397111e-01 0.000e+00 1.000e+00 7.498e-02 3.164e+00
6 37 2.036002e-01 0.000e+00 1.000e+00 5.960e-02 3.106e+00
7 40 1.164353e-01 0.000e+00 1.000e+00 1.459e-01 1.059e+00
8 43 1.161753e-01 0.000e+00 1.000e+00 1.754e-01 7.383e+00
9 46 5.901601e-02 0.000e+00 1.000e+00 1.547e-02 7.278e-01
10 49 4.533081e-02 2.898e-03 1.000e+00 5.393e-02 1.252e-01
11 52 4.567454e-02 2.225e-06 1.000e+00 1.492e-03 1.679e-03
12 55 4.567481e-02 4.406e-12 1.000e+00 2.095e-06 1.501e-05
13 58 4.567481e-02 0.000e+00 1.000e+00 2.159e-09 1.511e-05
Local minimum possible. Constraints satisfied.
fmincon stopped because the size of the current step is less than
the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
<stopping criteria details>
x = 1×2
0.7864 0.6177
For iterative display details, see Iterative Display.
Include gradient evaluation in the objective function for faster or more reliable computations.
Include the gradient evaluation as a conditionalized output in the objective function file. For details, see Including Gradients and Hessians. The objective function is Rosenbrock's function,

$$f(x) = 100\left(x_2 - x_1^2\right)^2 + (1 - x_1)^2,$$

which has gradient

$$\nabla f(x) = \begin{bmatrix} -400\left(x_2 - x_1^2\right)x_1 - 2(1 - x_1) \\ 200\left(x_2 - x_1^2\right) \end{bmatrix}.$$

This code creates the rosenbrockwithgrad function, which implements the objective function with gradient.
function [f,g] = rosenbrockwithgrad(x) % Calculate objective f f = 100*(x(2) - x(1)^2)^2 + (1-x(1))^2; if nargout > 1 % gradient required g = [-400*(x(2)-x(1)^2)*x(1)-2*(1-x(1)); 200*(x(2)-x(1)^2)]; end end
Create options to use the objective function gradient.
options = optimoptions('fmincon','SpecifyObjectiveGradient',true);
Create the other inputs for the problem. Then call fmincon.
fun = @rosenbrockwithgrad; x0 = [-1,2]; A = []; b = []; Aeq = []; beq = []; lb = [-2,-2]; ub = [2,2]; nonlcon = []; x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
x = 1×2
1.0000 1.0000
Solve the same problem as in Nondefault Options using a problem structure instead of separate arguments.
Create the options and a problem structure. See problem for the field names and required fields.
options = optimoptions('fmincon','Display','iter','Algorithm','sqp'); problem.options = options; problem.solver = 'fmincon'; problem.objective = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2; problem.x0 = [0,0];
The nonlinear constraint function unitdisk appears at the end of this example. Include the nonlinear constraint function in problem.
problem.nonlcon = @unitdisk;
Solve the problem.
x = fmincon(problem)
Iter Func-count Fval Feasibility Step Length Norm of First-order
step optimality
0 3 1.000000e+00 0.000e+00 1.000e+00 0.000e+00 2.000e+00
1 12 8.913011e-01 0.000e+00 1.176e-01 2.353e-01 1.107e+01
2 22 8.047847e-01 0.000e+00 8.235e-02 1.900e-01 1.330e+01
3 28 4.197517e-01 0.000e+00 3.430e-01 1.217e-01 6.172e+00
4 31 2.733703e-01 0.000e+00 1.000e+00 5.254e-02 5.705e-01
5 34 2.397111e-01 0.000e+00 1.000e+00 7.498e-02 3.164e+00
6 37 2.036002e-01 0.000e+00 1.000e+00 5.960e-02 3.106e+00
7 40 1.164353e-01 0.000e+00 1.000e+00 1.459e-01 1.059e+00
8 43 1.161753e-01 0.000e+00 1.000e+00 1.754e-01 7.383e+00
9 46 5.901602e-02 0.000e+00 1.000e+00 1.547e-02 7.278e-01
10 49 4.533081e-02 2.898e-03 1.000e+00 5.393e-02 1.252e-01
11 52 4.567454e-02 2.225e-06 1.000e+00 1.492e-03 1.679e-03
12 55 4.567481e-02 4.386e-12 1.000e+00 2.095e-06 1.502e-05
13 58 4.567481e-02 0.000e+00 1.000e+00 2.193e-12 1.406e-05
Local minimum possible. Constraints satisfied.
fmincon stopped because the size of the current step is less than
the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
<stopping criteria details>
x = 1×2
0.7864 0.6177
The iterative display and solution are the same as in Nondefault Options.
The following code creates the unitdisk function.
function [c,ceq] = unitdisk(x) c = x(1)^2 + x(2)^2 - 1; ceq = []; end
Call fmincon with the fval output to obtain the value of the objective function at the solution.
The Minimize with Bound Constraints example shows two solutions. Which is better? Run the example requesting the fval output as well as the solution.
fun = @(x)1+x(1)./(1+x(2)) - 3*x(1).*x(2) + x(2).*(1+x(1)); lb = [0,0]; ub = [1,2]; A = []; b = []; Aeq = []; beq = []; x0 = (lb + ub)/2; [x,fval] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
x = 1×2
1.0000 2.0000
fval = -0.6667
Run the problem using a different starting point x0.
x0 = x0/5; [x2,fval2] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. <stopping criteria details>
x2 = 1×2
10⁻⁶ ×
0.4000 0.4000
fval2 = 1.0000
This solution has an objective function value fval2 = 1, which is higher than the first value fval = –0.6667. The first solution x has a lower local minimum objective function value.
To easily examine the quality of a solution, request the exitflag and output outputs.
Set up the problem of minimizing Rosenbrock's function on the unit disk, x(1)^2 + x(2)^2 ≤ 1. First create a function that represents the nonlinear constraint. Save this as a file named unitdisk.m on your MATLAB® path.
function [c,ceq] = unitdisk(x)
c = x(1)^2 + x(2)^2 - 1;
ceq = [];
Create the remaining problem specifications.
fun = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2; nonlcon = @unitdisk; A = []; b = []; Aeq = []; beq = []; lb = []; ub = []; x0 = [0,0];
Call fmincon using the fval, exitflag, and output outputs.
[x,fval,exitflag,output] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x =
0.7864 0.6177
fval =
0.0457
exitflag =
1
output =
struct with fields:
iterations: 24
funcCount: 84
constrviolation: 0
stepsize: 6.9162e-06
algorithm: 'interior-point'
firstorderopt: 2.4373e-08
cgiterations: 4
message: 'Local minimum found that satisfies the constraints.↵↵Optimization completed because the objective function is non-decreasing in ↵feasible directions, to within the value of the optimality tolerance,↵and constraints are satisfied to within the value of the constraint tolerance.↵↵<stopping criteria details>↵↵Optimization completed: The relative first-order optimality measure, 2.437331e-08,↵is less than options.OptimalityTolerance = 1.000000e-06, and the relative maximum constraint↵violation, 0.000000e+00, is less than options.ConstraintTolerance = 1.000000e-06.'
bestfeasible: [1×1 struct]
The exitflag value 1 indicates that the solution is a local minimum.
The output structure reports several statistics about the solution process. In particular, it gives the number of iterations in output.iterations, the number of function evaluations in output.funcCount, and the feasibility in output.constrviolation.
fmincon optionally returns several outputs that you can use for analyzing the reported solution.
Set up the problem of minimizing Rosenbrock's function on the unit disk. First create a function that represents the nonlinear constraint. Save this as a file named unitdisk.m on your MATLAB® path.
function [c,ceq] = unitdisk(x)
c = x(1)^2 + x(2)^2 - 1;
ceq = [];
Create the remaining problem specifications.
fun = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2; nonlcon = @unitdisk; A = []; b = []; Aeq = []; beq = []; lb = []; ub = []; x0 = [0,0];
Request all fmincon outputs.
[x,fval,exitflag,output,lambda,grad,hessian] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x =
0.7864 0.6177
fval =
0.0457
exitflag =
1
output =
struct with fields:
iterations: 24
funcCount: 84
constrviolation: 0
stepsize: 6.9162e-06
algorithm: 'interior-point'
firstorderopt: 2.4373e-08
cgiterations: 4
message: 'Local minimum found that satisfies the constraints.↵↵Optimization completed because the objective function is non-decreasing in ↵feasible directions, to within the value of the optimality tolerance,↵and constraints are satisfied to within the value of the constraint tolerance.↵↵<stopping criteria details>↵↵Optimization completed: The relative first-order optimality measure, 2.437331e-08,↵is less than options.OptimalityTolerance = 1.000000e-06, and the relative maximum constraint↵violation, 0.000000e+00, is less than options.ConstraintTolerance = 1.000000e-06.'
bestfeasible: [1×1 struct]
lambda =
struct with fields:
eqlin: [0×1 double]
eqnonlin: [0×1 double]
ineqlin: [0×1 double]
lower: [2×1 double]
upper: [2×1 double]
ineqnonlin: 0.1215
grad =
-0.1911
-0.1501
hessian =
497.2903 -314.5589
-314.5589 200.2392
The lambda.ineqnonlin output shows that the nonlinear constraint is active at the solution, and gives the value of the associated Lagrange multiplier.
The grad output gives the value of the gradient of the objective function at the solution x.
The hessian output is described in fmincon Hessian.
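As a check, these outputs satisfy the stationarity condition of the Lagrangian. A short sketch (the variable names GC and kktResidual are illustrative):

GC = [2*x(1); 2*x(2)];                      % gradient of the unitdisk constraint c(x)
kktResidual = grad + lambda.ineqnonlin*GC   % approximately [0;0] at the solution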
Input Arguments
Function to minimize, specified as a function handle or function
name. fun is a function that accepts a vector or
array x and returns a real scalar f,
the objective function evaluated at x.
fmincon passes x to
your objective function and any nonlinear constraint functions in the shape of the
x0 argument. For example, if x0 is a 5-by-3 array,
then fmincon passes x to fun as a
5-by-3 array. However, fmincon multiplies linear constraint matrices
A or Aeq with x after
converting x to the column vector x(:).
Specify fun as a function handle for a file:
x = fmincon(@myfun,x0,A,b)
where myfun is a MATLAB® function such
as
function f = myfun(x) f = ... % Compute function value at x
You can also specify fun as a function handle
for an anonymous function:
x = fmincon(@(x)norm(x)^2,x0,A,b);
If you can compute the gradient of fun
and the SpecifyObjectiveGradient
option is set to true, as set
by
options = optimoptions('fmincon','SpecifyObjectiveGradient',true)
fun must return the gradient vector g(x) in the second output argument.
If you can also compute the Hessian matrix and the HessianFcn option is set to 'objective' via optimoptions and the Algorithm option is 'trust-region-reflective', fun must return the Hessian value H(x), a symmetric matrix, in a third output argument. fun can give a sparse Hessian. See Hessian for fminunc trust-region or fmincon trust-region-reflective algorithms for details.
If you can also compute the Hessian matrix and the Algorithm option
is set to 'interior-point', there is a different
way to pass the Hessian to fmincon. For more
information, see Hessian for fmincon interior-point algorithm. For an example
using Symbolic Math Toolbox™ to compute the gradient and Hessian,
see Calculate Gradients and Hessians Using Symbolic Math Toolbox.
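For instance, a minimal sketch of such a Hessian function for the Rosenbrock objective with the unitdisk constraint used elsewhere on this page (the function name rosenbrockhess is illustrative):

function H = rosenbrockhess(x,lambda)
% Hessian of the Lagrangian: Hessian of the objective plus the Lagrange
% multiplier times the Hessian of the constraint c(x) = x(1)^2 + x(2)^2 - 1.
H = [1200*x(1)^2 - 400*x(2) + 2, -400*x(1);
     -400*x(1), 200];                        % Hessian of Rosenbrock's function
H = H + lambda.ineqnonlin(1)*[2 0; 0 2];     % constraint Hessian contribution
end

Pass it by setting options = optimoptions('fmincon','Algorithm','interior-point','SpecifyObjectiveGradient',true,'SpecifyConstraintGradient',true,'HessianFcn',@rosenbrockhess); supplying a Hessian to the interior-point algorithm also requires supplying the objective and constraint gradients.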
The interior-point and trust-region-reflective algorithms
allow you to supply a Hessian multiply function. This function gives
the result of a Hessian-times-vector product without computing the
Hessian directly. This can save memory. See Hessian Multiply Function.
Example: fun = @(x)sin(x(1))*cos(x(2))
Data Types: char | function_handle | string
Initial point, specified as a real vector or real array. Solvers use the
number of elements in, and size of, x0 to determine the
number and size of variables that fun accepts.
- 'interior-point' algorithm — If the HonorBounds option is true (default), fmincon resets x0 components that are on or outside bounds lb or ub to values strictly between the bounds.
- 'trust-region-reflective' algorithm — fmincon resets infeasible x0 components to be feasible with respect to bounds or linear equalities.
- 'sqp', 'sqp-legacy', or 'active-set' algorithm — fmincon resets x0 components that are outside bounds to the values of the corresponding bounds.
Example: x0 = [1,2,3,4]
Data Types: double
Linear inequality constraints, specified as a real matrix. A is an
M-by-N
matrix, where M is the number of
inequalities, and N is the number
of variables (number of elements in
x0). For large problems with
algorithms that support sparse data, pass
A as a sparse matrix. See Sparsity in Optimization Algorithms.
A encodes the M linear
inequalities
A*x <= b,
where x is the column vector of N variables x(:),
and b is a column vector with M elements.
For example, consider these inequalities:
x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.
Specify the inequalities by entering the following constraints.
A = [1,2;3,4;5,6]; b = [10;20;30];
Example: To specify that the x components sum to 1 or less, use A =
ones(1,N) and b = 1.
Data Types: single | double
Linear inequality constraints, specified as a real vector. b is an
M-element vector related to the A matrix. If
you pass b as a row vector, solvers internally convert
b to the column vector b(:).
b encodes the M linear
inequalities
A*x <= b,
where x is the column vector of N variables x(:),
and A is a matrix of size M-by-N.
For example, consider these inequalities:
x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.
Specify the inequalities by entering the following constraints.
A = [1,2;3,4;5,6]; b = [10;20;30];
Example: To specify that the x components sum to 1 or less, use A =
ones(1,N) and b = 1.
Data Types: single | double
Linear equality constraints, specified as a real matrix. Aeq is an
Me-by-N
matrix, where Me is the number of
equalities, and N is the number
of variables (number of elements in
x0). For large problems with
algorithms that support sparse data, pass
Aeq as a sparse matrix. See Sparsity in Optimization Algorithms.
Aeq encodes the Me linear
equalities
Aeq*x = beq,
where x is the column vector of N variables x(:),
and beq is a column vector with Me elements.
For example, consider these equalities:
x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.
Specify the equalities by entering the following constraints.
Aeq = [1,2,3;2,4,1]; beq = [10;20];
Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and
beq = 1.
Data Types: single | double
Linear equality constraints, specified as a real vector. beq is an
Me-element vector related to the Aeq matrix.
If you pass beq as a row vector, solvers internally convert
beq to the column vector beq(:).
beq encodes the Me linear
equalities
Aeq*x = beq,
where x is the column vector of N variables
x(:), and Aeq is a matrix of size
Me-by-N.
For example, consider these equalities:
x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.
Specify the equalities by entering the following constraints.
Aeq = [1,2,3;2,4,1]; beq = [10;20];
Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and
beq = 1.
Data Types: single | double
Lower bounds, specified as a real vector or real array. If the number of elements in
x0 is equal to the number of elements in lb,
then lb specifies that
x(i) >= lb(i) for all i.
If numel(lb) < numel(x0), then lb specifies
that
x(i) >= lb(i) for 1 <=
i <= numel(lb).
If lb has fewer elements than x0, solvers issue a
warning.
Example: To specify that all x components are positive, use lb =
zeros(size(x0)).
Data Types: single | double
Upper bounds, specified as a real vector or real array. If the number of elements in
x0 is equal to the number of elements in ub,
then ub specifies that
x(i) <= ub(i) for all i.
If numel(ub) < numel(x0), then ub specifies
that
x(i) <= ub(i) for 1 <=
i <= numel(ub).
If ub has fewer elements than x0, solvers issue
a warning.
Example: To specify that all x components are less than 1, use ub =
ones(size(x0)).
Data Types: single | double
Nonlinear constraints, specified as a function handle or function
name. nonlcon is a function that accepts a vector
or array x and returns two arrays, c(x) and ceq(x).
- c(x) is the array of nonlinear inequality constraints at x. fmincon attempts to satisfy c(x) <= 0 for all entries of c.
- ceq(x) is the array of nonlinear equality constraints at x. fmincon attempts to satisfy ceq(x) = 0 for all entries of ceq.
For example,
x = fmincon(@myfun,x0,A,b,Aeq,beq,lb,ub,@mycon)
where mycon is a MATLAB function such
as
function [c,ceq] = mycon(x) c = ... % Compute nonlinear inequalities at x. ceq = ... % Compute nonlinear equalities at x.
If the SpecifyConstraintGradient option is true, as set by
options = optimoptions('fmincon',SpecifyConstraintGradient=true)
then nonlcon must also return, in the third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x). GC and GCeq can be sparse or dense. If GC or GCeq is large, with relatively few nonzero entries, save running time and memory in the interior-point algorithm by representing them as sparse matrices. For more information, see Nonlinear Constraints.
Note
Setting SpecifyConstraintGradient to
true is effective only when
SpecifyObjectiveGradient is set to
true. Internally, the objective is folded into
the constraint, so the solver needs both gradients (objective and
constraint) supplied in order to avoid estimating a gradient.
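For instance, a minimal sketch that extends the circle constraint from the earlier example to return its gradients (the name circlecongrad is illustrative):

function [c,ceq,GC,GCeq] = circlecongrad(x)
c = (x(1)-1/3)^2 + (x(2)-1/3)^2 - (1/3)^2;   % nonlinear inequality
ceq = [];                                     % no nonlinear equalities
if nargout > 2                                % fmincon requested gradients
    GC = [2*(x(1)-1/3); 2*(x(2)-1/3)];        % one column per inequality constraint
    GCeq = [];
end
end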
Note
Because Optimization Toolbox™ functions accept only inputs of type
double, user-supplied objective and nonlinear
constraint functions must return outputs of type
double.
Data Types: char | function_handle | string
Optimization options, specified as the output of
optimoptions or a structure such as
optimset returns.
Some options apply to all algorithms, and others are relevant for particular algorithms. See Optimization Options Reference for detailed information.
Some options are absent from the
optimoptions display. These options appear in italics in the following
table. For details, see View Optimization Options.
All Algorithms

| Option | Description |
|---|---|
| Algorithm | Choose the optimization algorithm: 'interior-point' (default), 'trust-region-reflective', 'sqp', 'sqp-legacy', or 'active-set'. For information on choosing the algorithm, see Choosing the Algorithm. If you select the 'trust-region-reflective' algorithm, you must provide a gradient (see the description of fun), and the problem can have only bounds or only linear equality constraints, but not both. |
| CheckGradients | Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. Choices are false (default) or true. For optimset, the name is DerivativeCheck and the values are 'on' or 'off'. The CheckGradients option will be removed in a future release; use the checkGradients function instead. |
| ConstraintTolerance | Tolerance on the constraint violation, a nonnegative scalar. The default is 1e-6. For optimset, the name is TolCon. |
| Diagnostics | Display diagnostic information about the function to be minimized or solved. Choices are 'off' (default) or 'on'. |
| DiffMaxChange | Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf. |
| DiffMinChange | Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0. |
| Display | Level of display (see Iterative Display): 'off' or 'none' displays no output; 'iter' displays output at each iteration; 'iter-detailed' additionally gives the technical exit message; 'notify' displays output only if the function does not converge; 'notify-detailed' additionally gives the technical exit message; 'final' (default) displays only the final output; 'final-detailed' additionally gives the technical exit message. |
| FiniteDifferenceStepSize | Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, the forward finite differences delta are delta = v.*sign′(x).*max(abs(x),TypicalX), where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are delta = v.*max(abs(x),TypicalX). A scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences. For optimset, the name is FinDiffRelStep. |
| FiniteDifferenceType | Finite differences, used to estimate gradients, are either 'forward' (default) or 'central' (centered). 'central' takes twice as many function evaluations but is usually more accurate. For optimset, the name is FinDiffType. |
| FunValCheck | Check whether objective function values are valid. The default setting, 'off', does not perform a check. The 'on' setting displays an error when the objective function returns a value that is complex, Inf, or NaN. |
| MaxFunctionEvaluations | Maximum number of function evaluations allowed, a nonnegative integer. The default value for all algorithms except 'interior-point' is 100*numberOfVariables; for the 'interior-point' algorithm, the default is 3000. For optimset, the name is MaxFunEvals. |
| MaxIterations | Maximum number of iterations allowed, a nonnegative integer. The default value for all algorithms except 'interior-point' is 400; for the 'interior-point' algorithm, the default is 1000. For optimset, the name is MaxIter. |
| OptimalityTolerance | Termination tolerance on the first-order optimality (a nonnegative scalar). The default is 1e-6. For optimset, the name is TolFun. |
| OutputFcn | Specify one or more user-defined functions that an optimization function calls at each iteration. Pass a function handle or a cell array of function handles. The default is none ([]). See Output Function and Plot Function Syntax. |
| PlotFcn | Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a built-in plot function name, a function handle, or a cell array of built-in plot function names or function handles. For custom plot functions, pass function handles. The default is none ([]). Custom plot functions use the same syntax as output functions. See Output Functions for Optimization Toolbox and Output Function and Plot Function Syntax. For optimset, the name is PlotFcns. |
| SpecifyConstraintGradient | Gradient for nonlinear constraint functions defined by the user. When set to the default, false, fmincon estimates gradients of the nonlinear constraints by finite differences. When set to true, fmincon expects the constraint function to supply gradients as described in nonlcon. For optimset, the name is GradConstr and the values are 'on' or 'off'. |
| SpecifyObjectiveGradient | Gradient for the objective function defined by the user. See the description of fun to see how to define the gradient. When set to the default, false, fmincon estimates gradients by finite differences. Set to true to have fmincon use a user-defined gradient of the objective function. For optimset, the name is GradObj and the values are 'on' or 'off'. |
| StepTolerance | Termination tolerance on x, a nonnegative scalar. The default is 1e-10 for the 'interior-point' algorithm and 1e-6 for the other algorithms. For optimset, the name is TolX. |
| TypicalX | Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1). fmincon uses TypicalX to scale finite differences for gradient estimation. |
| UseParallel | When true, fmincon estimates gradients in parallel. Disable by setting to the default, false. See Using Parallel Computing in Optimization Toolbox. |
Trust-Region-Reflective Algorithm

| Option | Description |
|---|---|
| FunctionTolerance | Termination tolerance on the function value, a nonnegative scalar. The default is 1e-6. For optimset, the name is TolFun. |
| HessianFcn | If [] (default), fmincon approximates the Hessian using finite differences of the gradient. If 'objective', fmincon uses a user-defined Hessian, returned as the third output of the objective function. For optimset, the name is HessFcn. |
| HessianMultiplyFcn | Hessian multiply function, specified as a function handle. For large-scale structured problems, this function computes the Hessian matrix product H*Y without actually forming H. The function has the form W = hmfun(Hinfo,Y), where Hinfo contains the data used to compute H*Y. The first argument is the same as the third argument returned by the objective function, for example [f,g,Hinfo] = fun(x). Y is a matrix with the same number of rows as there are variables in the problem, and W = H*Y, although H is not formed explicitly. Note: to use the HessianMultiplyFcn option, HessianFcn must be set to []. See Hessian Multiply Function, and see Minimization with Dense Structured Hessian, Linear Equalities for an example. For optimset, the name is HessMult. |
| HessPattern | Sparsity pattern of the Hessian for finite differencing. Set HessPattern(i,j) = 1 when you can have a nonzero second partial derivative of fun with respect to x(i) and x(j); otherwise, set HessPattern(i,j) = 0. Use HessPattern when it is inconvenient to compute the Hessian in fun but its sparsity structure is known. When the structure is unknown, do not set HessPattern; the default behavior is as if HessPattern is a dense matrix of ones. |
| MaxPCGIter | Maximum number of preconditioned conjugate gradient (PCG) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)). |
| PrecondBandWidth | Upper bandwidth of preconditioner for PCG, a nonnegative integer. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting PrecondBandWidth to Inf uses a direct factorization (Cholesky) rather than the conjugate gradients (CG). |
| SubproblemAlgorithm | Determines how the iteration step is calculated. The default, 'cg', takes a faster but less accurate step than 'factorization'. See fmincon Trust Region Reflective Algorithm. |
| TolPCG | Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1. |
Active-Set Algorithm

| Option | Description |
|---|---|
| FunctionTolerance | Termination tolerance on the function value, a nonnegative scalar. The default is 1e-6. For optimset, the name is TolFun. |
| MaxSQPIter | Maximum number of SQP iterations allowed, a positive integer. The default is 10*max(numberOfVariables, numberOfInequalities + numberOfBounds). |
| RelLineSrchBnd | Relative bound (a real nonnegative scalar value) on the line search step length. The total displacement in x satisfies |Δx(i)| ≤ relLineSrchBnd·max(|x(i)|,|typicalx(i)|). This option provides control over the magnitude of the displacements in x for cases in which the solver takes steps that are considered too large. The default is no bounds ([]). |
| RelLineSrchBndDuration | Number of iterations for which the bound specified in RelLineSrchBnd should be active. The default is 1. |
| TolConSQP | Termination tolerance on inner iteration SQP constraint violation, a positive scalar. The default is 1e-6. |
Interior-Point Algorithm

| Option | Description |
|---|---|
| BarrierParamUpdate | Specifies how fmincon updates the barrier parameter: 'monotone' (default) or 'predictor-corrector'. This option can affect the speed and convergence of the solver, but the effect is not easy to predict. |
| EnableFeasibilityMode | When true, fmincon uses a different algorithm for achieving feasibility; the default is false. Feasibility mode usually performs better when SubproblemAlgorithm is 'cg'. See Feasibility Mode. |
| HessianApproximation | Specifies how fmincon calculates the Hessian. The choices are 'bfgs' (default), 'lbfgs', {'lbfgs',positive integer}, and 'finite-difference'; see Hessian as an Input. Note: to use HessianApproximation, both HessianFcn and HessianMultiplyFcn must be empty entries ([]). For optimset, the name is Hessian. |
| HessianFcn | If [] (default), fmincon approximates the Hessian using the method specified in HessianApproximation. If a function handle, fmincon uses this function to calculate the Hessian of the Lagrangian; see Hessian as an Input. For optimset, the name is HessFcn. |
| HessianMultiplyFcn | User-supplied function that gives a Hessian-times-vector product (see Hessian Multiply Function). Pass a function handle. Note: to use the HessianMultiplyFcn option, HessianFcn must be [] and SubproblemAlgorithm must be 'cg'. For optimset, the name is HessMult. |
| HonorBounds | The default, true, ensures that bound constraints are satisfied at every iteration. Disable by setting to false. For optimset, the name is AlwaysHonorConstraints and the values are 'bounds' or 'none'. |
| InitBarrierParam | Initial barrier value, a positive scalar. Sometimes it might help to try a value above the default 0.1, especially if the objective or constraint functions are large. |
| InitTrustRegionRadius | Initial radius of the trust region, a positive scalar. On badly scaled problems it might help to choose a value smaller than the default sqrt(n), where n is the number of variables. |
| MaxProjCGIter | A tolerance (stopping criterion) for the number of projected conjugate gradient iterations; this is an inner iteration, not the number of iterations of the algorithm. This positive integer has a default value of 2*(numberOfVariables - numberOfEqualities). |
| ObjectiveLimit | A tolerance (stopping criterion) that is a scalar. If the objective function value goes below ObjectiveLimit and the iterate is feasible, the iterations halt, because the problem is presumably unbounded. The default is -1e20. |
| ScaleProblem | When true, the algorithm normalizes all constraints and the objective function. The default is false. For optimset, the values are 'obj-and-constr' or 'none'. |
| SubproblemAlgorithm | Determines how the iteration step is calculated. The default, 'factorization', is usually faster than 'cg' (conjugate gradient), though 'cg' can be faster for large problems with dense Hessians. See fmincon Interior Point Algorithm. |
| TolProjCG | A relative tolerance (stopping criterion) for the projected conjugate gradient algorithm; this is for an inner iteration, not the algorithm iteration. This positive scalar has a default of 0.01. |
| TolProjCGAbs | Absolute tolerance (stopping criterion) for the projected conjugate gradient algorithm; this is for an inner iteration, not the algorithm iteration. This positive scalar has a default of 1e-10. |
SQP and SQP Legacy Algorithms

| Option | Description |
|---|---|
| ObjectiveLimit | A tolerance (stopping criterion) that is a scalar. If the objective function value goes below ObjectiveLimit and the iterate is feasible, the iterations halt, because the problem is presumably unbounded. The default is -1e20. |
| ScaleProblem | When true, the algorithm normalizes all constraints and the objective function. The default is false. For optimset, the values are 'obj-and-constr' or 'none'. |
| UseCodegenSolver | Indication to use the version of the software that runs on target hardware, specified as false (default) or true. Set to true to have fmincon run the same code that code generation creates, so that you can check solver behavior before generating or deploying code. |
Single-Precision Code Generation

| Option | Description |
|---|---|
| Algorithm | Must be 'sqp'. |
| ConstraintTolerance | Tolerance on the constraint violation, a nonnegative scalar. |
| FiniteDifferenceStepSize | Scalar or vector step size factor for finite differences. When you set FiniteDifferenceStepSize to a vector v, the forward finite differences delta are delta = v.*sign′(x).*max(abs(x),TypicalX), where sign′(x) = sign(x) except sign′(0) = 1. Central finite differences are delta = v.*max(abs(x),TypicalX). A scalar FiniteDifferenceStepSize expands to a vector. The default is sqrt(eps('single')) for forward finite differences, and eps('single')^(1/3) for central finite differences. |
| FiniteDifferenceType | Finite differences, used to estimate gradients, are either 'forward' (default) or 'central' (centered). 'central' takes twice as many function evaluations but is usually more accurate. |
| MaxFunctionEvaluations | Maximum number of function evaluations allowed, a nonnegative integer. |
| MaxIterations | Maximum number of iterations allowed, a nonnegative integer. |
| ObjectiveLimit | A tolerance (stopping criterion) that is a scalar. If the objective function value goes below ObjectiveLimit and the iterate is feasible, the iterations halt. |
| OptimalityTolerance | Termination tolerance on the first-order optimality (a nonnegative scalar). |
| ScaleProblem | When true, the algorithm normalizes all constraints and the objective function. |
| SpecifyConstraintGradient | Gradient for nonlinear constraint functions defined by the user. When set to the default, false, fmincon estimates gradients of the nonlinear constraints by finite differences. |
| SpecifyObjectiveGradient | Gradient for the objective function defined by the user. See the description of fun. |
| StepTolerance | Termination tolerance on x, a nonnegative scalar. |
| TypicalX | Typical x values. |
| UseCodegenSolver | Indication to use the version of the software that runs on target hardware, specified as false (default) or true. |
Example: options =
optimoptions('fmincon','SpecifyObjectiveGradient',true,'SpecifyConstraintGradient',true)
Problem structure, specified as a structure with the following fields:
| Field Name | Entry |
|---|---|
| objective | Objective function |
| x0 | Initial point for x |
| Aineq | Matrix for linear inequality constraints |
| bineq | Vector for linear inequality constraints |
| Aeq | Matrix for linear equality constraints |
| beq | Vector for linear equality constraints |
| lb | Vector of lower bounds |
| ub | Vector of upper bounds |
| nonlcon | Nonlinear constraint function |
| solver | 'fmincon' |
| options | Options created with optimoptions |
You must supply at least the objective, x0, solver,
and options fields in the problem structure.
Data Types: struct
Output Arguments
Solution, returned as a real vector or real array. The size
of x is the same as the size of x0.
Typically, x is a local solution to the problem
when exitflag is positive. For information on
the quality of the solution, see When the Solver Succeeds.
Objective function value at the solution, returned as a real
number. Generally, fval = fun(x).
Reason fmincon stopped, returned as an
integer.
All Algorithms:

| Exit Flag | Meaning |
|---|---|
| 1 | First-order optimality measure was less than options.OptimalityTolerance, and maximum constraint violation was less than options.ConstraintTolerance. |
| 0 | Number of iterations exceeded options.MaxIterations or number of function evaluations exceeded options.MaxFunctionEvaluations. |
| -1 | Stopped by an output function or plot function. |
| -2 | No feasible point was found. |

All algorithms except 'active-set':

| Exit Flag | Meaning |
|---|---|
| 2 | Change in x was less than options.StepTolerance and maximum constraint violation was less than options.ConstraintTolerance. |

'trust-region-reflective' algorithm only:

| Exit Flag | Meaning |
|---|---|
| 3 | Change in the objective function value was less than options.FunctionTolerance and maximum constraint violation was less than options.ConstraintTolerance. |

'active-set' algorithm only:

| Exit Flag | Meaning |
|---|---|
| 4 | Magnitude of the search direction was less than 2*options.StepTolerance and maximum constraint violation was less than options.ConstraintTolerance. |
| 5 | Magnitude of directional derivative in search direction was less than 2*options.OptimalityTolerance and maximum constraint violation was less than options.ConstraintTolerance. |

'interior-point', 'sqp-legacy', and 'sqp' algorithms:

| Exit Flag | Meaning |
|---|---|
| -3 | Objective function at current iteration went below options.ObjectiveLimit and maximum constraint violation was less than options.ConstraintTolerance. |
Information about the optimization process, returned as a structure with fields:
| Field | Meaning |
|---|---|
| iterations | Number of iterations taken |
| funcCount | Number of function evaluations |
| lssteplength | Size of line search step relative to search direction ('active-set' and 'sqp' algorithms only) |
| constrviolation | Maximum of constraint functions |
| stepsize | Length of last displacement in x |
| algorithm | Optimization algorithm used |
| cgiterations | Total number of PCG iterations ('trust-region-reflective' and 'interior-point' algorithms) |
| firstorderopt | Measure of first-order optimality |
| bestfeasible | Best (lowest objective function) feasible point encountered during the iterations. A structure with the fields x, fval, firstorderopt, and constrviolation. If no feasible point is found, the bestfeasible field is empty. For this purpose, a point is feasible when the maximum of the constraint functions does not exceed options.ConstraintTolerance. The bestfeasible point can differ from the returned solution x, for example when the returned solution point is infeasible. |
| message | Exit message |
Gradient at the solution, returned as a real vector. grad gives
the gradient of fun at the point x(:).
Approximate Hessian, returned as a real matrix. For the meaning
of hessian, see Hessian Output.
Limitations
- fmincon is a gradient-based method that is designed to work on problems where the objective and constraint functions are both continuous and have continuous first derivatives.
- For the 'trust-region-reflective' algorithm, you must provide the gradient in fun and set the 'SpecifyObjectiveGradient' option to true.
- The 'trust-region-reflective' algorithm does not allow equal upper and lower bounds. For example, if lb(2)==ub(2), fmincon gives this error: Equal upper and lower bounds not permitted in trust-region-reflective algorithm. Use either interior-point or SQP algorithms instead.
- There are two different syntaxes for passing a Hessian, and there are two different syntaxes for passing a HessianMultiplyFcn function; one for trust-region-reflective, and another for interior-point. See Including Hessians.
  - For trust-region-reflective, the Hessian of the Lagrangian is the same as the Hessian of the objective function. You pass that Hessian as the third output of the objective function.
  - For interior-point, the Hessian of the Lagrangian involves the Lagrange multipliers and the Hessians of the nonlinear constraint functions. You pass the Hessian as a separate function that takes into account both the current point x and the Lagrange multiplier structure lambda.
- When the problem is infeasible, fmincon attempts to minimize the maximum constraint value.
More About
fmincon uses a Hessian as an optional input. This Hessian is the matrix of second derivatives of the Lagrangian (see Equation 1), namely,

$$\nabla_{xx}^2 L(x,\lambda) = \nabla^2 f(x) + \sum_i \lambda_i \nabla^2 c_i(x) + \sum_i \lambda_i \nabla^2 ceq_i(x). \tag{3}$$
For details of how to supply a Hessian to the trust-region-reflective or interior-point algorithms,
see Including Hessians.
The active-set and sqp algorithms
do not accept an input Hessian. They compute a quasi-Newton approximation
to the Hessian of the Lagrangian.
The interior-point algorithm has several choices for the
'HessianApproximation' option; see Choose Input Hessian Approximation for interior-point fmincon:
- 'bfgs' — fmincon calculates the Hessian by a dense quasi-Newton approximation. This is the default Hessian approximation.
- 'lbfgs' — fmincon calculates the Hessian by a limited-memory, large-scale quasi-Newton approximation. The default memory, 10 iterations, is used.
- {'lbfgs',positive integer} — fmincon calculates the Hessian by a limited-memory, large-scale quasi-Newton approximation. The positive integer specifies how many past iterations should be remembered.
- 'finite-difference' — fmincon calculates a Hessian-times-vector product by finite differences of the gradient(s). You must supply the gradient of the objective function, and also gradients of nonlinear constraints (if they exist). Set the 'SpecifyObjectiveGradient' option to true and, if applicable, the 'SpecifyConstraintGradient' option to true. You must set the 'SubproblemAlgorithm' to 'cg'.
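For instance, a minimal sketch selecting a limited-memory approximation that remembers 20 past iterations:

options = optimoptions('fmincon','Algorithm','interior-point', ...
    'HessianApproximation',{'lbfgs',20});   % keep 20 iterations of memory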
The interior-point and trust-region-reflective algorithms
allow you to supply a Hessian multiply function. This function gives
the result of a Hessian-times-vector product, without computing the
Hessian directly. This can save memory. For details, see Hessian Multiply Function.
The next few items list the possible enhanced exit messages from
fmincon. Enhanced exit messages give a link for more
information as the first sentence of the message.
The solver located a point that seems to be a local minimum, since the point is feasible (satisfies constraints within the ConstraintTolerance tolerance) and the first-order optimality measure is less than the OptimalityTolerance tolerance.
For suggestions on how to proceed, see When the Solver Succeeds.
The initial point seems to be a local minimum, since the point is feasible (satisfies constraints within the ConstraintTolerance tolerance), and the first-order optimality measure is less than the OptimalityTolerance tolerance.
For suggestions on how to proceed, see Final Point Equals Initial Point.
The solver may have reached a local minimum, but cannot be certain because the first-order optimality measure is not less than the OptimalityTolerance tolerance. The constraints are satisfied to within the ConstraintTolerance constraint tolerance.
For suggestions on how to proceed, see Local Minimum Possible.
fmincon converged to a point that does not satisfy all
constraints to within the constraint tolerance called ConstraintTolerance. The reason
fmincon stopped is that the last step was too small.
When the relative step size goes below the StepTolerance tolerance, then
the iterations end.
For suggestions on how to proceed, see Converged to an Infeasible Point.
The solver stopped because it reached a limit on the number of iterations or function evaluations before it minimized the objective to the requested tolerance.
For suggestions on how to proceed, see Too Many Iterations or Function Evaluations.
The solver reached a feasible point whose objective function value was less
than or equal to the ObjectiveLimit
tolerance. The problem
is unbounded, or poorly scaled, or the ObjectiveLimit option
is too high.
For suggestions on how to proceed, see Problem Unbounded.
fmincon encountered a feasible point with a lower objective
value than the final point. This includes the case where the final point is
infeasible, in which case the final objective function value is not relevant.
Feasible means that the maximum infeasibility is less than the
ConstraintTolerance option.
The best feasible point is in the bestfeasible field of the
output structure. For an
example, see Obtain Best Feasible Point.
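For instance, a minimal sketch (assuming problem data as in the earlier examples) that swaps in the best feasible point when it improves on the returned solution:

[x,fval,exitflag,output] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon);
if ~isempty(output.bestfeasible) && output.bestfeasible.fval < fval
    x = output.bestfeasible.x;         % best feasible point encountered
    fval = output.bestfeasible.fval;   % and its objective function value
end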
The next few items contain definitions for terms in the fmincon exit messages.
A local minimum of a function is a point where the function value is smaller than at nearby points, but possibly greater than at a distant point.
A global minimum is a point where the function value is smaller than at all other feasible points.

Solvers try to find a local minimum. The result can be a global minimum. For more information, see Local vs. Global Optima.
Generally, a tolerance is a threshold which, if crossed, stops the iterations of a solver. For more information on tolerances, see Tolerances and Stopping Criteria.
The constraint tolerance called ConstraintTolerance is an upper bound on the magnitude of the constraint violation, which is the maximum of the values of all constraint functions at the current point.
ConstraintTolerance operates differently from other tolerances.
If ConstraintTolerance is not satisfied (i.e., if the magnitude
of the constraint function exceeds ConstraintTolerance), the
solver attempts to continue, unless it is halted for another reason. A solver does
not halt simply because ConstraintTolerance is satisfied.
The constraint violation is the maximum of the values of all constraint functions
at the current point. This is measured against the tolerance called
ConstraintTolerance.
ConstraintTolerance operates differently from other tolerances.
If ConstraintTolerance is not satisfied (i.e., if the magnitude
of the constraint function exceeds ConstraintTolerance), the
solver attempts to continue, unless it is halted for another reason. A solver does
not halt simply because ConstraintTolerance is satisfied.
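For instance, one unscaled way to compute this quantity at a point x, assuming nonempty constraint arrays A, b, Aeq, beq, lb, ub, and a nonlcon function as used throughout this page (fmincon can apply internal scaling, so this only approximates output.constrviolation):

[c,ceq] = nonlcon(x);                           % nonlinear constraint values
violation = max([0; c(:); abs(ceq(:)); ...      % nonlinear inequality and equality parts
    A*x(:) - b(:); abs(Aeq*x(:) - beq(:)); ...  % linear parts
    lb(:) - x(:); x(:) - ub(:)])                % bound parts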
Feasible directions are those vectors from the current point that locally satisfy the constraints. They either point to the interior of the region where the constraints are satisfied, or are tangent to the boundary of binding constraints.
The first-order optimality measure for constrained problems is the maximum of the following two quantities: the infinity norm of the gradient of the Lagrangian,

$$\left\|\nabla_x L(x,\lambda)\right\|_\infty = \left\|\nabla f(x) + \sum_i \lambda_{g,i}\,\nabla g_i(x) + \sum_i \lambda_{h,i}\,\nabla h_i(x)\right\|_\infty,$$

and the maximum complementarity measure,

$$\max_i \left|\lambda_{g,i}\, g_i(x)\right|,$$

where g(x) denotes the inequality constraints, h(x) the equality constraints, and λ the Lagrange multipliers.
For unconstrained problems, it is the maximum of the absolute value of the components of the gradient vector (also known as the infinity norm).
This should be zero at a minimizing point.
For more information, including definitions of all the variables in these equations, see First-Order Optimality Measure.
The tolerance called OptimalityTolerance relates to the
first-order optimality measure. Iterations end when the first-order optimality
measure is less than OptimalityTolerance. For more information,
see First-Order Optimality Measure.
The predicted change in objective function is the amount the solver estimates the objective function would decrease if the current point were moved along the estimated best search direction. This estimated decrease is the inner product of the gradient of the objective at the current point with the search direction, times the step length. Optimization Toolbox solvers compute search directions via various algorithms, described in Constrained Nonlinear Optimization Algorithms.
An output function (or plot function) is evaluated once per iteration of a solver. It can report many optimization quantities during the course of a solver's progress, and can halt the solver.
For more information, see Output Functions for Optimization Toolbox or Plot Functions.
MaxIterations is a tolerance on the number of iterations the solver performs. When the solver has taken MaxIterations iterations, the iterations end.
For more information, see Iterations and Function Counts or Tolerances and Stopping Criteria.
MaxFunctionEvaluations is a tolerance on the number of points where the solver evaluates the objective and/or constraint functions. When the solver has evaluated functions at MaxFunctionEvaluations points, the iterations end.
For more information, see Iterations and Function Counts or Tolerances and Stopping Criteria.
The solver reached a feasible point whose objective function value was less
than or equal to the ObjectiveLimit
tolerance. The problem
is unbounded, or poorly scaled, or the ObjectiveLimit option
is too high.
For suggestions on how to proceed, see Problem Unbounded.
MaxSQPIter is a tolerance on the number of
sequential quadratic programming subproblem iterations the solver performs. When the
solver has taken MaxSQPIter iterations for the subproblem, the
subproblem iterations end.
For more information, see Sequential Quadratic Programming (SQP).
Relative changes in all elements of x is the normalized step vector. This vector is the change in location where the objective function was evaluated, divided by the infinity norm of the current position. If the maximum of this relative norm goes below the StepTolerance tolerance, then the iterations end.
The size of the current step is the norm of the change in location where the
objective function was evaluated. In this case, fmincon uses a
relative size: the step size divided by the infinity norm of the current position.
When this relative step size goes below the StepTolerance
tolerance, then the
iterations end.
StepTolerance is a tolerance for the size of
the last step, meaning the size of the change in location where the objective
function was evaluated.

The constraint violations are the constraint functions that are not satisfied at the current point. The norm of the gradient of these functions is so small that the solver could not proceed. The current point is not feasible (some constraint violation exceeds the ConstraintTolerance tolerance).
For suggestions on how to proceed, see Converged to an Infeasible Point.
The search direction is the vector from the current point along which the solver looks for an improvement. The norm of this direction is the infinity norm, the maximum of the absolute values of the components of the search vector.
Optimization Toolbox solvers compute search directions via various algorithms, described in Constrained Nonlinear Optimization Algorithms.
fmincon estimates gradients of objective and nonlinear
constraint functions by taking finite differences. A finite difference calculation
stepped outside the region where a function is well-defined, returning
Inf, NaN, or a complex result.
For more information about how solvers compute and use gradients, see Constrained Nonlinear Optimization Algorithms. For suggestions on how to proceed, see 6. Provide Gradient or Jacobian.
The fmincon
"interior-point" algorithm can search for a feasible point using
a specialized algorithm. Enable this search by setting the
EnableFeasibilityMode option to true using
optimoptions. For added efficiency with difficult problems,
set the SubproblemAlgorithm option to
"cg":
options = optimoptions("fmincon",... Algorithm="interior-point",... EnableFeasibilityMode=true,... SubproblemAlgorithm="cg");
For details of the EnableFeasibilityMode algorithm, see Feasibility Mode.
Algorithms
For help choosing the algorithm, see fmincon Algorithms. To set the algorithm, use optimoptions to create options, and use the
'Algorithm' name-value pair.
The rest of this section gives brief summaries or pointers to information about each algorithm.
This algorithm is described in fmincon Interior Point Algorithm. There is more extensive description in [1], [2], and [9].
The fmincon 'sqp' and 'sqp-legacy' algorithms are similar to the 'active-set' algorithm described in Active-Set Optimization. fmincon SQP Algorithm describes the main differences. In summary, these differences are:
- Strict feasibility with respect to bounds
- Robustness to non-double results
- Refactored linear algebra routines
- Reworked linesearch routines
fmincon uses a sequential quadratic programming (SQP) method. In this
method, the function solves a quadratic
programming (QP) subproblem at each iteration. fmincon updates
an estimate of the Hessian of the Lagrangian at each iteration using
the BFGS formula (see fminunc and
references [7] and [8]).
fmincon performs a line search using a
merit function similar to that proposed by [6], [7], and [8]. The QP subproblem is solved using
an active set strategy similar to that described in [5]. fmincon Active Set Algorithm describes this algorithm in
detail.
See also SQP Implementation for more details on the algorithm used.
The 'trust-region-reflective' algorithm is
a subspace trust-region method and is based on the interior-reflective
Newton method described in [3] and [4]. Each iteration involves the approximate
solution of a large linear system using the method of preconditioned
conjugate gradients (PCG). See the trust-region and preconditioned
conjugate gradient method descriptions in fmincon Trust Region Reflective Algorithm.
Alternative Functionality
App
The Optimize Live Editor task provides a visual interface for fmincon.
References
[1] Byrd, R. H., J. C. Gilbert, and J. Nocedal. “A Trust Region Method Based on Interior Point Techniques for Nonlinear Programming.” Mathematical Programming, Vol 89, No. 1, 2000, pp. 149–185.
[2] Byrd, R. H., Mary E. Hribar, and Jorge Nocedal. “An Interior Point Algorithm for Large-Scale Nonlinear Programming.” SIAM Journal on Optimization, Vol 9, No. 4, 1999, pp. 877–900.
[3] Coleman, T. F. and Y. Li. “An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds.” SIAM Journal on Optimization, Vol. 6, 1996, pp. 418–445.
[4] Coleman, T. F. and Y. Li. “On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds.” Mathematical Programming, Vol. 67, Number 2, 1994, pp. 189–224.
[5] Gill, P. E., W. Murray, and M. H. Wright. Practical Optimization, London, Academic Press, 1981.
[6] Han, S. P. “A Globally Convergent Method for Nonlinear Programming.” Journal of Optimization Theory and Applications, Vol. 22, 1977, pp. 297.
[7] Powell, M. J. D. “A Fast Algorithm for Nonlinearly Constrained Optimization Calculations.” Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics, Springer-Verlag, Vol. 630, 1978.
[8] Powell, M. J. D. “The Convergence of Variable Metric Methods For Nonlinearly Constrained Optimization Calculations.” Nonlinear Programming 3 (O. L. Mangasarian, R. R. Meyer, and S. M. Robinson, eds.), Academic Press, 1978.
[9] Waltz, R. A., J. L. Morales, J. Nocedal, and D. Orban. “An interior algorithm for nonlinear optimization that combines line search and trust region steps.” Mathematical Programming, Vol 107, No. 3, 2006, pp. 391–408.
Extended Capabilities
Usage notes and limitations:
- fmincon supports code generation using either the codegen (MATLAB Coder) function or the MATLAB Coder™ app. You must have a MATLAB Coder license to generate code.
- The target hardware must support standard double-precision floating-point computations or standard single-precision floating-point computations.
Code generation targets do not use the same math kernel libraries as MATLAB solvers. Therefore, code generation solutions can vary from solver solutions, especially for poorly conditioned problems.
- To test your code in MATLAB before generating code, set the UseCodegenSolver option to true. That way, the solver uses the same code that code generation creates.
- All code for generation must be MATLAB code. In particular, you cannot use a custom black-box function as an objective function for fmincon. You can use coder.ceval to evaluate a custom function coded in C or C++. However, the custom function must be called in a MATLAB function.
- fmincon does not support the problem argument for code generation.
  [x,fval] = fmincon(problem) % Not supported
- You must specify the objective function and any nonlinear constraint function by using function handles, not strings or character names.
  x = fmincon(@fun,x0,A,b,Aeq,beq,lb,ub,@nonlcon) % Supported
  % Not supported: fmincon('fun',...) or fmincon("fun",...)
- All fmincon input matrices such as A, Aeq, lb, and ub must be full, not sparse. You can convert sparse matrices to full by using the full function.
- The lb and ub arguments must have the same number of entries as the x0 argument or must be empty [].
- If your target hardware does not support infinite bounds, use optim.coder.infbound.
- For advanced code optimization involving embedded processors, you also need an Embedded Coder® license.
- You must include options for fmincon and specify them using optimoptions. The options must include the Algorithm option, set to 'sqp' or 'sqp-legacy'.
  options = optimoptions("fmincon",Algorithm="sqp");
  [x,fval,exitflag] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options);
- Code generation supports these options:
  - Algorithm — Must be 'sqp' or 'sqp-legacy'
  - ConstraintTolerance
  - FiniteDifferenceStepSize
  - FiniteDifferenceType
  - MaxFunctionEvaluations
  - MaxIterations
  - ObjectiveLimit
  - OptimalityTolerance
  - ScaleProblem
  - SpecifyConstraintGradient
  - SpecifyObjectiveGradient
  - StepTolerance
  - TypicalX
  - UseCodegenSolver
- Generated code has limited error checking for options. The recommended way to update an option is to use optimoptions, not dot notation.
  opts = optimoptions('fmincon','Algorithm','sqp');
  opts = optimoptions(opts,'MaxIterations',1e4); % Recommended
  opts.MaxIterations = 1e4; % Not recommended
Do not load options from a file. Doing so can cause code generation to fail. Instead, create options in your code.
Usually, if you specify an option that is not supported, the option is silently ignored during code generation. However, if you specify a plot function or output function by using dot notation, code generation can issue an error. For reliability, specify only supported options.
- Because output functions and plot functions are not supported, fmincon does not return the exit flag –1.
- Code generated from fmincon does not contain the bestfeasible field in a returned output structure.
For an example, see Code Generation for Optimization Basics.
To run in parallel, set the 'UseParallel' option to true.
options = optimoptions('solvername','UseParallel',true)
For more information, see Using Parallel Computing in Optimization Toolbox.
Version History
Introduced before R2006a
Set the new UseCodegenSolver option to true to have
fmincon use the same version of the software that code
generation creates. This option allows you to check the behavior of the solver before you
generate code or deploy the code to hardware. For solvers that support single-precision code
generation, the generated code can also support single-precision hardware. You can include
the option when you generate code; the option has no effect in code generation, but leaving
the option in saves you the step of removing it. Even though the generated code is identical
to the MATLAB code, results can differ slightly because linked math libraries can
differ.
You can generate code for fmincon on single-precision
floating point hardware. For instructions, see Single-Precision Code Generation.
The CheckGradients option will be removed in a future release. To check the first derivatives of objective functions or nonlinear constraint functions, use the checkGradients function.