Practical Methods of Optimization

Could anyone share their MATLAB codes or best practices for these specific methods? I am particularly interested in how you handle the "narrow valley" convergence issue in the Coordinate Search method.
Codes for methods of optimization
One-Dimensional Methods
  • Fibonacci Search Method
  • Golden Section Search
  • Dichotomous Search
  • Newton’s Method
  • Secant Method
  • Quadratic Interpolation Method
Multi-Dimensional Methods
  • Univariate Method
  • Hooke-Jeeves Pattern Search
  • Nelder-Mead Simplex Method
  • Rosenbrock Method
  • Powell’s Conjugate Direction Method
Gradient-Based Methods
One-Dimensional Gradient:
  • Steepest Descent Line Search
  • Newton-Raphson Method
Multi-Dimensional Gradient:
  • Steepest Descent Method
  • Fletcher-Reeves Conjugate Gradient Method
  • Polak-Ribière Conjugate Gradient Method
  • Newton’s Method in Optimization
  • Davidon-Fletcher-Powell Method
  • Broyden-Fletcher-Goldfarb-Shanno Method
  • Sequential Quadratic Programming
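For the one-dimensional bracketing methods above, the core idea is the same: place two interior points at fixed ratios so that one function evaluation can be reused each iteration. Here is a minimal sketch of Golden Section Search (written in Python for compactness rather than MATLAB; the test functions, interval, and tolerance below are example values, not from this thread):

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    x1 = b - invphi * (b - a)                # lower interior point
    x2 = a + invphi * (b - a)                # upper interior point
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                          # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1           # reuse x1 as the new upper point
            x1 = b - invphi * (b - a)
            f1 = f(x1)
        else:                                # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2           # reuse x2 as the new lower point
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

# To maximize (as in the Fibonacci example below), minimize the negative:
x_max = golden_section(lambda x: -(2*math.sin(x) - x**2/10), 0.0, 4.0)
```

Only one new evaluation per iteration is needed because the golden ratio makes the surviving interior point land exactly where the next iteration expects it.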

10 comments

Mark · about 1 hour ago
Fibonacci Method
% Fibonacci search for the maximum of y on [xd, xg]
clc;
clear;
y = @(x) 2*sin(x) - x.^2/10;

% 1. Interval and precision
xd = 0;
xg = 4;
e = 1e-2;

% 2. Fibonacci sequence (must exist before the loop)
m = 25;
F = zeros(1, m);
F(2) = 1;
for k = 3:m
    F(k) = F(k-1) + F(k-2);
end

% 3. Number of steps n: smallest index with F(n) > (xg - xd)/e
Fn = (xg - xd)/e;
P = 1:m;
n_indices = P(F > Fn);
n = n_indices(1);

% 4. Main loop
for i = n:-1:3
    d = xg - xd;                      % current interval width
    % Interior points from Fibonacci ratios
    x1 = xd + F(i-2)/F(i) * d;
    x2 = xd + F(i-1)/F(i) * d;
    y1 = y(x1);
    y2 = y(x2);
    % Keep the subinterval containing the larger value (maximization)
    if y2 > y1
        xd = x1;
        xe = x2;  ye = y2;
    else
        xg = x2;
        xe = x1;  ye = y1;
    end
    fprintf('i=%2.0f, ye=%.8f\n', i, ye);
end
% Final result
fprintf('\nFinal maximum: x = %.6f, y = %.8f\n', xe, ye);
Mark · about 1 hour ago · edited 44 minutes ago
% Polynomial Interpolation
% Quadratic Interpolation Method
clc
clear
close all
f = @(x) 2*sin(x) - x.^2/10;
% Three initial bracket points
x1 = 0;
x2 = 1;
x3 = 4;
e = 1;
i = 0;
figure(1)
X = linspace(-1, 4, 200);       % slightly wider range for plotting
Y = f(X);
plot(X, Y, 'b', 'LineWidth', 2.5)
grid on
hold on
ylim([min(Y)-1, max(Y)+1])      % keeps the focus on the function
while e > 1e-5 && i < 20        % safety guard on the iteration count
    i = i + 1;
    f1 = f(x1);
    f2 = f(x2);
    f3 = f(x3);
    % Fit a quadratic a*x^2 + b*x + c through the three points
    A = [x1^2, x1, 1;
         x2^2, x2, 1;
         x3^2, x3, 1];
    C = [f1; f2; f3];
    B = A \ C;
    a = B(1);
    b = B(2);
    c = B(3);
    x4 = -b / (2*a);            % vertex of the fitted parabola
    f4 = f(x4);
    fprintf('i=%.0f, x=%.6f, f(x)=%.6f\n', i, x4, f4)
    % Draw the current parabola
    yp = a*X.^2 + b*X + c;
    plot(X, yp, '--r', 'LineWidth', 1);
    pause(0.5)                  % pause to watch the search
    e = abs(x4 - x2);
    % Replacement logic (keeps the bracket around the extremum)
    if x4 > x2
        if f4 > f2
            x1 = x2;
            x2 = x4;
        else
            x3 = x4;
        end
    else
        if f4 > f2
            x3 = x2;
            x2 = x4;
        else
            x1 = x4;
        end
    end
end
% Mark the final result
plot(x4, f4, 'ro', 'MarkerSize', 10, 'MarkerFaceColor', 'g')
title(['Optimization finished in ', num2str(i), ' iterations'])
hold off
Star Strider · 34 minutes ago
There are a number of others. See the Global Optimization Toolbox for many of them.
Gradient descent methods are extremely sensitive to the initial parameter estimates and can get trapped in local minima. The more robust global methods search the entire parameter space for the best candidates; the gradient-based methods can then 'fine tune' those initial results.
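To make the local-trap point concrete, here is a small Python sketch (the double-well example function, step size, and restart points are my own illustration, not from this thread): plain gradient descent started in the wrong basin settles into the local minimum, while restarting from several points recovers the global one.

```python
def steepest_descent(grad, x0, lr=0.05, iters=500):
    """Fixed-step gradient descent from a single starting point."""
    x = x0
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

def multistart(f, grad, starts):
    """Run the local method from several starts and keep the best result."""
    return min((steepest_descent(grad, x0) for x0 in starts), key=f)

# Tilted double well: local minimum near x = +0.96,
# global minimum near x = -1.035.
f = lambda x: (x**2 - 1)**2 + 0.3*x
grad = lambda x: 4*x*(x**2 - 1) + 0.3

local_only = steepest_descent(grad, 0.9)                  # stuck in local basin
best = multistart(f, grad, [-2.0, -1.0, 0.0, 1.0, 2.0])   # finds global basin
```

The same idea underlies MultiStart-style strategies: the global stage only has to land a start in the right basin, after which the cheap local method does the fine-tuning.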
Mark · about 1 hour ago
Coordinate Search
clc;
clear;
% Objective (minimization)
y = @(X) X(1).^4 + X(1).^3 - X(1) + X(2).^4 - X(2).^2 + X(2) + X(3).^2 - X(3) + X(1)*X(2)*X(3);
X0 = [0; 0; 0];
d = 0.5;
k = 0;
while d > 1e-5
    k = k + 1;
    for i = 1:length(X0)
        D = zeros(length(X0), 1);
        D(i) = d;
        if y(X0+D) > y(X0)          % step forward is worse
            if y(X0-D) > y(X0)      % step backward is also worse
                d = d/2;
            else
                X0 = X0 - D;
            end
        else
            X0 = X0 + D;
        end
    end
    fprintf('k=%.0f, x1=%.6f, x2=%.6f, x3=%.6f\n', k, X0(1), X0(2), X0(3));
end
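On the "narrow valley" issue from the question: when the valley floor is not aligned with the coordinate axes, axis-by-axis moves zigzag across it and force the step size to shrink long before the minimum is reached. A small Python sketch of the same coordinate-search scheme (my own illustration; both test functions are invented) makes the cost visible by counting function evaluations:

```python
def coordinate_search(f, x0, step=0.5, tol=1e-5):
    """Minimize f by probing each coordinate +/- step, halving on failure.

    Returns the final point and the number of function evaluations, so the
    zigzag penalty on a diagonal valley can be measured directly.
    """
    x = list(x0)
    evals = 0
    fx = f(x); evals += 1
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = x[:]
                trial[i] += s
                ft = f(trial); evals += 1
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
        if not improved:
            step /= 2
    return x, evals

# Axis-aligned bowl: coordinate moves go straight to the minimum at (1, 1).
f_axis = lambda p: (p[0] - 1)**2 + (p[1] - 1)**2
# Same minimum, but inside a narrow valley along the diagonal x = y.
f_valley = lambda p: 100*(p[0] - p[1])**2 + (p[0] + p[1] - 2)**2

x_a, n_a = coordinate_search(f_axis, [0.0, 0.0])
x_v, n_v = coordinate_search(f_valley, [0.0, 0.0])
# n_v is far larger than n_a: the valley forces tiny alternating steps.
```

This is exactly why the pattern move in Hooke-Jeeves (next post) helps: the accumulated move X1 - X0 points roughly along the valley, so progress is no longer limited to axis directions.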
Hooke-Jeeves Pattern Search
clc
clear
z = @(X) (X(1)-1.15)^2 + (X(2)-0.98)^2;
X0 = [0; 0];
d = 0.2;
i = 1;
while d > 1e-5
    i = i + 1;
    % --- Exploratory move along x1 ---
    if z(X0) < z(X0+[d;0])
        if z(X0) < z(X0-[d;0])
            X1 = X0;
        else
            X1 = X0 - [d;0];
        end
    else
        X1 = X0 + [d;0];
    end
    % --- Exploratory move along x2 ---
    if z(X1) < z(X1+[0;d])
        if z(X1) < z(X1-[0;d])
            X1 = X1;            % stays put if both directions are worse
        else
            X1 = X1 - [0;d];
        end
    else
        X1 = X1 + [0;d];
    end
    % --- Pattern move ---
    X2 = X1 + (X1 - X0);
    % Exploratory move around the pattern point (x1 direction only here)
    if z(X2) < z(X2+[d;0])
        if z(X2) < z(X2-[d;0])
            X2 = X2;
        else
            X2 = X2 - [d;0];
        end
    else
        X2 = X2 + [d;0];
    end
    % Keep the best of pattern move and exploratory move, else shrink step
    if z(X2) < z(X0)
        X0 = X2;                % accept the pattern move
    elseif z(X1) < z(X0)
        X0 = X1;                % accept at least the exploratory move
    else
        d = d/2;                % nothing is better, reduce the step size
    end
    fprintf('Iter %d: x1=%.4f, x2=%.4f, f=%.4f, d=%.6f\n', i, X0(1), X0(2), z(X0), d);
end
Iter 2: x1=0.6000, x2=0.4000, f=0.6389, d=0.200000
Iter 3: x1=1.2000, x2=0.8000, f=0.0349, d=0.200000
Iter 4: x1=1.2000, x2=1.0000, f=0.0029, d=0.200000
Iter 5: x1=1.2000, x2=1.0000, f=0.0029, d=0.100000
Iter 6: x1=1.2000, x2=1.0000, f=0.0029, d=0.050000
Iter 7: x1=1.1500, x2=1.0000, f=0.0004, d=0.050000
Iter 8: x1=1.1500, x2=1.0000, f=0.0004, d=0.025000
Iter 9: x1=1.1500, x2=0.9750, f=0.0000, d=0.025000
Iter 10: x1=1.1500, x2=0.9750, f=0.0000, d=0.012500
Iter 11: x1=1.1500, x2=0.9750, f=0.0000, d=0.006250
Iter 12: x1=1.1500, x2=0.9813, f=0.0000, d=0.006250
Iter 13: x1=1.1500, x2=0.9813, f=0.0000, d=0.003125
Iter 14: x1=1.1500, x2=0.9813, f=0.0000, d=0.001563
Iter 15: x1=1.1500, x2=0.9797, f=0.0000, d=0.001563
Iter 16: x1=1.1500, x2=0.9797, f=0.0000, d=0.000781
Iter 17: x1=1.1500, x2=0.9797, f=0.0000, d=0.000391
Iter 18: x1=1.1500, x2=0.9801, f=0.0000, d=0.000391
Iter 19: x1=1.1500, x2=0.9801, f=0.0000, d=0.000195
Iter 20: x1=1.1500, x2=0.9801, f=0.0000, d=0.000098
Iter 21: x1=1.1500, x2=0.9800, f=0.0000, d=0.000098
Iter 22: x1=1.1500, x2=0.9800, f=0.0000, d=0.000049
Iter 23: x1=1.1500, x2=0.9800, f=0.0000, d=0.000024
Iter 24: x1=1.1500, x2=0.9800, f=0.0000, d=0.000024
Iter 25: x1=1.1500, x2=0.9800, f=0.0000, d=0.000012
Iter 26: x1=1.1500, x2=0.9800, f=0.0000, d=0.000006
Torsten · about 2 hours ago · edited about 2 hours ago
A large number of freely available codes on optimization are listed here:
Or you could visit File Exchange:
Mark · about 6 hours ago
Powell’s Conjugate Direction Method
z = @(A) (A(1) - 1.15).^2 + (A(2) - 0.98).^2;
A = [0; 0];
e1 = [1; 0];
e2 = [0; 1];
e = [e1 e2];    % current direction set
r = 1;
j = 0;
while r > 1e-5
    j = j + 1;
    C = A;
    % Line search along each direction in turn
    for i = 1:length(A)
        C = C + p(z, C, e(:,i)) * e(:,i);
    end
    e3 = C - A;                 % new conjugate direction
    D = C + p(z, C, e3) * e3;   % line search along it
    e = [e(:,2:end) e3];        % replace the oldest direction
    r = abs(z(A) - z(D));
    A = D;
    fprintf('j = %.0f, x = %.4f, y = %.4f, z = %.4f\n', j, A(1), A(2), z(A))
end
External function
function f = p(z, M, ei)
% Quadratic-interpolation line search: finds t minimizing z(M + t*ei)
e = 1;
p1 = -10;
p2 = 0;
p3 = 10;
while e > 1e-5
    A = [p1^2, p1, 1; p2^2, p2, 1; p3^2, p3, 1];
    C = [z(M+p1*ei); z(M+p2*ei); z(M+p3*ei)];
    B = A \ C;
    p4 = -B(2)/(2*B(1));    % vertex of the fitted parabola
    e = abs(p4 - p2);
    if p4 < p2
        p3 = p2;
    else
        p1 = p2;
    end
    p2 = p4;
end
f = p4;
end
Mark · about 2 hours ago · moved by Torsten 22 minutes ago
Cauchy's Steepest Descent Method
clc
clear
z = @(X) (2*X(1)-1.55)^2 + (X(2)-0.44)^2;
J = @(X) [4*(2*X(1)-1.55); 2*(X(2)-0.44)];
X1 = [0.23; -0.5];
lambda0 = 0.001;
i = 0;
r = 1;
while r > 0.01
    i = i + 1;
    e = J(X1);
    e = e/norm(e);          % normalize the gradient (max(e) can flip the sign)
    lambda1 = lambda(z, X1, e);
    X2 = X1 - lambda1*e;
    if i == 1
        lambda0 = lambda1;
    end
    r = abs(lambda1/lambda0);
    X1 = X2;
    fprintf('i=%.0f, X1=%.6f, X2=%.6f, z=%.6f\n', i, X2(1), X2(2), z(X2));
end
External function
function f = lambda(z, X1, e)
% Quadratic-interpolation line search for the step length along -e
r = 1;
p1 = 0;
p2 = 0.5;
p3 = 1;
while r > 1e-5
    A = [p1^2, p1, 1; p2^2, p2, 1; p3^2, p3, 1];
    C = [z(X1-p1*e); z(X1-p2*e); z(X1-p3*e)];
    B = A \ C;
    p4 = -B(2)/(2*B(1));
    r = abs(p4 - p2);
    if p4 < p2
        p3 = p2;
    else
        p1 = p2;
    end
    p2 = p4;
end
f = p4;
end
Mark · about 1 hour ago · moved by Torsten 22 minutes ago
Newton’s Method in Optimization
clc
clear
z = @(X) (2*X(1)-1.55)^2 + (X(2)-0.44)^2;
J = @(X) [4*(2*X(1)-1.55); 2*(X(2)-0.44)];
H = [8, 0; 0, 2];               % constant Hessian of this quadratic
X1 = [0.23; -0.5];
i = 0;
r = 1;
while i < 10 && r > 0.15
    i = i + 1;
    X2 = X1 - H\J(X1);          % Newton step
    fprintf('i=%.0f, X1=%.6f, X2=%.6f, z=%.6f\n', i, X2(1), X2(2), z(X2));
    X1 = X2;
    r = norm(J(X1));            % max(J) can be negative and exit too early
end
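Since the objective here is quadratic with a constant Hessian, a single Newton step lands exactly on the minimizer regardless of the starting point. A short Python check of that property (same objective, gradient, Hessian, and start as the post above):

```python
import numpy as np

# Same quadratic objective, gradient, and constant Hessian as in the post.
z = lambda X: (2*X[0] - 1.55)**2 + (X[1] - 0.44)**2
J = lambda X: np.array([4*(2*X[0] - 1.55), 2*(X[1] - 0.44)])
H = np.array([[8.0, 0.0],
              [0.0, 2.0]])

X = np.array([0.23, -0.5])
X = X - np.linalg.solve(H, J(X))   # one Newton step
# For a quadratic, the gradient is now zero: X = (0.775, 0.44).
```

On non-quadratic objectives the Hessian changes with X, so the iteration genuinely needs the loop and the convergence test.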
Torsten · 20 minutes ago
Please stop posting your code examples.
If you want to supply code that might help other users, use the MATLAB File Exchange.


Answers (0)


Asked: 2 May 2026 at 11:57
Last commented: about 7 hours ago
