Solving a problem with gradient descent

22 views (last 30 days)
Muhammad Kundi on 14 Oct 2019
Commented: saja mk on 17 Sep 2020
Hi,
I am trying to solve the following problem using the gradient descent method.
[Attached image Capture1.JPG: the problem statement]
I wrote the following code, but it gives an error.
function [xopt,fopt,niter,gnorm,dx] = grad_descent(varargin)
if nargin==0
% define starting point
x0 = [3 3]';
elseif nargin==1
% if a single input argument is provided, it is a user-defined starting
% point.
x0 = varargin{1};
else
error('Incorrect number of input arguments.')
end
% termination tolerance
tol = 1e-6;
% maximum number of allowed iterations
maxiter = 10;
% minimum allowed perturbation
dxmin = 1e-6;
% step size ( 0.33 causes instability, 0.2 quite accurate)
alpha = 0.1;
% initialize gradient norm, optimization vector, iteration counter, perturbation
gnorm = inf; x = x0; niter = 0; dx = inf;
% define the objective function:
f = @(x1,x2,x3) 4*[x1.^2 + x2-x3].^2 +10;
% plot objective function contours for visualization:
figure(1); clf; ezcontour(f,[-5 5 -5 5]); axis equal; hold on
% redefine objective function syntax for use with optimization:
f2 = @(x) f(x(1),x(2),x(3));
% gradient descent algorithm:
while and(gnorm>=tol, and(niter <= maxiter, dx >= dxmin))
% calculate gradient:
g = grad(x);
gnorm = norm(g);
% take step:
xnew = x - alpha*g;
% check step
if ~isfinite(xnew)
display(['Number of iterations: ' num2str(niter)])
error('x is inf or NaN')
end
% plot current point
plot([x(1) xnew(1)],[x(2) xnew(2)],'ko-')
refresh
% update termination metrics
niter = niter + 1;
dx = norm(xnew-x);
x = xnew;
end
xopt = x;
fopt = f2(xopt);
niter = niter - 1;
%define the gradient of the objective
% function g = grad(x)
% g = [2*x(1) + x(2)
% x(1) + 6*x(2)];
function g = grad(x)
g = 4*(x(1).^2 + x(2)-x(3)).^2 +10;
I saved this code in a file called steepest.m and then tried to run the following command:
[xopt,fopt,niter,gnorm,dx]=steepest
But I get an error.
I have actually used the following code (which works) to solve my problem.
function [xopt,fopt,niter,gnorm,dx] = grad_descent(varargin)
if nargin==0
% define starting point
x0 = [3 3]';
elseif nargin==1
% if a single input argument is provided, it is a user-defined starting
% point.
x0 = varargin{1};
else
error('Incorrect number of input arguments.')
end
% termination tolerance
tol = 1e-6;
% maximum number of allowed iterations
maxiter = 10;
% minimum allowed perturbation
dxmin = 1e-6;
% step size ( 0.33 causes instability, 0.2 quite accurate)
alpha = 0.1;
% initialize gradient norm, optimization vector, iteration counter, perturbation
gnorm = inf; x = x0; niter = 0; dx = inf;
% define the objective function:
f = @(x1,x2) x1.^2 + x1.*x2 + 3*x2.^2;
% plot objective function contours for visualization:
figure(1); clf; ezcontour(f,[-5 5 -5 5]); axis equal; hold on
% redefine objective function syntax for use with optimization:
f2 = @(x) f(x(1),x(2));
% gradient descent algorithm:
while and(gnorm>=tol, and(niter <= maxiter, dx >= dxmin))
% calculate gradient:
g = grad(x);
gnorm = norm(g);
% take step:
xnew = x - alpha*g;
% check step
if ~isfinite(xnew)
display(['Number of iterations: ' num2str(niter)])
error('x is inf or NaN')
end
% plot current point
plot([x(1) xnew(1)],[x(2) xnew(2)],'ko-')
refresh
% update termination metrics
niter = niter + 1;
dx = norm(xnew-x);
x = xnew;
end
xopt = x;
fopt = f2(xopt);
niter = niter - 1;
% define the gradient of the objective
function g = grad(x)
g = [2*x(1) + x(2)
x(1) + 6*x(2)];
This code works perfectly, so why is my code not working?
Please help.
Thanks.
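One way to diagnose this kind of problem is to check what the grad subfunction should return. Below is a minimal sketch, assuming the Symbolic Math Toolbox is available, that verifies the hand-coded gradient of the working two-variable example:
% Sketch: verify the hand-coded gradient symbolically (requires Symbolic Math Toolbox)
syms x1 x2
f = x1^2 + x1*x2 + 3*x2^2;     % objective from the working two-variable code
g = gradient(f, [x1; x2])      % returns [2*x1 + x2; x1 + 6*x2], matching grad(x)
The three-variable code's grad subfunction would likewise need to return the vector of partial derivatives of its objective, not the objective value itself.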

Accepted Answer

Prabhan Purwar on 18 Oct 2019
Edited: Prabhan Purwar on 18 Oct 2019
Hi,
The following code illustrates how gradient descent works for 3 variables.
To eliminate the error, changes were made to:
  • the initial value
  • the maxiter value
  • the alpha value
function [xopt,fopt,niter,gnorm,dx] = grad_descent(varargin)
if nargin==0
% define starting point
x0 = [3 3 3]';
elseif nargin==1
% if a single input argument is provided, it is a user-defined starting
% point.
x0 = varargin{1};
else
error('Incorrect number of input arguments.')
end
% termination tolerance
tol = 1e-6;
% maximum number of allowed iterations
maxiter = 100000;
% minimum allowed perturbation
dxmin = 1e-6;
% step size ( 0.33 causes instability, 0.2 quite accurate)
alpha = 0.000001;
% initialize gradient norm, optimization vector, iteration counter, perturbation
gnorm = inf; x = x0; niter = 0; dx = inf;
% define the objective function:
f = @(x1,x2,x3) 4*(x1.^2 + x2-x3).^2 +10;
% plot objective function contours for visualization:
%figure(1); clf; contour3(f,[-5 5 -5 5 -5 5]); axis equal; hold on
% redefine objective function syntax for use with optimization:
f2 = @(x) f(x(1),x(2),x(3));
% gradient descent algorithm:
while and(gnorm>=tol, and(niter <= maxiter, dx >= dxmin))
% calculate gradient:
g = grad(x);
gnorm = norm(g);
% take step:
xnew = x - alpha*g;
% check step
if ~isfinite(xnew)
display(['Number of iterations: ' num2str(niter)])
error('x is inf or NaN')
end
% plot current point
%plot([x(1) xnew(1)],[x(2) xnew(2)],'ko-')
refresh
% update termination metrics
niter = niter + 1;
dx = norm(xnew-x);
x = xnew;
end
xopt = x;
fopt = f2(xopt);
niter = niter - 1;
%define the gradient of the objective
% function g = grad(x)
% g = [2*x(1) + x(2)
% x(1) + 6*x(2)];
function g = grad(x)
g = 4*(x(1).^2 + x(2)-x(3)).^2 +10;
ans =
    0.3667
    0.3667
    0.3667
Alternatively, use the following code for an accurate result:
fun = @(x) 4*(x(1).^2 + x(2)-x(3)).^2 +10;
x0 = [3,3,3];
x = fminsearch(fun,x0);
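Note (an illustration, not part of the original answer): in the code listings above, the grad subfunction returns the objective value rather than its gradient. A minimal sketch of an analytic gradient for f(x1,x2,x3) = 4*(x1^2 + x2 - x3)^2 + 10 could look like this:
% Sketch: analytic gradient of f(x1,x2,x3) = 4*(x1^2 + x2 - x3)^2 + 10
function g = grad(x)
t = x(1)^2 + x(2) - x(3);   % shared inner term
g = [16*x(1)*t;             % df/dx1 = 8*t * 2*x1
      8*t;                  % df/dx2 = 8*t
     -8*t];                 % df/dx3 = -8*t
With this gradient, gnorm tends to zero as x1^2 + x2 - x3 approaches 0, where f reaches its minimum value of 10.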
  2 Comments
Muhammad Kundi on 10 Dec 2019
Thank you so much.
saja mk on 17 Sep 2020
Please, what do you mean by "minimum allowed perturbation"?


More Answers (1)

saja mk on 15 Sep 2020
At the end of the code, why is it
g = 4*(x(1).^2 + x(2)-x(3)).^2 +10;
Why didn't you take its gradient?
