Numerical Technique to Approach the Global Minimum of a Function

11 views (last 30 days)
PASUNURU SAI VINEETH on 4 Dec 2022
Answered: Kartik on 21 Mar 2023
I have a function that takes 15 input parameters and outputs the mean square error of a curve fit. My aim is to find the combination of 15 parameter values that gives an output close to zero (I'm hoping for 10^(-4)). I have tried implementing the gradient descent method, the Levenberg-Marquardt algorithm (lsqnonlin), and even the solve command. They appear to depend heavily on initial guesses and settle for a local minimum. I'm hoping someone could guide me towards a suitable technique for finding the global minimum, and its implementation. Please let me know if you need more details. Thanks in advance.
2 comments
Matt J on 4 Dec 2022
Edited: Matt J on 4 Dec 2022
They appear to depend heavily on initial guesses and settle for a local minimum.
All methods depend heavily on initial guesses, in general. The question you need to ask is how, for your specific model, you can generate a good initial guess. Answering that requires us to see the model.
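If no model-specific guess is available, a generic fallback is to restart a local solver from many random points and keep the best result. Below is a minimal MultiStart sketch; it assumes the Global Optimization Toolbox, takes your ErrF15 as the scalar MSE objective, and uses placeholder bounds and restart count that you would tune for your problem (curr_pars stands for whatever starting guess you already have):
problem = createOptimProblem('fmincon', ...
    'objective', @ErrF15, ...      % scalar MSE objective from the question
    'x0', curr_pars, ...           % your existing physics-based guess
    'lb', -50*ones(1,15), ...      % placeholder lower bounds
    'ub',  50*ones(1,15));         % placeholder upper bounds
ms = MultiStart('Display','final');
[bestPars, bestErr] = run(ms, problem, 100);   % 100 random restarts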
PASUNURU SAI VINEETH on 4 Dec 2022
Edited: PASUNURU SAI VINEETH on 4 Dec 2022
@Matt J I have attached a sample trajectory (Trajectory.fig) that I have been trying to fit. The idea is to start from the bounce point and numerically generate forward and backward trajectories from my physics-based model. Initial guesses for the velocities are taken to be forward positional derivatives, and the spin guesses are randomised, as I couldn't think of a better way. In this case I know all 15 components beforehand, so it should be possible to get zero error (a perfect fit). But Fit.fig is the closest I was able to get by varying the step size (h) and the weight factor (gamma).
P.S. I had to include the lowest point (x(k), y(k), z(k)) in the parameters because, if the input Excel data is noisy, it might not be the actual bounce point.
h = 0.001;       % finite-difference step size
gamma = 0.001;   % gradient-descent step (weight factor)
matrix = randn(1,6);   % randomised initial guesses for the six spin components
OriginalData = readmatrix('TrialNew.xlsx');
x = OriginalData(1:end-2,1);
y = OriginalData(1:end-2,2);
z = OriginalData(1:end-2,3);
[~,k] = min(y);   % index of the lowest point (approximate bounce point)
%curr_pars = [-2 2 2 0 16 0 0 0 0 2 2 -1 4 8 7];
% Current point: [BackwardVelocityComponents BackwardSpinComponents ...
%                 IntersectionPoint ForwardVelocityComponents ForwardSpinComponents]
curr_pars = [100*(x(k-1)-x(k)) 100*(y(k-1)-y(k)) 100*(z(k-1)-z(k)) ...
             matrix(1) matrix(2) matrix(3) ...
             x(k) y(k) z(k) ...
             100*(x(k+1)-x(k)) 100*(y(k+1)-y(k)) 100*(z(k+1)-z(k)) ...
             matrix(4) matrix(5) matrix(6)];
initial_error = ErrF15(curr_pars)
gradErrorFunction = zeros(1,numel(curr_pars));
err = 100;
ErrorTrend = [];
while err > 0.01
    % Central-difference estimate of each gradient component
    for i_v = 1:numel(curr_pars)
        cp = curr_pars;
        cp(i_v) = cp(i_v) + h;
        cm = curr_pars;
        cm(i_v) = cm(i_v) - h;
        gradErrorFunction(i_v) = (ErrF15(cp) - ErrF15(cm))/(2*h);
    end
    % Take one descent step only after the full gradient has been assembled
    curr_pars = curr_pars - gamma*gradErrorFunction;
    err = ErrF15(curr_pars);
    ErrorTrend = [ErrorTrend err]; %#ok<AGROW>
end
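To check whether the descent is converging or just oscillating, the recorded error history can be plotted once the loop exits (a quick diagnostic, not part of the fit itself):
plot(ErrorTrend)
xlabel('Iteration')
ylabel('Mean square error')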


Answers (1)

Kartik on 21 Mar 2023
Hi,
It sounds like you're dealing with a highly nonlinear optimization problem with many variables, which can be challenging to solve using standard local optimization methods. To find a global minimum, you may want to consider a stochastic optimization algorithm, such as a genetic algorithm or particle swarm optimization (PSO). These methods are designed to search a large solution space efficiently and can often escape the local minima that trap gradient-based solvers.
You can refer to the MathWorks documentation for the particleswarm function (Global Optimization Toolbox) for information regarding PSO in MATLAB:
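As a starting point, here is a minimal particleswarm sketch; it assumes the Global Optimization Toolbox is installed, and the bounds and swarm size are placeholder choices you would tune for your problem:
nvars = 15;                      % number of fit parameters
lb = -50*ones(1,nvars);          % placeholder lower bounds
ub =  50*ones(1,nvars);          % placeholder upper bounds
opts = optimoptions('particleswarm', ...
    'SwarmSize', 200, ...        % larger swarms search more widely
    'HybridFcn', @fmincon, ...   % polish the best point with a local solver
    'Display', 'iter');
[bestPars, bestErr] = particleswarm(@ErrF15, nvars, lb, ub, opts);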
