Vectorization/increase efficiency of nonconvex optimization problem using systematic initializations

I am trying to speed up an optimization process. Since my optimization problem is nonconvex, I need to use systematic initializations of the start-point parameters in order to find the global minimum. This is very computationally expensive and inefficient because I am using loops. I have two main questions:
1) Is there a way to vectorize the systematic initializations instead of using nested loops?
2) Can I fit multiple optimization problems simultaneously (instead of looping through data channels)? i.e., run one process for 100 different fits instead of looping 1:100?
Below is my code:
%%
%Parameter values for systematic evaluations
angularPosSpace = 0:360/16:360 - 360/16;
curvatureSpace = -1:2/11:1;
kSpace = [0.5, 2];
sigmaSpace = [0.2, 0.7];
%%
%Preallocates variables
storedParam2 = nan(6, length(curvatureSpace), length(angularPosSpace), length(sigmaSpace), length(kSpace), lenChannels);
determinedError = nan(length(curvatureSpace), length(angularPosSpace), length(sigmaSpace), length(kSpace), lenChannels);
usedParameters2 = nan(6, lenChannels);
%Loops through data channels
for ch = 1:lenChannels
    disp(ch)
    %For curvature initialization
    for curvVal = 1:length(curvatureSpace)
        %For angular position initialization
        for angVal = 1:length(angularPosSpace)
            %For k spread initialization
            for kVal = 1:length(kSpace)
                %For sigma spread initialization
                for sigmaVal = 1:length(sigmaSpace)
                    %Declares fitting function
                    gaussianModel2 = @(param) squeeze(max(param(1) + param(2) .* ((exp(-1 .* (allStimCurvAng(:, 1, :) - param(3)).^2 ./ (param(4).^2))) .* ...
                        (exp(1/param(5) .* cosd(allStimCurvAng(:, 2, :) - param(6))) ./ exp(1 ./ param(5))))));
                    %Declares objective function
                    resid2 = @(param) (gaussianModel2(param) - respSpikeSec(:, channels(ch)));
                    %Calculates values necessary for startPoint
                    minData = min(respSpikeSec(:, channels(ch)));
                    maxData = max(respSpikeSec(:, channels(ch)));
                    lb = [minData, -inf, -1.3, 0.05, 0, -inf];
                    ub = [maxData, maxData * 1.5, 1.3, 30, 100, inf];
                    %Initial start point - systematic tiling of parameter space
                    startPoint = [minData, maxData, curvatureSpace(curvVal), sigmaSpace(sigmaVal), kSpace(kVal), angularPosSpace(angVal)];
                    %Finds parameters with lowest error for the given start point initialization
                    [parameters2, tempError2] = lsqnonlin(@(x) resid2(x), startPoint, lb, ub, options);
                    %Stores relevant fit information
                    storedParam2(:, curvVal, angVal, sigmaVal, kVal, ch) = parameters2;
                    determinedError(curvVal, angVal, sigmaVal, kVal, ch) = tempError2;
                end
            end
        end
    end
    %%
    %Determines parameters with lowest error
    errorChanLsq = determinedError(:, :, :, :, ch);
    linIdx = find(errorChanLsq == min(errorChanLsq, [], 'all'));
    [curvLoc, angLoc, sigmaLoc, kLoc] = ind2sub(size(errorChanLsq), min(linIdx));
    usedParameters2(:, ch) = storedParam2(:, curvLoc, angLoc, sigmaLoc, kLoc, ch);
end
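For reference, the grid of start points itself can be built without nested loops. A minimal sketch using ndgrid, with the variable names from the code above (`minData` and `maxData` stand in for the per-channel values computed inside the loop):

```matlab
%Enumerate every combination of the four swept parameters with ndgrid
[C, A, S, K] = ndgrid(curvatureSpace, angularPosSpace, sigmaSpace, kSpace);
nStarts = numel(C); %12 * 16 * 2 * 2 = 768 combinations
%One row per initialization, in the same column order as startPoint:
%[minData, maxData, curvature, sigma, k, angularPos]
startPoints = [repmat([minData, maxData], nStarts, 1), C(:), S(:), K(:), A(:)];
```

Each row of startPoints can then be handed to lsqnonlin in a single loop over 1:nStarts instead of four nested loops.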

Accepted Answer

Matt J
Matt J on 5 Jun 2023
Edited: Matt J on 5 Jun 2023
It is not clear that a vectorized approach would help you, because the time for each optimization to converge will differ across initial points. You don't want to run 100 iterations for every optimization when, say, half of them might only require 10 iterations.
One thing you could try, though, is converting the 5 nested loops into a single parfor loop, assuming you have the Parallel Computing Toolbox. Also, for additional speed, avoid repeating array-indexing operations like allStimCurvAng(:, 1, :) inside your anonymous functions; these are expensive.
sz = [lenChannels, length(curvatureSpace), length(angularPosSpace), length(kSpace), length(sigmaSpace)];
parfor k = 1:prod(sz)
    [ch, curvVal, angVal, kVal, sigmaVal] = ind2sub(sz, k); %subscript order must match sz
    %Declares fitting function
    tmp1 = allStimCurvAng(:, 1, :); %extract data for the current optimization just once
    tmp2 = allStimCurvAng(:, 2, :);
    gaussianModel2 = @(param) squeeze(max(param(1) + param(2) .* ...
        ((exp(-1 .* (tmp1 - param(3)).^2 ./ (param(4).^2))) .* ...
        (exp(1/param(5) .* cosd(tmp2 - param(6))) ./ exp(1 ./ param(5))))));
    %Declares objective function
    tmp = respSpikeSec(:, channels(ch));
    resid2 = @(param) (gaussianModel2(param) - tmp);
    ...
end
3 comments
Matt J
Matt J on 5 Jun 2023
Edited: Matt J on 5 Jun 2023
My loop would probably look like the one below. In other words, I wouldn't do anything subsequent to the lsqnonlin call inside the loop. After the loop finishes, you can reorganize and reshape the results into whatever form you want; those operations are very fast and don't need to be part of the parallelized loop.
In fact, I might not accumulate anything other than the tempError2 values if it were up to me. Once I know the lowest tempError, I would go back and repeat that single optimization to recover the parameters where the lowest error occurs. This adds only marginal computation on top of the hundreds of exploratory optimizations already done, and avoids storing parameter data I'm not interested in.
sz = [lenChannels, length(curvatureSpace), length(angularPosSpace), length(kSpace), length(sigmaSpace)];
parfor k = 1:prod(sz)
    [ch, curvVal, angVal, kVal, sigmaVal] = ind2sub(sz, k); %subscript order must match sz
    %Declares fitting function
    tmp1 = allStimCurvAng(:, 1, :); %extract data for the current optimization just once
    tmp2 = allStimCurvAng(:, 2, :);
    gaussianModel2 = @(param) squeeze(max(param(1) + param(2) .* ...
        ((exp(-1 .* (tmp1 - param(3)).^2 ./ (param(4).^2))) .* ...
        (exp(1/param(5) .* cosd(tmp2 - param(6))) ./ exp(1 ./ param(5))))));
    %Declares objective function
    tmp = respSpikeSec(:, channels(ch));
    resid2 = @(param) (gaussianModel2(param) - tmp);
    %Bounds and systematic start point for this iteration
    minData = min(tmp);
    maxData = max(tmp);
    lb = [minData, -inf, -1.3, 0.05, 0, -inf];
    ub = [maxData, maxData * 1.5, 1.3, 30, 100, inf];
    startPoint = [minData, maxData, curvatureSpace(curvVal), sigmaSpace(sigmaVal), kSpace(kVal), angularPosSpace(angVal)];
    [parameters2{k}, tempError2(k)] = lsqnonlin(@(x) resid2(x), startPoint, lb, ub, options);
end
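Following that comment, a sketch of the post-loop bookkeeping, assuming parameters2 and tempError2 come out of the parfor loop with one entry per linear index k, and that sz orders its dimensions channel-first as above:

```matlab
%Reshape the error vector back onto the initialization grid
errGrid = reshape(tempError2, sz); %dims: [ch, curv, ang, k, sigma]
usedParameters2 = nan(6, lenChannels);
for ch = 1:lenChannels
    errCh = squeeze(errGrid(ch, :, :, :, :)); %error surface for this channel
    [~, loc] = min(errCh(:));
    [curvLoc, angLoc, kLoc, sigmaLoc] = ind2sub(size(errCh), loc);
    %Map the per-channel minimum back to a linear index into parameters2
    kBest = sub2ind(sz, ch, curvLoc, angLoc, kLoc, sigmaLoc);
    usedParameters2(:, ch) = parameters2{kBest};
end
```

If only the tempError2 values were accumulated in the parfor loop, the last line would instead re-run lsqnonlin once from the winning start point.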
AES
AES on 5 Jun 2023
Thank you for your help. It drastically sped up the processing time for 1 unit. Now I am testing on 100.


More Answers (0)

