Find the zero of a function with as few function calls as possible

Henrik on 11 Oct 2014
Commented: Henrik on 15 Oct 2014
Hi there
I have a function that takes a very long time to calculate (it can be several hours), and I need to find where it becomes zero, with a tolerance on x of around 1e-5 and a tolerance on the function value of maybe 1e-7. Typical values are x between 1 and 6 and y between -0.01 and 0.01.
I can use fzero to do what I want (and with rather few function calls, I might add - 13 in one test, 8 when I reduced TolX), but I wonder whether there is a function that can do this with even fewer function calls than fzero? I think it will be well worth spending some time implementing this to save hours in the end, so it does not have to be a simple solution (although simplicity is preferred, of course!)
I do not believe the function itself can be optimized significantly to save computation time - it requires a lot of calculations, and I have colleagues who do similar work in FORTRAN and C and also need hours for their calculations.
Thanks in advance
Henrik
EDIT: changed iterations to function calls as pointed out by Matt and Mohammad. There is more clarification in my comments below.
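For reference, a minimal sketch of the kind of fzero call discussed here, assuming the expensive calculation is wrapped in a handle fun and that a sign change is known to lie in [1 6] (the bracket and option values are illustrative, not taken from the actual problem):
opts = optimset('TolX',1e-5,'Display','iter');    % report every function evaluation
[x0,y0,exitflag,output] = fzero(fun,[1 6],opts);  % a bracketing interval skips fzero's initial search phase
output.funcCount                                  % number of function calls actually used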
9 comments
Matt J on 12 Oct 2014
Edited: Matt J on 12 Oct 2014
I don't think the best strategy is to try to cut down on number of iterations. For any iterative method, that number will be dependent on the initial guess anyway, and would be very hard to control. The better strategy would be to see if you can cut down on the amount of computation per iteration.
For example, I see no reason to be computing the matrix N explicitly. Getting the minimum eigenvalue of N to equal 0 is the same as getting the maximum eigenvalue of f to equal 1/x. So you shouldn't have to compute further than f.
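If the matrix in question is real symmetric (an assumption here), a minimal sketch of that idea is to request only the extreme eigenvalue instead of a full eigendecomposition:
lmax = eigs(f, 1, 'la');   % largest algebraic eigenvalue only ('lm' would give largest magnitude instead)
% The root condition min(eig(N)) == 0 would then be checked as lmax == 1/x.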
Also, M is a Kronecker product, so the multiplication you describe should be doable more efficiently without explicitly computing M.
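As a sketch of that last point, assuming M = kron(A,B) with placeholder factors A and B, the product M*x can be applied without ever forming M, using the identity kron(A,B)*vec(X) = vec(B*X*A.'):
A = randn(16);  B = randn(16);          % placeholder factors of the Kronecker product
x = randn(size(A,2)*size(B,2),1);       % vector to be multiplied
X = reshape(x, size(B,2), size(A,2));   % "un-vectorize" x
y = reshape(B*X*A.', [], 1);            % equals kron(A,B)*x, without building kron(A,B)
% For small sizes, norm(y - kron(A,B)*x) confirms the equivalence.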
These are just examples, of course. It doesn't look like the computations you've listed should take very long for the data sizes you've mentioned. My computer can do 250 eigendecompositions of a 256x256 matrix in about 11 seconds. So, presumably the computation bottlenecks are in steps you haven't mentioned. It might be worth expanding on those so that the community can explore ways to do them faster.
As for the choice of algorithm, it might be worth exploring the Global Optimization Toolbox as Star Strider suggests. I have my doubts about fsolve and other algorithms in the (non-global) Optimization Toolbox, however, since your problem does not look smooth.
Henrik on 14 Oct 2014
Thanks for the clarification, of course it is the number of function calls that is relevant. The numbers 8-13 that I quoted in the original post were function calls, not iterations.
There are some more loops that I did not mention. The bulk of the computation time (85%, if I recall correctly) is spent in the bsxfun call shown below, with the sum coming in second. Here, typically N=16 and M=50; the matrices F and L have already been calculated and are of size (2*N, 2*N, N, N) and (2*N, 2*N, M), respectively.
f0=bsxfun(@times,reshape(F,[2*N 2*N N N 1]),reshape(L,[2*N 2*N 1 1 M]));
f2=sum(sum(f0));
In a typical calculation, this needs to be done 3*10^5 times for different values of the matrices F and L. The numbers here are typical values, but they may vary by almost an order of magnitude. This means that I can't really vectorize the code any more than I already have, as I run out of memory.


Accepted Answer

Matt J on 14 Oct 2014
Edited: Matt J on 14 Oct 2014
The bulk of the computation time (85% if I recall correctly) is spent at the bsxfun call shown below,
I see a speed-up factor of about 30x when I re-implement that as a matrix multiplication, as below.
Ftr=reshape(F,4*N^2,[]).';
tic;
Lr=reshape(L,4*N^2,M);
f2=Ftr*Lr;
f2=reshape(f2,[N,N,M]);
toc
I didn't include the transpose .' operation in my timing tests, because you should be able to avoid that altogether, just by revising the order in which your data is generated.
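A quick way to confirm the two formulations agree (variable names follow the posts above; squeeze is needed because the double sum leaves two singleton dimensions):
f0 = bsxfun(@times, reshape(F,[2*N 2*N N N 1]), reshape(L,[2*N 2*N 1 1 M]));
f2_old = squeeze(sum(sum(f0)));                                         % original version, size [N N M]
f2_new = reshape(reshape(F,4*N^2,[]).' * reshape(L,4*N^2,M), [N N M]);  % matrix-multiply version
norm(f2_old(:) - f2_new(:))                                             % should be at round-off level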
6 comments
Matt J on 15 Oct 2014
Edited: Matt J on 15 Oct 2014
However, I see about a factor 2-4 increase in computation time with MTIMESX. Maybe it has something to do with me using Ubuntu and MATLAB R2014a?
My understanding was that the mtimesx_build tool that comes with mtimesx wasn't working for R2014. I'd be interested to know how you got it to compile. Other than that, I don't know what's happening. I definitely see a factor of 2 speed-up in the computation of F. Maybe try initializing mtimesx explicitly with
>> mtimesx SPEED
As for Om=0, M=1: I can't see how the computation of L would be the bottleneck when M=1. However, you can compute it instead as
L=bsxfun(@rdivide, bsx(FEU,1-FED), bsx( +1i*eta-bEU , ED ) );
in that case.
Henrik on 15 Oct 2014
I didn't do much work to make it compile. I downloaded BLAS from http://gcc.gnu.org/wiki/GfortranBuild and compiled it with
gfortran -shared -O2 *.f -o libblas.so -fPIC
and in MATLAB I used mex('-DDEFINEUNIX','mtimesx.c','path_to_liblas/libblas.so')
I do get the following warnings:
Warning: You are using gcc version '4.8.2'. The version of gcc is not supported. The version currently supported with MEX is '4.7.x'. For a list of currently supported compilers see: http://www.mathworks.com/support/compilers/current_release.
my_path/mtimesx/mtimesx.c: In function ‘mexFunction’:
my_path/mtimesx/mtimesx.c:592:10: warning: assignment discards ‘const’ qualifier from pointer target type [enabled by default]
ans = mexGetVariablePtr("caller","OMP_SET_NUM_THREADS");
MEX completed successfully.
mtimesx SPEED returns SPEED, but it is still much slower than the other method.
You are right, L is not the bottleneck. I was just surprised that the speed-up compared to my old code was basically zero for M=1, when it is around 20x for M=50.


More Answers (0)
