What is the space complexity of the built-in eigs function in MATLAB?

Hi Everyone,
I am computing the dominant eigenvalues and eigenvectors with eigs(). The matrix is so large that I use the function-handle form eigs(fun, N), where fun(x) returns A*x. The vector length is nearly 24,000,000, which is about 200 MB of memory per vector. However, after 3 iterations the job crashes on the cluster, and the error message indicates that it exceeded the memory limit (2 GB).
I would like to know the space complexity of eigs(). Is it linear in the vector size N, and does the memory usage grow linearly with the number of iterations?
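For context, the call pattern is roughly the sketch below (applyA is just a stand-in name for the actual operator, which returns A*x without ever forming A explicitly):
% Sketch of the call pattern; applyA stands in for the real operator.
N = 24e6;                   % vector length; one double vector is ~192 MB
[V, D] = eigs(@applyA, N);  % defaults to the 6 largest-magnitude eigenvalues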
Thanks

Accepted Answer

Christine Tobler on 26 Feb 2018
The largest memory requirement for eigs is for an array that stores a subspace of vectors, which has size(A, 1) rows and a number of columns automatically determined based on the requested number of eigenvalues. You can set this number of columns directly using the 'SubspaceDimension' Name-Value pair in MATLAB R2017b, or using the p field of the options structure in older versions.
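For illustration, a hedged sketch of setting this explicitly (afun, n, and k below are placeholders for your inputs): the subspace array holds size(A,1)-by-p doubles, so with n around 24e6 and p = 20 it is already on the order of 24e6*20*8 bytes, roughly 3.8 GB.
p = 20;                                                       % Arnoldi subspace size
d = eigs(afun, n, k, 'largestabs', 'SubspaceDimension', p);   % R2017b name-value syntax
opts.p = p;                                                   % pre-R2017b equivalent
d = eigs(afun, n, k, 'LM', opts);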
After allocating this array, there should be only small amounts of extra memory necessary in each iteration. I would typically expect eigs to go out of memory when allocating the workspace, not later in the middle of the iteration. Is it possible that your function handle requires extra memory every time it is applied?
If you have MATLAB R2017b, could you run eigs with the 'Display' option turned on, and post the results here? This gives some additional information about which algorithm was used.
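For reference, a minimal sketch of that call (afun, n, and k again stand in for your inputs):
d = eigs(afun, n, k, 'largestabs', 'Display', true);   % prints convergence information each iteration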
2 comments
Yi Hu on 27 Feb 2018
Edited: Yi Hu on 27 Feb 2018
Thank you. This answered my question.
My function handle does indeed create some local matrix variables each time it is called. That could be the problem if the memory limit is hit before those temporaries are freed. In any case, I increased the memory limit of the job to 10 GB on the cluster and it now works fine.
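For anyone hitting the same thing, a hypothetical sketch of the kind of fix involved (buildOperatorData is a made-up name for the expensive setup): build the large pieces once and capture them in the handle, rather than reallocating them inside the handle on every iteration.
T = buildOperatorData();        % done once, outside the handle
afun = @(x) T * x;              % each eigs iteration now only does the multiply
d = eigs(afun, size(T, 2), 6);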
Turning on the 'Display' option gives output like this:
Iteration 1: a few Ritz values of the 20-by-20 matrix:
0
0
0
0
Iteration 2: a few Ritz values of the 20-by-20 matrix:
1.0e+05 *
-0.0002
0.0002
0.0152
2.6621
......
Christine Tobler on 27 Feb 2018
Great to hear that you could work around the problem.
In the newest version (R2017b), the display has been changed to give more information about the algorithm than is shown here; happily, that won't be needed, since you have already fixed the problem.


More Answers (1)

Chandani Madnani on 26 Feb 2018
We don't generally document the order of complexity of MATLAB functions, and for eigs we don't think it's possible to give a straight-up order of complexity. For eig, the complexity is O(n^3), but we cannot guarantee that eigs is within O(n^3).
The main problem is that eigs uses an iterative method, which is supposed to decrease the residual of the eigenvalues and eigenvectors in each step, until a tolerance tol is reached. So the computational complexity depends on the convergence speed of the method.
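For concreteness, the stopping tolerance and the iteration cap can be adjusted like this (R2017b name-value syntax; afun, n, and k are placeholders, and opts.tol / opts.maxit are the pre-R2017b equivalents):
d = eigs(afun, n, k, 'largestabs', 'Tolerance', 1e-10, 'MaxIterations', 300);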
The straight-up Arnoldi method for computing the largest eigenvalues of A*x = lambda*x has a cost of O(n*p^2 + p^3) + p*[cost of A*x]. How large p needs to be depends on the convergence speed, which in turn depends on the relative gap between the eigenvalues. But this is typically prohibitively expensive because of the factor O(p^3) involved. In eigs, we use the "implicitly restarted Arnoldi" method as implemented in ARPACK, which uses several tricks that make the algorithm faster in the general case, but also make its convergence behavior and computational complexity much more difficult to analyze.
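As a rough worked instance only (constants ignored), plugging in the numbers from the question above:
n = 24e6;                        % problem size from the question
p = 20;                          % subspace dimension reported by the display output
flops_ortho = n*p^2 + p^3;       % O(n*p^2 + p^3) dense work, ~1e10 flops
matvecs = p;                     % plus p applications of A*x
fprintf('~%.1e flops + %d matvecs for one p-step Arnoldi factorization\n', flops_ortho, matvecs);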
Additionally, if the smallest magnitude eigenvalues are requested, UMFPACK is used to factorize A, which is O(n^3) in the general case; you would need to know the sparsity structure of A to find any smaller bound.
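If the sparse matrix itself can be formed and fits in memory, a hedged sketch of that smallest-magnitude path looks like this (A, n, and k are placeholders; note that when sigma is 'smallestabs' or 0, the function handle passed to eigs must return A\x rather than A*x):
dA = decomposition(A);            % sparse LU factorization computed once
invfun = @(x) dA \ x;             % applies A\x using the stored factors
d = eigs(invfun, n, k, 'smallestabs');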
If you want to investigate further, the following two papers are relevant:
[1] Lehoucq, R.B. and D.C. Sorensen, "Deflation Techniques for an Implicitly Re-Started Arnoldi Iteration," SIAM J. Matrix Analysis and Applications, Vol. 17, 1996, pp. 789–821.
[2] Sorensen, D.C., "Implicit Application of Polynomial Filters in a k-Step Arnoldi Method," SIAM J. Matrix Analysis and Applications, Vol. 13, 1992, pp. 357–385.
and there is also the ARPACK and UMFPACK documentation, which should be available online.
