
Improving Precision of Eigenvectors with Large Eigenvalues

98 views (last 30 days)
bil on 24 Jun 2024 at 3:13
Commented: Christine Tobler on 26 Jun 2024 at 6:25
Hi all,
This is a bit of a generic question, but I was hoping someone could provide some insight into how I can improve the precision of eigenvectors using "projection techniques", as in this post, for a matrix that has 1 or 2 very large eigenvalues while the remaining eigenvalues are much smaller (by several orders of magnitude). The linked code is written in R, which I am not too familiar with, but is the general idea that I should subtract out the smaller eigenvectors' overlap with the larger ones to improve their precision? By precision I mean the following: applying eig to a matrix M produces a set of eigenvectors, but when I check M*v - λ*v, the result deviates from 0, i.e. applying M to the "eigenvector" yields a linear combination of multiple other eigenvectors.
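For concreteness, a minimal sketch of the residual check described above; the matrix M built here is only a synthetic stand-in (two dominant eigenvalues, the rest much smaller), not the actual data:
n = 200;
Q = orth(randn(n));                    % random orthonormal basis
d = [1e8; 1e7; rand(n-2,1)];           % 2 large eigenvalues, rest much smaller
M = Q*diag(d)*Q';  M = (M + M')/2;     % symmetric stand-in matrix
[V, D] = eig(M);
res = vecnorm(M*V - V*D);              % residual norm of each eigenpair
max(res)                               % how far the pairs deviate from M*v = lambda*v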
Any guidance is appreciated.

Accepted Answer

Christine Tobler on 24 Jun 2024 at 10:04
The linked post is about a symmetric matrix; is this also your case? If so (i.e. if issymmetric returns true for your matrix), the eigenvectors returned by EIG will be orthogonal up to numerical round-off (an exact 0 is not achievable in numerical computation, practically speaking).
The proposed solution doesn't improve the eigenvectors, but instead applies a projection to the computed residual vectors, to project out any components along the eigenvectors of the larger eigenvalues. Whether this is useful will depend on your application - that is, do you need to do computations with that residual?
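As an illustration of that projection (a sketch only, with a synthetic symmetric M standing in for the actual matrix): the residual of one of the small eigenpairs is projected against the eigenvectors of the two dominant eigenvalues.
n = 100;
Q = orth(randn(n));
M = Q*diag([1e8; 1e7; rand(n-2,1)])*Q';  M = (M + M')/2;   % synthetic symmetric matrix
[V, D] = eig(M);
[~, idx] = sort(abs(diag(D)), 'descend');
Vlarge = V(:, idx(1:2));                 % eigenvectors of the 2 dominant eigenvalues
k = idx(end);                            % one of the small eigenpairs
r = M*V(:,k) - D(k,k)*V(:,k);            % its residual vector
rProj = r - Vlarge*(Vlarge'*r);          % project out the components along Vlarge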
  11 comments
Torsten on 26 Jun 2024 at 1:03
"But I suspect this might not be the most accurate, and the most straightforward method is, as you said, to just take the reciprocal of the eigenvalues of M to get the eigenvalues of M^(-1), and the eigenvectors are the same for both matrices."
Yes; at least, I cannot think of any advantage to working with the inverse in your case.
Christine Tobler on 26 Jun 2024 at 6:25
Closing the loop, I agree with Torsten that computing the eigenvalues and eigenvectors of the original matrix and then inverting the eigenvalues will be more accurate than calling eig on the inverse.
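A short sketch of that recommendation, using gallery('lehmer') as a stand-in symmetric nonsingular matrix: eig is called once on M, the eigenvalues of inv(M) are obtained by taking reciprocals, and the eigenvectors are reused.
M = gallery('lehmer', 50);           % stand-in symmetric positive definite matrix
[V, D] = eig(M);                     % eig on the original matrix only
lambdaInv = 1 ./ diag(D);            % eigenvalues of inv(M) are the reciprocals
norm(M\V - V*diag(lambdaInv))        % V also diagonalizes inv(M); residual is small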


More Answers (0)
