Accuracy of eig in support of complex step differentiation: Derivative estimate accuracy degrades when step gets too small
Mark Stone
on 15 May 2015
Edited: Mark Stone
on 21 May 2015
Consider the use of complex step differentiation to estimate the derivative of an eigenvalue of a real non-symmetric matrix, using eig in MATLAB double precision for the calculation. The traditional recommendation is to use step = 1e-20 * 1i. But perhaps an extended precision calculation of eig is needed to support this?
I noticed that for steps smaller than roughly 1e-14 * 1i (or maybe even less), the accuracy of the complex step differentiation estimate seems to degrade and become unstable. Is this due to an accuracy limitation of eig in MATLAB double precision? The matrix in question has various norms in the ballpark of 1. Furthermore, the same calculation showed greater degradation in MATLAB R2013a Win32 than in R2014a Win64. Is there any reason to think the accuracy of eig should differ between these configurations?
The calculation is carried out with d = step length (say 1e-14 or 1e-20). For simplicity, I'll show this for the maximum eigenvalue. Consider a real matrix A. Then, to compute the (i,j) component of the gradient of the maximum eigenvalue with respect to A:
B = A;
B(i,j) = A(i,j) + d*1i;
derivative_estimate = imag(max(eig(B))/d)
Complex step differentiation is not "supposed to" have a problem with small step size; in fact, that is supposed to be its advantage over forward and central differences, whose accuracy degrades once the step falls below some threshold.
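For reference, a minimal runnable sketch of the calculation above; the test matrix and the set of step sizes are illustrative assumptions, not the actual data behind the observations.
% Sketch: complex-step estimate of d(max eigenvalue)/dA(i,j) for several step sizes.
% A, i, j, and the step sizes are arbitrary choices for illustration.
A = [4 1 0; 2 3 1; 0 1 2];      % real non-symmetric test matrix
i = 2; j = 3;                   % element of A to perturb
for d = [1e-8 1e-14 1e-20]      % step sizes to compare
    B = A;
    B(i,j) = A(i,j) + d*1i;     % complex perturbation of one element
    derivative_estimate = imag(max(eig(B)))/d;
    fprintf('d = %g   estimate = %.15g\n', d, derivative_estimate)
end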
Thanks.
3 comments
Matt J
on 19 May 2015
Edited: Matt J
on 19 May 2015
The traditional recommendation is to use step = 1e-20 * 1i. But perhaps an extended precision calculation of eig is needed to support this?
Or extended precision in the addition operation,
B(i,j) = A(i,j) + d*1i;
It seems to me that, at the very least, the step magnitude needs to be chosen adaptively based on eps(A(i,j)). If d is less than that, for example, the operation above will merely give B = A due to floating point precision thresholds.
Accepted Answer
Cleve Moler
on 19 May 2015
I have to admit that I am unsure about what is going on here. Complex step differentiation relies upon the unperturbed function values for real arguments being real. Then the complex perturbation causes the software to produce a complex result, so numerical differencing is not a concern. But eig of a real nonsymmetric matrix can be complex and taking max just makes things worse. Consider A = gallery(5). The exact eigenvalues are all zero. But eig(A) doesn't reveal that. And a complex step isn't any help. The eigenvalue condition numbers are crucial. For multiple eigenvalues without a full set of eigenvectors, such as gallery(5), the condition numbers are infinite. Taking a complex step, or any step, in the vicinity of a badly conditioned eigenvalue will have numerical difficulties. -- Cleve
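A quick illustration of the gallery(5) point, using condeig to expose the eigenvalue condition numbers (the snippet below is a sketch of the idea, not code from the answer itself):
A = gallery(5);           % exact eigenvalues are all zero
eig(A)                    % computed eigenvalues are nonzero, scattered near the origin
[~, ~, s] = condeig(A);   % eigenvalue condition numbers
s                         % huge values: these eigenvalues are badly conditioned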
0 comments
More Answers (3)
Cleve Moler
on 19 May 2015
But you should not have to resort to complex step differentiation because an analytic expression for the derivative of an eigenvalue is known. If y and x are the left and right eigenvectors of a matrix A corresponding to a particular eigenvalue lambda, then d(lambda) = (y'*d(A)*x)/(y'*x)
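A sketch of how that formula might be evaluated in MATLAB for a simple (non-repeated) eigenvalue; the test matrix, the choice of which eigenvalue to differentiate, and the three-output eig syntax (available in R2014a and later) are assumptions made for illustration:
A = [4 1 0; 2 3 1; 0 1 2];          % real non-symmetric test matrix
[V, D, W] = eig(A);                 % W holds left eigenvectors: W'*A = D*W'
[~, k] = max(abs(diag(D)));         % pick the dominant eigenvalue, for illustration
x = V(:, k);                        % right eigenvector
y = W(:, k);                        % left eigenvector
dA = zeros(size(A)); dA(2, 3) = 1;  % perturbation direction: element (2,3) of A
dlambda = (y' * dA * x) / (y' * x)  % derivative of lambda with respect to A(2,3)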
3 comments
Cleve Moler
on 20 May 2015
I pretty much agree with your assessment of the left eigenvector situation and don't have much more to offer. -- Cleve
Cleve Moler
on 20 May 2015
I had forgotten about Nick Higham's comment after my blog post referencing his paper, "The complex step approximation to the Fréchet derivative of a matrix function", http://link.springer.com/article/10.1007/s11075-009-9323-y. Unfortunately, it's behind a Springer paywall. But I think it says that Mark is on the right track.
Matt J
on 15 May 2015
I don't think eigenvalues are, in general, differentiable as a function of the matrix elements. However, even where they are, shouldn't your derivative estimate involve the difference eig(B) - eig(A)?
3 comments
Matt J
on 19 May 2015
Edited: Matt J
on 19 May 2015
I should have stated that I was performing these calculations only for non-repeated eigenvalues.
And are you sure that the eigenvalues are non-repeated in modulus as well? Even if eig() is differentiable at non-repeated eigenvalues, the combined operation max(eig()) may not be. When the maximum modulus is attained by multiple (even if distinct) eigenvalues, the max() function faces an ambiguity about how to sort them.
In any case, it will probably be worth posting your tests so that we can reproduce what you are seeing. It defies my intuition a bit that any differentiation method can allow arbitrarily small step sizes, and certainly that the step needn't depend on the matrix A. The Taylor remainder term in the derivation of the method at the link you posted should depend on A, it seems to me.
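A small illustration of the max() ambiguity described above; the matrix is an arbitrary example, and the tie-breaking behavior noted in the comments is MATLAB's documented rule of comparing complex values by magnitude and then by phase angle:
A = [0 -1; 1 0];   % eigenvalues are +1i and -1i: distinct values, equal modulus
lam = eig(A)
abs(lam)           % both moduli equal 1, so max() has to break the tie
max(lam)           % complex max compares magnitude first, then phase angle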