Computing approximate eigenvectors
Eigenvectors are calculated from eigenvalues. Eigenvalues, in turn, are roots of the characteristic polynomial of a matrix. Calculating eigenvalues precisely is a delicate task, and often all that is known is that an eigenvalue has been calculated to within a certain tolerance. What, then, of the corresponding eigenvector? How accurately can an eigenvector be calculated given a tolerance in the calculation of the eigenvalue?
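This question can be probed numerically. The sketch below (an illustration constructed for this article, not an experiment from the paper) builds a matrix with known eigenpairs, perturbs one eigenvalue by a tolerance tau, recovers an eigenvector from the perturbed value via a least-squares solve of the shifted system against a random vector, and measures how far it drifts from the true eigenvector:

```python
import numpy as np

# Hypothetical experiment: how does a tolerance tau in the eigenvalue
# propagate into the eigenvector computed from it?
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # random orthogonal basis
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0, 5.0]) @ Q.T   # known eigenvalues 1..5
v_true = Q[:, 1]                                   # eigenvector for lambda = 2

errors = []
for tau in (1e-2, 1e-4, 1e-8):
    mu = 2.0 + tau                                 # eigenvalue known only to within tau
    b = rng.standard_normal(5)                     # random probe vector
    # Least-squares solve of the (nearly singular) shifted system:
    x, *_ = np.linalg.lstsq(A - mu * np.eye(5), b, rcond=None)
    v = x / np.linalg.norm(x)
    # Eigenvectors are determined only up to sign.
    errors.append(min(np.linalg.norm(v - v_true), np.linalg.norm(v + v_true)))
    print(f"tau = {tau:.0e}:  eigenvector error ~ {errors[-1]:.1e}")
```

For a well-separated eigenvalue, as here, the eigenvector error shrinks roughly in proportion to the eigenvalue tolerance.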
In the article by Hecker & Lurie (2007) – see the references below – the authors raise a number of computational questions related to their use of a least-squares algorithm to find an approximate eigenvector. These questions include the following:
1. How well does the algorithm work?
2. What effect does the choice of the random vector in the algorithm have on the algorithm performance?
3. How does the size of the eigenvalue closest to the approximate eigenvalue affect the error?
4. How does the algorithm behave as the size of the matrix increases?
5. Can the algorithm be used to find a second vector for a two-dimensional eigenspace?
6. How does the Jordan block pattern of the matrix affect the error?
The authors report simulations that address these and other questions. A number of numerical and computational questions seem to remain open.
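To make the questions concrete, here is a minimal sketch of a least-squares routine of the kind discussed (an approximation in the spirit of the algorithm, not necessarily the authors' exact procedure), applied to a matrix with a Jordan block so that questions 2 and 6 can be experimented with:

```python
import numpy as np

def lstsq_eigvec(A, mu, b):
    """Approximate eigenvector for the shift mu, via a least-squares solve
    of the shifted system against a probe vector b. A sketch, not the
    authors' exact routine."""
    x, *_ = np.linalg.lstsq(A - mu * np.eye(A.shape[0]), b, rcond=None)
    return x / np.linalg.norm(x)

# Touching on question 2: try several random probe vectors and compare
# the eigen-residual ||A v - mu v|| of the vector each one produces.
rng = np.random.default_rng(7)
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])   # a 2x2 Jordan block at 2, plus eigenvalue 5
mu = 2.0 + 1e-3                   # approximate eigenvalue near 2

residuals = []
for _ in range(5):
    v = lstsq_eigvec(A, mu, rng.standard_normal(3))
    residuals.append(np.linalg.norm(A @ v - mu * v))
print(["%.1e" % r for r in residuals])
```

Varying the matrix size, the gap to the nearest eigenvalue, and the Jordan structure in this harness gives a quick way to reproduce the flavor of the simulations the authors describe.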
References & readings
- David Hecker & Deborah Lurie (2007), "Using least-squares to find an approximate eigenvector", Electronic Journal of Linear Algebra, Vol. 16, pp. 99-110.