Research in Scientific Computing in Undergraduate Education

Computing approximate eigenvectors

Eigenvectors are calculated from eigenvalues. Eigenvalues, in turn, are roots of the characteristic polynomial of a matrix. Calculating eigenvalues precisely is a delicate task, and often all that is known is that an eigenvalue has been computed to within a certain tolerance. What then of the corresponding eigenvector? How accurately can an eigenvector be calculated given a tolerance in the calculation of the eigenvalue?
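To make the question concrete, here is a small numerical experiment (my own illustration, not taken from the article): perturb a known eigenvalue by a small tolerance and recover the eigenvector by inverse iteration, a standard technique for exactly this situation. The matrix, the perturbation size, and the iteration count are all arbitrary choices for illustration.

```python
import numpy as np

# Illustrative matrix with known eigenpairs: eigenvalues 1, 2, 5 with
# eigenvectors the standard basis vectors.
A = np.diag([1.0, 2.0, 5.0])
exact_vec = np.array([0.0, 1.0, 0.0])   # eigenvector for eigenvalue 2

lam = 2.0 + 1e-6                        # eigenvalue known only to a tolerance
x = np.random.default_rng(0).standard_normal(3)  # random starting vector

# A few steps of inverse iteration: solving (A - lam*I)y = x strongly
# amplifies the component of x along the eigenvector nearest lam.
for _ in range(3):
    x = np.linalg.solve(A - lam * np.eye(3), x)
    x /= np.linalg.norm(x)

error = np.linalg.norm(np.abs(x) - exact_vec)  # abs() ignores the sign flip
print(error)
```

Even though the eigenvalue is off by 1e-6, the recovered eigenvector is far more accurate, because the error in the eigenvector direction shrinks geometrically with each solve.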

In the article by Hecker & Lurie (2007) – see the references below – the authors ask a number of computational questions related to their use of a least-squares algorithm to find an approximate eigenvector. These questions include the following:

1. How well does the algorithm work?

2. What effect does the choice of the random vector in the algorithm have on its performance?

3. How does the size of the eigenvalue closest to the approximate eigenvalue λ affect the error?

4. How does the algorithm behave as the size of the matrix increases?

5. Can the algorithm be used to find a second vector for a two-dimensional eigenspace?

6. How does the Jordan block pattern of the matrix affect the error?

The authors report simulations that address these and other questions. A number of numerical and computational questions appear to remain open.
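For readers who want to experiment with questions like 1 and 2 themselves, the following sketch shows one plausible least-squares formulation of the problem. This is my own reconstruction in the spirit of the article, not necessarily the authors' exact algorithm: given an approximate eigenvalue lam, append a random row b to (A − lam·I) and solve the resulting overdetermined system asking that (A − lam·I)x ≈ 0 while b·x ≈ 1, which rules out the trivial solution x = 0. The specific matrix and tolerance are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[4.0, 1.0],
              [0.0, 3.0]])
lam = 3.0 + 1e-8                 # approximate eigenvalue near the true value 3

# Random vector (cf. question 2): its choice fixes the normalization b.x = 1.
b = rng.standard_normal(2)

# Overdetermined system: first two rows ask (A - lam*I)x = 0,
# the appended row asks b.x = 1 to exclude x = 0.
M = np.vstack([A - lam * np.eye(2), b])
rhs = np.array([0.0, 0.0, 1.0])
x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
x /= np.linalg.norm(x)

# Residual norm ||Ax - lam*x|| measures how well the algorithm worked
# (cf. question 1).
residual = np.linalg.norm(A @ x - lam * x)
print(residual)
```

Varying the seed of the random vector, the perturbation of lam, or the matrix size turns this sketch into a direct experiment on questions 1–4 above.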

References & readings
