Simpler than what Matt has suggested is to just use matrix multiplication, coupled with deflation.
That is, can you find the LARGEST magnitude eigenvalue? Yes. That is easy, as long as one eigenvalue is strictly larger in magnitude than all the rest.
1. Choose some random vector. Call it V0. Scale V0 to have unit norm, so V0 = V0/norm(V0).
2. In a loop, compute V1 = A*V0, and then V1norm = norm(V1). If V0 is an eigenvector of A, then V1norm will be the magnitude of the associated eigenvalue. If V0 has components along several eigenvectors, repeated multiplication by A tends to amplify the component along the largest-magnitude eigenvector until it dominates the rest.
3. Replace V0 = V1/V1norm.
4. Return to step 2, until the value of V1norm converges. The result will be the largest magnitude eigenvalue of A.
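Translating that loop into NumPy terms (the function name and the 3x3 test matrix are my own illustration, not part of the original MATLAB):

```python
import numpy as np

def power_iteration(A, tol=1e-12, maxit=10000):
    """Steps 1-4: iterate V1 = A*V0 until norm(V1) converges."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)               # step 1: random unit vector
    lastnorm = 0.0
    for _ in range(maxit):
        w = A @ v                        # step 2: multiply by A
        wnorm = np.linalg.norm(w)        # -> |largest eigenvalue|
        v = w / wnorm                    # step 3: rescale
        if abs(wnorm - lastnorm) < tol:  # step 4: converged?
            break
        lastnorm = wnorm
    return wnorm, v

# Illustrative symmetric matrix, eigenvalues 3-sqrt(3), 3, 3+sqrt(3)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
val, vec = power_iteration(A)            # val ~ 3 + sqrt(3) ~ 4.732
```

Note that the norm converges to the *magnitude* of the dominant eigenvalue, which is exactly the sign issue the code below works around.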
Next, perform deflation, to kill off that eigenvalue/eigenvector pair. This is done as:
5. A = A - V1norm * V0 * V0';
6. Choose some new random vector V0. Scale V0 to have unit norm as you did in step 1.
7. Return to the looping process in steps 2, 3, 4.
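Step 5 looks like this in NumPy terms (a sketch; I use `numpy.linalg.eigh` as a stand-in for the loop of steps 1-4, and the 3x3 matrix is only an illustration):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])      # eigenvalues 3-sqrt(3), 3, 3+sqrt(3)
w, Q = np.linalg.eigh(A)             # stand-in for steps 1-4
val, vec = w[-1], Q[:, -1]           # dominant eigenpair, vec has unit norm
A2 = A - val * np.outer(vec, vec)    # step 5: deflate
# A2 keeps the other eigenvalues of A, while the dominant one has
# been replaced by zero, so steps 1-4 on A2 find the next eigenvalue.
```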
The code below does the above, except that I realized my pseudo-code has a subtle problem for negative eigenvalues, so instead of using norm(V1) as the eigenvalue estimate, it uses the (signed) ratio of corresponding elements of V1 and V0.
valtol = 1e-12; val = inf; lastval = 0;
V0 = randn(size(A,2),1); V0 = V0/norm(V0);
while abs(abs(val) - abs(lastval)) > valtol
  lastval = val; V1 = A*V0;
  [~,maxel] = max(abs(V0));   % index of the largest element of V0
  val = V1(maxel)/V0(maxel);  % signed eigenvalue estimate
  V0 = V1/norm(V1);
end
So 101 iterations were required to find the largest eigenvalue of A. It may happen to be a negative number, but that is ok.
See that A2 has the same eigenvalues as A, but the largest magnitude eigenvalue has been killed off. Now you can repeat the above process, until you have found all three eigenvalues and eigenvectors.
This scheme will have problems only when two or more eigenvalues share the same magnitude, whether from a repeated eigenvalue or a plus/minus pair. The scheme I have described is often called the power method.
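A concrete instance of that failure mode (my own 2x2 example): a matrix with eigenvalues +1 and -1 has no single dominant eigenvalue, so the iterate just oscillates with period 2 instead of converging.

```python
import numpy as np

# Eigenvalues of this matrix are +1 and -1: equal magnitude, so the
# power method has no dominant eigenvalue to converge to.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
v0 = np.array([2.0, 1.0]) / np.sqrt(5.0)  # already unit norm
v1 = A @ v0                               # components swap
v2 = A @ v1                               # swap back: v2 == v0
```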