
LECTURE 29

EIGENVALUES AND EIGENVECTORS


Given a square matrix A, a non-zero vector v is said to be an eigenvector of A if Av = λv for
some λ ∈ R. The number λ is referred to as the associated eigenvalue of A.
We first find the eigenvalues through the characteristic equation det(A − λI) = 0. The eigenvectors
are then found via row reduction and back substitution.
The zero vector is never an eigenvector but it is OK to have a zero eigenvalue.
If an n×n matrix A has n linearly independent eigenvectors and P is the matrix with those
eigenvectors as its columns, then P⁻¹AP = D where D is the diagonal matrix of eigenvalues. The order
of the eigenvalues in D must match the order of the eigenvectors in P. This is referred to as
the diagonalisation of A.
A matrix can be non-diagonalisable by coming up short on eigenvectors. The only general way
to find out if a matrix has a full set of eigenvectors is to find them all.
A useful check is the fact that Σ(eigenvalues) = Trace(A).
Eigenvectors from different eigenvalues are linearly independent.
Eigenvectors from different eigenvalues for symmetric matrices are perpendicular.
Establishing the eigen-analysis of a particular matrix gives you a clear vision of the internal
workings of that matrix, and through diagonalisation the matrix may be transformed into a
more workable diagonal structure.
Consider the matrix A = \begin{pmatrix} 8 & -10 \\ 5 & -7 \end{pmatrix} and let's have a look at what A does to a random vector:

\begin{pmatrix} 8 & -10 \\ 5 & -7 \end{pmatrix} \begin{pmatrix} \phantom{0} \\ \phantom{0} \end{pmatrix} = \begin{pmatrix} \phantom{0} \\ \phantom{0} \end{pmatrix}

...it's nothing special!
But now consider

\begin{pmatrix} 8 & -10 \\ 5 & -7 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = \begin{pmatrix} 6 \\ 3 \end{pmatrix} = 3\begin{pmatrix} 2 \\ 1 \end{pmatrix}

Observe that A simply triples this vector! We say that v = \begin{pmatrix} 2 \\ 1 \end{pmatrix} is an eigenvector
of A with associated eigenvalue λ = 3.
How do we find all the eigenvectors and eigenvalues of a matrix A? Well,

Av = λv  ⇔  Av = λIv  ⇔  Av − λIv = 0  ⇔  (A − λI)v = 0.
Now v = 0 is the trivial solution to the above matrix equation and we are seeking non-trivial
solutions. Thus the matrix A − λI must be non-invertible and hence we demand that

det(A − λI) = 0.

This is called the characteristic equation and generates the eigenvalues. 2×2 matrices
have a quadratic characteristic equation and 3×3 matrices will have a cubic characteristic
equation. Once you have the eigenvalues you can then find the eigenvectors by solving
(A − λI)v = 0 using row reduction.
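To see why a 2×2 matrix gives a quadratic, write A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. Then

\det(A - \lambda I) = \det\begin{pmatrix} a-\lambda & b \\ c & d-\lambda \end{pmatrix} = \lambda^2 - (a+d)\lambda + (ad - bc) = \lambda^2 - \mathrm{Trace}(A)\,\lambda + \det(A),

and since the roots of this quadratic sum to a + d, this is also where the trace check
Σ(eigenvalues) = Trace(A) comes from in the 2×2 case.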
Example 1 Find all the eigenvalues and eigenvectors of A = \begin{pmatrix} 8 & -10 \\ 5 & -7 \end{pmatrix} and hence
find an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D.

P = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}  □
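A quick numerical sanity check of this answer (a minimal numpy sketch; the notes themselves do the working by hand):

import numpy as np

A = np.array([[8.0, -10.0],
              [5.0,  -7.0]])
P = np.array([[2.0, 1.0],    # columns: eigenvectors for lambda = 3 and lambda = -2
              [1.0, 1.0]])

# Diagonalisation: P^-1 A P should be the diagonal matrix of eigenvalues.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))       # [[ 3.  0.]  [ 0. -2.]]

# Useful check: the eigenvalues sum to Trace(A) = 1.
print(np.trace(A))           # 1.0 = 3 + (-2)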
Example 2 Find all the eigenvalues and eigenvectors of A = \begin{pmatrix} -2 & 2 & -3 \\ 2 & 1 & -6 \\ -1 & -2 & 0 \end{pmatrix} and
hence diagonalise A.
P = \begin{pmatrix} 1 & -2 & 3 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} 5 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -3 \end{pmatrix}  □
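A sketch of how the working goes: the characteristic equation factorises as

\det(A - \lambda I) = -(\lambda - 5)(\lambda + 3)^2 = 0,

so λ = 5 and λ = −3 (a repeated root). For λ = −3, every row of A + 3I is a multiple of
(1, 2, −3), so (A + 3I)v = 0 collapses to the single equation x + 2y − 3z = 0; choosing the
two free variables independently gives the two independent eigenvectors in the last two
columns of P. This is why A still has a full set of three eigenvectors despite the repeated
eigenvalue.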
What is happening with the process of diagonalisation?

When we think of R³ we like to use {i, j, k} as a basis. But these vectors mean nothing
to A. If you were to ask A what it would prefer as a basis, it would respond "I'll have my
eigenvectors, thanks". A likes its eigenvectors since the action of A upon its
eigenvectors is simply contraction and elongation. If we are prepared to abandon {i, j, k}
and instead make A happy by using the coordinate system generated by its eigenvectors

\left\{ \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}, \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 3 \\ 0 \\ 1 \end{pmatrix} \right\}

then A transforms into the trivial matrix D. That is, P transforms the complicated A into
the very simple diagonal D via P⁻¹AP = D!
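To see this concretely, here is a minimal numpy sketch (the test vector x is an arbitrary choice for illustration): in eigenvector coordinates, the action of A is nothing more than scaling each coordinate by its eigenvalue.

import numpy as np

A = np.array([[-2.0,  2.0, -3.0],
              [ 2.0,  1.0, -6.0],
              [-1.0, -2.0,  0.0]])
P = np.array([[ 1.0, -2.0, 3.0],   # columns are the eigenvectors above
              [ 2.0,  1.0, 0.0],
              [-1.0,  0.0, 1.0]])
D = np.diag([5.0, -3.0, -3.0])

x = np.array([1.0, 2.0, 3.0])      # an arbitrary vector
c = np.linalg.solve(P, x)          # coordinates of x in the eigenvector basis

# In the eigenbasis A merely stretches each coordinate by its eigenvalue,
# so A x and P (D c) must agree.
print(A @ x)
print(P @ (D @ c))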
Eigenvectors of Symmetric Matrices
The crucial feature enjoyed by symmetric matrices is that their eigen-analysis is perfectly
formed.
Theorem: Let A be a real symmetric matrix. Then
I) All eigenvalues of A are real.
II) There is always a full set of eigenvectors.
III) Eigenvectors corresponding to different eigenvalues are orthogonal.
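Here is a small numpy sketch of the theorem in action (the symmetric matrix S is chosen purely for illustration):

import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # a real symmetric matrix

vals, vecs = np.linalg.eigh(S)   # eigh is numpy's routine for symmetric matrices
print(vals)                      # real eigenvalues: [1. 3.]
print(vecs[:, 0] @ vecs[:, 1])   # ~0.0: eigenvectors from different eigenvalues are orthogonal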
Property III) is especially significant. Let's take a look at the proof.
Proof III Let A be a symmetric matrix and let v and w be two eigenvectors of A
associated with distinct eigenvalues λ and μ respectively. Then

\begin{align*}
\lambda(v \cdot w) &= (\lambda v) \cdot w && \text{(property of scalar products)} \\
&= (Av) \cdot w && \text{(definition of eigenvector)} \\
&= (Av)^T w && \text{(rewrite $Av$ as a row vector (a $1 \times n$ matrix))} \\
&= v^T A^T w && \text{(property of matrix transpose)} \\
&= v^T A w && \text{(since $A^T = A$)} \\
&= v^T (\mu w) && \text{(definition of eigenvector)} \\
&= \mu (v^T w) && \text{(property of matrix multiplication)} \\
&= \mu (v \cdot w) && \text{(rewrite as scalar product)}
\end{align*}

Hence λ(v · w) = μ(v · w) ⇒ (λ − μ)(v · w) = 0. But λ ≠ μ implies that v · w = 0
and hence we have v ⊥ w. □
In the above proof we have used the fact that scalar products can always be expressed
as matrix products. That is, u · v = uᵀv.
For example

\begin{pmatrix} 2 \\ 1 \\ 5 \end{pmatrix} \cdot \begin{pmatrix} 4 \\ 0 \\ 3 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 5 \end{pmatrix}^T \begin{pmatrix} 4 \\ 0 \\ 3 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 5 \end{pmatrix} \begin{pmatrix} 4 \\ 0 \\ 3 \end{pmatrix} = 23
You can now do Q 90