
4.4 Similarity and Diagonalization


Similar Matrices

Let A and B be n × n matrices. We say that A is similar to B if there is an invertible n × n matrix P such that P⁻¹AP = B. If A is similar to B, we write A ∼ B.

Some remarks:

1. If A ∼ B, then there is an invertible matrix P such that P⁻¹AP = B or, equivalently, A = PBP⁻¹, or AP = PB.

2. P depends on A and B and is not unique.

Example (Example 4.22)
The matrix A = [1 2; 0 −1] (rows separated by semicolons) is similar to B = [1 0; −2 −1]. Let P = [1 −1; 1 1].

Then det P = 1 + 1 = 2 ≠ 0, so P is invertible, and

AP = [3 1; −1 −1] = PB.
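
Since AP = PB with P invertible, multiplying on the left by P⁻¹ gives P⁻¹AP = B. As a quick check, P⁻¹ = (1/2)[1 1; −1 1], and
P⁻¹(AP) = (1/2)[1 1; −1 1] [3 1; −1 −1] = (1/2)[2 0; −4 −2] = [1 0; −2 −1] = B.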

Theorem 1 Let A, B and C be n × n matrices.

1. A ∼ A.

2. If A ∼ B then B ∼ A.

3. If A ∼ B and B ∼ C, then A ∼ C.

A relation which is reflexive, symmetric and transitive is called an equivalence relation, so Theorem 1 says that similarity is an equivalence relation on the set of n × n matrices.
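
A sketch of why each property holds, using only the definition:
1. I⁻¹AI = A, so A ∼ A.
2. If P⁻¹AP = B, then A = PBP⁻¹ = (P⁻¹)⁻¹B(P⁻¹), so B ∼ A (using the invertible matrix P⁻¹).
3. If P⁻¹AP = B and Q⁻¹BQ = C, then (PQ)⁻¹A(PQ) = Q⁻¹(P⁻¹AP)Q = Q⁻¹BQ = C, so A ∼ C.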

Theorem 2 Let A and B be n × n matrices
with A ∼ B. Then

1. det A = det B.

2. A is invertible if and only if B is invertible.

3. A and B have the same rank.

4. A and B have the same characteristic polynomial.

5. A and B have the same eigenvalues.
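
A sketch of parts 1 and 4; the others follow along the same lines. If B = P⁻¹AP, then
det B = det(P⁻¹) det A det P = (1/det P) det A det P = det A,
and
det(B − λI) = det(P⁻¹AP − λP⁻¹P) = det(P⁻¹(A − λI)P) = det(A − λI),
so A and B have the same characteristic polynomial. Part 5 then follows, since the eigenvalues are the roots of the characteristic polynomial.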

Example (Example 4.23)
Show that A and B are not similar where

" # " #
3 −1 2 1
1. A = and B = .
−5 7 −4 3

   
1 2 0 2 1 1
2. A = 0 1 −1 and B = 0 1 0.
   
0 −1 1 2 0 1

   
1 2 0 2 1 1
3. A = 0 1 −1 and B = 0 1 0.
   
0 0 0 0 0 1
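
One way to settle each part using Theorem 2 (a sketch):
1. det A = 21 − 5 = 16 but det B = 6 + 4 = 10, so A and B are not similar.
2. Here det A = det B = 0 and both matrices have rank 2, so compare characteristic polynomials: det(A − λI) = −λ(λ − 1)(λ − 2) while det(B − λI) = −λ(λ − 1)(λ − 3). The eigenvalues differ (2 versus 3), so A and B are not similar.
3. det A = 0 but det B = 2, so A is not invertible while B is; hence A and B are not similar.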

Diagonalization

An n × n matrix A is diagonalizable if there is a diagonal matrix D such that A is similar to D, that is, if there is an invertible n × n matrix P such that P⁻¹AP = D.

Example (Example 4.24)
A = [1 3; 2 2] is diagonalizable: let P = [1 3; 1 −2] and let D = [4 0; 0 −1].

Then det P = −2 − 3 = −5 ≠ 0, so P is invertible, and

AP = [4 −3; 4 2] = PD.
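
Where P and D come from (a sketch, anticipating Theorem 3): det(A − λI) = (1 − λ)(2 − λ) − 6 = λ² − 3λ − 4 = (λ − 4)(λ + 1), so the eigenvalues are 4 and −1. Solving (A − 4I)x = 0 gives the eigenvector [1; 1], and solving (A + I)x = 0 gives [3; −2]. These eigenvectors are the columns of P, and the corresponding eigenvalues 4 and −1 are the diagonal entries of D, in the same order.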

Theorem 3 Let A be an n × n matrix. Then A is diagonalizable if and only if A has n linearly independent eigenvectors.

More precisely, there exists an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D if and only if the columns of P are n linearly independent eigenvectors of A and the diagonal entries of D are the eigenvalues of A corresponding to the eigenvectors in P, in the same order.

Proof Summary:
Suppose P⁻¹AP = D or, equivalently, AP = PD. Let p1, p2, . . . , pn be the columns of P and λ1, λ2, . . . , λn the diagonal entries of D. Then

AP = [Ap1 Ap2 · · · Apn] = [λ1p1 λ2p2 · · · λnpn] = PD.

Equating columns gives
Ap1 = λ1p1, Ap2 = λ2p2, . . . , Apn = λnpn.
Thus the λi are eigenvalues of A and the pi are corresponding eigenvectors. Since P is invertible, its columns are linearly independent (FTIM).

Conversely, if p1, p2, . . . , pn are n linearly independent eigenvectors of A and λ1, λ2, . . . , λn are the corresponding eigenvalues, then
Ap1 = λ1p1, Ap2 = λ2p2, . . . , Apn = λnpn.
Hence if we take P to be the matrix with these eigenvectors as columns and D the diagonal matrix with the corresponding eigenvalues on its diagonal, then AP = PD as above. By the FTIM, since the columns of P are linearly independent, P is invertible, so P⁻¹AP = D.
Example
If possible, find a matrix P that diagonalizes A = [0 0 −2; 1 2 1; 1 0 3].
(Hint: the characteristic polynomial is −(λ − 1)(λ − 2)², E1 = span{[−2; 1; 1]} and E2 = span{[−1; 0; 1], [0; 1; 0]}.)
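
Working from the hint (a sketch): the eigenspaces together supply 1 + 2 = 3 linearly independent eigenvectors, so A is diagonalizable. One possible choice is P = [−2 −1 0; 1 0 1; 1 1 0] and D = [1 0 0; 0 2 0; 0 0 2], with the eigenvector for λ = 1 in the first column and the eigenvectors for λ = 2 in the last two columns, matching the order of the diagonal entries of D. Any other ordering of the columns, with D reordered to match, works too, which is one reason P is not unique.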

Example
If possible, find a matrix P that diagonalizes A = [1 0 0; 1 2 0; −3 5 2].
(Hint: the characteristic polynomial is −(λ − 1)(λ − 2)², E1 = span{[1; −1; 8]} and E2 = span{[0; 0; 1]}.)
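
Here the eigenvalue 2 has algebraic multiplicity 2 but E2 is only one-dimensional, so A has at most 1 + 1 = 2 linearly independent eigenvectors. By Theorem 3, A is not diagonalizable, and no such P exists.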

Theorem 4 (Theorem 4.24) Let A be an n × n matrix and let λ1, λ2, . . . , λk be distinct eigenvalues of A. If Bi is a basis for the eigenspace Eλi, then B = B1 ∪ B2 ∪ · · · ∪ Bk (i.e., the total collection of basis vectors of all the eigenspaces) is linearly independent.

Theorem 5 (Theorem 4.25) If A is an n × n matrix with n distinct eigenvalues, then A is diagonalizable.

Example
Let A = [−2 3 1; 0 7 −1; 0 0 −3]. Is A diagonalizable?
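
Since A is upper triangular, its eigenvalues are its diagonal entries −2, 7 and −3. These are three distinct eigenvalues of a 3 × 3 matrix, so A is diagonalizable by Theorem 5.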

Theorem 6 (Lemma 4.26) If A is an n × n
matrix, then the geometric multiplicity of each
eigenvalue is less than or equal to its algebraic
multiplicity.
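
For example, for A = [1 1; 0 1] the eigenvalue λ = 1 has algebraic multiplicity 2, since the characteristic polynomial is (1 − λ)², but A − I = [0 1; 0 0] has rank 1, so E1 is one-dimensional and the geometric multiplicity is only 1.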

Theorem 7 (The Diagonalization Theorem)
Let A be an n × n matrix whose distinct eigenvalues are λ1, λ2, . . . , λk. The following statements are equivalent:

1. A is diagonalizable.

2. The union B of the bases of the eigenspaces of A contains n vectors.

3. The algebraic multiplicity of each eigenvalue equals its geometric multiplicity.

Proof Outline:
1 ⇒ 2:
Assume A is diagonalizable. Then A has n linearly independent eigenvectors (Theorem 4.23). Each of them lies in some eigenspace and hence in the span of B, so the span of B has dimension at least n and B contains at least n vectors. But B is linearly independent in Rⁿ (Theorem 4.24), so it can't have more than n vectors; hence it has exactly n vectors.

2 ⇒ 3:
Assume B contains n vectors. Let the geometric and algebraic multiplicities of λi be di = dim Eλi and mi, respectively. Since B contains n vectors,
n = d1 + d2 + . . . + dk.
Since the degree of the characteristic polynomial is n,
n = m1 + m2 + . . . + mk.
Thus d1 + d2 + . . . + dk = m1 + m2 + . . . + mk, or equivalently,

(m1 − d1) + (m2 − d2) + . . . + (mk − dk) = 0.


Since mi ≥ di for i = 1, 2, . . . , k (Lemma 4.26), each term mi − di ≥ 0, and a sum of nonnegative terms can be 0 only if each term is 0. Hence mi = di for each i.

3 ⇒ 1:
Assume that mi = di (algebraic multiplicity =
geometric multiplicity) for each λi. Then B
has d1 + d2 + . . . + dk = m1 + m2 + . . . + mk = n
vectors that are linearly independent (Theo-
rem 4.24). Thus A has n linearly independent
eigenvectors, so A is diagonalizable (Theorem
4.23).
Example (Example 4.28)
Determine which of the following matrices are diagonalizable, and diagonalize them if possible.

1. [0 1 0; 0 0 1; 2 −5 4]

2. [−1 0 1; 3 0 −3; 1 0 −1]
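
A sketch of the answers:
1. The characteristic polynomial works out to −(λ − 1)²(λ − 2), so λ = 1 has algebraic multiplicity 2, but the matrix minus I, namely [−1 1 0; 0 −1 1; 2 −5 3], has rank 2, so E1 is one-dimensional. The geometric multiplicity 1 is less than the algebraic multiplicity 2, so this matrix is not diagonalizable.
2. The characteristic polynomial works out to −λ²(λ + 2), with E0 = span{[1; 0; 1], [0; 1; 0]} and E−2 = span{[1; −3; −1]}, so each algebraic multiplicity matches the geometric one and this matrix is diagonalizable. Calling it A, one choice is P = [1 0 1; 0 1 −3; 1 0 −1] and D = [0 0 0; 0 0 0; 0 0 −2], which give P⁻¹AP = D.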

Example
Show that A = [1 0; 0 1] and B = [1 1; 0 1] are not similar.
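
If B were similar to A = I, then B = P⁻¹IP = P⁻¹P = I for some invertible P, which is false. Note that A and B have the same determinant, rank, characteristic polynomial and eigenvalues, so the conditions in Theorem 2 are necessary but not sufficient for similarity.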

Example (Example 4.29)
Compute A⁹ if A = [−4 6; −3 5].
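
A sketch of the computation: diagonalize A and use A⁹ = (PDP⁻¹)⁹ = PD⁹P⁻¹. Here det(A − λI) = λ² − λ − 2 = (λ − 2)(λ + 1), with eigenvector [1; 1] for λ = 2 and [2; 1] for λ = −1. Taking P = [1 2; 1 1] and D = [2 0; 0 −1], we get P⁻¹ = [−1 2; 1 −1] and
A⁹ = PD⁹P⁻¹ = [1 2; 1 1] [512 0; 0 −1] [−1 2; 1 −1] = [−514 1026; −513 1025].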

Example
Find all real values of k for which A = [1 0 k; 0 1 0; 0 0 1] is diagonalizable.
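
Here λ = 1 is the only eigenvalue, with algebraic multiplicity 3, so A is diagonalizable only if E1 is all of R³, i.e., only if A − I = [0 0 k; 0 0 0; 0 0 0] is the zero matrix. That forces k = 0, and for k = 0 the matrix A = I is already diagonal, so A is diagonalizable exactly when k = 0.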
