
Linear Algebra

Department of Mathematics
Indian Institute of Technology Guwahati

January – May 2019

MA 102 (RA, RKS, MGPP, KVK)

Eigenvalues and eigenvectors

Topics:
Determinant of Matrices
Eigenvalues and Eigenvectors

Determinant of a matrix

The determinant det(A) of a 2 × 2 matrix A is defined by

    det(A) = det [ a11  a12 ; a21  a22 ] = a11 a22 − a12 a21.

If det(A) ≠ 0, then |det(A)| is the area of the parallelogram formed by the
vectors Ae1 and Ae2.

The determinant det(A) of a 3 × 3 matrix A is defined by

    det(A) = det [ a11  a12  a13 ; a21  a22  a23 ; a31  a32  a33 ]
           = a11 det [ a22  a23 ; a32  a33 ]
             − a12 det [ a21  a23 ; a31  a33 ]
             + a13 det [ a21  a22 ; a31  a32 ].

If det(A) ≠ 0, then |det(A)| is the volume of the parallelepiped formed by the
vectors Ae1, Ae2 and Ae3.
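As a quick numerical sketch of the 2 × 2 formula and its area interpretation (using NumPy, which is an illustration aid, not part of the course material):

```python
import numpy as np

# A 2x2 matrix with nonzero determinant.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# det(A) = a11*a22 - a12*a21, computed directly from the definition.
det_by_formula = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]

# |det(A)| is the area of the parallelogram spanned by A e1 and A e2;
# the library routine agrees with the hand formula.
print(det_by_formula)        # 5.0
print(np.linalg.det(A))      # ~5.0
```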
Determinant of an n × n matrix

Definition: Let A = [aij] ∈ Mn(F). The determinant of A is defined by

    det(A) = a11, if n = 1, i.e., A = [a11].

For n ≥ 2:

    det(A) = a11 det(A11) − a12 det(A12) + · · · + (−1)^(1+n) a1n det(A1n),

where Aij is the submatrix of A obtained by deleting the i-th row and the
j-th column of A. Note that det(A) ∈ F.

Example:

    det [ 1 5 0 0 ; 2 0 8 0 ; 3 6 9 0 ; 4 7 10 1 ]
      = 1 · det [ 0 8 0 ; 6 9 0 ; 7 10 1 ] − 5 · det [ 2 8 0 ; 3 9 0 ; 4 10 1 ]
      = −18.

Theorem: The determinant of a diagonal, upper triangular or lower triangular
matrix is the product of its diagonal entries.

Proof: Use induction on the size of the matrix.
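The recursive definition translates directly into code. A minimal sketch (O(n!) cofactor expansion, for illustration only; real software uses elimination), verifying the 4 × 4 example:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row,
    exactly as in the definition above."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # A1j: delete row 0 and column j of A.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[1, 5, 0, 0],
              [2, 0, 8, 0],
              [3, 6, 9, 0],
              [4, 7, 10, 1]], dtype=float)
print(det_cofactor(A))   # -18.0, matching the worked example
```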
Properties of Determinant

Let A be an n × n matrix.

1. If B is obtained by interchanging two rows of A, then det(B) = −det(A).
2. If A has a zero row, then det(A) = 0.
3. If A has two identical rows, then det(A) = 0.
4. If B is obtained by multiplying a row of A by a scalar α, then
   det(B) = α det(A).
5. If the matrices A, B and C are identical except that one of the rows of C
   is the sum of the corresponding rows of A and B, then
   det(C) = det(A) + det(B).
6. If B is obtained by adding a multiple of one row of A to another row, then
   det(B) = det(A).
7. det(αA) = α^n det(A).
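Properties 1, 4 and 7 can be spot-checked numerically on a random matrix (a NumPy sketch, not part of the course material):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
n = A.shape[0]

# Property 1: interchanging two rows flips the sign of the determinant.
B = A.copy(); B[[0, 2]] = B[[2, 0]]
assert np.isclose(np.linalg.det(B), -np.linalg.det(A))

# Property 4: scaling one row scales the determinant by the same factor.
C = A.copy(); C[1] *= 3.0
assert np.isclose(np.linalg.det(C), 3.0 * np.linalg.det(A))

# Property 7: det(alpha * A) = alpha^n * det(A).
assert np.isclose(np.linalg.det(2.0 * A), 2.0 ** n * np.linalg.det(A))
print("row-swap, row-scaling and alpha^n properties verified")
```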
Expansion of det along rows and columns

Theorem: Let A ∈ Mn(F). Then for any i, j ∈ {1, . . . , n},

    det(A) = (−1)^(i+1) ai1 det(Ai1) + · · · + (−1)^(i+n) ain det(Ain)
             (expansion along the i-th row)
           = (−1)^(1+j) a1j det(A1j) + · · · + (−1)^(n+j) anj det(Anj)
             (expansion along the j-th column).

Theorem: Determinants of elementary matrices:

    det(Eij) = −1,   det(Ei(α)) = α,   det(Eij(α)) = 1.

(Here Eij interchanges rows i and j of the identity, Ei(α) multiplies row i
by α, and Eij(α) adds α times row j to row i.)
For any elementary matrix E, det(E) = det(E^T).
For any elementary matrix E, det(EA) = det(E) det(A).
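The three determinant values for elementary matrices, and the multiplicativity det(EA) = det(E) det(A), can be checked directly. A sketch assuming the standard convention that each elementary matrix is the identity with one row operation applied:

```python
import numpy as np

n = 3
I = np.eye(n)

# E_ij: interchange rows 0 and 1 of the identity  ->  det = -1.
E_swap = I.copy(); E_swap[[0, 1]] = E_swap[[1, 0]]
assert np.isclose(np.linalg.det(E_swap), -1.0)

# E_i(alpha): multiply row 2 of the identity by alpha  ->  det = alpha.
alpha = 5.0
E_scale = I.copy(); E_scale[2, 2] = alpha
assert np.isclose(np.linalg.det(E_scale), alpha)

# E_ij(alpha): add alpha times row 1 to row 0  ->  det = 1.
E_add = I.copy(); E_add[0, 1] = alpha
assert np.isclose(np.linalg.det(E_add), 1.0)

# det(EA) = det(E) det(A) for an elementary E and any A.
A = np.arange(1.0, 10.0).reshape(3, 3) + np.eye(3)   # det(A) = -2, nonzero
assert np.isclose(np.linalg.det(E_swap @ A),
                  np.linalg.det(E_swap) * np.linalg.det(A))
print("elementary-matrix determinants verified")
```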
Two important properties

Suppose A ∈ Mn(F). Then A = Ek · · · E1 · rref(A) for some elementary
matrices Ei. We have

    det(A) = det(Ek) · · · det(E1) det(rref(A)).

Thus,

    det(A) ≠ 0  iff  det(rref(A)) ≠ 0  iff  rref(A) = In  iff  A is invertible.

Definition: A matrix A ∈ Mn(F) is said to be singular or non-singular
according as det(A) = 0 or det(A) ≠ 0.

Theorem: A matrix A ∈ Mn(F) is invertible if and only if it is non-singular.

Theorem:
(a) For any matrix A ∈ Mn(F), det(A) = det(A^T).
(b) For any A, B ∈ Mn(F), det(AB) = det(A) det(B).
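Parts (a) and (b) of the last theorem are easy to verify numerically (again a NumPy illustration, not part of the lecture material):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (a) det(A) = det(A^T).
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))

# (b) det(AB) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# Non-singularity test: a random Gaussian matrix is invertible
# with probability 1, so its determinant is nonzero.
print(abs(np.linalg.det(A)) > 1e-12)
```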
Eigenvalues and Eigenvectors

Suppose V is a vector space and T : V → V is a linear transformation.
Suppose B = {v1, . . . , vn} is a basis of V and λi ∈ F are such that
T(vi) = λi vi. Then T is nicely described by

    T(α1 v1 + · · · + αn vn) = λ1 α1 v1 + · · · + λn αn vn.

Remark:
    [T(vi)]B = λi [vi]B.
    [T]B = [ [T(v1)]B, . . . , [T(vn)]B ] = diag{λ1, . . . , λn}.

Definition: Let T ∈ L(V). If there is a scalar λ ∈ F and a nonzero vector
v ∈ V such that T(v) = λv, then λ is called an eigenvalue of T and v an
eigenvector of T corresponding to λ.
Eigenvalues and eigenvectors

Definition: Let A ∈ Mn(F). If Ax = λx for some λ ∈ F and 0 ≠ x ∈ Fn, then
λ is called an eigenvalue of A and x an eigenvector of A corresponding to λ.

Fact: Suppose dim(V) = n and T : V → V is a linear transformation. Then
T(v) = λv iff [T(v)]B = λ[v]B for any ordered basis B of V. Thus it is
enough to study eigenvalues/eigenvectors of square matrices.

Example: Consider the matrix A = [ 1 3 ; 3 1 ]. Then 4 is an eigenvalue of A
with [1, 1]^T as a corresponding eigenvector.

Next, consider the rotation matrix A = [ cos θ  −sin θ ; sin θ  cos θ ].
Then, for θ not a multiple of π, Av ≠ λv for any λ ∈ R and 0 ≠ v ∈ R².

Henceforth, we consider F = C.
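The first example above can be confirmed with a library eigensolver (a NumPy sketch for illustration):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [3.0, 1.0]])

# eig returns the eigenvalues and unit-norm eigenvectors (as columns).
vals, vecs = np.linalg.eig(A)
print(sorted(vals))   # eigenvalues 4 and -2, up to rounding

# Check A v = lambda v for each eigenpair.
for k in range(2):
    assert np.allclose(A @ vecs[:, k], vals[k] * vecs[:, k])
```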
Eigenspace

Definition: Let λ be an eigenvalue of a matrix A ∈ Mn(C). Then

    Eλ = {x | Ax = λx}

is called the eigenspace of λ. Eλ consists of all eigenvectors of A
corresponding to λ, together with the zero vector.

Theorem: Let A ∈ Mn(C). Then the following conditions are equivalent.
    λ ∈ C is an eigenvalue of A.
    Eλ = null(A − λI) is a nontrivial subspace of Cn.
    rank(A − λI) < n.
    det(A − λI) = 0.

Definition: Let A be an n × n matrix. Then
    det(A − λI) is called the characteristic polynomial of A.
    det(A − λI) = 0 is called the characteristic equation of A.
Facts:
    The eigenvalues of A are the zeros of the characteristic polynomial
    det(A − λI) of A. Hence A has at most n eigenvalues.
    The eigenvalues of a triangular matrix are its diagonal entries.
    A is invertible iff 0 is not an eigenvalue of A.

Example: What are the eigenvalues of A = [ 1 3 ; 3 1 ] and their
eigenspaces? We have

    det(A − λI) = det [ 1−λ  3 ; 3  1−λ ] = 0  ⇔  λ = 4, −2,

    E4 = null(A − 4I) = { x : [ −3 3 ; 3 −3 ] x = 0 } = span([1, 1]^T),

    E(−2) = null(A + 2I) = { x : [ 3 3 ; 3 3 ] x = 0 } = span([1, −1]^T).
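Finding an eigenspace means computing a null-space basis. A small sketch (the SVD-based `null_space_basis` helper is our own illustration, not a standard routine) recovering E4 for the matrix above:

```python
import numpy as np

def null_space_basis(M, tol=1e-10):
    """Orthonormal basis of null(M): rows of Vh whose singular
    values are (numerically) zero, returned as columns."""
    _, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vh[rank:].T

A = np.array([[1.0, 3.0],
              [3.0, 1.0]])

# E4 = null(A - 4I) is one-dimensional, spanned by a multiple of [1, 1]^T.
E4 = null_space_basis(A - 4 * np.eye(2))
v = E4[:, 0]
assert np.allclose(A @ v, 4 * v)
print(np.isclose(v[0], v[1]))   # both components equal: v ∝ [1, 1]^T
```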
Computation of eigenvalues and bases for eigenspaces

1. Compute the char. poly. det(A − λI).
2. Solve the char. eqn. det(A − λI) = 0 for eigenvalues λ.
3. For each eigenvalue λ, find a basis of null(A − λI) = Eλ.

Example
A real matrix may have complex eigenvalues and complex eigenvectors. Consider A = [0 1; −1 0]. The char. poly. of A is λ² + 1 and so the eigenvalues are ±i. Corresponding to the eigenvalues ±i, it has eigenvectors [1, i]> and [i, 1]>, respectively. The eigenspaces Ei and E(−i) are one-dimensional.

12 / 21
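A minimal sketch of the same computation in NumPy (not from the slides): `np.linalg.eig` happily returns complex eigenpairs for a real matrix.

```python
import numpy as np

# Real matrix with no real eigenvalues: a 90-degree rotation.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# The characteristic polynomial is lambda^2 + 1, so eigenvalues are +-i.
eigvals, eigvecs = np.linalg.eig(A)
assert np.allclose(np.sort_complex(eigvals), [-1j, 1j])

# The eigenvector [1, i]^T for eigenvalue i, as on the slide.
v = np.array([1.0, 1j])
assert np.allclose(A @ v, 1j * v)
```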
Algebraic and geometric multiplicities
Definition: Let λ be an eigenvalue of A ∈ Mn(C).
The algebraic multiplicity of λ is the multiplicity of λ as a root of the characteristic polynomial of A.
The geometric multiplicity of λ is the dimension of Eλ.

Theorem: Let λ be an eigenvalue of A. Then the algebraic multiplicity of λ ≥ the geometric multiplicity of λ.

Example: What are the eigenvalues and the corresponding eigenspaces of A = [1 1 1; 0 1 1; 0 0 1]? The eigenvalues are 1, 1, 1 (1 appearing three times, with algebraic multiplicity three). Moreover,

E1 = null([0 1 1; 0 0 1; 0 0 0]) = span([1, 0, 0]>).

We have dim(E1) = 1.
13 / 21
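The gap between the two multiplicities can be checked numerically (an illustrative sketch, not from the slides): the geometric multiplicity is dim null(A − λI) = n − rank(A − λI) by the rank-nullity theorem.

```python
import numpy as np

# Upper-triangular matrix with eigenvalue 1 of algebraic multiplicity 3.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Geometric multiplicity of lambda = 1 is 3 - rank(A - I).
geo_mult = 3 - np.linalg.matrix_rank(A - np.eye(3))

# Strictly smaller than the algebraic multiplicity 3, as the theorem allows.
assert geo_mult == 1
```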
Linearly independent eigenvectors
Theorem: Eigenvectors corresponding to distinct eigenvalues are linearly independent.
Proof. Suppose that λ1, . . . , λm are distinct eigenvalues of A and that xi is an eigenvector of A corresponding to λi. Suppose, to the contrary, that the xi are not LI. Then there is k > 1 such that {x1, . . . , xk−1} is LI but {x1, . . . , xk} is not. Let xk = α1 x1 + · · · + αk−1 xk−1. Then
A xk = λ1 α1 x1 + · · · + λk−1 αk−1 xk−1.

Therefore, 0 = λk xk − A xk = (λk − λ1)α1 x1 + · · · + (λk − λk−1)αk−1 xk−1.
Since {x1, . . . , xk−1} is LI, (λk − λ1)α1 = · · · = (λk − λk−1)αk−1 = 0, i.e., α1 = · · · = αk−1 = 0 (as the λi are distinct), i.e., xk = 0, a contradiction.
14 / 21
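The theorem can be observed numerically (a sketch under an assumed example matrix, not from the slides): for a matrix with distinct eigenvalues, the matrix whose columns are the eigenvectors has full rank.

```python
import numpy as np

# A triangular matrix, chosen so its eigenvalues 1, 2, 3 are distinct.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)

# Distinct eigenvalues, hence the eigenvectors (the columns of eigvecs)
# are linearly independent: the eigenvector matrix has full rank.
assert np.linalg.matrix_rank(eigvecs) == 3
```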
Cayley-Hamilton Theorem
Let p(λ) be the characteristic polynomial of a matrix A. Then p(A) = 0, the zero matrix.

[This is a beautiful and useful theorem. However, we omit the proof.]

Exercise
Let A = [1 2; 2 1].
Verify that the char. poly. of A is λ² − 2λ − 3.
Verify that A² − 2A − 3I = 0.
Use the fact A² = 2A + 3I to compute A⁵ without computing any matrix multiplication.
Argue that A is invertible using its char. poly. Since 3I = A² − 2A = A(A − 2I), we should have A⁻¹ = (1/3)(A − 2I). Verify.

15 / 21
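One way to work this exercise (an illustrative sketch, not the official solution): since A² = 2A + 3I, every power of A reduces to the form aA + bI, and the coefficients follow the recurrence A^(k+1) = A(aA + bI) = (2a + b)A + 3aI.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
I = np.eye(2)

# Cayley-Hamilton: A satisfies its characteristic polynomial l^2 - 2l - 3.
assert np.allclose(A @ A - 2 * A - 3 * I, np.zeros((2, 2)))

# Reduce A^5 to a*A + b*I without any explicit matrix multiplication:
# if A^k = a*A + b*I, then A^(k+1) = (2a + b)*A + 3a*I.
a, b = 1.0, 0.0              # A^1 = 1*A + 0*I
for _ in range(4):           # step up from A^1 to A^5
    a, b = 2 * a + b, 3 * a
A5 = a * A + b * I
assert np.allclose(A5, np.linalg.matrix_power(A, 5))

# The char. poly. also gives the inverse: 3I = A^2 - 2A = A(A - 2I).
assert np.allclose((A - 2 * I) / 3, np.linalg.inv(A))
```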
Exercise
Let A be a matrix with eigenvalue λ and corresponding eigenvector x.
For any positive integer n, show that λⁿ is an eigenvalue of Aⁿ with corresponding eigenvector x.
If A is invertible, then show that 1/λ is an eigenvalue of A⁻¹ with corresponding eigenvector x.
If A is invertible, then show that for any integer n, λ⁻ⁿ is an eigenvalue of A⁻ⁿ with corresponding eigenvector x.
Let v1, v2, . . . , vm be eigenvectors of a matrix A with corresponding eigenvalues λ1, λ2, . . . , λm, respectively. Let x = c1 v1 + c2 v2 + · · · + cm vm. Show that for any positive integer k,

A^k x = c1 λ1^k v1 + c2 λ2^k v2 + · · · + cm λm^k vm.

16 / 21
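The last identity can be sanity-checked numerically (a sketch reusing the earlier example A = [1 3; 3 1] with its eigenpairs (4, [1, 1]) and (−2, [1, −1]); not a proof):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [3.0, 1.0]])
v1 = np.array([1.0, 1.0])    # eigenvector for eigenvalue 4
v2 = np.array([1.0, -1.0])   # eigenvector for eigenvalue -2
c1, c2 = 2.0, 5.0
x = c1 * v1 + c2 * v2

# A^k x = c1 * 4^k * v1 + c2 * (-2)^k * v2 for every positive integer k.
for k in range(1, 7):
    lhs = np.linalg.matrix_power(A, k) @ x
    rhs = c1 * 4.0**k * v1 + c2 * (-2.0)**k * v2
    assert np.allclose(lhs, rhs)
```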
Google Search Engine

The task of extracting information from all the web pages available on the Internet is performed by search engines.

The Google search engine does three basic things:
Crawl the web and locate all web pages with public access.
Index the data (create a term-document matrix) from step 1 so that it can be searched efficiently for relevant keywords or phrases.
Rank each page in the database, so that when a user does a search and the subset of pages in the database with the desired information has been found, the more important pages can be presented first.

17 / 21
PageRank Algorithm
The Google PageRank algorithm assigns ranks to all the web pages and is formulated as a matrix eigenvalue problem.

The web is an example of a directed graph. Let all the web pages be ordered as P1, . . . , Pn. A link from Pi to Pj is represented as an arrow from Pi to Pj. Google assigns a rank to a page based on its in-links and out-links.

In-links of Pj: all the incoming links (pages) to Pj.
Out-links of Pj: all the outgoing links (pages) from Pj.

Google assigns a high rank to a web page if it has backlinks from other pages that have a high rank. This self-referencing statement can be formulated mathematically as an eigenvalue problem. However, a ranking system based only on the number of backlinks is easy to manipulate.

18 / 21
PageRank Algorithm (eigenvalue problem)
Let xj ≥ 0 be the rank of page Pj. Then xj > xk ⇒ Pj is more important than page Pk.
If page Pj contains nj out-links, one of which links to page Pk, then page Pk's score is boosted by xj/nj.

Let Lk denote the set of in-links of Pk. Then the rank xk of Pk is defined by

    xk = Σ_{j ∈ Lk} xj / nj,    (1)

where nj is the number of outgoing links (out-links) from page Pj.

Setting v := [x1, . . . , xn]^T, equation (1) is rewritten as the eigenvalue problem

    Hv = v,

where H is the hyperlink matrix and v is the PageRank vector.

19 / 21
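On a hypothetical 3-page web this eigenvalue problem is tiny enough to solve directly. The sketch below assumes the common column-stochastic convention H[k, j] = 1/nj when page j links to page k (so each column of H matches equation (1)); the example graph P1 → P2, P1 → P3, P2 → P3, P3 → P1 is invented for illustration.

```python
import numpy as np

# Hypothetical web: P1 -> P2, P1 -> P3, P2 -> P3, P3 -> P1.
# H[k, j] = 1/n_j when page j links to page k; each column sums to 1.
H = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(H)
i = np.argmax(eigvals.real)      # the dominant eigenvalue, which is 1
v = eigvecs[:, i].real
v = v / v.sum()                  # normalize the ranks to sum to 1

assert np.allclose(H @ v, v)     # v solves H v = v
# P1 and P3 tie for the highest rank; P2 ranks lowest.
assert np.allclose(v, [0.4, 0.2, 0.4])
```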
The Google matrix
The hyperlink matrix is modified to obtain the Google matrix

    G := (1 − m)H + mS,

where S is an n × n matrix with all entries 1/n and 0 ≤ m ≤ 1.

Both H and G have 1 as the dominant eigenvalue.
By the Perron-Frobenius theorem, there is a unique vector v = [x1, . . . , xn]> with xj > 0 for j = 1 : n such that

    Gv = v  and  Σ_{j=1}^{n} xj = 1.

The rank of the web page Pj is given by the j-th component xj of the eigenvector v of the Google matrix G. Google sets m = 0.15.

20 / 21
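A minimal sketch of this construction (reusing the hypothetical 3-page hyperlink matrix from before, with H assumed column-stochastic): build G with m = 0.15 and find the rank vector by power iteration, which converges to the dominant eigenvector since the dominant eigenvalue is 1.

```python
import numpy as np

# Hypothetical column-stochastic hyperlink matrix for a 3-page web.
H = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
n = H.shape[0]

m = 0.15
S = np.ones((n, n)) / n          # all entries 1/n
G = (1 - m) * H + m * S          # the Google matrix

# Power iteration: repeatedly apply G to a probability vector.
v = np.ones(n) / n
for _ in range(100):
    v = G @ v                    # G is column-stochastic, so sum(v) stays 1

assert np.allclose(G @ v, v)     # v is the eigenvector for eigenvalue 1
assert abs(v.sum() - 1.0) < 1e-9
```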
References

For more on the Google search engine, read the following paper/book.

K. Bryan and T. Leise, The $25,000,000,000 Eigenvector: The Linear Algebra behind Google, SIAM Review, 48 (2006), pp. 569-581.
Amy N. Langville and Carl D. Meyer, Google's PageRank and Beyond: The Science of Search Engine Rankings, Princeton University Press, 2006.

***

21 / 21
