Introduction
Ng Tin Yau (PhD)
Department of Mechanical Engineering
The Hong Kong Polytechnic University
Jan 2015
Table of Contents
Linear Systems
Applications to Differential Equations
Direct Methods
Iterative Methods
Eigenproblems
Diagonalization Problem
A Transformation Method: Jacobi
Exercises
Matrix
Denote K as either the set of real numbers R or the set of complex numbers C. An m × n matrix A is an array of numbers that belong to K, that is,

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \qquad a_{ij} \in K, \ 1 \le i \le m, \ 1 \le j \le n

The zero matrix O is the matrix whose entries are all zero:

O = \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{pmatrix}

Two matrices A and A' are said to be equal if and only if a_{ij} = a'_{ij} for all i, j. The identity matrix I is the square matrix with ones on the diagonal and zeros elsewhere:

I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}
Matrix Addition
Let A and B be two matrices of the same dimension. Then the addition of A and B is defined entrywise as

[A + B]_{ij} = a_{ij} + b_{ij}    (1)
Example
Compute 2A - B with

A = \begin{pmatrix} 1 & 0 \\ 2 & 3 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 0 & 4 \\ 1 & 1 \end{pmatrix}

Solution:

2A - B = 2\begin{pmatrix} 1 & 0 \\ 2 & 3 \end{pmatrix} - \begin{pmatrix} 0 & 4 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & -4 \\ 3 & 5 \end{pmatrix}
Matrix Multiplication
Let A and B be matrices of dimensions n × p and p × m, respectively. Then the matrix product AB is the n × m matrix defined by

[AB]_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj}    (3)
Example
Given

A = \begin{pmatrix} 1 & 0 \\ 2 & 3 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 4 \\ 2 \end{pmatrix}

Compute AB.
Solution:

AB = \begin{pmatrix} 1(4) + 0(2) \\ 2(4) + 3(2) \end{pmatrix} = \begin{pmatrix} 4 \\ 14 \end{pmatrix}
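As a quick sketch, definition (3) translates directly into code. The following Python function (the name `matmul` is our own choice, not from the slides) reproduces the example above:

```python
def matmul(A, B):
    """Multiply an n x p matrix A by a p x m matrix B using definition (3)."""
    n, p, m = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

A = [[1, 0], [2, 3]]
B = [[4], [2]]
print(matmul(A, B))  # [[4], [14]]
```

Each entry of the result is the inner product of a row of A with a column of B, exactly as the summation in (3) prescribes.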
Determinant
For a square n × n matrix

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}

the determinant is defined by

\det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(\tilde{A}_{1j})    (4)

where \tilde{A}_{1j} is the submatrix obtained by deleting row 1 and column j of A. This formula is called the cofactor expansion along the first row of A. The scalar (-1)^{i+j} \det(\tilde{A}_{ij}) is called the cofactor of the entry of A in row i, column j.
A 3 × 3 Example
For a 2 × 2 matrix, we have

\det\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}    (5)

Given a 3 × 3 matrix A, cofactor expansion along the first row gives

\det(A) = \sum_{j=1}^{3} (-1)^{1+j} a_{1j} \det(\tilde{A}_{1j}) = a_{11}\det(\tilde{A}_{11}) - a_{12}\det(\tilde{A}_{12}) + a_{13}\det(\tilde{A}_{13})
= a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}
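Cofactor expansion (4) lends itself to a recursive implementation. A minimal Python sketch follows (the function name `det` is ours; this approach costs O(n!) operations, so it is only practical for small matrices):

```python
def det(A):
    """Determinant by cofactor expansion along the first row, as in (4)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j of A
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # (-1)**j matches (-1)**(1 + j) for 1-based column indices
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                       # -2
print(det([[2, 1, 1], [-4, 3, 1], [3, -2, 2]]))    # 26
```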
Properties of Determinants
1. Suppose that A is an n × n matrix and k is any scalar; then
\det(kA) = k^n \det(A)    (6)
2. If A and B are n × n matrices, then
\det(AB) = \det(A)\det(B)    (7)
3. If A is invertible, then
\det(A^{-1}) = \frac{1}{\det(A)}    (8)
4. If A is a square matrix, then
\det(A) = \det(A^T)    (9)
Inverse of a 2 × 2 Matrix
Given a matrix

A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}

Then \det(A) = ad - bc and, provided \det(A) \ne 0,

A^{-1} = \frac{1}{\det(A)} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}    (10)
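Formula (10) in code form, as a small sketch with a singularity check (the function name `inv2x2` is our own):

```python
def inv2x2(A):
    """Inverse of a 2 x 2 matrix via formula (10); fails if det(A) == 0."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular, no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inv2x2([[1, 2], [3, 4]]))  # [[-2.0, 1.0], [1.5, -0.5]]
```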
Linear Systems
A system of n linear equations in n unknowns can be written in matrix form as

\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}    (16)

In symbolic notation, we write Ax = b where A is called the coefficient matrix of the system. Also, the augmented matrix of the system, denoted \tilde{A}, is the matrix

[A \,|\, b] = \begin{pmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ a_{21} & \cdots & a_{2n} & b_2 \\ \vdots & \ddots & \vdots & \vdots \\ a_{n1} & \cdots & a_{nn} & b_n \end{pmatrix}    (17)
Theorem
Let A be a square matrix with real entries. Then the following statements are equivalent:
1. A is invertible
2. \det(A) \ne 0
Common Methods
BVP1 - ODE
Consider the two-point boundary value problem

-u''(x) = f(x, u(x)), \quad 0 < x < 1, \quad u(0) = u(1) = 0    (18)

BVP1 - discretization
For the approximate solution we choose an equidistant subdivision of the interval [0, 1] by setting

x_j = jh, \quad j = 0, 1, \ldots, n + 1, \quad h = \frac{1}{n + 1}    (19)

Approximating the second derivative by the central difference

u''(x_j) \approx \frac{1}{h^2} [u(x_{j+1}) - 2u(x_j) + u(x_{j-1})]    (20)

leads to the finite-difference equations

-\frac{1}{h^2} [u_{j-1} - 2u_j + u_{j+1}] = f(x_j, u_j)    (21)

for j = 1, \ldots, n.
BVP1 - 3 × 3 case
In the case where n = 3,

\frac{1}{h^2} \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} = \begin{pmatrix} f(x_1, u_1) \\ f(x_2, u_2) \\ f(x_3, u_3) \end{pmatrix}    (22)

so that Au = f with

A = \frac{1}{h^2} \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix} \quad \text{and} \quad f = \begin{pmatrix} f(x_1, u_1) \\ f(x_2, u_2) \\ f(x_3, u_3) \end{pmatrix}    (23)
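To see the discretization (22)-(23) in action, here is a small NumPy sketch for a right-hand side that does not depend on u. The choice f(x) = π² sin(πx), whose exact solution is u(x) = sin(πx), is our own test case and is not from the slides:

```python
import numpy as np

# Finite-difference sketch for -u''(x) = f(x), u(0) = u(1) = 0,
# using the tridiagonal matrix of (23).
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)          # interior nodes x_j = j h

# A = (1/h^2) * tridiag(-1, 2, -1), exactly as in (23)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# hypothetical right-hand side with known exact solution sin(pi x)
f = np.pi**2 * np.sin(np.pi * x)

u = np.linalg.solve(A, f)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # maximum error, O(h^2)
```

Halving h should reduce the error by roughly a factor of four, consistent with the second-order truncation error of (20).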
BVP2 - PDE
Consider the Poisson problem

-\left( \frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} \right) = f \quad \text{in } \Omega = (0, 1) \times (0, 1)    (24)

with Dirichlet boundary condition u(x) = 0 for all x \in \partial\Omega. Here, f: R \to R is a given continuous function, and we are looking for a solution u \in C^2(\Omega). Boundary value problems of this type arise, for example, in the torsional bar problem and the steady-state heat conduction problem.
BVP2 - discretization
For the approximate solution we choose an equidistant subdivision of the region [0, 1] × [0, 1] by setting

x_{i,j} = (ih, jh), \quad i, j = 0, 1, \ldots, n + 1, \quad h = \frac{1}{n + 1}    (25)

Replacing both second derivatives by central differences gives the five-point difference equations

\frac{1}{h^2} [4u_{i,j} - u_{i+1,j} - u_{i-1,j} - u_{i,j+1} - u_{i,j-1}] = f_{i,j}    (27)

for i, j = 1, \ldots, n.
Theorem
Row-equivalent linear systems have the same set of solutions.
An Example
Consider the following linear system

\begin{pmatrix} 2 & 1 & 1 \\ -4 & 3 & 1 \\ 3 & -2 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 4 \\ -6 \\ 15 \end{pmatrix}    (28)

Performing elementary row operations on the augmented matrix:

\begin{pmatrix} 2 & 1 & 1 & 4 \\ -4 & 3 & 1 & -6 \\ 3 & -2 & 2 & 15 \end{pmatrix}
\xrightarrow[-\frac{3}{2}E_1 + E_3]{2E_1 + E_2}
\begin{pmatrix} 2 & 1 & 1 & 4 \\ 0 & 5 & 3 & 2 \\ 0 & -\frac{7}{2} & \frac{1}{2} & 9 \end{pmatrix}
\xrightarrow{\frac{7}{10}E_2 + E_3}
\begin{pmatrix} 2 & 1 & 1 & 4 \\ 0 & 5 & 3 & 2 \\ 0 & 0 & \frac{13}{5} & \frac{52}{5} \end{pmatrix}

To this end, we have finished the so-called forward elimination. The next procedure is called back substitution. That is, we have x_3 = 4, then we solve for x_2 = -2, and finally x_1 = 1.
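The forward elimination and back substitution steps above can be sketched in Python. This version uses no pivoting, so it assumes every pivot it meets is nonzero (the function name `gauss_solve` is our own):

```python
def gauss_solve(A, b):
    """Gaussian elimination (no pivoting) followed by back substitution."""
    n = len(A)
    # build the augmented matrix [A | b] with float entries
    M = [[float(v) for v in row] + [float(bi)] for row, bi in zip(A, b)]
    # forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[2, 1, 1], [-4, 3, 1], [3, -2, 2]]
x = gauss_solve(A, [4, -6, 15])
print([round(v, 6) for v in x])  # [1.0, -2.0, 4.0]
```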
For a general 3 × 3 system, forward elimination proceeds in two stages, where a superscript (k) marks entries modified at stage k:

\begin{pmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{pmatrix}
\to
\begin{pmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a_{22}^{(1)} & a_{23}^{(1)} & b_2^{(1)} \\ 0 & a_{32}^{(1)} & a_{33}^{(1)} & b_3^{(1)} \end{pmatrix}
\to
\begin{pmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a_{22}^{(1)} & a_{23}^{(1)} & b_2^{(1)} \\ 0 & 0 & a_{33}^{(2)} & b_3^{(2)} \end{pmatrix}

For an n × n system, forward elimination produces an upper triangular system

\begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a_{22} & a_{23} & \cdots & a_{2n} \\ 0 & 0 & a_{33} & \cdots & a_{3n} \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}    (29)

Now if we set a_{j,n+1} = b_j where 1 \le j \le n, then

x_n = \frac{a_{n,n+1}}{a_{nn}}    (30)

and for i = n - 1, \ldots, 1 we have

x_i = \frac{1}{a_{ii}} \left( a_{i,n+1} - \sum_{j=i+1}^{n} a_{ij} x_j \right)    (31)
An Example Requiring Pivoting
Consider the system

\begin{pmatrix} 0 & 8 & 2 \\ 3 & 5 & 2 \\ 6 & 2 & 8 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 7 \\ 8 \\ 26 \end{pmatrix}    (32)

In this case, we have a_{11} = 0; therefore, pivoting is necessary. The greatest coefficient in magnitude in column 1 is |a_{31}| = 6, so in this case we interchange E_1 and E_3 to give the system as in problem 1. Here e_i = x_i - \hat{x}_i denotes the error between the exact solution x_i and the computed solution \hat{x}_i; without pivoting, a small error such as e_2 = 0.0007 can be amplified to e_1 = 2.4535.
Iterative Methods
Vectors in Rn
Denote x = (x_1, x_2, \ldots, x_n)^T \in R^n where x_i \in R. Then R^n becomes a vector space if for all elements x, y of R^n and scalars \alpha \in R we have

x + y = (x_1 + y_1, x_2 + y_2, \ldots, x_n + y_n)^T \quad \text{and} \quad \alpha x = (\alpha x_1, \alpha x_2, \ldots, \alpha x_n)^T    (34)

Two common vector norms are:
1. Euclidean norm: \|x\|_2 = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}
2. Infinity norm: \|x\|_\infty = \max_{1 \le i \le n} |x_i|    (35)
Example
Let x = (-1, 1, -2)^T \in R^3. Calculate \|x\|_2 and \|x\|_\infty.
Ans:

\|x\|_2 = \sqrt{(-1)^2 + 1^2 + (-2)^2} = \sqrt{6}

and

\|x\|_\infty = \max\{|-1|, |1|, |-2|\} = 2
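Both norms are one-liners in Python; a small sketch verifying the example (function names are our own):

```python
import math

def norm2(x):
    """Euclidean norm ||x||_2."""
    return math.sqrt(sum(xi * xi for xi in x))

def norm_inf(x):
    """Infinity norm ||x||_inf."""
    return max(abs(xi) for xi in x)

x = [-1, 1, -2]
print(norm2(x))     # 2.449... = sqrt(6)
print(norm_inf(x))  # 2
```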
Theorem (norm equivalence)
For all x \in R^n,

\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\, \|x\|_\infty    (36)

Proof sketch: Let j be an index such that |x_j| = \|x\|_\infty. Then

\|x\|_\infty^2 = |x_j|^2 = x_j^2 \le \sum_{i=1}^{n} x_i^2 = \|x\|_2^2    (37)

so \|x\|_\infty \le \|x\|_2. Also,

\|x\|_2^2 = \sum_{i=1}^{n} x_i^2 \le \sum_{i=1}^{n} \|x\|_\infty^2 = n \|x\|_\infty^2    (38)

so \|x\|_2 \le \sqrt{n}\, \|x\|_\infty.    (39)
Convergent Sequences
A sequence \{x^{(k)}\}_{k=0}^{\infty} of vectors in R^n is said to converge to x with respect to the norm \|\cdot\| if, given any \epsilon > 0, there exists an integer N(\epsilon) such that

\|x^{(k)} - x\| < \epsilon \quad \text{for all } k \ge N(\epsilon)    (40)

Theorem
The sequence of vectors \{x^{(k)}\} converges to x in R^n with respect to \|\cdot\|_\infty if and only if \lim_{k \to \infty} x_i^{(k)} = x_i for each i = 1, 2, \ldots, n.
Example 1
Let x^{(k)} \in R^4 be defined by

x^{(k)} = \left( 1, \ 2 + \frac{1}{k}, \ \frac{3}{k^2}, \ e^{-k} \sin k \right)^T

Since \lim_{k \to \infty} 1 = 1, \lim_{k \to \infty} (2 + 1/k) = 2, \lim_{k \to \infty} 3/k^2 = 0 and \lim_{k \to \infty} e^{-k} \sin k = 0, therefore x^{(k)} converges to x = (1, 2, 0, 0)^T with respect to \|\cdot\|_\infty. In other words, given \epsilon > 0, there exists an integer N(\epsilon/2) with the property that

\|x^{(k)} - x\|_\infty < \epsilon/2 \quad \text{whenever } k \ge N(\epsilon/2)

Since the Euclidean norm and the infinity norm are equivalent, this implies that

\|x^{(k)} - x\|_2 \le \sqrt{4}\, \|x^{(k)} - x\|_\infty < 2(\epsilon/2) = \epsilon

so x^{(k)} also converges to x with respect to \|\cdot\|_2.
Solving the i-th equation of Ax = b for x_i gives

x_i = \frac{1}{a_{ii}} \left( -\sum_{j=1, j \ne i}^{n} a_{ij} x_j + b_i \right) \quad \text{for } i = 1, 2, \ldots, n    (41)

The Jacobi iteration computes x^{(k)} from x^{(k-1)} componentwise by

x_i^{(k)} = \frac{1}{a_{ii}} \left( -\sum_{j=1, j \ne i}^{n} a_{ij} x_j^{(k-1)} + b_i \right) \quad \text{for } i = 1, 2, \ldots, n    (42)
Example 2
Given a linear system Ax = b as follows:

10x_1 - x_2 + 2x_3 + x_4 = 6
-x_1 + 11x_2 - x_3 + 3x_4 = 25
2x_1 - x_2 + 10x_3 - x_4 = -11
3x_2 - x_3 + 8x_4 = 15

Use Jacobi's iterative technique to find approximations x^{(k)} to x starting with x^{(0)} = (0, 0, 0, 0)^T until

\frac{\|x^{(k)} - x^{(k-1)}\|_\infty}{\|x^{(k)}\|_\infty} < 0.0002
Since x^{(0)} = 0,

x_1^{(1)} = \frac{1}{10} (x_2^{(0)} - 2x_3^{(0)} - x_4^{(0)} + 6) = 0.6000
x_2^{(1)} = \frac{1}{11} (x_1^{(0)} + x_3^{(0)} - 3x_4^{(0)} + 25) = 2.2727
x_3^{(1)} = \frac{1}{10} (-2x_1^{(0)} + x_2^{(0)} + x_4^{(0)} - 11) = -1.1000
x_4^{(1)} = \frac{1}{8} (-3x_2^{(0)} + x_3^{(0)} + 15) = 1.8750

and

\frac{\|x^{(1)} - x^{(0)}\|_\infty}{\|x^{(1)}\|_\infty} = 1 > 0.0002
Now

x_1^{(2)} = \frac{1}{10} (x_2^{(1)} - 2x_3^{(1)} - x_4^{(1)} + 6) = 0.8598
x_2^{(2)} = \frac{1}{11} (x_1^{(1)} + x_3^{(1)} - 3x_4^{(1)} + 25) = 1.7159
x_3^{(2)} = \frac{1}{10} (-2x_1^{(1)} + x_2^{(1)} + x_4^{(1)} - 11) = -0.8052
x_4^{(2)} = \frac{1}{8} (-3x_2^{(1)} + x_3^{(1)} + 15) = 0.8852

and

\frac{\|x^{(2)} - x^{(1)}\|_\infty}{\|x^{(2)}\|_\infty} = 0.5768 > 0.0002
Continuing in this manner, we obtain the following iterates, where e_k = \|x^{(k)} - x^{(k-1)}\|_\infty / \|x^{(k)}\|_\infty:

 k   x_1^{(k)}   x_2^{(k)}   x_3^{(k)}   x_4^{(k)}   e_k
 1   0.6000      2.2727      -1.1000     1.8750      1.0000
 2   0.8598      1.7159      -0.8052     0.8852      0.5768
 3   0.8441      2.0363      -1.0118     1.1309      0.1573
 4   0.8929      1.9491      -0.9521     0.9849      0.0582
 5   0.8868      1.9987      -0.9852     1.0251      0.0249
 6   0.8944      1.9842      -0.9750     1.0023      0.0147
 7   0.8932      1.9920      -0.9802     1.0090      0.0039
 8   0.8943      1.9896      -0.9785     1.0055      0.0018
 9   0.8941      1.9909      -0.9794     1.0066      0.0006
10   0.8943      1.9905      -0.9791     1.0060      0.0003
11   0.8943      1.9907      -0.9792     1.0062      0.0001

Since e_{11} = 0.0001 < 0.0002, the approximate solution is x^{(11)}, and Matlab gives (0.8943, 1.9906, -0.9792, 1.0061)^T.
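The iteration (42) with the relative stopping rule used above can be sketched as follows; applied to the system of Example 2 it reproduces the approximate solution in the table (the function name `jacobi` is our own):

```python
def jacobi(A, b, tol=2e-4, max_iter=100):
    """Jacobi iteration (42) with the relative stopping rule
    ||x(k) - x(k-1)||_inf / ||x(k)||_inf < tol, starting from x(0) = 0."""
    n = len(A)
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        err = max(abs(xn - xo) for xn, xo in zip(x_new, x))
        x = x_new
        if err / max(abs(v) for v in x) < tol:
            return x, k
    return x, max_iter

A = [[10, -1, 2, 1],
     [-1, 11, -1, 3],
     [2, -1, 10, -1],
     [0, 3, -1, 8]]
b = [6, 25, -11, 15]
x, k = jacobi(A, b)
print(k, [round(v, 4) for v in x])  # converges near (0.8943, 1.9907, -0.9792, 1.0062)
```

Note that every component of x_new is computed from the previous iterate, which is what distinguishes Jacobi from Gauss-Seidel.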
A sufficient condition for convergence of the Jacobi iteration is that the coefficient matrix A be strictly diagonally dominant, that is,

|a_{ii}| > \sum_{j=1, j \ne i}^{n} |a_{ij}| \quad \text{for each } i = 1, 2, \ldots, n    (43)
What is an eigenproblem?
We shall denote K^n to be the collection of all n-tuples such that each component belongs to K. Unless otherwise stated, we shall denote A to be any n × n matrix over K.
Given a square matrix A, suppose that there exist \lambda \in K and a nonzero vector v \in K^n such that

Av = \lambda v    (44)

or equivalently

(A - \lambda I)v = 0    (45)

Then \lambda is called an eigenvalue of A and v a corresponding eigenvector. Since (45) has a nonzero solution if and only if A - \lambda I is singular, the eigenvalues are the roots of the characteristic equation

\det(\lambda I - A) = 0    (47)

Theorem
Let A be a complex square matrix. Then \lambda \in C is an eigenvalue of A if and only if \det(\lambda I - A) = 0.
Eigenproblems By Ng Tin Yau (PhD) 40/65
Theorem
Let A be a square matrix over K.
A warm up example
Example
Given

A = \frac{1}{10} \begin{pmatrix} 4 & 3 \\ 6 & 7 \end{pmatrix}

Verify that \lambda_1 = 1 with v_1 = (1, 2)^T and \lambda_2 = \frac{1}{10} with v_2 = (1, -1)^T are eigenpairs of A. Indeed,

\begin{pmatrix} 4/10 & 3/10 \\ 6/10 & 7/10 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = 1 \begin{pmatrix} 1 \\ 2 \end{pmatrix}

and

\begin{pmatrix} 4/10 & 3/10 \\ 6/10 & 7/10 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \frac{1}{10} \begin{pmatrix} 1 \\ -1 \end{pmatrix}
Cont
(2) Estimate the number of cars at the depots in Edmonton and Calgary in the long run.
Analysis: Denote z_k = (x_k, y_k)^T and A as the coefficient matrix of the system of difference equations for the model. Then z_{k+1} = Az_k, and notice that z_k = A^k z_0. Recall that the eigenvalues of A are \lambda_1 = 1 and \lambda_2 = 0.1 and their corresponding eigenvectors are v_1 = (1, 2)^T and v_2 = (1, -1)^T, respectively. By writing z_0 = \sum_{i=1}^{2} c_i v_i and using the fact that Av_i = \lambda_i v_i, we have

z_k = A^k z_0 = \sum_{i=1}^{2} c_i A^k v_i = \sum_{i=1}^{2} c_i \lambda_i^k v_i = \begin{pmatrix} c_1 + c_2 (0.1)^k \\ 2c_1 - c_2 (0.1)^k \end{pmatrix}

As k \to \infty, the term (0.1)^k vanishes, so z_k \to c_1 (1, 2)^T in the long run.
Example 3
Example
Compute the eigenvalues and their corresponding eigenvectors of the matrix

A = \begin{pmatrix} 3 & 4 \\ -2 & -6 \end{pmatrix}

First, we need to compute

\det(\lambda I - A) = \det \begin{pmatrix} \lambda - 3 & -4 \\ 2 & \lambda + 6 \end{pmatrix} = \lambda^2 + 3\lambda - 10 = (\lambda - 2)(\lambda + 5) = 0

so the eigenvalues are \lambda_1 = 2 and \lambda_2 = -5.
Example 3 cont
Next we need to use the equation (\lambda I - A)v = 0 to determine the corresponding eigenvectors.
For \lambda = 2, the equation becomes

(\lambda_1 I - A)v^{(1)} = \begin{pmatrix} -1 & -4 \\ 2 & 8 \end{pmatrix} \begin{pmatrix} v_1^{(1)} \\ v_2^{(1)} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

which gives v_1^{(1)} = -4v_2^{(1)}, so we may take v^{(1)} = (4, -1)^T. Similarly, for \lambda = -5, solving (\lambda_2 I - A)v^{(2)} = 0 gives v^{(2)} = (1, -2)^T.
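As a sanity check on the computation (assuming the matrix as printed above), NumPy's `eig` confirms the eigenpairs:

```python
import numpy as np

# Cross-check of Example 3: A should have eigenvalues 2 and -5,
# with eigenvectors along (4, -1) and (1, -2).
A = np.array([[3.0, 4.0], [-2.0, -6.0]])
vals, vecs = np.linalg.eig(A)
print(np.sort(vals))  # eigenvalues in ascending order: -5 and 2

# each returned column of vecs satisfies A v = lambda v
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))  # True
```

NumPy normalizes its eigenvectors to unit length, so they will be scalar multiples of (4, -1)^T and (1, -2)^T rather than those exact vectors.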
Example 4
Example
Compute the eigenvalues of the matrix

A = \begin{pmatrix} -2 & -6 \\ 3 & 4 \end{pmatrix}

The characteristic polynomial is

\det(\lambda I - A) = \det \begin{pmatrix} \lambda + 2 & 6 \\ -3 & \lambda - 4 \end{pmatrix} = \lambda^2 - 2\lambda + 10 = 0

Notice that if we restrict \lambda \in R, then we have no eigenvalue that satisfies the characteristic polynomial! However, if we allow \lambda \in C, then we have

\lambda_1 = 1 + 3i \quad \text{and} \quad \lambda_2 = 1 - 3i

where i = \sqrt{-1}.
Example 5
Example
Determine the eigenvalues for the following matrix.

A = \begin{pmatrix} -7 & 13 & -16 \\ 13 & -10 & 13 \\ -16 & 13 & -7 \end{pmatrix}

First, we need to compute

\det(A - \lambda I) = \det \begin{pmatrix} -(7 + \lambda) & 13 & -16 \\ 13 & -(10 + \lambda) & 13 \\ -16 & 13 & -(7 + \lambda) \end{pmatrix} = 0

This leads to the characteristic equation

p(\lambda) = \lambda^3 + 24\lambda^2 - 405\lambda + 972 = 0

Solving this equation gives

\lambda_1 = -36, \quad \lambda_2 = 9, \quad \lambda_3 = 3
Diagonalization Problem
A square matrix A is called a diagonal matrix if A_{ij} = 0 when i \ne j. It is easy to see that working with a diagonal matrix is much more convenient than working with a nondiagonal matrix. A matrix A is said to be similar to a matrix B if there exists a nonsingular matrix P such that P^{-1}AP = B. In particular, if A is similar to a diagonal matrix D, then it is said to be diagonalizable.
Now the two key questions are:
1. Which matrices are diagonalizable?
2. If A is diagonalizable, how do we construct the matrix P?
Idea of constructing P
Suppose that A is diagonalizable, so P^{-1}AP = D. Denote D_{ii} = \lambda_i, where \lambda_i is the diagonal entry at row i. Since P is invertible, its columns form a linearly independent subset of C^n. Denote column i of P by v^{(i)}, and thus

P = [v^{(1)} \ v^{(2)} \ \cdots \ v^{(n)}]    (48)

Since AP = PD, comparing columns gives

Av^{(i)} = \lambda_i v^{(i)}    (49)

that is, the columns of P are eigenvectors of A and the diagonal entries of D are the corresponding eigenvalues.
Corollary
Let A be an n × n matrix. If A has n distinct eigenvalues, then A is diagonalizable.
Example
The matrices

\begin{pmatrix} 3 & 4 \\ -2 & -6 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} -7 & 13 & -16 \\ 13 & -10 & 13 \\ -16 & 13 & -7 \end{pmatrix}

are diagonalizable.
Symmetric Matrices
A matrix A is said to be symmetric if A = AT . An n n matrix Q
is said to be an orthogonal matrix if Q1 = QT .
Theorem
If A is a real symmetric square matrix and D is a diagonal matrix
whose diagonal entries are the eigenvalues of A, then there exists an
orthogonal matrix Q such that D = QT AQ.
The following corollary to the above theorem demonstrates some of the interesting properties of symmetric matrices.
Corollary
If A is a real symmetric n n matrix, then there exist n eigenvectors
of A that form an orthonormal set and the eigenvalues of A are real
numbers.
A Transformation Method: Jacobi
The Jacobi transformation method generates a sequence of matrices, starting from

D_0 = A    (50)

where

D_k = Q_k^T D_{k-1} Q_k, \quad k \in N    (51)

and each Q_k is an orthogonal rotation matrix chosen to drive the off-diagonal entries toward zero.
For a 2 × 2 symmetric matrix A, writing A' = Q^T A Q with a rotation Q through angle \theta and requiring A'_{21} = A'_{12} = 0 leads to

\tan 2\theta = \frac{2A_{12}}{A_{11} - A_{22}}    (57)
In general, Q_k is the plane rotation

Q_k = \begin{pmatrix} I & 0 & 0 & 0 & 0 \\ 0 & \cos\theta_k & 0 & -\sin\theta_k & 0 \\ 0 & 0 & I & 0 & 0 \\ 0 & \sin\theta_k & 0 & \cos\theta_k & 0 \\ 0 & 0 & 0 & 0 & I \end{pmatrix}_{n \times n}    (58)

for all k \in N. Here the sine and cosine entries appear in the positions (i, i), (i, j), (j, i) and (j, j). In this case, we require D_{ij}^{(k+1)} = D_{ji}^{(k+1)} = 0, which gives

\tan 2\theta_{k+1} = \frac{2D_{ij}^{(k)}}{D_{ii}^{(k)} - D_{jj}^{(k)}}    (59)
Example 6
Example
Find the eigenvalues and eigenvectors of the matrix

A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 \end{pmatrix}

Ans: The largest off-diagonal term is A_{23} = 2. In this case, we have i = 2 and j = 3. Thus

\tan 2\theta_1 = \frac{2A_{23}}{A_{22} - A_{33}} = \frac{4}{2 - 3} = -4, \quad \theta_1 = \frac{1}{2} \tan^{-1}(-4) = -37.981878°

and

Q_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_1 & -\sin\theta_1 \\ 0 & \sin\theta_1 & \cos\theta_1 \end{pmatrix} = \begin{pmatrix} 1.0 & 0 & 0 \\ 0 & 0.7882054 & 0.6154122 \\ 0 & -0.6154122 & 0.7882054 \end{pmatrix}
D_1 = Q_1^T D_0 Q_1 = \begin{pmatrix} 1.0 & 0.1727932 & 1.4036176 \\ 0.1727932 & 0.4384472 & 0.0 \\ 1.4036176 & 0.0 & 4.5615525 \end{pmatrix}

Now we try to reduce the largest off-diagonal term of D_1, namely, D_{13}^{(1)} = 1.4036176, to zero. In this case, we have i = 1 and j = 3:

\tan 2\theta_2 = \frac{2D_{13}^{(1)}}{D_{11}^{(1)} - D_{33}^{(1)}} = \frac{2.8072352}{1.0 - 4.5615525}, \quad \theta_2 = \frac{1}{2} \tan^{-1}\left( \frac{2.8072352}{1.0 - 4.5615525} \right) = -19.122686°

and

Q_2 = \begin{pmatrix} \cos\theta_2 & 0 & -\sin\theta_2 \\ 0 & 1 & 0 \\ \sin\theta_2 & 0 & \cos\theta_2 \end{pmatrix} = \begin{pmatrix} 0.9448193 & 0 & 0.3275920 \\ 0 & 1.0 & 0 \\ -0.3275920 & 0 & 0.9448193 \end{pmatrix}
D_2 = Q_2^T D_1 Q_2 = \begin{pmatrix} 0.5133313 & 0.1632584 & 0.0 \\ 0.1632584 & 0.4384472 & 0.0566057 \\ 0.0 & 0.0566057 & 5.0482211 \end{pmatrix}

The largest off-diagonal term of D_2, namely D_{12}^{(2)} = 0.1632584, is reduced to zero next. Here i = 1 and j = 2, and

\tan 2\theta_3 = \frac{2D_{12}^{(2)}}{D_{11}^{(2)} - D_{22}^{(2)}} = \frac{0.3265167}{0.5133313 - 0.4384472}, \quad \theta_3 = 38.541515°

and

Q_3 = \begin{pmatrix} \cos\theta_3 & -\sin\theta_3 & 0 \\ \sin\theta_3 & \cos\theta_3 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0.7821569 & -0.6230815 & 0.0 \\ 0.6230815 & 0.7821569 & 0.0 \\ 0.0 & 0.0 & 1.0 \end{pmatrix}
D_3 = Q_3^T D_2 Q_3 = \begin{pmatrix} 0.6433861 & 0.0 & 0.0352699 \\ 0.0 & 0.3083924 & 0.0442745 \\ 0.0352699 & 0.0442745 & 5.0482211 \end{pmatrix}

Suppose that we stop the process here; then the three approximate eigenvalues are

\lambda_1 = 0.6433861, \quad \lambda_2 = 0.3083924, \quad \lambda_3 = 5.0482211

In fact the eigenvalues obtained by Matlab are

\lambda_1 = 0.6431, \quad \lambda_2 = 0.3080, \quad \lambda_3 = 5.0489
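The rotation scheme (51), (58), (59) can be sketched in plain Python. This version always zeroes the currently largest off-diagonal entry of a symmetric matrix and returns the diagonal as eigenvalue estimates (the function name and the iteration limit are our own choices):

```python
import math

def jacobi_rotation(A, max_rotations=10):
    """Jacobi transformation method: repeatedly apply D <- Q^T D Q, where Q
    is the plane rotation (58) chosen by (59) to zero the largest
    off-diagonal entry of the symmetric matrix D."""
    n = len(A)
    D = [row[:] for row in A]
    for _ in range(max_rotations):
        # locate the largest off-diagonal entry D[i][j], i < j
        i, j = max(((p, q) for p in range(n) for q in range(p + 1, n)),
                   key=lambda pq: abs(D[pq[0]][pq[1]]))
        if abs(D[i][j]) < 1e-12:
            break
        # angle from (59); atan2 also handles D[i][i] == D[j][j]
        theta = 0.5 * math.atan2(2 * D[i][j], D[i][i] - D[j][j])
        c, s = math.cos(theta), math.sin(theta)
        # rows: D <- Q^T D
        for k in range(n):
            dik, djk = D[i][k], D[j][k]
            D[i][k], D[j][k] = c * dik + s * djk, -s * dik + c * djk
        # columns: D <- D Q
        for k in range(n):
            dki, dkj = D[k][i], D[k][j]
            D[k][i], D[k][j] = c * dki + s * dkj, -s * dki + c * dkj
        D[i][j] = D[j][i] = 0.0
    return [D[k][k] for k in range(n)]

A = [[1.0, 1.0, 1.0], [1.0, 2.0, 2.0], [1.0, 2.0, 3.0]]
evals = sorted(jacobi_rotation(A))
print(evals)  # close to [0.3080, 0.6431, 5.0489]
```

Each rotation may reintroduce small entries at positions zeroed earlier, which is why the process is iterated; for symmetric matrices it converges, and the off-diagonal mass shrinks rapidly, as the D_1, D_2, D_3 computed above already suggest.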
Example 6 - Eigenvectors
To obtain the corresponding eigenvectors we compute V = Q_1 Q_2 Q_3, whose columns are

v^{(1)} = \begin{pmatrix} 0.7389969 \\ 0.3334301 \\ -0.5854125 \end{pmatrix}, \quad v^{(2)} = \begin{pmatrix} -0.5886994 \\ 0.7421160 \\ -0.3204631 \end{pmatrix}, \quad v^{(3)} = \begin{pmatrix} 0.3275920 \\ 0.5814533 \\ 0.7447116 \end{pmatrix}

Using Matlab, we have the corresponding eigenvectors

v^{(1)} = \begin{pmatrix} 0.7370 \\ 0.3280 \\ -0.5910 \end{pmatrix}, \quad v^{(2)} = \begin{pmatrix} -0.5910 \\ 0.7370 \\ -0.3280 \end{pmatrix}, \quad v^{(3)} = \begin{pmatrix} 0.3280 \\ 0.5910 \\ 0.7370 \end{pmatrix}
Set 1
(1) Solve the following system using the Gauss elimination method with partial pivoting.

x_1 - x_2 + 2x_3 - x_4 = -8
2x_1 - 2x_2 + 3x_3 - 3x_4 = -20
x_1 + x_2 + x_3 = -2
x_1 - x_2 + 4x_3 + 3x_4 = 4
(2) Suppose that z = \alpha x - \beta y. Compute \|z\|_2 and \|z\|_\infty if \alpha = 3 and \beta = 2, and x = (5, 3, 8)^T and y = (0, 2, 9)^T.
(3) Perform five iterations on the following linear system using the Jacobi method, with x^{(0)} = 0 as the initial approximation.

4x_1 + x_2 - x_3 = 5
x_1 + 3x_2 + x_3 = 4
2x_1 + 2x_2 + 5x_3 = 1
Set 2
(4) Given a matrix

C = \begin{pmatrix} 3 & 1 & 2 \\ 1 & 0 & 5 \\ 1 & 1 & 4 \end{pmatrix}

Determine the eigenvalues and eigenvectors of C by the conventional method.
(5) Given matrices

A = \begin{pmatrix} 6 & 7 & 2 \\ 4 & 5 & 2 \\ 1 & 1 & 1 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 3 & 1 & 0 \\ 1 & 4 & 2 \\ 0 & 2 & 3 \end{pmatrix}