
The Conjugate Gradient Method:

Lecture 2:
Iterative methods for systems of linear equations: The conjugate gradient method
Computer Tutorial 3: Implementation

Reference: J. R. Shewchuk, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain (1994). Available at: http://www.cs.cmu.edu/~jrs/jrspapers.html
2.1: The Conjugate Gradient Method

Objective: Given a Hermitian matrix A and a vector b, solve the linear system
Ax = b
Some linear algebra:

A: Hermitian matrix (here real and symmetric), positive definite:
$z^T A z > 0$ for all nonzero vectors $z$ with real elements.
positive definite example:
$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad z^T A z = \begin{pmatrix} z_0 & z_1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} z_0 \\ z_1 \end{pmatrix} = z_0^2 + z_1^2 > 0.$
non-positive definite example:
$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad z^T A z = \begin{pmatrix} 1 & -1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = -2 < 0.$
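As a quick numerical check (a minimal sketch added here, not part of the original slides), the two example matrices can be tested in Matlab through their eigenvalues, since a symmetric matrix is positive definite exactly when all eigenvalues are positive:

```matlab
A1 = [1 0; 0 1];    % positive definite example from the slide
A2 = [0 1; 1 0];    % non-positive definite example from the slide
disp(eig(A1))       % both eigenvalues equal 1 > 0
disp(eig(A2))       % eigenvalues are -1 and 1, so A2 is not positive definite
```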
Some linear algebra:

Inner product (dot, scalar) of two vectors x, y:
$\langle x, y \rangle = x^T y = y^T x = \sum_{i=1}^{n} x_i y_i$
Orthogonal vectors: $\langle x, y \rangle = 0$.
Transpose of matrix multiplication: $(AB)^T = B^T A^T$
Inverse of matrix multiplication: $(AB)^{-1} = B^{-1} A^{-1}$
The quadratic form:

$f(x) = \frac{1}{2} x^T A x - b^T x + c$
The quadratic form:
Example: $A = \begin{pmatrix} 3 & 2 \\ 2 & 6 \end{pmatrix}, \quad b = \begin{pmatrix} 2 \\ -8 \end{pmatrix}, \quad c = 0.$
Plot of f(x):
The quadratic form and Ax=b

Solution of Ax = b: $x = [2, -2]^T$. Where is it on the figure?
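As a quick check (a minimal Matlab sketch added here, not from the original slides), the solution can be verified directly and shown to sit at the minimum of the quadratic form:

```matlab
A = [3 2; 2 6];  b = [2; -8];  c = 0;
x = A \ b                          % direct solve gives x = [2; -2]
f = @(x) 0.5*x'*A*x - b'*x + c;    % quadratic form from the previous slide
f(x)                               % value at the solution (the minimum), -10
f(x + [0.1; 0])                    % a nearby point gives a larger value
```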
2.1: The Gradient of f(x):
$f'(x) = \nabla f(x) = \left[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_n} \right]^T$

For every x, the gradient points in the direction of steepest increase of f(x), and is orthogonal to the contour lines.
Instead of solving Ax=b...

$f(x) = \frac{1}{2} x^T A x - b^T x + c$
$f'(x) = \frac{1}{2} A^T x + \frac{1}{2} A x - b = A x - b$ (A is symmetric: $A^T = A$)

Setting $f'(x) = Ax - b = 0$ gives the condition for the minimum of f(x). Hence, Ax = b can be solved by finding the x that minimizes f(x).
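A small numerical sanity check (a sketch added here, not from the slides): the gradient of the quadratic form at any point equals Ax - b, so the residual b - Ax points in the direction of steepest decrease.

```matlab
A = [3 2; 2 6];  b = [2; -8];  c = 0;
f = @(x) 0.5*x'*A*x - b'*x + c;
x0 = [1; 1];                       % arbitrary test point
g_exact = A*x0 - b;                % gradient predicted by f'(x) = Ax - b
% finite-difference approximation of the gradient for comparison
h = 1e-6;
g_fd = [(f(x0+[h;0]) - f(x0-[h;0])) / (2*h);
        (f(x0+[0;h]) - f(x0-[0;h])) / (2*h)];
disp([g_exact g_fd])               % the two columns agree
```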
Method of Steepest Descent
Start with an arbitrary point $x_{(0)}$.
Find the residual vector: $r_{(i)} = b - A x_{(i)}$
This indicates how far we are from the correct value of b.
Note that $r_{(i)} = -f'(x_{(i)})$.
Also, if $e_{(i)} = x_{(i)} - x$ is the (error) vector indicating how far we are from the solution, then $r_{(i)} = -A e_{(i)}$.
Determine the direction for the next step: move in the direction in which f(x) decreases most quickly, i.e. opposite to $f'(x)$, that is, along $r_{(i)}$.
How big a step $\alpha$ should be taken? $x_{(1)} = x_{(0)} + \alpha\, r_{(0)}$
Determine $\alpha$ by the condition that it should minimize f:
$\frac{d}{d\alpha} f(x_{(1)}) = f'(x_{(1)})^T \frac{d}{d\alpha} x_{(1)} = f'(x_{(1)})^T r_{(0)} = 0$
Method of Steepest Descent

Note that $f'(x_{(1)}) = -r_{(1)}$, so:
$r_{(1)}^T r_{(0)} = 0$
$(b - A x_{(1)})^T r_{(0)} = 0$
$(b - A(x_{(0)} + \alpha\, r_{(0)}))^T r_{(0)} = 0$
$(b - A x_{(0)})^T r_{(0)} - \alpha\, (A r_{(0)})^T r_{(0)} = 0$
$(b - A x_{(0)})^T r_{(0)} = \alpha\, (A r_{(0)})^T r_{(0)}$
$r_{(0)}^T r_{(0)} = \alpha\, r_{(0)}^T (A r_{(0)})$
$\alpha = \dfrac{r_{(0)}^T r_{(0)}}{r_{(0)}^T A r_{(0)}}$
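As a worked instance of this formula (an added example, assuming the starting point $x_{(0)} = [0, 0]^T$ for the earlier quadratic form with $A = \begin{pmatrix} 3 & 2 \\ 2 & 6 \end{pmatrix}$, $b = \begin{pmatrix} 2 \\ -8 \end{pmatrix}$):
$r_{(0)} = b - A x_{(0)} = \begin{pmatrix} 2 \\ -8 \end{pmatrix}, \quad A r_{(0)} = \begin{pmatrix} -10 \\ -44 \end{pmatrix}$
$\alpha = \dfrac{r_{(0)}^T r_{(0)}}{r_{(0)}^T A r_{(0)}} = \dfrac{4 + 64}{-20 + 352} = \dfrac{68}{332} \approx 0.205$
$x_{(1)} = x_{(0)} + \alpha\, r_{(0)} \approx \begin{pmatrix} 0.41 \\ -1.64 \end{pmatrix}$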
Method of Steepest Descent
Start with an arbitrary point $x_{(0)}$ and iterate:
$r_{(i)} = b - A x_{(i)}$
$\alpha_{(i)} = \dfrac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}}$
$x_{(i+1)} = x_{(i)} + \alpha_{(i)}\, r_{(i)}$

Premultiplying the last equation by $-A$ and adding b gives:
$r_{(i+1)} = r_{(i)} - \alpha_{(i)} A r_{(i)}$

Use this for i > 0. CAUTION: since the feedback from $x_{(i)}$ is not present here, use the form $r_{(i)} = b - A x_{(i)}$ periodically to prevent misconvergence.
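A minimal Matlab sketch of this iteration (an added illustration, not the lecture's own code; the tolerance, iteration cap, and the choice to recompute the exact residual every 50 steps are assumptions):

```matlab
function x = steepest_descent(A, b, x, tol, maxit)
% Method of steepest descent for symmetric positive definite A.
r = b - A*x;                       % initial residual
for i = 1:maxit
    Ar    = A*r;
    alpha = (r'*r) / (r'*Ar);      % optimal step length along r
    x     = x + alpha*r;
    if mod(i, 50) == 0
        r = b - A*x;               % periodic exact residual, prevents drift
    else
        r = r - alpha*Ar;          % cheap residual update
    end
    if norm(r) < tol*norm(b), break; end
end
end
```

For the earlier example, `steepest_descent([3 2; 2 6], [2; -8], [0; 0], 1e-8, 1000)` converges to [2; -2].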
Method of Conjugate Gradient
The method of steepest descent constructs steps whose successive residual vectors are orthogonal:
$r_{(1)}^T r_{(0)} = 0$
The conjugate gradient method instead employs search directions that are A-orthogonal (or conjugate):
$d_{(i)}^T A\, d_{(j)} = 0 \quad (i \neq j)$
Details of the derivation of the method are omitted.
Method of Conjugate Gradients

$d_{(0)} = r_{(0)} = b - A x_{(0)}$
$\alpha_{(i)} = \dfrac{r_{(i)}^T r_{(i)}}{d_{(i)}^T A\, d_{(i)}}$
$x_{(i+1)} = x_{(i)} + \alpha_{(i)}\, d_{(i)}$
$r_{(i+1)} = r_{(i)} - \alpha_{(i)} A\, d_{(i)}$
$\beta_{(i+1)} = \dfrac{r_{(i+1)}^T r_{(i+1)}}{r_{(i)}^T r_{(i)}}$
$d_{(i+1)} = r_{(i+1)} + \beta_{(i+1)}\, d_{(i)}$
Preconditioned Conjugate Gradient Method

If the matrix A is ill conditioned, the CG method may suffer from numerical errors (rounding, overflow, underflow).
$\begin{pmatrix} 2 & 1 \\ 2 & 1.001 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 \\ 0 \end{pmatrix} \;\Rightarrow\; x = 1501.5,\ y = -3000$
$\begin{pmatrix} 2 & 1 \\ 2 & 1.002 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 \\ 0 \end{pmatrix} \;\Rightarrow\; x = 751.5,\ y = -1500$
Matrix condition number: $\operatorname{cond}(A) = \|A\|\,\|A^{-1}\| \ \begin{cases} \gg 1, & \text{ill conditioned} \\ \approx 1, & \text{well conditioned} \end{cases}$
Matrix norm: $\|A\|_{\infty} = \max_{1 \le i \le n} \sum_{j=1}^{n} |A_{ij}|$

For this example $\operatorname{cond}(A) = 5001 \gg 1$.
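A quick Matlab check (an added sketch, not from the slides; note that Matlab's `cond` uses the 2-norm by default, which reproduces the value quoted above):

```matlab
A = [2 1; 2 1.001];
cond(A)                    % 2-norm condition number, approximately 5001
A \ [3; 0]                 % gives x = 1501.5, y = -3000
[2 1; 2 1.002] \ [3; 0]    % a 0.001 change in A gives x = 751.5, y = -1500
```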


Preconditioned Conjugate Gradient Method

Suppose that M is a symmetric positive definite matrix that approximates A but is easier to invert (well conditioned). Then we can solve instead $M^{-1} A x = M^{-1} b$:
$r_{(0)} = b - A x_{(0)}$
$d_{(0)} = M^{-1} r_{(0)}$
$\alpha_{(i)} = \dfrac{r_{(i)}^T M^{-1} r_{(i)}}{d_{(i)}^T A\, d_{(i)}}$
$x_{(i+1)} = x_{(i)} + \alpha_{(i)}\, d_{(i)}$
$r_{(i+1)} = r_{(i)} - \alpha_{(i)} A\, d_{(i)}$
$\beta_{(i+1)} = \dfrac{r_{(i+1)}^T M^{-1} r_{(i+1)}}{r_{(i)}^T M^{-1} r_{(i)}}$
$d_{(i+1)} = M^{-1} r_{(i+1)} + \beta_{(i+1)}\, d_{(i)}$
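A minimal Matlab sketch of this preconditioned iteration (an added illustration; the function name, the convergence test, and the use of backslash to apply $M^{-1}$ are assumed choices):

```matlab
function x = pcg_sketch(A, b, M, x, tol, maxit)
% Preconditioned conjugate gradient; A and M symmetric positive definite.
r  = b - A*x;
z  = M \ r;                        % z = M^{-1} r
d  = z;
rz = r'*z;
for i = 1:maxit
    Ad    = A*d;
    alpha = rz / (d'*Ad);
    x     = x + alpha*d;
    r     = r - alpha*Ad;
    if norm(r) < tol*norm(b), break; end
    z      = M \ r;
    rz_new = r'*z;
    beta   = rz_new / rz;          % beta_(i+1)
    d      = z + beta*d;
    rz     = rz_new;
end
end
```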
Preconditioned Conjugate Gradient Method

Jacobi preconditioner:
$M_{ij} = \begin{cases} A_{ii}, & i = j \\ 0, & i \neq j \end{cases}$

Symmetric successive overrelaxation (SSOR) preconditioner:
$A = L + D + L^T$
where L is the strictly lower triangular part of A and D is the diagonal of A.
$M = \dfrac{\omega}{2 - \omega} \left( \dfrac{D}{\omega} + L \right) D^{-1} \left( \dfrac{D}{\omega} + L^T \right)$
$\omega$ in the interval ]0, 2[ is the relaxation parameter to be chosen.
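A small Matlab illustration of how these two preconditioners can be assembled and passed to the preconditioned CG sketch above (an added example; the value $\omega = 1$ and the reuse of the earlier 2x2 system are assumptions):

```matlab
A = [3 2; 2 6];  b = [2; -8];
D = diag(diag(A));                 % diagonal of A
L = tril(A, -1);                   % strictly lower triangular part of A
M_jacobi = D;                      % Jacobi preconditioner
w = 1.0;                           % relaxation parameter in ]0,2[
M_ssor = (w/(2-w)) * (D/w + L) * (D \ (D/w + L'));    % SSOR preconditioner
x = pcg_sketch(A, b, M_jacobi, zeros(2,1), 1e-10, 50)  % converges to [2; -2]
```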
CG Method: sample code for Matlab
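A minimal Matlab sketch of the plain CG iteration given earlier (an added listing; the function name, tolerance, and iteration cap are illustrative choices rather than the lecture's own code):

```matlab
function x = cg_solve(A, b, x, tol, maxit)
% Conjugate gradient method for symmetric positive definite A.
r  = b - A*x;
d  = r;
rr = r'*r;
for i = 1:maxit
    Ad    = A*d;
    alpha = rr / (d'*Ad);          % alpha_(i)
    x     = x + alpha*d;
    r     = r - alpha*Ad;
    rr_new = r'*r;
    if sqrt(rr_new) < tol*norm(b), break; end
    beta  = rr_new / rr;           % beta_(i+1)
    d     = r + beta*d;
    rr    = rr_new;
end
end
```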
CG Method: sample problem

Sample problem:
$\begin{pmatrix} 4 & -1 & 1 \\ -1 & 4 & -2 \\ 1 & -2 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 12 \\ -1 \\ 5 \end{pmatrix}$
Exact solution: $x_1 = 3,\ x_2 = x_3 = 1$.
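Applying the CG sketch from the previous slide to this problem (the zero starting vector is an assumed choice):

```matlab
A = [4 -1 1; -1 4 -2; 1 -2 4];
b = [12; -1; 5];
x = cg_solve(A, b, zeros(3,1), 1e-12, 10)   % converges to [3; 1; 1]
```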

Vous aimerez peut-être aussi