
Numerical Integration

1.1 Gaussian Numerical Integration Continued


The numerical methods studied in the last section were based on integrating
linear and quadratic interpolating polynomials, and the resulting formulas were
applied on subdivisions of ever smaller subintervals. In this section, we
consider a numerical method that is based on the exact integration of
polynomials of increasing degree; no subdivision of the integration interval is
used. To motivate this approach, recall from Section 2.4 of Chapter 2 the
material on approximation of functions.
Let f(x) be continuous on [a, b]. Then ρ_n(f) denotes the smallest error bound that can be attained in approximating f(x) with a polynomial p_n(x) of degree ≤ n on the given interval a ≤ x ≤ b. The polynomial p_n(x) that yields this approximation is called the minimax approximation of degree n for f(x),

$$\max_{a \le x \le b} \left| f(x) - p_n(x) \right| = \rho_n(f),$$

and ρ_n(f) is called the minimax error. From Theorem 3.1 of Chapter 3, it can be seen that ρ_n(f) will often converge to zero quite rapidly.
If we have a numerical integration formula to integrate low- to moderate-degree
polynomials exactly, then the hope is that the same formula will integrate other
functions f (x) almost exactly, if f (x) is well approximated by such polynomials.
To illustrate the derivation of such integration formulas, we restrict our attention
to the integral
$$I(f) = \int_{-1}^{1} f(x)\, dx.$$
Its relation to integrals over other intervals [a, b] will be discussed later.
The integration formula is to have the general form

$$I_n(f) = \sum_{j=1}^{n} w_j\, f(x_j)$$
and we require that the nodes x_1, …, x_n and weights w_1, …, w_n be so chosen that I_n(f) = I(f) for all polynomials f(x) of as large a degree as possible.
Case n = 1. The integration formula has the form

$$\int_{-1}^{1} f(x)\, dx \approx w_1 f(x_1).$$
It is to be exact for polynomials of as large a degree as possible.
Using f(x) ≡ 1 and forcing equality in this formula gives us

$$2 = w_1.$$

Now use f(x) = x and again force equality. Then

$$0 = w_1 x_1,$$

which implies x_1 = 0. Thus the formula becomes

$$\int_{-1}^{1} f(x)\, dx \approx 2 f(0) \equiv I_1(f).$$
This is the midpoint rule, encountered earlier in connection with the trapezoidal approximation. It is exact for all linear polynomials.
To see that the formula is not exact for quadratics, let f(x) = x². Then its error is

$$\int_{-1}^{1} x^2\, dx - 2\,(0)^2 = \frac{2}{3} \ne 0.$$
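This is easily checked in Mathematica; a minimal sketch (the helper names exact and mid are our own):

exact[f_] := Integrate[f[x], {x, -1, 1}];  (* exact integral on [-1, 1] *)
mid[f_] := 2 f[0];                          (* the one-point formula I1 *)
{exact[1 &] - mid[1 &], exact[# &] - mid[# &], exact[#^2 &] - mid[#^2 &]}
(* {0, 0, 2/3}: exact for 1 and x, but not for x^2 *)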
Case n = 2. The integration formula is

$$\int_{-1}^{1} f(x)\, dx \approx w_1 f(x_1) + w_2 f(x_2),$$
and it has four unspecified quantities: x_1, x_2, w_1, and w_2. To determine these, we require it to be exact for the four monomials

f(x) ∈ {1, x, x², x³}.
This leads to the four equations

$$2 = w_1 + w_2$$
$$0 = w_1 x_1 + w_2 x_2$$
$$\frac{2}{3} = w_1 x_1^2 + w_2 x_2^2$$
$$0 = w_1 x_1^3 + w_2 x_2^3.$$

This is a nonlinear system in four unknowns;
equations = {2 == w1 + w2, 0 == w1 x1 + w2 x2,
   2/3 == w1 x1^2 + w2 x2^2,
   0 == w1 x1^3 + w2 x2^3};
equations // TableForm

2 == w1 + w2
0 == w1 x1 + w2 x2
2/3 == w1 x1^2 + w2 x2^2
0 == w1 x1^3 + w2 x2^3
its solution can be shown to be

solution = Solve[equations, {x1, x2, w1, w2}]

{{w1 -> 1, w2 -> 1, x2 -> -(1/Sqrt[3]), x1 -> 1/Sqrt[3]},
 {w1 -> 1, w2 -> 1, x2 -> 1/Sqrt[3], x1 -> -(1/Sqrt[3])}}
This yields the integration formula

Clear[f]
interpol = w1 f[x1] + w2 f[x2] /. solution

{f[-(1/Sqrt[3])] + f[1/Sqrt[3]], f[-(1/Sqrt[3])] + f[1/Sqrt[3]]}
Thus the second-order integration formula becomes

$$I_2(f) = f\!\left(-\frac{1}{\sqrt{3}}\right) + f\!\left(\frac{1}{\sqrt{3}}\right) \approx \int_{-1}^{1} f(x)\, dx.$$
From being exact for the four monomials above, one can show that this formula is exact for all polynomials of degree ≤ 3. It can also be shown by direct calculation not to be exact for the degree-4 polynomial f(x) = x⁴. Thus I_2(f) has degree of precision 3.
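This can be checked directly (a small sketch; i2 is our name for the two-point rule):

i2[f_] := f[-1/Sqrt[3]] + f[1/Sqrt[3]];
Table[Integrate[x^k, {x, -1, 1}] - i2[Function[x, x^k]], {k, 0, 4}] // Simplify
(* {0, 0, 0, 0, 8/45}: exact up to degree 3, but not for x^4 *)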
For the cases n ≥ 3, solving for the weights and the integration points becomes harder, because the determining system of equations is nonlinear. The following function generates the determining equations for a Gauss integration.
gaussIntegration[f_, x_, a_, b_, n_] := Block[{},
  varsX = Table[ToExpression[StringJoin["x", ToString[i]]], {i, 1, n}];
  varsW = Table[ToExpression[StringJoin["w", ToString[i]]], {i, 1, n}];
  vec1 = Table[varsX^i, {i, 0, 2 n - 1}];
  (* moments of x^i on [-1, 1]: 2/(i + 1) for even i, 0 for odd i *)
  vecB = Table[If[EvenQ[i], 2/(i + 1), 0], {i, 0, 2 n - 1}];
  equations = Thread[Map[varsW.# &, vec1] == vecB]
 ]
soli = gaussIntegration[f, x, a, b, 3] // TableForm

w1 + w2 + w3 == 2
w1 x1 + w2 x2 + w3 x3 == 0
w1 x1^2 + w2 x2^2 + w3 x3^2 == 2/3
w1 x1^3 + w2 x2^3 + w3 x3^3 == 0
w1 x1^4 + w2 x2^4 + w3 x3^4 == 2/5
w1 x1^5 + w2 x2^5 + w3 x3^5 == 0
We clearly observe that the equations are nonlinear, because the x_i are not yet specified. The difficulty is that there exists no generally reliable procedure for solving systems of nonlinear algebraic equations.
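Nevertheless, for this particular polynomial system the built-in Solve does succeed; a sketch (writing out the six determining equations for n = 3, with the x^4 moment equal to 2/5):

eqs = {w1 + w2 + w3 == 2,
   w1 x1 + w2 x2 + w3 x3 == 0,
   w1 x1^2 + w2 x2^2 + w3 x3^2 == 2/3,
   w1 x1^3 + w2 x2^3 + w3 x3^3 == 0,
   w1 x1^4 + w2 x2^4 + w3 x3^4 == 2/5,
   w1 x1^5 + w2 x2^5 + w3 x3^5 == 0};
Solve[eqs, {w1, w2, w3, x1, x2, x3}]
(* up to ordering of the nodes: x = -Sqrt[3/5], 0, Sqrt[3/5] with
   weights 5/9, 8/9, 5/9 -- the three-point Gauss-Legendre rule *)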
1.2 Sinc Quadrature
The standard approach uses shifted Sinc functions as a basis and inverse conformal maps to set up the approximation points. A function f defined on x ∈ (a, b) is thus approximated by

$$f(x) \approx \sum_{k=-M}^{N} f(x_k)\, S(k, h)(x)$$

where φ(x) = log((x - a)/(b - x)) is the conformal map and x_k = ψ(k h) = φ⁻¹(k h) are the Sinc points based on the equidistant discrete values k h of step length h = π/√N. The shifted Sinc function S(k, h)(x) = Sinc(π (φ(x) - k h)/h) is used as basis for the approximation. In fact this relation is an approximation of the cardinal function C(f) defined for N = M = ∞.
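A minimal Mathematica sketch of these ingredients on (a, b) = (-1, 1), using the step length h = π/√M with N = M as in the example below (the names φ, ψ, and sincApprox are our own choices):

φ[x_] := Log[(x + 1)/(1 - x)];       (* conformal map of (-1, 1) onto the real line *)
ψ[t_] := (Exp[t] - 1)/(Exp[t] + 1);  (* its inverse; the Sinc points are ψ(k h) *)
sincApprox[f_, M_][x_] := With[{h = Pi/Sqrt[M]},
  Sum[f[ψ[k h]] Sinc[Pi (φ[x] - k h)/h], {k, -M, M}]];
sincApprox[Exp[-#] &, 16][0.3] - Exp[-0.3]  (* small pointwise residual, shrinking as M grows *)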
A definite integral on a finite interval x ∈ (a, b) is given by the following approximation formula:

$$\int_a^b f(x)\, dx \approx h \sum_{k=-M}^{N} \frac{f(x_k)}{\varphi'(x_k)} = h\, V_m(f) \cdot V_m(1/\varphi'),$$

where V_m(g) denotes the vector of values of g at the Sinc points.
This relation follows from

$$\int_a^b f(x)\, dx \approx \int_a^b \sum_{k=-M}^{N} f(x_k)\, S(k, h)(x)\, dx = \sum_{k=-M}^{N} f(x_k) \int_a^b S(k, h)(x)\, dx$$

$$= \sum_{k=-M}^{N} f(x_k) \int_{\varphi(a)}^{\varphi(b)} \frac{S(k, h)(\psi(u))}{\varphi'(\psi(u))}\, du \approx h \sum_{k=-M}^{N} \frac{f(x_k)}{\varphi'(x_k)},$$

where the substitution u = φ(x), x = ψ(u), has been used.
Example. Given the function f(x) = e^(-x), find its integral over x ∈ [-1, 1] using Sinc methods with different M.
First define the function
f[x_] := Exp[-x]
The inverse conformal map is
ψ[x_, a_, b_] := (a + b Exp[x])/(1 + Exp[x])
The step length is given by
h = Pi/Sqrt[M]
The conformal map is
φ[x_, a_, b_] := Log[(x - a)/(b - x)]
For M = 2 we find

M = 2
integral2 = N[h Sum[f[ψ[h k, -1, 1]]/D[φ[x, -1, 1], x] /. x -> ψ[h k, -1, 1], {k, -M, M}]]

2.31753
For M = 4 we find

M = 4
integral4 = N[h Sum[f[ψ[h k, -1, 1]]/D[φ[x, -1, 1], x] /. x -> ψ[h k, -1, 1], {k, -M, M}]]

2.34454
For M = 8 we find

M = 8
integral8 = N[h Sum[f[ψ[h k, -1, 1]]/D[φ[x, -1, 1], x] /. x -> ψ[h k, -1, 1], {k, -M, M}]]

2.34992
For M = 16 we find

M = 16
integral16 = N[h Sum[f[ψ[h k, -1, 1]]/D[φ[x, -1, 1], x] /. x -> ψ[h k, -1, 1], {k, -M, M}]]

2.35039
The exact integral is

exact = N[Integrate[f[x], {x, -1, 1}]]

2.3504
Comparing the exact value with the different approximations shows an exponential decay of the error.
ListLogLogPlot[{{2, Abs[exact - integral2]}, {4, Abs[exact - integral4]},
  {8, Abs[exact - integral8]}, {16, Abs[exact - integral16]}},
  Frame -> True, FrameLabel -> {"M", "e"}]

[Log-log plot of the error e versus M; the error decays roughly exponentially with M.]
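The four computations above follow one pattern, so they can be wrapped into a single function; a sketch under the same maps φ and ψ as above (the name sincQuad is ours, and 1/φ'(x) = (x - a)(b - x)/(b - a) has been inserted explicitly):

sincQuad[f_, a_, b_, M_] := Module[{h = N[Pi/Sqrt[M]], xk},
  h Sum[xk = ψ[k h, a, b];
    f[xk] (xk - a) (b - xk)/(b - a),  (* f(x_k)/φ'(x_k) *)
    {k, -M, M}]]
sincQuad[Exp[-#] &, -1, 1, 16]  (* 2.35039, as above *)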
Solutions of Equations
Systems of simultaneous linear equations occur in solving problems in a wide variety of disciplines, including mathematics, statistics, the physical, biological, and social sciences, engineering, and business. They arise directly in solving real-world problems, and they also occur as part of the solution process for other problems, for example, solving systems of simultaneous nonlinear equations. Numerical solutions of boundary value problems and initial boundary value problems for differential equations are a rich source of linear systems, especially large ones. In this chapter we will examine some classical methods for solving linear systems, including direct methods such as the Gaussian elimination method, and iterative methods such as the Jacobi method and the Gauss-Seidel method.
2.1 Systems of Linear Equations
One of the topics studied in elementary algebra is the solution of pairs of linear equations such as

a x + b y = c
d x + e y = f

The coefficients a, b, …, f are given constants, and the task is to find the unknown values x, y. In this chapter, we examine the problem of finding solutions to larger systems of linear equations, containing more equations and unknowns.
To write the most general system of linear equations that we will study, we must change the notation used above to something more convenient. Let n be a positive integer. The general form for a system of n linear equations in the n unknowns x_1, x_2, x_3, …, x_n is
a_11 x_1 + a_12 x_2 + … + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + … + a_2n x_n = b_2
  ⋮
a_n1 x_1 + a_n2 x_2 + … + a_nn x_n = b_n.
The coefficients are given symbolically by a_ij, with i the number of the equation and j the number of the associated unknown component. On some occasions, to avoid possible confusion, we also use the symbol a_{i,j}. The right-hand sides b_1, b_2, …, b_n are given numbers, and the problem is to calculate the unknowns x_1, x_2, …, x_n. The linear system is said to be of order n.
A solution of a linear equation a_1 x_1 + a_2 x_2 + … + a_n x_n = b is a sequence of n numbers s_1, s_2, s_3, …, s_n such that the equation is satisfied when we substitute x_1 = s_1, x_2 = s_2, …, x_n = s_n. The set of all solutions of the equation is called its solution set or sometimes the general solution of the equation.
A finite set of linear equations in the variables x_1, x_2, …, x_n is called a system of linear equations or a linear system. The sequence of numbers s_1, s_2, s_3, …, s_n is called a solution of the system if x_1 = s_1, x_2 = s_2, …, x_n = s_n is a solution of every equation in the system.
A system of equations that has no solutions is said to be inconsistent; if there is at least one solution of the system, it is called consistent. To illustrate the possibilities that can occur in solving systems of linear equations, consider a general system of two linear equations in the unknowns x_1 = x and x_2 = y:

a_1 x + b_1 y = c_1   (a_1, b_1 not both zero)
a_2 x + b_2 y = c_2   (a_2, b_2 not both zero)

The graphs of these equations are lines, say l_1 and l_2. Since a point (x, y) lies on a line if and only if the numbers x and y satisfy the equation of the line, the solutions of the system of equations correspond to points of intersection of l_1 and l_2.
There are three possibilities, illustrated in the figure below.
[Figure: three panels plotting the lines l_1 and l_2 in the (x, y) plane.]

Figure. The three possible scenarios for a system of two linear equations: top, a single, unique solution exists; middle, no solution exists; bottom, an infinite number of solutions exists.
The lines l_1 and l_2 may intersect at only one point, in which case the system has exactly one solution.

The lines l_1 and l_2 may be parallel, in which case there is no intersection and consequently no solution to the system.

The lines l_1 and l_2 may coincide, in which case there are infinitely many points of intersection and consequently infinitely many solutions to the system.
Although we have considered only two equations with two unknowns here, the same three possibilities hold for arbitrary linear systems:

Remark. Every system of linear equations has either no solutions, exactly one solution, or infinitely many solutions.
For linear systems of small order, such as the system above, it is possible to solve them by paper-and-pencil calculations or with the help of a calculator, using methods learned in elementary algebra. For systems arising in most applications, however, it is common to have larger orders, from several dozen to millions. Evidently, there is no hope of solving such large systems by hand. We need to employ numerical methods for their solution. For this purpose, it is most convenient to use matrix/vector notation to represent linear systems and to use the corresponding matrix/vector arithmetic for their numerical treatment.
The linear system of equations above is completely specified by knowing the coefficients a_ij and the right-hand constants b_i. These coefficients are arranged as the elements of a matrix:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$
We say a_ij is the (i, j) entry of the matrix A. Similarly, the right-hand constants b_i are arranged in the form of a vector

$$b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$

The letters A and b are the names given to the matrix and the vector. The indices of a_ij now give the numbers of the row and column of A that contain a_ij.
The solution x_1, x_2, …, x_n is written similarly:

$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$
With this notation the linear system is written in the compact form

A.x = b.

The reader with some knowledge of linear algebra will immediately recognize that the left-hand side is the matrix A multiplied by the vector x, and that the equation expresses the equality of the two vectors A.x and b.
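In Mathematica the compact form carries over directly; a small sketch using the built-in LinearSolve on the 3x3 system that is solved by hand in the example below:

A = {{1, 1, 2}, {2, 4, -3}, {3, 6, -5}};
b = {9, 1, 0};
x = LinearSolve[A, b]  (* {1, 2, 3} *)
A.x == b               (* True *)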
The basic method for solving a system of linear equations is to replace the
given system by a new system that has the same solution set but is easier to
solve. This new system is generally obtained in a series of steps by applying
the following three types of operations to eliminate unknowns symbolically:
1. Multiply an equation through by a nonzero constant.
2. Interchange two equations.
3. Add a multiple of one equation to another.
Since the rows (horizontal lines) of an augmented matrix correspond to the
equations in the associated system, these three operations correspond to the
following operations on the rows of the augmented matrix:
4. Multiply a row through by a nonzero constant.
5. Interchange two rows.
6. Add a multiple of one row to another row.
These are called elementary row operations. The following example illustrates
how these operations can be used to solve systems of linear equations. Since a
systematic procedure for finding solutions will be derived in the next section, it
is not necessary to worry about how the steps in this example were selected.
The main effort at this time should be devoted to understanding the
computations and the discussion.
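The three elementary row operations are easily expressed at the matrix level; a minimal sketch (the function names are ours):

rowScale[m_, i_, c_] := ReplacePart[m, i -> c m[[i]]];            (* multiply row i by c *)
rowSwap[m_, i_, j_] := Permute[m, Cycles[{{i, j}}]];              (* interchange rows i and j *)
rowAdd[m_, i_, j_, c_] := ReplacePart[m, i -> m[[i]] + c m[[j]]]; (* add c times row j to row i *)
rowAdd[{{1, 1, 2, 9}, {2, 4, -3, 1}, {3, 6, -5, 0}}, 2, 1, -2]
(* {{1, 1, 2, 9}, {0, 2, -7, -17}, {3, 6, -5, 0}} -- the first step of the example below *)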
Example. Using Elementary Row Operations
Below we solve a system of linear equations by operating on the equations of the system and, in parallel, by operating on the rows of its augmented matrix.

Solution.

x + y + 2z = 9
2x + 4y - 3z = 1
3x + 6y - 5z = 0

1  1  2 |  9
2  4 -3 |  1
3  6 -5 |  0
Add -2 times the first equation (row) to the second to obtain

x + y + 2z = 9
2y - 7z = -17
3x + 6y - 5z = 0

1  1  2 |  9
0  2 -7 | -17
3  6 -5 |  0
Add -3 times the first equation (row) to the third to obtain

x + y + 2z = 9
2y - 7z = -17
3y - 11z = -27

1  1   2 |  9
0  2  -7 | -17
0  3 -11 | -27
Multiply the second equation (row) by 1/2 to obtain

x + y + 2z = 9
y - (7/2)z = -17/2
3y - 11z = -27

1  1    2 |  9
0  1 -7/2 | -17/2
0  3  -11 | -27
Add -3 times the second equation (row) to the third to obtain

x + y + 2z = 9
y - (7/2)z = -17/2
-(1/2)z = -3/2

1  1    2 |  9
0  1 -7/2 | -17/2
0  0 -1/2 | -3/2
Multiply the third equation (row) by -2 to obtain

x + y + 2z = 9
y - (7/2)z = -17/2
z = 3

1  1    2 |  9
0  1 -7/2 | -17/2
0  0    1 |  3
Add -1 times the second equation (row) to the first to obtain

x + (11/2)z = 35/2
y - (7/2)z = -17/2
z = 3

1  0  11/2 |  35/2
0  1  -7/2 | -17/2
0  0     1 |  3
Add -11/2 times the third equation (row) to the first and 7/2 times the third equation (row) to the second to obtain

x = 1
y = 2
z = 3

1  0  0 | 1
0  1  0 | 2
0  0  1 | 3
The solution thus is x = 1, y = 2, z = 3.
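The hand computation is confirmed by reducing the augmented matrix with the built-in RowReduce:

RowReduce[{{1, 1, 2, 9}, {2, 4, -3, 1}, {3, 6, -5, 0}}] // MatrixForm
(* identity block with last column {1, 2, 3} *)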
2.1.1 Gauss Elimination Method
We have just seen how easy it is to solve a system of linear equations once its augmented matrix is in reduced row-echelon form. Now we shall give a step-by-step elimination procedure that can be used to reduce any matrix to reduced row-echelon form. As we state each step in the procedure, we shall illustrate the idea by reducing the following augmented matrix to reduced row-echelon form:

0  0  -2  0   7  12
2  4 -10  6  12  28
2  4  -5  6  -5  -1
Step 1: Locate the leftmost column that does not consist entirely of zeros (here, the first column).
Step 2: Interchange the top row with another row, if necessary, to bring a nonzero entry to the top of the column found in Step 1.

2  4 -10  6  12  28
0  0  -2  0   7  12
2  4  -5  6  -5  -1
Step 3: If the entry that is now at the top of the column found in Step 1 is a, multiply the first row by 1/a in order to introduce a leading 1.

1  2  -5  3   6  14
0  0  -2  0   7  12
2  4  -5  6  -5  -1
Step 4: Add suitable multiples of the top row to the rows below so that all entries below the leading 1 become zeros. Here, -2 times the first row was added to the third row.

1  2  -5  3    6   14
0  0  -2  0    7   12
0  0   5  0  -17  -29
Step 5: Now cover the top row in the matrix and begin again with Step 1 applied to the submatrix that remains. Continue in this way until the entire matrix is in row-echelon form. Multiplying the second row by -1/2 gives

1  2  -5  3    6   14
0  0   1  0 -7/2   -6
0  0   5  0  -17  -29

then adding -5 times the second row to the third gives

1  2  -5  3    6   14
0  0   1  0 -7/2   -6
0  0   0  0  1/2    1

and multiplying the third row by 2 gives

1  2  -5  3    6   14
0  0   1  0 -7/2   -6
0  0   0  0    1    2
The entire matrix is now in row-echelon form. To find the reduced row-echelon form we need the following additional step.
Step 6: Beginning with the last nonzero row and working upward, add suitable multiples of each row to the rows above to introduce zeros above the leading 1's. Adding 7/2 times the third row to the second gives

1  2  -5  3  6  14
0  0   1  0  0   1
0  0   0  0  1   2

then adding -6 times the third row to the first gives

1  2  -5  3  0   2
0  0   1  0  0   1
0  0   0  0  1   2

and finally adding 5 times the second row to the first gives

1  2   0  3  0   7
0  0   1  0  0   1
0  0   0  0  1   2

The last matrix is in reduced row-echelon form.
If we use only the first five steps, the above procedure creates a row-echelon form and is called Gaussian elimination. Carrying out Step 6 in addition to the first five steps, which generates the reduced row-echelon form, is called Gauss-Jordan elimination.

Remark. It can be shown that every matrix has a unique reduced row-echelon form; that is, one arrives at the same reduced row-echelon form for a given matrix no matter how the row operations are varied. In contrast, a row-echelon form of a given matrix is not unique; different sequences of row operations can produce different row-echelon forms.
In Mathematica there exists a function which generates the reduced row-echelon form of a given matrix. For the example above the calculation is done by

MatrixForm[RowReduce[{{0, 0, -2, 0, 7, 12}, {2, 4, -10, 6, 12, 28},
   {2, 4, -5, 6, -5, -1}}]]

1  2  0  3  0  7
0  0  1  0  0  1
0  0  0  0  1  2
Example. Gauss-Jordan Elimination

Solve by Gauss-Jordan elimination the following system of linear equations:

x_1 + 5x_2 - 3x_3 + 5x_5 = 1
3x_1 + 7x_2 - 4x_3 - x_4 + 3x_5 + 2x_6 = 3
2x_1 + 9x_2 + 9x_4 + 3x_5 - 12x_6 = 7
Solution. The augmented matrix of this system is

am = {{1, 5, -3, 0, 5, 0, 1},
      {3, 7, -4, -1, 3, 2, 3},
      {2, 9, 0, 9, 3, -12, 7}};
The reduced row-echelon form is obtained step by step. Adding -3 times the first row to the second row gives

1   5  -3   0    5    0   1
0  -8   5  -1  -12    2   0
2   9   0   9    3  -12   7
Adding in addition -2 times the first row to the third one gives

1   5  -3   0    5    0   1
0  -8   5  -1  -12    2   0
0  -1   6   9   -7  -12   5
Interchanging the second with the third row and multiplying the new second row by -1 gives

1   5  -3   0    5    0   1
0   1  -6  -9    7   12  -5
0  -8   5  -1  -12    2   0
Adding 8 times the second row to the third row gives

1   5   -3    0    5    0    1
0   1   -6   -9    7   12   -5
0   0  -43  -73   44   98  -40
Dividing the last row by -43 gives

1   5  -3      0       5       0       1
0   1  -6     -9       7      12      -5
0   0   1   73/43  -44/43  -98/43   40/43
Adding 6 times the third row to the second row produces

1   5  -3      0       5       0       1
0   1   0   51/43   37/43  -72/43   25/43
0   0   1   73/43  -44/43  -98/43   40/43
Adding 3 times the last row to the first row generates

1   5   0  219/43   83/43  -294/43  163/43
0   1   0   51/43   37/43   -72/43   25/43
0   0   1   73/43  -44/43   -98/43   40/43

Finally, adding -5 times the second row to the first row gives

1   0   0  -36/43  -102/43   66/43   38/43
0   1   0   51/43    37/43  -72/43   25/43
0   0   1   73/43   -44/43  -98/43   40/43
The same result is generated by

MatrixForm[RowReduce[am]]

1   0   0  -36/43  -102/43   66/43   38/43
0   1   0   51/43    37/43  -72/43   25/43
0   0   1   73/43   -44/43  -98/43   40/43
The same augmented matrix am can be treated by a function which prints the intermediate steps.

MatrixForm[am]

1  5  -3   0  5    0  1
3  7  -4  -1  3    2  3
2  9   0   9  3  -12  7

The application of the function GaussJordanForm to the matrix am is shown in the following lines. To the left of each matrix the row operations carried out are stated.
GaussJordanForm[am] // MatrixForm

Forward pass

(Row 2) - (3)*(Row 1):
1   5  -3   0    5    0   1
0  -8   5  -1  -12    2   0
2   9   0   9    3  -12   7

(Row 3) - (2)*(Row 1):
1   5  -3   0    5    0   1
0  -8   5  -1  -12    2   0
0  -1   6   9   -7  -12   5

(Row 2)/(-8):
1   5  -3     0    5     0   1
0   1  -5/8  1/8  3/2  -1/4  0
0  -1   6     9   -7   -12   5

(Row 3) - (-1)*(Row 2):
1   5  -3     0     5      0    1
0   1  -5/8   1/8   3/2   -1/4  0
0   0  43/8  73/8  -11/2 -49/4  5

(Row 3)/(43/8):
1   5  -3      0       5       0      1
0   1  -5/8   1/8     3/2    -1/4     0
0   0   1    73/43  -44/43  -98/43  40/43

Backward pass

(Row 1) - (-3)*(Row 3):
1   5   0   219/43   83/43  -294/43  163/43
0   1  -5/8    1/8     3/2     -1/4      0
0   0   1    73/43  -44/43   -98/43   40/43

(Row 2) - (-5/8)*(Row 3):
1   5   0   219/43   83/43  -294/43  163/43
0   1   0    51/43   37/43   -72/43   25/43
0   0   1    73/43  -44/43   -98/43   40/43

(Row 1) - (5)*(Row 2):
1   0   0   -36/43  -102/43   66/43   38/43
0   1   0    51/43    37/43  -72/43   25/43
0   0   1    73/43   -44/43  -98/43   40/43

The last matrix, in reduced row-echelon form, is returned as the result.
The function thus carries out the entire elimination in a single step. The result shows that there are three leading variables x_1, x_2, and x_3, which are determined in terms of the free variables x_4, x_5, and x_6.
2.1.2 Operations Count
It is important to know the length of a computation and, for that reason, we count the number of arithmetic operations involved in Gaussian elimination. For reasons that will be apparent in later sections, the count is divided into three parts.
Table. Number of operations for each of the steps in the elimination.

Step     Additions           Multiplications     Divisions
1        (n-1)^2             (n-1)^2             n-1
2        (n-2)^2             (n-2)^2             n-2
⋮
n-1      1                   1                   1
Total    n(n-1)(2n-1)/6      n(n-1)(2n-1)/6      n(n-1)/2
The elimination step. We count the additions/subtractions (A,S), the multiplications (M), and the divisions (D) in going from the original system to the triangular system. We consider only the operations for the coefficients of A and not for the right-hand side b. Generally the divisions and multiplications are counted together, since they take about the same operation time. Doing this gives us

AS = n(n - 1)(2n - 1)/6

MD = n(n - 1)(2n - 1)/6 + n(n - 1)/2 = (1/3) n (n^2 - 1)

where AS denotes the number of additions and subtractions, and MD that of multiplications and divisions.
Modification of the right side b. Proceeding as before, we get

AS = (n - 1) + (n - 2) + … + 1 = n(n - 1)/2

MD = (n - 1) + (n - 2) + … + 1 = n(n - 1)/2
The back substitution step. As before,

AS = 0 + 1 + … + (n - 1) = n(n - 1)/2

MD = 1 + 2 + … + n = n(n + 1)/2.
Combining these results, we observe that the total number of operations to obtain x is

AS = n(n - 1)(2n - 1)/6 + n(n - 1)/2 + n(n - 1)/2 = n(n - 1)(2n + 5)/6

MD = n(n^2 + 3n - 1)/3.
Since AS and MD are almost the same in all of these counts, only MD is discussed; multiplications and divisions are also slightly more expensive in running time. For large values of n, the operation count for Gaussian elimination is about n^3/3. This means that as n is doubled, the cost of solving the linear system goes up by a factor of 8. In addition, most of the cost of the method is in the elimination step, since the remaining steps require only

MD = n(n - 1)/2 + n(n + 1)/2 = n^2.

Thus, once the elimination step has been completed for A, it is much less expensive to solve the system for further right-hand sides.
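A short sketch makes the growth of these counts concrete (the helper names are ours):

mdSolve[n_] := n (n^2 + 3 n - 1)/3;  (* full elimination plus back substitution *)
mdRhs[n_] := n^2;                    (* additional right-hand side only *)
TableForm[Table[{n, mdSolve[n], mdRhs[n]}, {n, {10, 100, 1000}}],
  TableHeadings -> {None, {"n", "MD (full solve)", "MD (new rhs)"}}]
(* for n = 1000: about 3.3*10^8 versus 10^6 operations *)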
Consider solving the linear system

A.x = b

where A has order n and is nonsingular. Then A^(-1) exists, and

A^(-1).(A.x) = A^(-1).b

x = A^(-1).b.

Thus, if A^(-1) is known, then x can be found by matrix multiplication. It might therefore at first seem reasonable to compute A^(-1) and solve for x this way. But this is not an efficient procedure, because of the great cost of finding A^(-1): the operations count for computing A^(-1) can be shown to be

MD = n^3.

This is about three times the cost of finding x by Gaussian elimination. There is actually no savings in using A^(-1).
The chief value of A^(-1) is as a theoretical tool for examining the solution of nonsingular systems of linear equations. With a few exceptions, one seldom needs to calculate A^(-1) explicitly.
2.1.3 Iterative Solutions
The linear systems A.x = b that occur in many applications can have a very large order. For such systems, the Gaussian elimination method is often too expensive in either computation time or computer memory requirements, or possibly both. Moreover, the accumulation of round-off errors can sometimes prevent the numerical solution from being accurate. As an alternative, such linear systems are usually solved with iteration methods, and that is the subject of this section.

In an iterative method, a sequence of progressively accurate iterates is produced to approximate the solution. Thus, in general, we do not expect to get the exact solution in a finite number of iteration steps, even if the round-off error effect is not taken into account. In contrast, if round-off errors are ignored, the Gaussian elimination method produces the exact solution after n - 1 steps of elimination and backward substitution for the resulting upper triangular system. The Gaussian elimination method and its variants are usually called direct methods.

In the study of iteration methods, a most important issue is the convergence property. We provide a framework for the convergence analysis of a general iteration method. For the two classical iteration methods studied in this section, the Jacobi and Gauss-Seidel methods, a sufficient condition for convergence is stated.
We begin with some numerical examples that illustrate two popular iteration methods. Following that, we give a more general discussion of iteration methods. Consider the linear system

9 x_1 + x_2 + x_3 = b_1
2 x_1 + 10 x_2 + 3 x_3 = b_2
3 x_1 + 4 x_2 + 11 x_3 = b_3
One class of iteration methods for solving this system proceeds as follows. In the equation numbered k, solve for x_k in terms of the remaining unknowns. In the above case,

x_1 = (1/9) (b_1 - x_2 - x_3)
x_2 = (1/10) (b_2 - 2 x_1 - 3 x_3)
x_3 = (1/11) (b_3 - 3 x_1 - 4 x_2)
Let x^(0) = (x_1^(0), x_2^(0), x_3^(0))^T be an initial guess of the true solution x. Then define an iteration sequence:

x_1^(k+1) = (1/9) (b_1 - x_2^(k) - x_3^(k))
x_2^(k+1) = (1/10) (b_2 - 2 x_1^(k) - 3 x_3^(k))
x_3^(k+1) = (1/11) (b_3 - 3 x_1^(k) - 4 x_2^(k))

for k = 0, 1, 2, …. This is called the Jacobi iteration method or the method of simultaneous replacements.
The system from above is solved for

b = {10, 19, 0}

{10, 19, 0}

and the initial guess for the solution

{x1, x2, x3} = {0, 0, 0}

{0, 0, 0}
The following steps show interactively how an iterative method works step by step. The first step uses the initial guess of the solution:

{x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3),
    (1/10) (b[[2]] - 2 x1 - 3 x3),
    (1/11) (b[[3]] - 3 x1 - 4 x2)}]

{1.11111, 1.9, 0.}
The second step, which uses the values from the first iteration step, generates

{x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3),
    (1/10) (b[[2]] - 2 x1 - 3 x3),
    (1/11) (b[[3]] - 3 x1 - 4 x2)}]

{0.9, 1.67778, -0.993939}

The values derived are used again in the next iteration step

{x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3),
    (1/10) (b[[2]] - 2 x1 - 3 x3),
    (1/11) (b[[3]] - 3 x1 - 4 x2)}]

{1.03513, 2.01818, -0.855556}

again the resulting values are used in the next step

{x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3),
    (1/10) (b[[2]] - 2 x1 - 3 x3),
    (1/11) (b[[3]] - 3 x1 - 4 x2)}]

{0.98193, 1.94964, -1.01619}

and so on:

{x1, x2, x3} = N[{(1/9) (b[[1]] - x2 - x3),
    (1/10) (b[[2]] - 2 x1 - 3 x3),
    (1/11) (b[[3]] - 3 x1 - 4 x2)}]

{1.00739, 2.00847, -0.97676}
The final result after these few iterations is an approximation of the true solution vector x = (1, 2, -1). To measure the accuracy, or the error, of the solution we use vector norms. The error of the solution is estimated by

e = ||x^(k+1) - x^(k)||

which gives a crude estimate of the total error in the calculation. The error may differ from component to component, because the norm measures an aggregate error of the iterates.
To understand the behavior of the iteration method, it is best to put the iteration formula into vector-matrix format. Rewrite the linear system A.x = b as

N.x = b + P.x

where A = N - P is a splitting of A. The matrix N must be nonsingular, and usually it is chosen so that linear systems

N.z = f

are relatively easy to solve for general vectors f. For example, N could be diagonal, triangular, or tridiagonal. The iteration method is defined by

N.x^(k+1) = b + P.x^(k),   k = 0, 1, 2, …
The Jacobi iteration method is based on choosing N as the diagonal of A. The following function implements the steps of a simple Jacobi iteration:

jacobiMethod[A_, b_, x0_, eps___] :=
 Block[{e = 10^-2, ea = 100, P, x0in},
  (* --- determine the diagonal matrix N and its inverse --- *)
  diag = DiagonalMatrix[Tr[A, List]];
  idiag = DiagonalMatrix[1/Tr[A, List]];
  P = diag - A;
  Print["||N^(-1).P||_1 = ", N[Norm[idiag.P, 1]]];
  Print["||N^(-1).P||_2 = ", N[Norm[idiag.P, 2]]];
  (* --- set the initial guess for the solution --- *)
  x0in = x0;
  (* --- iterate as long as the error is larger than the specified value --- *)
  While[ea > e,
   xk1 = idiag.(P.x0in) + idiag.b;
   ea = N[Sqrt[(xk1 - x0in).(xk1 - x0in)]];
   Print[Subscript[x, k], " = ", PaddedForm[N[xk1], {8, 6}],
    "  e = ", PaddedForm[ea, {8, 6}]];
   x0in = N[xk1]
   ];
  xk1
  ]
The application of the function to the example discussed above delivers the successive iterates and the estimated error in printed form, and a list of numbers as final result.

jacobiMethod[{{9, 1, 1}, {2, 10, 3}, {3, 4, 11}}, {10, 19, 0}, {0, 0, 0}]
||N^(-1).P||_1 = 0.474747
||N^(-1).P||_2 = 0.496804
x_k = {1.111111, 1.900000, 0.000000}   e = 2.201038
x_k = {0.900000, 1.677778, -0.993939}  e = 1.040128
x_k = {1.035129, 2.018182, -0.855556}  e = 0.391516
x_k = {0.981930, 1.949641, -1.016192}  e = 0.182571
x_k = {1.007395, 2.008472, -0.976760}  e = 0.075262
x_k = {0.996476, 1.991549, -1.005097}  e = 0.034765
x_k = {1.001505, 2.002234, -0.995966}  e = 0.014928
x_k = {0.999304, 1.998489, -1.001223}  e = 0.006820

{0.999304, 1.99849, -1.00122}

The printed values demonstrate that the error decreases and the solution approaches the result x = (1, 2, -1), as expected from the calculations above.
For a general matrix A = [a_ij] of order n, the Jacobi method is defined with

$$N = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & a_{nn} \end{pmatrix}$$

and P = N - A. For the Gauss-Seidel method, let
$$N = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$$

and P = N - A. The linear system N.z = f is easily solved because N is diagonal for the Jacobi iteration method and lower triangular for the Gauss-Seidel method. For systems A.x = b to which the Jacobi and Gauss-Seidel methods are typically applied, the above matrices N are nonsingular.
To analyze the convergence of the iteration, subtract the iteration formula from the exact relation N.x = b + P.x and let e^(k) = x - x^(k), obtaining

N.e^(k+1) = P.e^(k),   i.e.   e^(k+1) = M.e^(k)

where M = N^(-1).P. Using compatible vector and matrix norms, we get

||e^(k+1)|| ≤ ||M|| ||e^(k)||.

By induction on k, this implies

||e^(k)|| ≤ ||M||^k ||e^(0)||.

Thus, the error converges to zero if

||M|| < 1.

We attempt to choose the splitting A = N - P so that this is true, while also having the systems N.z = f easily solvable. The result also says that the error decreases at each step by at least a factor of ||M||, and that convergence occurs for any initial guess x^(0). For both the Jacobi and the Gauss-Seidel method, strict diagonal dominance of A, i.e. |a_ii| > Σ_{j≠i} |a_ij| for all i, is a standard sufficient condition guaranteeing ||M|| < 1 in a suitable norm.
The following lines implement the Gauss-Seidel method. In matrix form, with A = L + D + U split into its strictly lower triangular, diagonal, and strictly upper triangular parts, the Gauss-Seidel method reads

(L + D).x^(k+1) = -U.x^(k) + b

which is equivalent to

x^(k+1) = (L + D)^(-1).(b - U.x^(k))

and so can be viewed as a linear system of equations for x^(k+1) with lower triangular coefficient matrix L + D.
seidelMethod[A_, b_, x0_, eps___] :=
 Block[{e = 10^-6, ea = 100, P, x0in},
  (* --- split A into its diagonal, lower, and upper parts --- *)
  diag = DiagonalMatrix[Tr[A, List]];
  off = A - diag;
  l = Table[If[i > j, A[[i, j]], 0], {i, 1, Length[A]}, {j, 1, Length[A]}];
  u = off - l;
  ld = diag + l;
  ip = Inverse[ld];
  Print["||N^(-1).P||_1 = ", N[Norm[ip.u, 1]]];
  Print["||N^(-1).P||_2 = ", N[Norm[ip.u, 2]]];
  (* --- set the initial guess for the solution --- *)
  x0in = x0;
  (* --- iterate as long as the error is larger than the default value --- *)
  While[ea > e,
   xk1 = ip.(b - u.x0in);
   ea = N[Sqrt[(xk1 - x0in).(xk1 - x0in)]];
   Print[Subscript[x, k], " = ", PaddedForm[N[xk1], {8, 6}],
    "  e = ", PaddedForm[ea, {8, 6}]];
   x0in = N[xk1]
   ];
  xk1
  ]
The same example as used for the Jacobi method demonstrates that the Gauss-Seidel method converges quite rapidly.

seidelMethod[{{9, 1, 1}, {2, 10, 3}, {3, 4, 11}}, {10, 19, 0}, {0, 0, 0}]
||N^(-1).P||_1 = 0.520202
||N^(-1).P||_2 = 0.328064
x_k = {1.111111, 1.677778, -0.913131}  e = 2.209822
x_k = {1.026150, 1.968709, -0.995753}  e = 0.314143
x_k = {1.003005, 1.998125, -1.000138}  e = 0.037686
x_k = {1.000224, 1.999997, -1.000060}  e = 0.003353
x_k = {1.000007, 2.000017, -1.000008}  e = 0.000224
x_k = {0.999999, 2.000003, -1.000001}  e = 0.000018
x_k = {1.000000, 2.000000, -1.000000}  e = 2.523111*10^-6
x_k = {1.000000, 2.000000, -1.000000}  e = 2.980622*10^-7

{1., 2., -1.}