054374 Numerical Methods
Process Analysis using Numerical Methods
LECTURE FIVE: Solution of Sets of Nonlinear Equations
(c) Daniel R. Lewin, Technion

Lecture Five: Nonlinear Equations

Methods for the solution of a nonlinear equation are at the heart of many numerical methods: from the solution of material and energy (M & E) balances to the optimization of chemical processes. Furthermore, the need for the numerical solution of nonlinear equations also arises from the formulation of other numerical methods.

[Course map: Part One - Basic Building Blocks: solution of Ax = b, interpolation, min/max of f(x), solution of f(x) = 0, line integrals, finite-difference approximations. Part Two - Applications: linear regression, nonlinear regression, solution of ODEs, IVPDEs, and BVPs.]

Examples:
The minimization or maximization of a multivariable objective function can be formulated as the solution of the set of nonlinear equations generated by differentiating the objective with respect to each of the independent variables. These applications are covered in Day 6.
The numerical solution of a set of ordinary differential equations can be carried out either explicitly or implicitly. In implicit methods, the dependent variables are computed in each integration step iteratively, by solving a set of nonlinear equations. These applications are covered in Day 10.

Lecture Five: Objectives

This is an extension of last week's lecture to sets of equations. On completion of this material, the reader should be able to:
- Formulate and implement the Newton-Raphson method for a set of nonlinear equations.
- Use a steepest descent method to provide robust initialization of the Newton-Raphson method.
- Formulate and implement the multivariable extension of the method of successive substitution, including acceleration using Wegstein's method.

5.1 Newton-Raphson Method

Solve the set of nonlinear equations:

  f1(x1, x2) = (x1 - 3)^2 + (x2 - 3)^2 - 0.25 x1 x2 - 4 = 0        (5.1)
  f2(x1, x2) = (x1 - 2)^2 + (x2 - 2)^3 - 5 = 0                     (5.2)

Consider an initial guess of x^(0) = [2, 4]^T. Approximating the first equation using a Taylor expansion about x^(0):

  f1(x1, x2) ≈ f1(x^(0)) + ∂f1/∂x1|x^(0) (x1 - x1^(0)) + ∂f1/∂x2|x^(0) (x2 - x2^(0))

where:

  ∂f1/∂x1 = 2(x1 - 3) - 0.25 x2,    ∂f1/∂x2 = 2(x2 - 3) - 0.25 x1

Thus:

  f1(x1, x2) ≈ -4 - 3(x1 - 2) + 1.5(x2 - 4),                       (5.3)

a linear plane.

5.1 Newton-Raphson Method (Cont'd)

Setting the linear plane approximating f1(x1, x2) equal to zero gives the line:

  -4 - 3(x1 - 2) + 1.5(x2 - 4) = 0                                 (5.4)

[Figure: the linear plane approximating f1(x1, x2), and its intersection with the zero plane.]

5.1 Newton-Raphson Method (Cont'd)

Similarly, the intersection of the linear approximation for f2(x1, x2) with the zero plane gives the line:

  3 + 0(x1 - 2) + 12(x2 - 4) = 0                                   (5.5)

Eqs. (5.4)-(5.5) are a linear system in the two unknowns d1 = x1 - 2 and d2 = x2 - 4, the changes in x1 and x2 from the previous estimate:

  [ -3   1.5 ] [ d1 ]   [  4 ]
  [  0   12  ] [ d2 ] = [ -3 ]     giving   d1 = -1.4583,  d2 = -0.2500        (5.6)

The solution of this linear system of equations is a vector that defines a change in the estimate of the solution, found from the intersection of Eqs. (5.4) and (5.5). Thus, an updated estimate of the solution is x^(1) = [0.5417, 3.7500]^T.

5.1 Newton-Raphson Method (Cont'd)

[Figure: the linear planes approximating f1(x1, x2) and f2(x1, x2).] The intersection of the two linear planes generates a direction vector d^(0) from the initial guess x^(0). This intersection crosses the zero plane at x^(1), which is the next estimate of the solution.

5.1 Newton-Raphson Method (Cont'd)

This constitutes a single step of the N-R method, which is continued to convergence to four significant figures in four iterations:

  k    x1       x2       f1(x^(k))   f2(x^(k))   ||d^(k)||2   ||f(x^(k))||2
  0    2.0000   4.0000   -4.000      3.000       1.4796       5.0000
  1    0.5417   3.7500    2.098      2.486       0.3611       3.2531
  2    0.8606   3.5806    0.144      0.247       0.0348       0.2862
  3    0.8836   3.5546    1.36e-3    3.72e-3     4.92e-4      3.95e-3
  4    0.8838   3.5542    2.64e-7    1.00e-6     1.33e-7      1.04e-6

It is noted that the Newton-Raphson method exhibits quadratic convergence near the solution; the norms of both the correction and residual vectors in the last iteration are roughly the square of those in the previous one.

5.1 Newton-Raphson Method (Cont'd)

The method can be generalized to an n-dimensional problem. Given the set of nonlinear equations:

  fi(x) = 0,   i = 1, 2, ..., n                                    (5.7)

a single step of the Newton-Raphson method is:

  x^(k+1) = x^(k) + d^(k)                                          (5.8)

where d^(k) is the solution of the linear set of equations:

  ∂f1/∂x1|x^(k) d1^(k) + ∂f1/∂x2|x^(k) d2^(k) + ... + ∂f1/∂xn|x^(k) dn^(k) + f1(x^(k)) = 0
  ∂f2/∂x1|x^(k) d1^(k) + ∂f2/∂x2|x^(k) d2^(k) + ... + ∂f2/∂xn|x^(k) dn^(k) + f2(x^(k)) = 0
  ...
  ∂fn/∂x1|x^(k) d1^(k) + ∂fn/∂x2|x^(k) d2^(k) + ... + ∂fn/∂xn|x^(k) dn^(k) + fn(x^(k)) = 0
                                                                   (5.9)

that is, J(x^(k)) d^(k) = -f(x^(k)).

5.1 Newton-Raphson Method (Cont'd)

Newton-Raphson Algorithm:
Step 1: Initialize the estimate x^(0) and set k = 0.
Step 2: Compute the Jacobian matrix J(x^(k)), with elements [J(x^(k))]ij = ∂fi/∂xj evaluated at x^(k).
Step 3: Solve for d^(k): d^(k) = -[J(x^(k))]^(-1) f(x^(k)), i.e., solve the linear system J(x^(k)) d^(k) = -f(x^(k)).
Step 4: Update the estimate: x^(k+1) = x^(k) + d^(k).
Step 5: Test for convergence, using 2-norms for d^(k) and f(x^(k)); consider scaling both. If the convergence criteria are satisfied, END; otherwise set k = k + 1 and go to Step 2.

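A minimal sketch of this algorithm in Python/NumPy, applied to the example system (5.1)-(5.2) with its analytic Jacobian; the function names, tolerance, and iteration limit are illustrative choices rather than values from the slides.

import numpy as np

def f(x):
    # Residuals of Eqs. (5.1)-(5.2)
    x1, x2 = x
    return np.array([(x1 - 3.0)**2 + (x2 - 3.0)**2 - 0.25*x1*x2 - 4.0,
                     (x1 - 2.0)**2 + (x2 - 2.0)**3 - 5.0])

def jacobian(x):
    # Analytic Jacobian of Eqs. (5.1)-(5.2)
    x1, x2 = x
    return np.array([[2.0*(x1 - 3.0) - 0.25*x2, 2.0*(x2 - 3.0) - 0.25*x1],
                     [2.0*(x1 - 2.0),           3.0*(x2 - 2.0)**2]])

def newton_raphson(f, jac, x0, tol=1e-6, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        d = np.linalg.solve(jac(x), -f(x))   # Step 3: J(x^(k)) d^(k) = -f(x^(k))
        x = x + d                            # Step 4: x^(k+1) = x^(k) + d^(k)
        if np.linalg.norm(d) < tol and np.linalg.norm(f(x)) < tol:   # Step 5
            break
    return x

x_sol = newton_raphson(f, jacobian, x0=[2.0, 4.0])   # approaches [0.8838, 3.5542], as in the table above

Note that the correction is obtained by solving the linear system J d = -f rather than by explicitly inverting the Jacobian, which is the usual practice.
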
5.1 Newton-Raphson Method (Cont'd)

ISSUES 1: Initial estimate of the solution.
When solving a large set of nonlinear equations, or a set of highly nonlinear equations, the starting point used is important. If the initial guess is not a good estimate of the desired solution, the N-R method can converge to an extraneous root, or may not converge at all. A robust method for estimating a good starting point for the N-R method is the method of steepest descent, which is described next.

5.1 Newton-Raphson Method (Cont'd)

ISSUES 2: Computation of the Jacobian matrix.
The Jacobian matrix can either be determined analytically or approximated numerically. For numerical approximations, it is recommended that central-difference rather than forward- or backward-difference approximations be employed. If the system of nonlinear equations is well behaved (i.e., it can be solved relatively easily), either approach works well. For ill-behaved systems, however, analytical determination of the Jacobian is preferred.
Apart from the computationally intensive Jacobian calculation, the N-R method involves matrix inversion (or, equivalently, a linear solve) at each iteration. Adopting so-called quasi-Newton methods reduces these computational overheads.

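Where an analytic Jacobian is inconvenient, a central-difference approximation along the following lines could be used in Step 2 of the algorithm above; the perturbation size h is a typical illustrative choice, not a value prescribed by the slides.

import numpy as np

def numerical_jacobian(f, x, h=1e-6):
    # Central differences: J[i, j] ≈ [f_i(x + h e_j) - f_i(x - h e_j)] / (2h)
    x = np.asarray(x, dtype=float)
    m, n = f(x).size, x.size
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2.0 * h)
    return J
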
5.2 Method of Steepest Descent

An accurate estimate of the solution of f(x) = 0 is necessary to fully exploit the quadratic convergence of the N-R method. The method of steepest descent achieves this robustly: it estimates a vector x that minimizes the function

  g(x) = Σ_{i=1}^{n} [fi(x)]^2

which is minimized (g = 0) when x is a solution of f(x) = 0.

Steepest Descent Algorithm:
Step 1: Evaluate g(x) at x^(0).
Step 2: Determine the steepest descent direction.
Step 3: Move an appropriate amount in this direction and update x^(1).

5.2 Method of Steepest Descent (Cont'd)

For the function g(x), the steepest descent direction is -∇g(x), where:

  ∇g(x) = [ ∂g/∂x1, ∂g/∂x2, ..., ∂g/∂xn ]^T
        = [ 2 Σ_i fi(x) ∂fi/∂x1, 2 Σ_i fi(x) ∂fi/∂x2, ..., 2 Σ_i fi(x) ∂fi/∂xn ]^T
        = 2 J(x)^T f(x)

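In code, this result makes the descent direction a one-liner; the sketch below reuses the f and jacobian functions from the Newton-Raphson example above (the names are assumed for illustration).

def grad_g(x):
    # Gradient of g(x) = sum_i f_i(x)^2; the steepest descent direction is -grad_g(x)
    return 2.0 * jacobian(x).T @ f(x)
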
5.2 Method of Steepest Descent (Cont'd)

[Figure: contours of g(x1, x2) = x1^2 + x2^2.] Here, -∇g(x) = -(2 x1, 2 x2)^T, so at x = [2, 2]^T the steepest descent direction is -∇g(x) = -(4, 4)^T, and at x = [-3, 0]^T it is -∇g(x) = (6, 0)^T.

5.2 Method of Steepest Descent (Cont'd)

Defining the steepest descent step:

  x^(1) = x^(0) - α ∇g(x^(0)),   α > 0                             (5.15)

we need to select the step length α that minimizes:

  h(α) = g( x^(0) - α ∇g(x^(0)) )                                  (5.16)

Instead of differentiating h(α) with respect to α, we construct an interpolating polynomial P(α) and select α to minimize the value of P(α).

[Figure: h(α) sampled at three points giving g1, g2, g3, together with the interpolating polynomial P(α).]

5.2 Method of Steepest Descent (Cont'd)

Interpolating polynomial P(α), passing through (α1, g1), (α2, g2) and (α3, g3):

  P(α) = g1 + h1 α + h3 α (α - α2)                                 (5.17)

where:

  h1 = (g2 - g1)/α2,   h2 = (g3 - g2)/(α3 - α2)   and   h3 = (h2 - h1)/α3

Set α1 = 0. Set α3 so that g3 < g1. Set α2 = α3/2.

Differentiating (5.17) with respect to α and setting the derivative to zero gives the step length:

  α = 0.5 (α2 - h1/h3)

[Figure: h(α) and the interpolating polynomial P(α) through (α1, g1), (α2, g2), (α3, g3).]

Hence, the update for x^(1) is:

  x^(1) = x^(0) - α ∇g(x^(0))

5.2 Method of Steepest Descent (Cont'd)

Example: estimating an initial guess for N-R using the method of steepest descent, starting from x^(0) = [0, 0]^T:

  k    α        x1       x2       g(x^(k))
  0    -        0.000    0.000    277
  1    1.234    0.299    1.198    48.52
  2    0.881    0.912    1.831    16.31
  3    1.189    0.296    2.848    11.97
  4    0.419    0.696    2.973    6.284
  5    0.400    0.587    3.358    2.377
  6    0.286    0.862    3.435    0.562

Notes: (a) Values of α greater than 1 imply extrapolation. (b) Convergence is much slower than with N-R.

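The sketch below implements this steepest-descent initialization with the quadratic line search of the previous slide, applied to the reconstructed system (5.1)-(5.2) via the f and jacobian functions from the Newton-Raphson sketch. Normalizing the descent direction and taking the initial bracket α3 = 1 are assumptions, made here because they yield step lengths of the same order as those in the table (the first computed α comes out near 1.23).

import numpy as np

def g_of(x):
    # g(x) = sum_i f_i(x)^2, which equals zero at a solution of f(x) = 0
    return float(np.sum(f(x)**2))

def steepest_descent_init(f, jac, x0, n_steps=6):
    x = np.asarray(x0, dtype=float)
    for k in range(n_steps):
        grad = 2.0 * jac(x).T @ f(x)            # gradient of g(x)
        z = grad / np.linalg.norm(grad)         # normalized direction; move along -z
        g1 = g_of(x)
        a3 = 1.0                                # bracket: alpha1 = 0, alpha3 chosen so that g3 < g1
        g3 = g_of(x - a3 * z)
        while g3 >= g1 and a3 > 1e-8:
            a3 *= 0.5
            g3 = g_of(x - a3 * z)
        a2 = 0.5 * a3
        g2 = g_of(x - a2 * z)
        h1 = (g2 - g1) / a2                     # divided differences of Eq. (5.17)
        h2 = (g3 - g2) / (a3 - a2)
        h3 = (h2 - h1) / a3
        alpha = 0.5 * (a2 - h1 / h3)            # minimizer of the interpolating quadratic
        if not np.isfinite(alpha) or g_of(x - alpha * z) > g3:
            alpha = a3                          # keep the better of the two candidate steps
        x = x - alpha * z
    return x

x_start = steepest_descent_init(f, jacobian, x0=[0.0, 0.0])   # a robust starting point for N-R
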
5.3 Wegstein's Method

Recall one-dimensional successive substitution:

  x^(k+1) = g(x^(k))                                               (5.19)

The convergence rate of Eq. (5.19) depends on the gradient of g(x); for gradients close to unity, very slow convergence is expected. Using two values of g(x), a third value can be predicted by linear extrapolation, using the gradient estimated at x^(1):

  dg(x)/dx ≈ [ g(x^(1)) - g(x^(0)) ] / [ x^(1) - x^(0) ] = s

5.3 Wegstein's Method (Cont'd)

Linear extrapolation gives an estimate of the function value at the next iteration, x^(2):

  g(x^(2)) ≈ g(x^(1)) + s ( x^(2) - x^(1) )

However, since at convergence x^(2) = g(x^(2)), then:

  x^(2) = g(x^(1)) + s ( x^(2) - x^(1) )

Defining q = s/(s - 1), the Wegstein update is:

  x^(2) = (1 - q) g(x^(1)) + q x^(1)                               (5.21)

Note that for q = 0, Eq. (5.21) reduces to regular successive substitution.

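A minimal sketch of the one-dimensional Wegstein iteration built directly on Eq. (5.21); the helper name and the example fixed-point map at the end are purely illustrative and are not taken from the slides.

def wegstein_1d(g, x0, tol=1e-8, max_iter=100):
    x_prev = x0
    x = g(x_prev)                               # one successive-substitution step to get a second point
    for _ in range(max_iter):
        if x == x_prev:                         # already at a fixed point
            return x
        gx_prev, gx = g(x_prev), g(x)
        s = (gx - gx_prev) / (x - x_prev)       # secant estimate of dg/dx
        q = s / (s - 1.0)
        x_new = (1.0 - q) * gx + q * x          # Eq. (5.21); q = 0 gives plain substitution
        if abs(x_new - x) < tol:
            return x_new
        x_prev, x = x, x_new
    return x

# Illustrative fixed-point problem (not from the slides): x = 0.5*(x + 2/x) has fixed point sqrt(2)
root = wegstein_1d(lambda x: 0.5 * (x + 2.0 / x), x0=1.0)
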
5.3 Wegstein's Method (Cont'd)

For a set of nonlinear equations, the method of successive substitution is:

  xi^(k+1) = gi( x1^(k), x2^(k), ..., xn^(k) ),   i = 1, 2, ..., n          (5.22)

This method is commonly used in the solution of material and energy balances in flowsheet simulators, where sets of equations such as Eq. (5.22) are solved repeatedly while accounting for the unit-operation models involved. In cases involving significant material recycle, the convergence rate of the successive substitution method can be very slow, because the local gradients of the functions gi(x) are close to unity. In such cases, the Wegstein acceleration method is often used.

5.3 Wegstein's Method (Cont'd)

For the multivariable case, the Wegstein method is implemented as follows:
1 Starting from an initial guess x^(0), Eq. (5.22) is applied twice to generate the estimates x^(1) = g(x^(0)) and x^(2) = g(x^(1)), respectively.
2 From k = 1, the local gradients si are computed:

  si = [ gi(x1^(k), ..., xn^(k)) - gi(x1^(k-1), ..., xn^(k-1)) ] / [ xi^(k) - xi^(k-1) ],   i = 1, 2, ..., n

3 The Wegstein update is used to generate subsequent estimates of the solution:

  xi^(k+1) = (1 - qi) gi(x1^(k), ..., xn^(k)) + qi xi^(k),   i = 1, 2, ..., n,   k = 2, 3, ...

  where   qi = si/(si - 1),   i = 1, 2, ..., n,   k = 2, 3, ...

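A sketch of this component-wise Wegstein iteration, with the acceleration factors qi bounded to a pre-specified range as discussed on the next slide; the bounds, tolerance, and function names are illustrative assumptions rather than values from the slides.

import numpy as np

def wegstein(g, x0, tol=1e-8, max_iter=200, q_min=-5.0, q_max=0.5):
    # Component-wise Wegstein acceleration for the fixed-point problem x = g(x)
    x_prev = np.asarray(x0, dtype=float)
    x = g(x_prev)                                # step 1: one successive-substitution step
    for _ in range(max_iter):
        gx_prev, gx = g(x_prev), g(x)
        dx = x - x_prev
        s = np.zeros_like(x)                     # step 2: secant estimates of the local gradients
        mask = np.abs(dx) > 1e-14
        s[mask] = (gx[mask] - gx_prev[mask]) / dx[mask]
        q = np.clip(s / (s - 1.0), q_min, q_max) # step 3: bounded acceleration factors (assumes s_i != 1)
        x_new = (1.0 - q) * gx + q * x           # q_i = 0 recovers plain successive substitution
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x_prev, x = x, x_new
    return x
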
5.3 Wegstein's Method (Cont'd)

When solving sets of nonlinear equations, it is often desirable to ensure that none of the equations converges at a rate outside a pre-specified range. Resetting values of qi that fall outside the desired limits (i.e., enforcing qmin < qi < qmax) ensures this.

  Value of qi       Expected convergence
  0 < qi < 1        Damped successive substitution: slow, stable convergence
  qi = 0            Regular successive substitution
  qi < 0            Accelerated successive substitution: can speed convergence, but may cause instabilities

5.3 Wegstein's Method (Cont'd)

Example Application:
The figure shows a flowsheet for the production of B from raw material A, consisting of a mixer, reactor, separator, and splitter connected by the streams F, S1, S2, S3, L, R, and P. The gaseous feed stream F (99 wt% A and 1 wt% C, an inert material) is mixed with the recycle stream R (containing unconverted A and the inert C) to form the reactor feed, S1. The conversion of A to B in the reactor is a specified weight fraction, denoted χ below. The reactor products, S2, are fed to the separator, which produces a liquid product, L, containing only B, and a vapor overhead product, S3, which is free of B. To prevent the accumulation of inerts in the synthesis loop, a portion of stream S3 (a fraction P) is purged as stream P.

5.3 Wegstein's Method (Cont'd)

The material balances for the flowsheet consist of 15 equations involving 19 variables (4 degrees of freedom). Since it is known that the conversion χ = 0.15, X_{A,F} = 0.99 and F = 20 T/hr, the effect of P on the performance of the process is investigated by solving the 15 equations iteratively:
1 Initial values are assumed for R and X_{A,R} (usually zero).
2 The three mixer equations are solved.
3 The four reactor equations are solved.
4 The five separator equations are solved.
5 The three splitter equations are solved.
6 This provides updated estimates for R and X_{A,R}. If the estimates have changed by more than the convergence tolerance, return to Step 2.

5.3 Wegstein's Method (Cont'd)

Steps 1 to 5 are equivalent to the two recursive formulae:

  R^(k+1) = g1( R^(k), X_{A,R}^(k); P, constants )                        (5.25)
  X_{A,R}^(k+1) = g2( R^(k), X_{A,R}^(k); constants )

For example, Eq. (5.25) is generated using the process material and energy balances (χ is the fractional conversion of A to B in the reactor):

  R = (1 - P) S3
  S3 = (F + R)(1 - χ X_{A,S1}),   where   X_{A,S1} = (F X_{A,F} + R X_{A,R}) / (F + R)

so that:

  R = (1 - P) [ F (1 - χ X_{A,F}) + (1 - χ X_{A,R}) R ]

5.3 Wegstein's Method (Cont'd)

Hence, the recursion formula for R^(k+1) is:

  R^(k+1) = (1 - P) [ F (1 - χ X_{A,F}) + (1 - χ X_{A,R}^(k)) R^(k) ] = g1( R^(k), P )        (5.27)

Similarly, the recursion formula for X_{A,R}^(k+1) is:

  X_{A,R}^(k+1) = (1 - χ)( F X_{A,F} + R^(k) X_{A,R}^(k) ) / [ F (1 - χ X_{A,F}) + R^(k) (1 - χ X_{A,R}^(k)) ]

Substituting the known values of X_{A,F}, F and χ in Eq. (5.27):

  R^(k+1) = g1( R^(k), P ) = 17.03 (1 - P) + (1 - P)(1 - 0.15 X_{A,R}^(k)) R^(k)

This means that the local gradient of g1 with respect to R^(k) is:

  ∂g1/∂R^(k) = (1 - P)(1 - 0.15 X_{A,R}^(k)) ≈ 1 - 0.15 X_{A,R}^(k)   for small purge fractions,

so that:

  0.85 < ∂g1/∂R^(k) < 1,   since   0 < X_{A,R}^(k) < 1

5.3 Wegstein's Method (Cont'd)

This means that successive substitution will converge very slowly in such cases.

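To make this concrete, the sketch below iterates the two recursion formulae reconstructed above for the recycle example (F = 20 T/hr, X_{A,F} = 0.99, χ = 0.15), at an assumed purge fraction P = 0.10, using plain successive substitution; the commented line shows how the Wegstein sketch from earlier in this section would be applied to the same fixed-point map. The value of P, the starting point, and the tolerance are illustrative only.

import numpy as np

F, X_AF, CHI = 20.0, 0.99, 0.15        # feed rate [T/hr], A-fraction in feed, conversion (given above)
P = 0.10                               # assumed purge fraction, for illustration only

def g_recycle(x):
    # Reconstructed recursions g1 and g2 for [R, X_AR]
    R, X_AR = x
    A_out = (1.0 - CHI) * (F * X_AF + R * X_AR)            # unconverted A leaving the reactor
    S3 = F * (1.0 - CHI * X_AF) + R * (1.0 - CHI * X_AR)   # B-free overhead stream (A plus inerts)
    return np.array([(1.0 - P) * S3, A_out / S3])

def successive_substitution(g, x0, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

x_ss, n_ss = successive_substitution(g_recycle, [0.0, 0.0])
# x_w = wegstein(g_recycle, [0.0, 0.0])   # the Wegstein sketch from earlier should need far fewer iterations

Because ∂g1/∂R^(k) is close to unity, the error in plain substitution shrinks only slowly at each pass, which is exactly the situation in which Wegstein acceleration pays off.
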
Summary

Having completed this lesson, you should now be able to:
- Formulate and implement the Newton-Raphson method for a set of nonlinear equations. You should be aware that an accurate initial estimate of the solution may be required to guarantee convergence.
- Use a steepest descent method to provide robust initialization of the N-R method. Since this method only gives linear convergence, the N-R method should be applied once the residuals are sufficiently reduced.
- Formulate and implement the multivariable extension of the method of successive substitution, including acceleration using Wegstein's method. This method is commonly used in commercial flowsheet simulators to converge material and energy balances for flowsheets involving material recycle.