
Fixed-point iteration


In numerical analysis, fixed-point iteration is a method of computing fixed points of iterated functions.

More specifically, given a function $f$ defined on the real numbers with real values and given a point $x_0$ in the domain of $f$, the fixed-point iteration is

$x_{n+1} = f(x_n), \quad n = 0, 1, 2, \dots,$

which gives rise to the sequence $x_0, x_1, x_2, \dots$, which is hoped to converge to a point $x$. If $f$ is continuous, then one can prove that the obtained $x$ is a fixed point of $f$, i.e.,

$f(x) = x.$

More generally, the function $f$ can be defined on any metric space with values in that same space.
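
As a concrete illustration, a minimal Python sketch of the iteration (the function name, tolerance, and iteration cap are illustrative choices, not part of the article):

```python
def fixed_point_iteration(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until two successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```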

Examples
A first simple and useful example is the Babylonian method for computing the square root of $a > 0$, which consists in taking $f(x) = \frac{1}{2}\left(x + \frac{a}{x}\right)$, i.e. the mean value of $x$ and $a/x$, to approach the limit $x = \sqrt{a}$ (from whatever starting point $x_0 \gg 0$). This is a special case of Newton's method quoted below.
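
A minimal Python sketch of this iteration (the starting point and tolerance are illustrative choices):

```python
def babylonian_sqrt(a, x0=1.0, tol=1e-12):
    """Approximate sqrt(a) via the fixed-point iteration x_{n+1} = (x_n + a/x_n) / 2."""
    x = x0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)
    return x

print(babylonian_sqrt(2.0))   # ~1.4142135623730951
```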

The fixed-point iteration $x_{n+1} = \sin x_n$ with initial value $x_0 = 2$ converges to 0. This example does not satisfy the assumptions of the Banach fixed-point theorem and so its speed of convergence is very slow.
The fixed-point iteration $x_{n+1} = \cos x_n$ converges to the unique fixed point of the function $f(x) = \cos x$ for any starting point $x_0$. This example does satisfy the assumptions of the Banach fixed-point theorem. Hence, the error after $n$ steps satisfies

$|x_n - x| \le \frac{q^{n}}{1 - q}\,|x_1 - x_0| = C q^{n}$

(where we can take $q = 0.85$, if we start from $x_0 = 1$). When the error is less than a multiple of $q^{n}$ for some constant $q$, we say that we have linear convergence. The Banach fixed-point theorem allows one to obtain fixed-point iterations with linear convergence.
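
The geometric decay of the error in this example can be observed numerically; a small sketch (the reference value of the fixed point and the number of steps printed are illustrative choices):

```python
import math

fixed_point = 0.7390851332151607   # the fixed point of cos (the Dottie number)
x = 1.0                            # same starting point as above
for n in range(1, 11):
    x = math.cos(x)
    # the error |x_n - x| shrinks by roughly a constant factor q < 1 at each step
    print(n, abs(x - fixed_point))
```
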
The fixed-point iteration $x_{n+1} = 2 x_n$ will diverge unless $x_0 = 0$. We say that the fixed point of $f(x) = 2x$ is repelling.
The requirement that $f$ is continuous is important, as the following example shows. The iteration

$x_{n+1} = \begin{cases} \dfrac{x_n}{2}, & x_n \ne 0 \\ 1, & x_n = 0 \end{cases}$

converges to 0 for all values of $x_0$. However, 0 is not a fixed point of the function

$f(x) = \begin{cases} \dfrac{x}{2}, & x \ne 0 \\ 1, & x = 0 \end{cases},$

as this function is not continuous at $x = 0$, and in fact has no fixed points.

Applications
Newton's method for finding roots of a given differentiable function $f(x)$ is

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$

If we write $g(x) = x - \frac{f(x)}{f'(x)}$, we may rewrite the Newton iteration as the fixed-point iteration $x_{n+1} = g(x_n)$. If this iteration converges to a fixed point $x$ of $g$, then

$x = g(x) = x - \frac{f(x)}{f'(x)}, \quad \text{so} \quad \frac{f(x)}{f'(x)} = 0.$

Since $1/f'(x)$ is nonzero, this forces $f(x) = 0$: $x$ is a root of $f$. Under the assumptions of the Banach fixed-point theorem, the Newton iteration, framed as a fixed-point method, demonstrates linear convergence. However, a more detailed analysis shows quadratic convergence, i.e., $|x_n - x| < C q^{2^{n}}$, under certain circumstances.
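
A short sketch of Newton's method written explicitly as the fixed-point iteration $x_{n+1} = g(x_n)$ (the test function $f(x) = x^2 - 2$ is an illustrative choice):

```python
def newton_as_fixed_point(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Newton's method as the fixed-point iteration x_{n+1} = g(x_n), g(x) = x - f(x)/f'(x)."""
    g = lambda x: x - f(x) / f_prime(x)
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: root of f(x) = x**2 - 2, i.e. sqrt(2)
print(newton_as_fixed_point(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))
```
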
Halley's method is similar to Newton's method but, when it works correctly, its error is $|x_n - x| < C q^{3^{n}}$ (cubic convergence). In general, it is possible to design methods that converge with speed $C q^{k^{n}}$ for any $k \in \mathbb{N}$. As a general rule, the higher the $k$, the less stable the method is, and the more computationally expensive it gets. For these reasons, higher-order methods are typically not used.
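
As an illustration, a sketch of Halley's method on the same test problem $f(x) = x^2 - 2$; the update formula below is the standard textbook form, stated here as an assumption since the article does not spell it out:

```python
def halley(f, df, d2f, x0, tol=1e-14, max_iter=50):
    """Halley's method: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f''), evaluated at x_n."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        x_next = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2fx)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Cubic convergence: the number of correct digits roughly triples per step.
print(halley(lambda x: x**2 - 2, lambda x: 2 * x, lambda x: 2.0, x0=1.0))
```
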
Runge–Kutta methods and numerical ordinary differential equation solvers in general can be viewed as fixed-point iterations. Indeed, the core idea when analyzing the A-stability of ODE solvers is to start with the special case $y' = a y$, where $a$ is a complex number, and to check whether the ODE solver converges to the fixed point $y = 0$ whenever the real part of $a$ is negative.[a]
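
A tiny numerical check of this idea, comparing the explicit and implicit (backward) Euler methods on $y' = ay$; the coefficient, step size, and number of steps are illustrative assumptions, with backward Euler standing in for an A-stable solver:

```python
a = -5.0 + 3.0j      # complex coefficient with negative real part
h = 1.0              # deliberately large step size
y_explicit = y_implicit = 1.0 + 0.0j

for _ in range(50):
    y_explicit = (1 + h * a) * y_explicit    # explicit Euler: blows up for this step size
    y_implicit = y_implicit / (1 - h * a)    # backward Euler: decays to the fixed point y = 0

print(abs(y_explicit), abs(y_implicit))
```
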
The Picard–Lindelöf theorem, which
shows that ordinary differential
equations have solutions, is essentially
an application of the Banach fixed point
theorem to a special sequence of
functions which forms a fixed point
iteration, constructing the solution to the
equation. Solving an ODE in this way is
called Picard iteration, Picard's method,
or the Picard iterative process.
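
For simple right-hand sides the Picard iterates can be computed symbolically; a minimal sketch using SymPy (the library choice and the example ODE $y' = y$ are assumptions for illustration):

```python
import sympy as sp

t, s = sp.symbols("t s")

def picard_iterates(f, y0, t0=0, steps=4):
    """Picard iteration: y_{k+1}(t) = y0 + integral from t0 to t of f(s, y_k(s)) ds."""
    y = sp.sympify(y0)                    # y_0(t) is the constant initial value
    for _ in range(steps):
        y = y0 + sp.integrate(f(s, y.subs(t, s)), (s, t0, t))
    return sp.expand(y)

# Example: y' = y with y(0) = 1; the iterates are the partial sums of the series for exp(t).
print(picard_iterates(lambda s_, y_: y_, 1))
```
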
The iteration capability in Excel can be
used to find solutions to the Colebrook
equation to an accuracy of 15
significant figures.[1][2]
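
The same computation can be sketched outside Excel; below, the Colebrook equation is assumed to take its common form $1/\sqrt{f} = -2\log_{10}\!\left(\frac{\varepsilon/D}{3.7} + \frac{2.51}{\mathrm{Re}\,\sqrt{f}}\right)$ and is rearranged as a fixed-point problem in $x = 1/\sqrt{f}$; the inputs and starting guess are illustrative:

```python
import math

def colebrook_friction_factor(reynolds, rel_roughness, tol=1e-12, max_iter=100):
    """Fixed-point iteration on x = 1/sqrt(f) for the (assumed) Colebrook equation."""
    x = 10.0   # initial guess for 1/sqrt(f)
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / reynolds)
        if abs(x_new - x) < tol:
            return 1.0 / x_new**2
        x = x_new
    return 1.0 / x**2

print(colebrook_friction_factor(reynolds=1e5, rel_roughness=1e-4))
```
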
Some of the "successive approximation"
schemes used in dynamic programming
to solve Bellman's functional equation
are based on fixed point iterations in the
space of the return function.[3][4]
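
One concrete instance is value iteration, in which the Bellman operator is applied repeatedly to the value (return) function; since the operator is a contraction in the supremum norm with modulus equal to the discount factor, the iteration converges. A minimal sketch on a made-up two-state problem (all numbers hypothetical):

```python
import numpy as np

# Hypothetical 2-state, 2-action Markov decision process (numbers invented for illustration).
P = np.array([[[0.9, 0.1],    # P[a, s, s']: transition probabilities under action a
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.4, 0.6]]])
R = np.array([[1.0, 0.0],     # R[a, s]: immediate reward for action a in state s
              [0.5, 2.0]])
gamma = 0.9                   # discount factor; the Bellman operator contracts with modulus gamma

V = np.zeros(2)               # initial guess for the value function
for _ in range(500):
    # Bellman operator: (TV)(s) = max_a [ R(a, s) + gamma * sum_{s'} P(a, s, s') V(s') ]
    V_new = np.max(R + gamma * P @ V, axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

print(V)   # approximate fixed point of the Bellman operator
```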

Properties
If a function $f$ defined on the real line with real values is Lipschitz continuous with Lipschitz constant $L < 1$, then this function has precisely one fixed point, and the fixed-point iteration converges towards that fixed point for any initial guess $x_0$. This theorem can be generalized to any complete metric space.
Proof of this theorem:

Since $f$ is Lipschitz continuous with Lipschitz constant $L < 1$, then for the sequence $\{x_n\}_{n=0}^{\infty}$ defined by $x_{n+1} = f(x_n)$, we have

$|x_{n+1} - x_n| = |f(x_n) - f(x_{n-1})| \le L\,|x_n - x_{n-1}|$

and

$|x_n - x_{n-1}| \le L\,|x_{n-1} - x_{n-2}| \le \cdots \le L^{\,n-1}\,|x_1 - x_0|.$

Combining the above inequalities yields

$|x_{n+1} - x_n| \le L^{n}\,|x_1 - x_0|.$

Since $L < 1$, $L^{n} \to 0$ as $n \to \infty$. Therefore $\{x_n\}$ is a Cauchy sequence: for $m > n$,

$|x_m - x_n| \le \sum_{k=n}^{m-1} |x_{k+1} - x_k| \le \frac{L^{n}}{1 - L}\,|x_1 - x_0| \to 0,$

and thus it converges to a point $x^{*}$.

For the iteration $x_{n+1} = f(x_n)$, letting $n$ go to infinity on both sides of the equation and using the continuity of $f$, we obtain $x^{*} = f(x^{*})$. This shows that $x^{*}$ is a fixed point of $f$, so the iteration converges to a fixed point.
This property is very useful because not every fixed-point iteration converges. When constructing a fixed-point iteration, it is very important to make sure it converges. There are several fixed-point theorems that guarantee the existence of a fixed point, but since the iteration function is continuous, we can usually use the above theorem to test whether an iteration converges. The proof of the generalized theorem in a complete metric space is similar.
The speed of convergence of the
iteration sequence can be increased by
using a convergence acceleration
method such as Aitken's delta-squared
process. The application of Aitken's
method to fixed-point iteration is known
as Steffensen's method, and it can be
shown that Steffensen's method yields a
rate of convergence that is at least
quadratic.
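
A minimal sketch of Steffensen's method as just described (the test function $\cos$ and the stopping rule are illustrative choices):

```python
import math

def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen's method: Aitken's delta-squared acceleration applied to the
    fixed-point iteration x_{n+1} = f(x_n), restarting from each accelerated value."""
    x = x0
    for _ in range(max_iter):
        x1 = f(x)
        x2 = f(x1)
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:           # x is already (numerically) a fixed point
            return x2
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(steffensen(math.cos, 1.0))   # ~0.7390851332, far fewer steps than the plain iteration
```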

See also
Contraction mapping
Root-finding algorithm
Fixed-point theorem
Fixed-point combinator
Banach fixed-point theorem
Cobweb plot
Markov chain
Infinite compositions of analytic
functions
Iterated function
Convergence and fixed point

References
a. One may also consider certain iterations A-stable if the iterates stay bounded for a long time, which is beyond the scope of this article.
1. M. A. Kumar (2010). Solve Implicit Equations (Colebrook) Within Worksheet. Createspace. ISBN 1-4528-1619-0.
2. Brkic, Dejan (2017). "Solution of the Implicit Colebrook Equation for Flow Friction Using Excel". Spreadsheets in Education (eJSiE): Vol. 10: Iss. 2, Article 2. Available at: https://sie.scholasticahq.com/article/4663-solution-of-the-implicit-colebrook-equation-for-flow-friction-using-excel
3. Bellman, R. (1957). Dynamic Programming. Princeton University Press.
4. Sniedovich, M. (2010). Dynamic Programming: Foundations and Principles. Taylor & Francis.
Burden, Richard L.; Faires, J. Douglas (1985). "2.2 Fixed-Point Iteration". Numerical Analysis (3rd ed.). PWS Publishers. ISBN 0-87150-857-5.

External links
Fixed-point algorithms online
Fixed-point iteration online calculator
(Mathematical Assistant on Web)
