
Root-finding Methods in Mathematica

Tyler O'Neill
December 11, 2013
Abstract

The purpose of this article is to provide a summary and explanation of the implementation of root-finding methods in Wolfram Mathematica. Root-finding methods are used to evaluate the roots of a function, where a root is an $x$-value such that $f(x) = 0$. These methods are typically iterative approximations, which makes them well suited to computation in a system like Mathematica. The methods explored are Newton's method, the bisection method, and the secant method.

1 Newton's Method

1.1 Overview

The first method is called Newton's method. Newton's method is defined by the following recurrence relation, starting at $x_0$:

\[ x_n = x_{n-1} - \frac{f(x_{n-1})}{f'(x_{n-1})} \]

where $x_0$ is chosen a priori as a guess for a root of $f$. This relation is applied recursively until the desired accuracy is reached. This assumes, however, that the relation converges, which it does not always do (see section 1.4). Newton's method works well for virtually any type of function, provided that its derivative is defined.

1.2 Intuition

Newton's method can be understood using simple concepts like derivatives and roots of linear functions. If we have a function $f$ that is differentiable near $x_0$, then we can say that $f'(x_0)$ is defined. $f'(x_0)$ represents the slope of the tangent line to $f$ at the point $(x_0, f(x_0))$. The equation for this tangent line is $y - f(x_0) = f'(x_0)(x - x_0)$. Written as a function, this is equivalent to $g(x) = y = f'(x_0)(x - x_0) + f(x_0)$. Solving for the root of $g$ is simple, seeing as it is just a linear function:

\[
\begin{aligned}
0 &= f'(x_0)(x - x_0) + f(x_0) \\
f'(x_0)(x - x_0) &= -f(x_0) \\
x - x_0 &= -\frac{f(x_0)}{f'(x_0)} \\
x = x_1 &= x_0 - \frac{f(x_0)}{f'(x_0)}
\end{aligned}
\]

This is the recurrence relation used in Newton's method. It is a valid way to approximate the root of a function because $x_n$ will ideally get closer and closer to the real root of the function.
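To make the recurrence concrete, here is a minimal worked example in Mathematica (not part of the original demonstration); the function $f(x) = x^2 - 2$ and the starting guess $x_0 = 1$ are assumptions chosen purely for illustration:

```mathematica
(* Newton's method applied to f(x) = x^2 - 2, whose positive root is Sqrt[2] *)
f[x_] := x^2 - 2

(* one Newton step: x -> x - f(x)/f'(x) *)
newtonStep[x_] := x - f[x]/f'[x]

(* apply the step five times, starting from the guess x0 = 1.0 *)
NestList[newtonStep, 1.0, 5]
```

The successive values settle onto $\sqrt{2} \approx 1.41421$ after only a few steps, which is exactly the behaviour the recurrence above predicts.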

1.3 Mathematica Demonstration

The Mathematica demonstration for this method asks for a function and an $x_0$ to start the process. A command is executed that shows the current value of $x$ and $f(x)$, as well as a graph that shows where the current value of $x$ is relative to the rest of the curve. The user can change the number of iterations and see how that improves the accuracy of the guess.
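The notebook itself is not reproduced here, but a minimal sketch of such a demonstration might look like the following; the example function, the starting point $x_0 = 3$, the plot range, and the maximum iteration count are all assumptions made for illustration:

```mathematica
(* Sketch of a Newton's-method demonstration: the slider sets the number *)
(* of iterations n, and the plot marks the current approximation x_n.    *)
f[x_] := x^3 - 2 x - 5;
x0 = 3.0;

Manipulate[
 Module[{xn = Nest[# - f[#]/f'[#] &, x0, n]},
  Column[{
    Row[{"x = ", xn, "    f(x) = ", f[xn]}],
    Plot[f[x], {x, 1, 4},
     Epilog -> {PointSize[Large], Red, Point[{xn, f[xn]}]}]}]],
 {n, 0, 8, 1}]
```

Dragging the slider shows the marked point settling onto the root, mirroring the behaviour described above.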

1.4 Errata

Overshooting the root. It is very possible for Newton's method to fail to converge to a given root. This can occur when the derivative of a function behaves erratically near its root; Newton's method will then fail to converge and instead diverge toward ±∞.

Bad choice for $x_0$. A poor choice for $x_0$ will have one of two consequences. Either Newton's method will fail to find a root altogether and diverge toward ±∞, or it will find a root, but perhaps not the root that was intended to be found. This may or may not be problematic depending on the context.

Encountering a critical point. If, during the process of applying Newton's method, a point is found where the derivative is equal to zero (a critical point of the function), then the calculation will fail because of a division by zero.

2 The Bisection Method

2.1 Overview

The bisection method is a very simple and intuitive root-finding method. First, two numbers $a$ and $b$ are found such that, for the function $f$ (which is continuous on $[a, b]$), $f(a) > 0$ and $f(b) < 0$. Then the midpoint of the two numbers, $c = \frac{a+b}{2}$, is computed. If $f(c) > 0$ then $a$ is replaced with $c$, and likewise if $f(c) < 0$ then $b$ is replaced with $c$. This process is continued until the desired accuracy is reached.
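A minimal sketch of this loop in Mathematica follows; the example function, the bracketing values $a = 1$ and $b = 2$, and the tolerance are assumptions chosen for illustration, and they follow the sign convention above ($f(a) > 0$, $f(b) < 0$):

```mathematica
(* Bisection sketch: f is continuous on [a, b] with f[a] > 0 and f[b] < 0 *)
f[x_] := 2 - x^2;      (* f[1] = 1 > 0 and f[2] = -2 < 0; root at Sqrt[2] *)
{a, b} = {1.0, 2.0};
tol = 10.^-6;

While[Abs[b - a] > tol,
 c = (a + b)/2;
 If[f[c] > 0, a = c, b = c]];

{c, f[c]}              (* c approximates Sqrt[2] ~ 1.41421 *)
```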

2.2 Intuition

Intuition for this root-finding method can be derived from the intermediate value theorem: because $f$ is continuous on the interval $[a, b]$ and $f(a) > 0$ while $f(b) < 0$, there must be some value $c \in (a, b)$ such that $f(c) = 0$.

2.3 Mathematica Demonstration

The associated demonstration for this method is similar in functionality to the Newton's method demonstration. First, the user inputs a function; $a$ and $b$ are then found with the FindInstance command in Mathematica. The graphical demonstration shows a rectangle bounded by $(a, f(a))$ and $(b, f(b))$ overlaying the plot of the function. Upon increasing the number of iterations, the graph shows the bisection method shrinking the rectangle and zooming in on the root of the function. The values of $c$ and $f(c)$ are also shown.
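The FindInstance step described above might be sketched roughly as follows; the particular function is an assumption for illustration, and the actual demonstration may set the problem up differently:

```mathematica
(* Use FindInstance to pick points a and b with f[a] > 0 and f[b] < 0 *)
f[x_] := 2 - x^2;

a = x /. First[FindInstance[f[x] > 0, x, Reals]];
b = x /. First[FindInstance[f[x] < 0, x, Reals]];

{a, f[a], b, f[b]}   (* a starting bracket for the bisection iteration *)
```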

2.4 Errata

Function continuity. Perhaps the biggest problem with the bisection method is that the function must be continuous on the interval $[a, b]$. Failing this prerequisite, another method must be used.

Slow convergence. As opposed to Newton's method, which converges quadratically, this method converges only linearly. This means that it will take more iterations of the method to attain the same accuracy that could be attained using a method like Newton's method.
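To make the comparison concrete: since each bisection step halves the bracketing interval, reducing an initial interval of length $b - a$ to a target accuracy $\varepsilon$ takes roughly

\[ n \approx \log_2 \frac{b - a}{\varepsilon} \]

iterations, i.e. about one additional correct binary digit per step, whereas Newton's method roughly doubles the number of correct digits per step once it is sufficiently close to the root.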

3 The Secant Method

3.1 Overview

The final method to be explored is the secant method. This method can be seen as a more primitive version of Newton's method, as its underlying concepts are the same. Essentially, with this method $x_0$ and $x_1$ are chosen, again a priori, so that they are close to a root of $f$. A recurrence relation based on the root of the secant line through the two most recent points is then applied, so that $x_n$ becomes a fairly accurate guess for the root of the function $f$. Mathematically:

\[ x_n = x_{n-1} - f(x_{n-1}) \, \frac{x_{n-1} - x_{n-2}}{f(x_{n-1}) - f(x_{n-2})} \]

3.2 Intuition

As explained in the overview, the secant method can be seen as a variation of Newton's method using secant lines instead of tangent lines. The method can be derived by starting with the equation of the secant line:

\[ y = \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_1) + f(x_1) \]

The root of the secant line can be found by setting $y = 0$:

\[ 0 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_1) + f(x_1) \]

Solving for $x$, we get:

\[ x = x_1 - f(x_1)\,\frac{x_1 - x_0}{f(x_1) - f(x_0)} \]

This value of $x$ is now defined to be $x_2$, and the process is repeated with $x_1$ and $x_2$.

3.3 Mathematica Demonstration

The Mathematica demonstration associated with the secant method is very similar to that of Newton's method, and the implementation is essentially the same.
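Since that notebook is likewise not reproduced here, a minimal sketch of the secant iteration, with an assumed example function and starting points, might look like:

```mathematica
(* Secant-method sketch: 2.0 and 3.0 are initial guesses near a root of f *)
f[x_] := x^3 - 2 x - 5;

(* one step maps the pair {x_(n-2), x_(n-1)} to {x_(n-1), x_n} *)
secantStep[{xPrev_, xCurr_}] :=
  {xCurr, xCurr - f[xCurr] (xCurr - xPrev)/(f[xCurr] - f[xPrev])};

(* iterate eight times and keep the newer approximation from each pair *)
NestList[secantStep, {2.0, 3.0}, 8][[All, 2]]
```

The values approach the real root of $x^3 - 2x - 5$ near $x \approx 2.0946$, though somewhat less quickly than a Newton iteration on the same function would.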

3.4 Errata

Slow convergence. The secant method converges more slowly than Newton's method (though faster than the bisection method). The intuition for this convergence rate is beyond the scope of this paper, but it is derived from a Taylor series approximation of the error of the method [1]. Essentially, the order of convergence for the secant method is equal to the golden ratio, φ, whereas it is equal to 2 for Newton's method.
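For reference, the golden ratio referred to here is

\[ \varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618, \]

so the secant method's order of convergence sits between the bisection method's (linear, order 1) and Newton's method's (quadratic, order 2).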
Failure to converge. As with Newton's method, the secant method can fail to converge, for example when $f(x_n)$ and $f(x_{n-1})$ are of the same sign and the root of the secant line ends up far from the root of the function.

Horizontal or vertical secant lines. Although a contrived example, the secant method will fail when $x_n$ equals $x_{n-1}$, since the slope of the secant line is then undefined. More likely, however, is encountering a horizontal secant line when $f(x_n)$ equals $f(x_{n-1})$, which produces a division by zero in the recurrence.

4 Comparison of the three methods

4.1 Newton's Method and the Bisection Method

Newton's method and the bisection method are not similar and use completely different methodologies to compute a root. Whereas Newton's method uses linear approximations of a curve to zero in on a root, the bisection method simply divides a region in which a root lies. Both do place constraints on the function $f$ whose root is to be found: Newton's method requires that $f$ be continuously differentiable near the root, and the bisection method requires $f$ to be continuous on the interval $[a, b]$.

4.2 Newton's Method and the Secant Method

Newton's method and the secant method share a lot in common. Both rely on the principle of linear approximations of a curve. Truly, the only difference is that Newton's method uses tangent lines (using the Newtonian invention of derivatives), whereas the secant method uses secant-line approximations. Unsurprisingly, the secant method was discovered long before Newton's method (in 18th-century B.C. Egypt, in fact [2]). As previously noted, the order of convergence for Newton's method is 2, while for the secant method it is φ, the golden ratio.

4.3 The Bisection Method and the Secant Method

The bisection method and the secant method do not share a lot in common, although both require two parameters: $x_n$ and $x_{n-1}$ for the secant method, and $a$ and $b$ for the bisection method. The secant method's approach is to use a recurrence relation to move toward the root, whereas the bisection method uses bracketing to narrow down the region containing the root.

Conclusion

The three methods demonstrated in Mathematica represent only a few of the many root-finding methods that exist. One such method is linear interpolation, essentially a more robust version of the secant method in which a line is used to interpolate the function over an entire interval, and the root of that line is used to estimate the root of the function. Other methods make use of complex arithmetic to approximate the root of a function. Another possibility is intelligently combining several methods to converge on a root as quickly as possible (this is called Brent's method [3]). There is no perfect root-finding algorithm: all of the methods presented have their advantages and disadvantages, and choosing the best one for a specific problem is key.

References
[1] Grinshpan. The Order of Convergence for the Secant Method.
[2] Joanna M. Papakonstantinou and Richard A. Tapia. Origin and Evolution of the Secant Method in One Dimension.
[3] Kristin Koehler, California State University. Brent's Method.
