
J Appl Math Comput (2011) 37:221–246

DOI 10.1007/s12190-010-0431-6 JAMC

On the semilocal convergence of the Halley method using recurrent functions

Ioannis K. Argyros · Yeol Je Cho · Saïd Hilout

Received: 5 April 2010 / Published online: 8 August 2010


© Korean Society for Computational and Applied Mathematics 2010

Abstract We provide new semilocal convergence results for the Halley method in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. Our sufficient convergence conditions can be weaker than before, whereas the error bounds on the distances involved are finer. Our first approach uses a Kantorovich-type analysis. The second approach uses our new idea of recurrent functions. A comparison between the two approaches is also given. A numerical example further validating the theoretical results is also provided in this study.

Keywords Halley’s method · Banach space · Majorizing sequence · Fréchet-derivatives · Ostrowski representation

Mathematics Subject Classification (2000) 65H10 · 65G99 · 65J15 · 47H17 · 49M15

This work was supported by the Korea Research Foundation Grant funded by the Korean
Government (KRF-2008-313-C00050).
I.K. Argyros (✉)
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
e-mail: iargyros@cameron.edu

Y.J. Cho
Department of Mathematics Education and the RINS, Gyeongsang National University,
Chinju 660-701, South Korea
e-mail: yjcho@gnu.ac.kr

S. Hilout
Laboratoire de Mathématiques et Applications, Poitiers University, Bd. Pierre et Marie Curie,
Téléport 2, B.P. 30179, 86962 Futuroscope Chasseneuil Cedex, France
e-mail: said.hilout@math.univ-poitiers.fr

1 Introduction

In this study we are concerned with the problem of approximating a locally unique solution x∗ of the equation

F(x) = 0, (1.1)

where F is a twice Fréchet-differentiable operator defined on a convex subset D of a Banach space X with values in a Banach space Y.
A large number of problems in applied mathematics and also in engineering are
solved by finding the solutions of certain equations. For example, dynamic systems
are mathematically modeled by difference or differential equations, and their solu-
tions usually represent the states of the systems. For the sake of simplicity, assume
that a time-invariant system is driven by the equation ẋ = Q(x), for some suitable
operator Q, where x is the state. Then the equilibrium states are determined by solv-
ing equation (1.1). Similar equations are used in the case of discrete systems. The
unknowns of engineering equations can be functions (difference, differential, and in-
tegral equations), vectors (systems of linear or nonlinear algebraic equations), or real
or complex numbers (single algebraic equations with single unknowns). Except in
special cases, the most commonly used solution methods are iterative: starting from one or several initial approximations, a sequence is constructed that converges to
a solution of the equation. Iteration methods are also applied for solving optimization
problems. In such cases, the iteration sequences converge to an optimal solution of
the problem at hand. Since all of these methods have the same recursive structure,
they can be introduced and discussed in a general framework.
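As a small illustration (the right-hand side Q below is a hypothetical logistic-type choice, not taken from the paper), the equilibrium states of a time-invariant system ẋ = Q(x) are the roots of Q, which can be located by an iterative method such as Newton's:

```python
# Illustrative sketch (not from the paper): for a time-invariant system
# x' = Q(x), the equilibrium states are the roots of Q.  Here Q is a
# hypothetical logistic-type right-hand side with equilibria 0 and 1.

def Q(x):
    return x * (1.0 - x)

def dQ(x):
    return 1.0 - 2.0 * x

# Newton iteration for Q(x) = 0, started near the equilibrium x = 1.
x = 0.7
for _ in range(25):
    x = x - Q(x) / dQ(x)

# x is now (numerically) the equilibrium state 1.
```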
We use the Halley method (HM):

y_n = x_n − F′(x_n)⁻¹ F(x_n)  (x₀ ∈ D),

H_n = −F′(x_n)⁻¹ F″(x_n)(y_n − x_n),

x_{n+1} = y_n − (1/2)[I − (1/2)H_n]⁻¹ F′(x_n)⁻¹ F″(x_n)(y_n − x_n)²,  n ≥ 0,

to generate sequences {y_n}, {x_n} converging to x∗.
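In the one-dimensional case X = Y = ℝ, where applying F′(x_n)⁻¹ is simply division by F′(x_n), the three substeps of (HM) can be sketched as follows (the test function and starting point are illustrative choices, not from the paper):

```python
# Minimal scalar sketch of the Halley method (HM), assuming X = Y = R.
# f, df, d2f denote F, F', F''; all choices below are illustrative only.

def halley(f, df, d2f, x, n_steps=10, tol=1e-14):
    for _ in range(n_steps):
        y = x - f(x) / df(x)               # Newton substep: y_n
        h = -d2f(x) * (y - x) / df(x)      # H_n = -F'(x_n)^{-1} F''(x_n)(y_n - x_n)
        # x_{n+1} = y_n - (1/2)[1 - (1/2)H_n]^{-1} F'(x_n)^{-1} F''(x_n)(y_n - x_n)^2
        x = y - 0.5 * (1.0 / (1.0 - 0.5 * h)) * d2f(x) * (y - x) ** 2 / df(x)
        if abs(f(x)) < tol:
            break
    return x

# Example: F(x) = x^3 - 2 has the locally unique solution x* = 2^(1/3).
root = halley(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, lambda x: 6.0 * x, x=1.0)
```

In the scalar case this reduces to the classical Halley step x_{n+1} = x_n − 2 F F′ / (2 F′² − F F″), which converges cubically near a simple root.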
The cubical convergence of (HM) has been established by many authors under various assumptions. A survey of such results can be found in [3] and the references therein (see also [1, 2, 4–14]).
Here, motivated in particular by optimization considerations, we provide new semilocal convergence results for (HM). Our first approach (see Sect. 2) uses a Kantorovich-type analysis. The second approach (see Sect. 3) employs our new idea of recurrent functions [3]. This approach has already been used to provide weaker sufficient convergence conditions and finer error bounds than before for some Newton-type methods [3].
The advantages over earlier works are (under the same computational cost): finer error bounds on the distances ‖x_{n+1} − x_n‖ and ‖x_n − x∗‖ (n ≥ 0), and sufficient convergence conditions that can be weaker than before [7].
A numerical example further validating the theoretical results, and a comparison
between the two approaches are also provided in this study.