

A MODIFIED NEWTON'S METHOD FOR SOLVING NONLINEAR PROGRAMMING PROBLEMS

C. N. EJIEJI AND A. E. AKINSUNMADE
ejieji.cn@unilorin.edu.ng, akintayoakinsunmade@gmail.com

DEPARTMENT OF MATHEMATICS, FACULTY OF PHYSICAL SCIENCES, UNIVERSITY OF ILORIN, ILORIN, NIGERIA.

Abstract

A modified Newton's method for solving nonlinear programming problems is presented in this work. The scheme is constructed from the Taylor series expansion and the Adomian decomposition method. A comparative study of the new method with Newton's method and other existing methods developed in recent years is carried out by means of examples, using a programming tool. The proposed method is found to be reliable and to converge faster than Newton's method, as well as some of the existing modifications of Newton's method for solving nonlinear optimization problems.

Keywords: Newton's Algorithm, Nonlinear Problems, Adomian Decomposition Method.

1 INTRODUCTION

Solving nonlinear problems is an important part of optimization. In recent years, interest has grown considerably in developing effective iterative methods for computing solutions of large systems in science, engineering, and other fields. The world is filled with limited resources, and deciding how best to use them, as an individual, an organization, or a system, is a universal problem. For these limited resources to be properly managed, skills, measures, and models are needed so that resources can be allocated appropriately, avoiding shortages or excesses. Optimization is concerned with the process of finding suitable conditions that give the minimum or maximum value of a function. It is the act of obtaining the best possible result under given circumstances, and can be expressed as a mathematical procedure for determining the optimal allocation of scarce resources.
In general, optimization problems can be classified into two broad categories: constrained and unconstrained optimization problems. Problems with or without constraints arise in various fields where numerical information is processed. Constrained optimization problems are problems that are subject to constraints and are generally expressed in the form

Minimize $f(x)$  (1)

subject to

$g_i(x) = 0, \quad i = 1, 2, \ldots, m$  (2)

$h_j(x) \geq 0, \quad j = 1, 2, \ldots, n$,  (3)

where the function $f(x)$ is called the objective function, and $g_i(x)$ and $h_j(x)$ are referred to as the equality and inequality constraints, respectively. An unconstrained optimization problem is one where the solution vector

$x = (x_1, x_2, \ldots, x_n)^T$  (4)

is such that it optimizes the objective function $f(x)$ stated in (1). This problem can be considered as a particular case of the general constrained problem.
Newton's method has been proposed as one of the most efficient methods for solving nonlinear problems. It serves as one of the fundamental tools in numerical analysis, operations research, optimization, and control. It has numerous applications in management science, medicine, data management, engineering, and more. Its role in optimization cannot be overestimated, as it serves as the basis for most effective procedures in linear and nonlinear programming.
Consider the nonlinear optimization problem

Minimize $\{f(x),\ x \in \mathbb{R},\ f : \mathbb{R} \to \mathbb{R}\}$  (5)

where $f$ is a nonlinear, twice differentiable function.


One of the oldest and most basic problems in optimization is that of solving nonlinear problems of the form (5). As the name implies, Newton's method is due to Sir Isaac Newton, who applied the methodology to polynomials and computed a sequence of polynomials. In 1690, Joseph Raphson modified Newton's method, and his method, termed the Newton-Raphson method, is used today to find successively better approximations to the roots of a function $f(x)$. In recent years, many modified Newton-type methods have been reported.
Tseng (1998) considered a Newton-type univariate optimization algorithm for locating an extremum, in which he established the importance of Newton's method in operations research. Other researchers, such as Kahya (2005), considered new unidimensional search methods based on interpolation.
Meanwhile, despite the vast research already done on Newton's method, researchers continue to seek new and improved methods that produce efficient results, since the desired goal in optimization is to achieve a better and improved outcome. Many numerical approaches in recent years have served as updates to Newton's method for solving the problem given by (5). Some of these have been reported to converge quadratically, and some cubically. Homeier (2005) proposed a Newton-type method using a quadrature formula, which he proved to converge cubically. In his work, a comparative study of the proposed method and the original Newton's method established that the quadrature formula is effective in giving Newton's method an efficient convergence rate. An update to the method of Homeier (2005) was recently proposed by Zavalaus (2014), who showed that the quadrature formula has third-order convergence and found it more efficient than Newton's method.
Chun (2005) updated the original Newton's method using a decomposition method and established that the decomposition method produces efficient results when applied as a corrector to Newton's method.
Kanwar et al. (2003) considered new numerical techniques for solving nonlinear equations and proposed a modification of Newton's method, called the external touch algorithm, for solving nonlinear functions.
Basto et al. (2006) considered Newton's method using the Adomian decomposition to resolve the series obtained after expanding the function $f(x)$ in a Taylor series. The Adomian decomposition method expresses the solution as an infinite series that usually converges to an accurate solution, and it has been successfully applied to a wide class of functional equations over the last two decades.
Feng (2009) introduced a two-step method for solving nonlinear equations, together with a comparative study of his new method against the original Newton's method and that of Basto et al. (2006). His analysis showed that the method converges for some values and diverges for others, depending on the type of function to be minimized.
Building on the foundation laid by Kanwar et al. (2003), Karthikeyan (2010) looked further into the external touch method and concluded that, although it converges faster than Newton's method for some functions, the external touch method is not generally more effective than Newton's method. He went on to derive a new method, an improvement on the work of Basto et al. (2006) using the decomposition method. This approach produced his version of a modified Newton's method, which he called the efficient algorithm for minimizing a nonlinear programming problem of the form (5).
In this paper, we propose a modification of Newton's method for solving problems of the form (5). A comparative study of the modified method, the original Newton's method, and the efficient algorithm proposed by Karthikeyan (2010) is presented by means of examples solved iteratively using a programming tool.

1.1 Newton's Algorithm

Step 1: Choose $x_0$ as an estimate of the minimum of $f(x)$.
Step 2: Repeat for $n = 0, 1, 2, \ldots$
Step 3: Set $x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}$.
Step 4: Stop when the absolute value of the derivative of the function at the new iterate is sufficiently small, i.e. $|f'(x_{n+1})| \leq \epsilon$.
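For concreteness, the iteration above can be sketched in a few lines of Python; the function names, default tolerance, and iteration cap below are illustrative choices, not part of the original algorithm.

```python
# A minimal Python sketch of Newton's algorithm above; f1 and f2 are
# the first and second derivatives of f, supplied by the caller.
def newton_minimize(f1, f2, x0, eps=1e-6, max_iter=100):
    x = x0                                  # Step 1: initial estimate
    for _ in range(max_iter):               # Step 2: iterate
        x = x - f1(x) / f2(x)               # Step 3: Newton update
        if abs(f1(x)) <= eps:               # Step 4: stop when |f'(x)| <= eps
            break
    return x

# Example: Problem 3.1 of Section 3, f(x) = x^3 - 2x - 5, from x0 = 1,
# so f'(x) = 3x^2 - 2 and f''(x) = 6x.
print(newton_minimize(lambda x: 3*x**2 - 2, lambda x: 6*x, 1.0))
```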

1.2 The Efficient Algorithm Proposed by Karthikeyan (2010)

Step 1: Choose $x_0$ as an estimate of the minimum of $f(x)$.
Step 2: Repeat for $n = 0, 1, 2, \ldots$
Step 3: Set $y_n = x_n - \frac{f'(x_n)}{f''(x_n)}$.
Step 4: Set $x_{n+1} = y_n - \frac{f'(y_n)}{2f''(x_n) - f''(y_n)}$.
Step 5: Stop when the absolute value of the derivative of the function at the new iterate is sufficiently small, i.e. $|f'(x_{n+1})| \leq \epsilon$.
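A matching Python sketch of this algorithm is given below; as with the Newton sketch, the helper names and tolerance are illustrative assumptions, and the Step 4 update reproduces the iterates reported for this method in Section 3.

```python
# Sketch of the efficient algorithm above (Karthikeyan, 2010); f1 and f2
# are the first and second derivatives of f, as in the Newton sketch.
def karthikeyan_minimize(f1, f2, x0, eps=1e-6, max_iter=100):
    x = x0                                       # Step 1: initial estimate
    for _ in range(max_iter):                    # Step 2: iterate
        y = x - f1(x) / f2(x)                    # Step 3: Newton predictor
        x = y - f1(y) / (2*f2(x) - f2(y))        # Step 4: corrector
        if abs(f1(x)) <= eps:                    # Step 5: stopping test
            break
    return x
```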

2 THE NEW MODIFIED NEWTON'S METHOD

Consider the problem

Minimize $f(x)$, $x \in \mathbb{R}$,  (6)

where $f(x)$ is a function of a single real variable $x$. Newton's method finds the minimum point of the function $f(x)$ stated in equation (6) by using both its first and second derivatives.
Let us consider the function

$G(x) = x - \frac{g(x)}{g'(x)}$,

where $g(x) = f'(x)$ and $f(x)$ is the function to be minimized.


We consider the nonlinear equation $g(x) = 0$ and expand $g(x + h)$ in a Taylor series about $x$, to obtain

$g(x + h) = g(x) + h g'(x) + \phi(h)$,  (7)

from which we get

$\phi(h) = g(x + h) - g(x) - h g'(x)$.  (8)

Supposing that $g'(x) \neq 0$, we can search for a value of $h$ such that $g(x + h) = 0$, to obtain

$g(x) + h g'(x) + \phi(h) = 0$,  (9)

which can be simplified to obtain

$h g'(x) = -g(x) - \phi(h)$.  (10)

From (10) we have

$h = -\frac{g(x)}{g'(x)} - \frac{\phi(h)}{g'(x)}$,  (11)

so $h$ can be written as

$h = c + L(h)$,  (12)

where

$c = -\frac{g(x)}{g'(x)}$,  (13)

and

$L(h) = -\frac{\phi(h)}{g'(x)}$.  (14)

In equation (14) above, we assume that $\phi(h) = g(x + h)$, so we have

$L(h) = -\frac{g(x + h)}{g'(x)}$.  (15)

Applying the Adomian decomposition method to (12), which can be written as

$h - L(h) = c$,  (16)

where $c$ is a constant and $L(h)$ is a nonlinear function, we decompose

$h = \sum_{n=0}^{\infty} h_n$,  (17)

$L(h) = L\left(\sum_{n=0}^{\infty} h_n\right) = \sum_{n=0}^{\infty} A_n$,  (18)

where the $A_n$ are given by

$A_n = \frac{1}{n!}\left[\frac{d^n}{d\lambda^n} L\left(\sum_{k=0}^{\infty} \lambda^k h_k\right)\right]_{\lambda=0}, \quad n = 0, 1, 2, \ldots$  (19)

The first few polynomials are given by

$A_0 = L(h_0)$
$A_1 = h_1 L'(h_0)$
$A_2 = h_2 L'(h_0) + \frac{h_1^2}{2!} L''(h_0)$.
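The recurrence (19) can be checked mechanically. The short SymPy sketch below, an illustration with symbol names of our choosing, reproduces $A_0$ through $A_2$ for an abstract operator $L$.

```python
# SymPy sketch verifying the Adomian polynomials A0-A2 from formula (19).
# L is kept abstract, so its derivatives print as unevaluated Subs terms,
# e.g. A1 comes out as h1 times the first derivative of L evaluated at h0.
import sympy as sp

lam = sp.Symbol('lam')
h = sp.symbols('h0:3')                       # the symbols h0, h1, h2
L = sp.Function('L')                         # abstract nonlinear operator

series = sum(lam**k * h[k] for k in range(3))
for n in range(3):
    A_n = sp.diff(L(series), lam, n).subs(lam, 0) / sp.factorial(n)
    print(f'A{n} =', sp.expand(A_n))
```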
Substituting (17) and (18) into (12) yields

$\sum_{n=0}^{\infty} h_n = c + \sum_{n=0}^{\infty} A_n$.  (20)

It follows from (20) that

$h_0 = c$,
$h_{n+1} = A_n, \quad n = 0, 1, 2, \ldots$

Since $c = -\frac{g(x)}{g'(x)}$ from (13), we have

$h_0 = -\frac{g(x)}{g'(x)}$.  (21)

Taking the one-term approximation $h \approx h_0$, we have $x + h \approx x + h_0$, and thus

$x + h_0 = x - \frac{g(x)}{g'(x)}$.  (22)

This yields the iterative form

$y_n = x_n - \frac{g(x_n)}{g'(x_n)}$,  (23)

where $y_n = x_n + h_0$.
For a better approximation we take the two-term sum $h \approx h_0 + h_1$, where $h_1 = A_0$ and

$A_0 = L(h_0) = -\frac{g(x + h_0)}{g'(x)}$  (24)

$= -\frac{g(y)}{g'(x)}$.  (25)

The two-term approximation can thus be written as

$h \approx h_0 + h_1 = h_0 - \frac{g(y)}{g'(x)}$.  (26)

Evaluating (26) at $x = x_n$ and using (23), we obtain

$x_n + h_0 + h_1 = y_n - \frac{g(y_n)}{g'(x_n)}$.  (27)

Setting $x_{n+1} = x_n + h_0 + h_1$, (27) gives

$x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)} - \frac{g(y_n)}{g'(x_n)}$.  (28)

Taking $g(x) = f'(x)$, (27) and (28) yield

$y_n = x_n - \frac{f'(x_n)}{f''(x_n)}$,  (29)

$x_{n+1} = y_n - \frac{f'(y_n)}{f''(x_n)}$.  (30)

Equation (29) serves as a predictor, which is the original Newton's method, while (30) serves as the corrector. Together, (29) and (30) are called the Modified Newton's Method for minimizing a univariate function $f(x)$.

2.1 The Algorithm for the New Modified Newton Method

The modification of Newton's method constructed above is presented in the following algorithm:
Step 1: Choose $x_0$ as an estimate of the minimum of $f(x)$.
Step 2: Repeat for $n = 0, 1, 2, \ldots$
Step 3: Set $y_n = x_n - \frac{f'(x_n)}{f''(x_n)}$.
Step 4: Set $x_{n+1} = y_n - \frac{f'(y_n)}{f''(x_n)}$.
Step 5: Stop when the absolute value of the first derivative of the function at the new point is sufficiently small, i.e. $|f'(x_{n+1})| \leq \epsilon$.
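The scheme translates directly into code. The Python sketch below mirrors the earlier sketches (the helper names and tolerance are again illustrative, not the authors' implementation).

```python
# Sketch of the proposed predictor-corrector scheme (29)-(30); f1 and f2
# are the first and second derivatives of f. Note that the corrector
# reuses f2(x), so each iteration needs only one second-derivative value.
def modified_newton_minimize(f1, f2, x0, eps=1e-6, max_iter=100):
    x = x0                                  # Step 1: initial estimate
    for _ in range(max_iter):               # Step 2: iterate
        y = x - f1(x) / f2(x)               # Step 3: predictor (29)
        x = y - f1(y) / f2(x)               # Step 4: corrector (30)
        if abs(f1(x)) <= eps:               # Step 5: stopping test
            break
    return x
```

A design point worth noting: unlike the algorithm of Section 1.2, the corrector here keeps the denominator $f''(x_n)$ frozen from the predictor step, so each iteration costs two first-derivative evaluations but only one second-derivative evaluation.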

2.2 Convergence Analysis of the Proposed Algorithm

Proposition 2.1
Let $\alpha \in I$ be a zero of a sufficiently differentiable function $g : I \to \mathbb{R}$ for an open interval $I$. If $x_0$ is sufficiently close to $\alpha$, then the new algorithm has quadratic convergence.

Proof: Since $g$ is sufficiently differentiable, expanding $g(x_n)$ and $g'(x_n)$ about $\alpha$, we have

$g(x_n) = g'(\alpha)\left(e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + O(e_n^5)\right)$  (31)

$g'(x_n) = g'(\alpha)\left(1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + O(e_n^4)\right)$  (32)


where $e_n = x_n - \alpha$ and $c_k = \frac{g^{(k)}(\alpha)}{k!\,g'(\alpha)}$, $k = 2, 3, \ldots$
From (31) and (32), we have

$\frac{g(x_n)}{g'(x_n)} = e_n - c_2 e_n^2 + 2(c_2^2 - c_3)e_n^3 + (7c_2 c_3 - 3c_4 - 4c_2^3)e_n^4 + O(e_n^5)$.  (33)

Recalling from (23) that $y_n = x_n - \frac{g(x_n)}{g'(x_n)}$, substituting (33) and $e_n = x_n - \alpha$ gives

$y_n = \alpha + c_2 e_n^2 + 2(c_3 - c_2^2)e_n^3 + (3c_4 + 4c_2^3 - 7c_2 c_3)e_n^4 + O(e_n^5)$.  (34)

Expanding $g(y_n)$ about $\alpha$ and using (34), we have

$g(y_n) = g'(\alpha)\left(c_2 e_n^2 + 2(c_3 - c_2^2)e_n^3 + (5c_2^3 - 7c_2 c_3 + 3c_4)e_n^4 + O(e_n^5)\right)$.  (35)

Using (35) and (32), we have

$\frac{g(y_n)}{g'(x_n)} = \frac{g'(\alpha)\left(c_2 e_n^2 + 2(c_3 - c_2^2)e_n^3 + (5c_2^3 - 7c_2 c_3 + 3c_4)e_n^4 + O(e_n^5)\right)}{g'(\alpha)\left(1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3 + O(e_n^4)\right)}$.  (36)

Using the result obtained from (36) together with (34), the right-hand side of (28), i.e. the proposed method, can be written as

$x_{n+1} = \alpha + c_2 e_n^2 + 2(c_3 - c_2^2)e_n^3 + \ldots$  (37)

which implies that the constructed method presented in (28) has quadratic convergence.
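The proposition can also be probed numerically: when the exact minimizer is known, the observed order $p$ can be estimated from three successive errors via $p \approx \log(e_{n+1}/e_n)/\log(e_n/e_{n-1})$. The sketch below is our own construction, using Problem 3.1 of Section 3 as the test case; only the first few iterates are usable before rounding error dominates, so the printed estimate is rough.

```python
# Rough numerical estimate of the observed convergence order of the
# scheme (29)-(30) on f(x) = x^3 - 2x - 5, whose exact minimizer is
# sqrt(2/3). Variable names are illustrative.
import math

f1 = lambda x: 3*x**2 - 2           # f'(x)
f2 = lambda x: 6*x                  # f''(x)
alpha = math.sqrt(2.0 / 3.0)        # exact minimizer of Problem 3.1

x = 1.0
errors = [abs(x - alpha)]           # initial error e_0
for _ in range(4):
    y = x - f1(x) / f2(x)           # predictor (29)
    x = y - f1(y) / f2(x)           # corrector (30)
    errors.append(abs(x - alpha))

for e0, e1, e2 in zip(errors, errors[1:], errors[2:]):
    if 0 in (e0, e1, e2):
        break                       # converged to machine precision
    print(math.log(e2 / e1) / math.log(e1 / e0))
```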

3 NUMERICAL EXAMPLES

The following test problems were solved using three methods, namely Newton's method, Karthikeyan's (2010) method, and the new modified method, using a programming tool.

Solved Problems

Problem 3.1: Minimize $f(x) = x^3 - 2x - 5$, initial guess $x_0 = 1$.
Problem 3.2: Minimize $f(x) = x e^x - 1$, initial guess $x_0 = 1$.
Problem 3.3: Minimize $f(x) = \cos x - x$, initial guess $x_0 = 0$.
Problem 3.4: Minimize $f(x) = 500 + 0.9x + \frac{0.03}{x}(150000)$, initial guess $x_0 = 0.5$.
Problem 3.5: Minimize $f(x) = 1 - x \sin x$, initial guess $x_0 = 2$.
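Assuming the three sketches from the earlier sections are in scope, a small driver of our own (not the authors' MATLAB code) can reproduce the comparison on, say, Problem 3.2, whose derivatives are $f'(x) = e^x(x+1)$ and $f''(x) = e^x(x+2)$:

```python
# Illustrative driver applying the three method sketches to Problem 3.2,
# f(x) = x*exp(x) - 1; all three should approach the minimizer x = -1.
import math

f1 = lambda x: math.exp(x) * (x + 1)    # f'(x)
f2 = lambda x: math.exp(x) * (x + 2)    # f''(x)

for method in (newton_minimize, karthikeyan_minimize,
               modified_newton_minimize):
    print(method.__name__, method(f1, f2, 1.0))
```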

The results obtained from the computations are as shown below:

Table of Results for the Solved Problems

Table 3.1: Computational results obtained for Problem 3.1

Iteration | Newton's Method | Karthikeyan's Method (2010) | New Proposed Method
0 | 1.0000 | 1.0000 | 1.0000
1 | 8.3333e-001 | 8.2143e-001 | 8.1944e-001
2 | 8.1667e-001 | 8.1650e-001 | 8.1650e-001
3 | 8.1650e-001 | — | —

Table 3.2: Computational results obtained for Problem 3.2

Iteration | Newton's Method | Karthikeyan's Method (2010) | New Proposed Method
0 | 1.0000 | 1.0000 | 1.0000
1 | 3.3333e-001 | 1.9078e-001 | 1.0515e-001
2 | -2.3810e-001 | -4.2650e-001 | -5.8286e-001
3 | -6.7053e-001 | -8.7130e-001 | -9.4176e-001
4 | -9.1835e-001 | -9.9501e-001 | -9.9966e-001
5 | -9.9384e-001 | -1.0000e+000 | -1.0000e+000
6 | -9.9996e-001 | — | —
7 | -1.0000e+000 | — | —

Table 3.3: Computational results obtained for Problem 3.3

Iteration | Newton's Method | Karthikeyan's Method (2010) | New Proposed Method
0 | 0 | 0 | 0
1 | -1.0000 | -1.1086 | -1.1585
2 | -1.2934 | -1.3823 | -1.4190
3 | -1.4330 | -1.4925 | -1.5140
4 | -1.5020 | -1.5382 | -1.5495
5 | -1.5364 | -1.5572 | -1.5628
6 | -1.5536 | -1.5651 | -1.5678
7 | -1.5622 | -1.5684 | -1.5697
8 | -1.5665 | -1.5698 | -1.5704
9 | -1.5686 | -1.5704 | -1.5706
10 | -1.5697 | -1.5706 | -1.5707
11 | -1.5703 | -1.5707 | —
12 | -1.5705 | — | —
13 | -1.5707 | — | —

Table 3.4: Computational results obtained for Problem 3.4

Iteration | Newton's Method | Karthikeyan's Method (2010) | New Proposed Method
0 | 5.0000e-001 | 5.0000e-001 | 5.0000e-001
1 | 7.4999e-001 | 8.1520e-001 | 8.6109e-001
2 | 1.1249 | 1.3291 | 1.4829
3 | 1.6873 | 2.1666 | 2.5533
4 | 2.5304 | 3.5311 | 4.3945
5 | 3.7940 | 5.7513 | 7.5539
6 | 5.6856 | 9.3511 | 1.2936e+001
7 | 1.8850e+001 | 1.5135e+001 | 2.1911e+001
8 | 2.7605e+001 | 2.4203e+001 | 3.5959e+001
9 | 3.9304e+001 | 3.7531e+001 | 5.4209e+001
10 | 5.2884e+001 | 5.4052e+001 | 6.8085e+001
11 | 6.4536e+001 | 6.7194e+001 | 7.0696e+001
12 | 6.9925e+001 | 7.0649e+001 | 7.0711e+001
13 | 7.0698e+001 | 7.0711e+001 | —
14 | 7.0711e+001 | — | —

Table 3.5: Computational results obtained for Problem 3.5

Iteration | Newton's Method | Karthikeyan's Method (2010) | New Proposed Method
0 | 2.0000 | 2.0000 | 2.0000
1 | 2.0287 | 2.0287 | 2.0288
2 | 2.0288 | 2.0288 | —

Discussion of Numerical Results

From the tables presented above, the following observations can be made.
In Table 3.1, the original Newton's method converges at the 3rd iteration, while Karthikeyan's (2010) method and the new proposed method converge at the 2nd iteration. The results show an improvement at every stage of iteration for both Karthikeyan's (2010) method and the new proposed method; moreover, the new proposed method approaches the minimum more rapidly than both Karthikeyan's (2010) method and the original Newton's method.
In Table 3.2, the original Newton's method converges at the 7th iteration, while Karthikeyan's (2010) method and the new proposed method converge at the 5th iteration. This result clearly shows an improvement at every stage of iteration for the new proposed method compared with Karthikeyan's (2010) method and the original Newton's method.
In Table 3.3, the original Newton's method converges at the 13th iteration, Karthikeyan's (2010) method at the 11th iteration, and the new proposed method at the 10th iteration, showing that the new proposed method converges faster than the original Newton's method and the method reported by Karthikeyan (2010).
From Table 3.4, Newton's method reaches its minimum at the 14th iteration, Karthikeyan's method at the 13th iteration, and the new proposed method at the 12th iteration; thus the new proposed method approaches the minimum faster than the original Newton's method and Karthikeyan's (2010) method.
In Table 3.5, Newton's method and Karthikeyan's (2010) method reach the minimum point at the 2nd iteration, while the new proposed method converges at the 1st iteration.
From the above results and analysis, we conclude that the new proposed method generates an improved result at each stage of iteration compared with the other methods reported in earlier work, as well as the original Newton-Raphson method. The new proposed method has an improved convergence rate compared with other methods reported in the literature, and it can also be said to guarantee a better approximation to the roots of nonlinear equations.

4 Summary and Conclusion

Summary
A new scheme for minimizing a unimodal function $f(x)$ by solving $f'(x) = 0$ is presented in this work. The proposed algorithm was constructed using the Taylor series expansion of the function $f(x)$ and solved using the Adomian decomposition method; the newly developed scheme serves as a corrector for the Newton-Raphson method for solving nonlinear programming problems. A proposition supporting the convergence of the new method was also presented. Numerical solutions were presented using five test functions, and a comparative analysis using MATLAB revealed that the new proposed method is more efficient than other existing Newton-type methods presented in the literature, as well as the original Newton's method.

Conclusion
Nonlinear problems are among the most frequently solved problems in optimization and are typically solved iteratively until an optimum point is approached. In this work, we have shown that new iterative methods can be developed using the concepts of previously existing methods. The method proposed in this work has been shown to converge quadratically, and it produces better results when compared with other methods using a programming tool.

References

Adomian, G. and Rach, R. (1985). On the solution of algebraic equations by the decomposition method. Journal of Mathematical Analysis and Applications, Vol. 105, pp. 141-166.

Adomian, G. (1986). Nonlinear Stochastic Operator Equations. Academic Press, Orlando, Florida.

Basto, M. and Semiao, V. (2006). A new iterative method to compute nonlinear equations. Applied Mathematics and Computation, Vol. 173, pp. 468-483.

Chun, C. (2005). Iterative methods improving Newton's method by the decomposition method. Computers and Mathematics with Applications, Vol. 50, pp. 1559-1568.

Feng, J. (2009). A new two-step method for solving nonlinear equations. International Journal of Nonlinear Science, Vol. 8(1), pp. 40-44.

Homeier, H. (2005). A Newton-type method with cubic convergence. Journal of Computational and Applied Mathematics, Vol. 176, pp. 425-432.

Kahya, E. (2005). A new unidimensional search method for optimization: linear interpolation method. Applied Mathematics and Computation, Vol. 171(2), pp. 912-926.

Kanwar, V. (2010). New variants of Newton's method for nonlinear unconstrained optimization problems. Intelligent Information Management, Vol. 2, pp. 40-45.

Kanwar, V., Singh, S. and Sharma, J. (2003). New numerical techniques for solving nonlinear equations. Indian Journal of Pure and Applied Mathematics, Vol. 34(9), pp. 1339-1349.

Karthikeyan, K. (2010). Efficient algorithm for minimization of non-linear function. Applied Mathematical Sciences, Vol. 4(69), pp. 3437-3446.

Tseng, C. L. (1998). A Newton-type univariate optimization algorithm for locating the nearest extremum. European Journal of Operational Research, Vol. 105, pp. 236-246.

Zavalaus, G. (2014). A modification of Newton method with third-order convergence. American Journal of Numerical Analysis, Vol. 2(4), pp. 98-101.
