All content following this page was uploaded by Akinsunmade Akintayo Emmanuel on 02 July 2016.
Abstract
1 INTRODUCTION
How best to use these limited resources, whether as an individual, an organization, or a system, is a universal problem. For these limited resources to be properly managed, skills, measures, and models must be introduced so that resources are allocated appropriately, avoiding shortfalls or excesses. Optimization is concerned with the process of finding suitable conditions that give the minimum or maximum value of a function. It is the act of obtaining the best possible result under given circumstances, and can be expressed as a mathematical procedure for determining the optimal allocation of scarce resources.
In general, optimization problems can be classified into two broad categories: constrained and unconstrained. Problems with or without constraints arise in various fields where numerical information is processed. Constrained optimization problems are subject to constraints and are generally expressed in the form

Minimize f(x), (1)

subject to

g_i(x) = 0, i = 1, 2, . . . , m, (2)
h_j(x) ≥ 0, j = 1, 2, . . . , n, (3)
where the function f(x) is called the objective function, and g_i(x) and h_j(x) are referred to as the equality and inequality constraints, respectively. An unconstrained optimization problem is one where the value of the solution vector

x = (x_1, x_2, . . . , x_n)^T (4)

is such that it optimizes the objective function f(x) stated in (1).
This problem can be considered as a particular case of the general con-
strained problem.
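As a brief illustration of the constrained form (1)-(3), a small problem can be set up and solved numerically; the following sketch (not part of the paper) assumes SciPy is available and uses its `minimize` routine, where an `ineq` constraint means the constraint function must be nonnegative, matching h_j(x) ≥ 0:

```python
# Illustrative sketch: a constrained problem of the form (1)-(3) in SciPy.
# Objective and constraints here are made up for illustration.
from scipy.optimize import minimize

# Objective f(x) = (x1 - 1)^2 + (x2 - 2)^2
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},  # g(x) = 0
    {"type": "ineq", "fun": lambda x: x[0]},               # h(x) >= 0
]

res = minimize(f, x0=[0.0, 0.0], method="SLSQP", constraints=constraints)
# The unconstrained minimizer (1, 2) already satisfies both constraints,
# so the constrained minimizer is (1, 2) as well.
```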
Newton's method has been proposed as one of the most efficient methods for solving non-linear problems. It serves as one of the fundamental tools in numerical analysis, operations research, optimization, and control, and it has numerous applications in management science, medicine, data management, engineering, and other fields. Its role in optimization cannot be overestimated, as it serves as the basis for the most effective procedures in linear and non-linear programming.
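For minimizing a univariate f, Newton's method applies the root-finding iteration to f′, i.e. x_{n+1} = x_n − f′(x_n)/f″(x_n). A minimal sketch (function and variable names are ours, not the paper's):

```python
import math

def newton_minimize(fp, fpp, x0, tol=1e-8, max_iter=50):
    """Newton's method for minimization: iterate x <- x - f'(x)/f''(x)
    until the derivative at the iterate is small, |f'(x)| <= tol."""
    x = x0
    for _ in range(max_iter):
        if abs(fp(x)) <= tol:
            break
        x = x - fp(x) / fpp(x)
    return x

# Example: f(x) = e^x - 2x, so f'(x) = e^x - 2 and f''(x) = e^x;
# the minimum is at x = ln 2.
fp  = lambda x: math.exp(x) - 2.0
fpp = lambda x: math.exp(x)
xmin = newton_minimize(fp, fpp, 1.0)
```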
Consider the non-linear optimization problem
results when applied as a corrector to Newton's method.
Kanwar et al. (2003) considered new numerical techniques for solving non-linear equations and proposed a modification of Newton's method, called the external touch algorithm, for solving non-linear functions.
Basto et al. (2006) considered Newton's method together with Adomian decomposition to resolve the series obtained after expanding the function f(x) in a Taylor series. The Adomian decomposition method expresses the solution as an infinite series that usually converges to an accurate solution, and it has been applied successfully to a wide class of functional equations over the last two decades.
Feng (2009) introduced a two-step method for solving non-linear equations, with a comparative study of his new method against the original Newton's method and that of Basto et al. (2006). The analysis showed that the method converges for some values and diverges for others, depending on the type of function to be minimized.
Building on the foundation laid by Kanwar et al. (2003), Karthikeyan (2010) looked further into the external touch method and concluded that, although it converges faster than Newton's method for some functions, the external touch method is not generally more effective than Newton's method. He went on to derive a new method that improves on the work of Basto et al. (2006) using the decomposition method. This approach produced his version of the modified Newton's method, which he called the efficient algorithm for minimizing a non-linear programming problem of the form (5).
In this paper, we propose a modification of Newton's method for solving problems of the form (5). A comparative study of the modified method, the original Newton's method, and the Efficient Algorithm proposed by Karthikeyan (2010) is presented by means of examples solved iteratively using a programming tool.
the new iterate is sufficiently small, i.e. |f′(x_{n+1})| ≤ ε for a prescribed tolerance ε.
Suppose that g′(x) ≠ 0; we can search for a value of h such that g(x + h) = 0, to obtain

g(x) + h g′(x) + φ(h) = 0 (9)
which can be simplified to obtain

h = c + L(h) (12)

where

c = −g(x)/g′(x), (13)

and

L(h) = −φ(h)/g′(x). (14)

In equation (14) above, we assume that φ(h) = g(x + h), so we have

L(h) = −g(x + h)/g′(x). (15)
the first few polynomials are given by

A_0 = L(h_0)
A_1 = h_1 L′(h_0)
A_2 = h_2 L′(h_0) + (h_1²/2!) L″(h_0)

Substituting (17) and (18) into (12) yields

Σ_{n=0}^{∞} h_n = c + Σ_{n=0}^{∞} A_n (20)

h_0 = −g(x)/g′(x) (21)
where h ≈ h_0 and x + h ≈ x + h_0; thus

x + h_0 = x − g(x)/g′(x) (22)

This yields the iterative form

y_n = x_n − g(x_n)/g′(x_n) (23)

where y = x + h_0.
For h_1, we have h_1 = h_0 + A_0, where

A_0 = L(h_0) = −g(x + h_0)/g′(x) (24)

L(h_0) = −g(y)/g′(x) (25)

h_1 can thus be written as

h_1 = h_0 − g(y)/g′(x) (26)
Substituting (21) into (26) above gives

x + h_1 = y_n − g(y_n)/g′(x_n) (27)

Since minimizing f(x) amounts to solving f′(x) = 0, setting g(x) = f′(x) yields

y_n = x_n − f′(x_n)/f″(x_n) (29)

x_{n+1} = y_n − f′(y_n)/f″(x_n) (30)

Here (29) serves as the predictor, which is the original Newton's method, while (30) serves as the corrector. Together, (29) and (30) are called the Modified Newton's Method for minimizing a univariate function f(x).
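The predictor-corrector pair (29)-(30) can be sketched directly in code; the following is a minimal illustration (function names are ours), applied to minimizing f(x) = 1 − x sin x, whose minimizer near x_0 = 2 is x ≈ 2.0288:

```python
import math

def modified_newton(fp, fpp, x0, tol=1e-10, max_iter=50):
    """Modified Newton's Method, equations (29)-(30):
    predictor: y_n     = x_n - f'(x_n)/f''(x_n)   (a Newton step)
    corrector: x_{n+1} = y_n - f'(y_n)/f''(x_n)
    Iterates until |f'(x)| <= tol."""
    x = x0
    for _ in range(max_iter):
        if abs(fp(x)) <= tol:
            break
        y = x - fp(x) / fpp(x)   # predictor (29)
        x = y - fp(y) / fpp(x)   # corrector (30), reusing f''(x_n)
    return x

# f(x) = 1 - x*sin(x):  f'(x) = -sin(x) - x*cos(x),
#                       f''(x) = -2*cos(x) + x*sin(x)
fp  = lambda x: -math.sin(x) - x * math.cos(x)
fpp = lambda x: -2.0 * math.cos(x) + x * math.sin(x)
xmin = modified_newton(fp, fpp, 2.0)
```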
Proposition 2.1. Let α ∈ I be a zero of a sufficiently differentiable function g : I → R for an open interval I. If x_0 is sufficiently close to α, then the new algorithm has quadratic convergence.

Proof: Since g is sufficiently differentiable, expanding g(x_n) and g′(x_n) about α (with e_n = x_n − α and c_k = g^(k)(α)/(k! g′(α))), we have

y_n = α + c_2 e_n² + 2(c_3 − c_2³) e_n³ + (3c_4 + 4c_2³ − 7c_2 c_3) e_n⁴ + O(e_n⁵) (34)

g(y_n) = g′(α)(c_2 e_n² + 2(c_3 − c_2³) e_n³ + (5c_2³ − 7c_2 c_3 + 3c_4) e_n⁴ + O(e_n⁵)) (35)

which implies that the constructed method presented in (28) has quadratic convergence.
3 NUMERICAL EXAMPLES
The following test problems were solved using three methods, namely Newton's method, Karthikeyan's (2010) method, and the new modified method, using a programming tool.
Solved Problems

Problem 3.1 Minimize f(x) = x³ − 2x − 5, initial guess x_0 = 1
Problem 3.2 Minimize f(x) = x e^x − 1, initial guess x_0 = 1
Problem 3.3 Minimize f(x) = cos x − x, initial guess x_0 = 0
Problem 3.4 Minimize f(x) = 500 + 0.9x + (0.03/x)(150000), initial guess x_0 = 0.5
Problem 3.5 Minimize f(x) = 1 − x sin x, initial guess x_0 = 2
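As an illustration (a sketch of our own, not the authors' MATLAB code), Problem 3.1 can be run under both the plain Newton step and the predictor-corrector scheme (29)-(30), counting iterations until |f′(x_n)| ≤ 10⁻⁶:

```python
import math

def count_iters(step, fp, x0, tol=1e-6, max_iter=100):
    """Apply `step` until |f'(x)| <= tol; return (x, iteration count)."""
    x, n = x0, 0
    while abs(fp(x)) > tol and n < max_iter:
        x = step(x)
        n += 1
    return x, n

# Problem 3.1: minimize f(x) = x^3 - 2x - 5, so f'(x) = 3x^2 - 2, f''(x) = 6x;
# the minimum is at x = sqrt(2/3) ~= 0.8165.
fp  = lambda x: 3.0 * x * x - 2.0
fpp = lambda x: 6.0 * x

newton_step = lambda x: x - fp(x) / fpp(x)   # original Newton's method
def modified_step(x):                        # predictor-corrector (29)-(30)
    y = x - fp(x) / fpp(x)
    return y - fp(y) / fpp(x)

x_n, n_newton   = count_iters(newton_step, fp, 1.0)
x_m, n_modified = count_iters(modified_step, fp, 1.0)
# Both iterations converge to sqrt(2/3); the two-step scheme needs fewer
# iterations, consistent with the counts discussed for Table 3.1.
```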
Table 3.2: Table showing computational results obtained for Problem 3.2
Table 3.3: Table showing computational results obtained for Problem 3.3
Table 3.4: Table showing computational results obtained for Problem 3.4
Table 3.5: Table showing computational results obtained for Problem 3.5

Iterations   Newton's Method   Karthikeyan's Method (2010)   The New Proposed Method
0            2.0000            2.0000                        2.0000
1            2.0287            2.0287                        2.0288
2            2.0288            2.0288                        -
Discussion of Numerical Results
From the tables presented above, the following observations can be
made.
In Table 3.1, the original Newton's method converges at the 3rd iteration, while Karthikeyan's (2010) method and the new proposed method converge at the 2nd iteration. The results show an improvement at every stage of iteration in Karthikeyan's (2010) method and the new proposed method; moreover, the new proposed method approaches its minimum more rapidly than both Karthikeyan's (2010) method and the original Newton's method.
In Table 3.2, the original Newton's method converges at the 7th iteration, while Karthikeyan's (2010) method and the new proposed method converge at the 5th iteration. This result clearly shows an improvement at every stage of iteration in the new proposed method compared with Karthikeyan's (2010) method and the original Newton's method.
In Table 3.3, the original Newton's method converges at the 13th iteration, Karthikeyan's (2010) method converges at the 11th iteration, and the new proposed method converges at the 10th iteration, showing that the new proposed method converges faster than the original Newton's method and the method reported by Karthikeyan in 2010.
From Table 3.4, Newton's method approaches its minimum at the 14th iteration, Karthikeyan's method at the 13th iteration, and the new proposed method at the 12th iteration; thus the new proposed method approaches its minimum faster than the original Newton's method and Karthikeyan's (2010) method.
In Table 3.5, Newton's method and Karthikeyan's (2010) method approach the minimum point at the 2nd iteration, while the new proposed method converges at the 1st iteration.
From the above results and analysis, we conclude that the new proposed method generates an improved result at each stage of iteration compared with the methods reported in earlier work, as well as the original Newton-Raphson method. The new proposed method has an improved convergence rate compared with other methods reported in the literature, and it can also be said to guarantee a better approximation to the roots of nonlinear equations.
4 Summary and Conclusion
Summary
A new scheme for solving nonlinear problems of the form f(x) = 0, where f(x) is a unimodal function, is presented in this work. The proposed algorithm was derived using the Taylor series expansion of the function f(x) and solved using the Adomian decomposition method; the newly developed scheme serves as a corrector for the Newton-Raphson method for solving nonlinear programming problems. A proposition supporting the convergence of the new method was also presented. Numerical solutions were presented using five test functions, and a comparative analysis using MATLAB revealed that the new proposed method is more efficient than other existing Newton-type methods presented in the literature, as well as the original Newton's method.
Conclusion
Nonlinear problems are among the most frequently solved problems in optimization and are typically solved iteratively until they approach optimum points. In this work, we have shown that new iterative methods can be developed using the concepts of existing methods. The method proposed in this work has been shown to converge quadratically, and it produces better results when compared with other methods using a programming tool.
References
Adomian, G. and Rach, R. (1985), On Solution of Algebraic Equations by the Decomposition Method, Journal of Mathematical Analysis and Applications, Vol. 105, pp. 141-166.

Chun, C. (2005), Iterative methods improving Newton's method by the decomposition method, Computers and Mathematics with Applications, Vol. 50, pp. 1559-1568.