Research Article
Novel Interior Point Algorithms for Solving Nonlinear Convex Optimization Problems
Copyright © 2015 Sakineh Tahmasebzadeh et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper proposes three numerical algorithms based on Karmarkar’s interior point technique for solving nonlinear convex
programming problems subject to linear constraints. The first algorithm uses the Karmarkar idea and linearization of the objective
function. The second and third algorithms are modifications of the first algorithm using the Schrijver and Malek-Naseri approaches,
respectively. These three novel schemes are tested against the algorithm of Kebiche-Keraghel-Yassine (KKY). It is shown that these
three novel algorithms are more efficient and converge to the correct optimal solution, while the KKY algorithm fails in some cases.
Numerical results are given to illustrate the performance of the proposed algorithms.
where f : R^n → R is a nonlinear, convex, and differentiable function, A is an m × n matrix of rank m, and b is m × 1. The starting point x_0 > 0 is chosen to be feasible. I is the identity matrix of order (n+1) and e_{n+1}^T is a row vector of n+1 ones.

Assume y = (y_1, y_2, ..., y_{n+1})^T = (y[n], y_{n+1})^T, the simplex S = {y ∈ R^{n+1} | e_{n+1}^T y = 1, y ≥ 0}, and the projective transformation of Karmarkar T_k defined by T_k : R^n → S such that T_k(x) = y. Consider g(y) = y_{n+1}[f(T_k^{-1}(y)) − f(T_k^{-1}(y_0))], where g : R^{n+1} → R is a nonlinear, convex, and differentiable function and the optimal value of g is zero. Using the linearization of the function g in a neighborhood of the ball of center y_0, g(y) = g(y_0) + ∇g(y_0)(y − y_0) for y ∈ {y ∈ R^{n+1} | ‖y − y_0‖ ≤ β}, along with the Karmarkar projective transformation [12], we conclude that Problem (1) is equivalent to

    min  ∇g(y_0)^T y
    s.t. A_k y = 0,
         e_{n+1}^T y = 1,                  (2)
         ‖y − y_0‖ ≤ β.

Compute C_p = [I − P^T (PP^T)^{-1} P] ∇g(y_0), y_{k+1} = y_0 − β(C_p/‖C_p‖), and x_{k+1} = D_k y_{k+1}[n] / y_{n+1}^{k+1}.

Step 3. While f(x_{k+1}) − f(x_k) > ε, let k = k + 1, and go to Step 2.

However, as we can see in the following two examples, the KKY algorithm does not work for every nonlinear problem.

Example 1. Consider the quadratic convex problem:

    min  f(x) = x_1^2 + x_2^2 − 2x_1 − 4x_2
    s.t. x_1 + 4x_2 + x_3 = 5,
         2x_1 + 3x_2 + x_4 = 6,            (3)
         x_1, x_2, x_3, x_4 ≥ 0.

For tolerance ε = 10^{-8}, after k = 161 iterations of the KKY algorithm we have x_161 = [0.00000000, 2.00005934, −3.00008255, −0.00000000]^T and f(x_161) = −3.9999999964. Since the condition f(x_162) − f(x_161) = 4.47 × 10^{-9} < 10^{-8} holds, the algorithm stopped, whereas this solution is not feasible.

Example 2. Consider the nonlinear convex problem:

    min  f(x) = (3/2)(x_1^2 + x_2^2) + 2(x_3^2 + x_4^2) − ln(x_1 x_4) − 3x_1 x_2 + 4x_3 x_4 − 2x_1 − 3x_4.    (4)

Here the step length β = αr = ((n+1)/(3(n+2)))(1/√((n+1)(n+2))) is used. Hence the solution in each iteration remains feasible. In this case, the optimal solution for Problem (2) is given by y_{k+1} = y_0 − β(C_p/‖C_p‖), where β = αr. Applying the KKY algorithm with β = αr to the problem in Example 2, it gives the feasible solution x_80 = [0.7517262, 0.0000174, 0.0000056, 1.0209196, 0.0008742, 0.9999943, 0.0000035]^T, f(x_80) = −1.3692774913, and f(x_81) = −1.3692882373. The algorithm stops after k = 80 iterations
Advances in Operations Research 3
since f(x_81) − f(x_80) = −1.0746 × 10^{-5}, while the suitable accuracy ε = 10^{-8} is not reached. In each iteration, a line search method is used to find 0 < λ ≤ 1 such that f(x_{k+1}) ≤ f(x_k), so that the suitable tolerance is satisfied. Thus one may write Problem (2) in the following form:

    min  ∇g(y_0)^T y
    s.t. A_k y = 0,
         e_{n+1}^T y = 1,                  (5)
         ‖y − y_0‖ ≤ λαr.

Lemma 3. The optimal solution for Problem (5) is given by y_{k+1} = y_0 − λαr(C_p/‖C_p‖).

Proof. We put x = y − y_0, and then we have Px = (A_k; e_{n+1}^T)(y − y_0) = 0 and Problem (5) is equivalent to

    min  ∇g(y_0)^T x
    s.t. Px = 0,                           (6)
         ‖x‖^2 ≤ λ^2 α^2 r^2.

x* is a solution of Problem (6) if and only if there exist ω ∈ R^{m+1} and μ ≥ 0 such that

    ∇g(y_0) + P^T ω + μ x* = 0.            (7)

Multiplying both sides of (7) by P and since Px* = 0, we get P∇g(y_0) + PP^T ω = 0. Then ω = −(PP^T)^{-1}(P∇g(y_0)); by substituting ω in (7) we find

    x* = −(1/μ)[I − P^T (PP^T)^{-1} P] ∇g(y_0).    (8)

By assuming C_P = [I − P^T (PP^T)^{-1} P] ∇g(y_0), we have

    ‖x*‖ = (1/μ)‖C_P‖ = λαr  ⇒  1/μ = λαr/‖C_P‖  ⇒  x* = −λαr (C_P/‖C_P‖).    (9)

And we have y_{k+1} = y* = y_0 + x* = y_0 − λαr(C_P/‖C_P‖).

Note that here we have proposed an algorithm similar to KKY, with β = λαr. This modified algorithm has the advantage that it can find a feasible approximate solution within the suitable tolerance.

3.2. Modified KKY Algorithm (MKKY). Let ε > 0 be a given tolerance and x_0 a strictly feasible point.

Step 1. Compute y_0 = (1/(n+1), ..., 1/(n+1))^T, r = 1/√((n+1)(n+2)), and α = (n+1)/(3(n+2)). Put λ = 1 and k = 0.

Step 2. Build D_k = diag{x_k}, A_k = [AD_k, −b], and P = (A_k; e_{n+1}^T). Compute C_p = [I − P^T (PP^T)^{-1} P] ∇g(y_0), y_{k+1} = y_0 − λαr(C_p/‖C_p‖), and x_{k+1} = D_k y_{k+1}[n] / y_{n+1}^{k+1}.

Step 3. While f(x_{k+1}) − f(x_k) > ε, put k = k + 1. Build D_k = diag{x_k}, A_k = [AD_k, −b], and P = (A_k; e_{n+1}^T). Compute C_p = [I − P^T (PP^T)^{-1} P] ∇g(y_0) and y_{k+1} = y_0 − λαr(C_p/‖C_p‖). Then x_{k+1} = T_k^{-1}(y_{k+1}) = D_k y_{k+1}[n] / y_{n+1}^{k+1}. Let k = k + 1, and go to Step 3.

Step 4. While f(x_{k+1}) > f(x_k), compute λ = (1/2)λ, y_{k+1} = y_0 − λαr(C_p/‖C_p‖), and x_{k+1} = D_k y_{k+1}[n] / y_{n+1}^{k+1}.

Step 5. Let k = k + 1, and go to Step 3.

4. Convergence for the Modified Algorithm (MKKY)

In order to establish the convergence of the modified algorithm, we introduce a potential function associated with Problem (1) defined by

    φ(x) = (n+1) ln(f(x) − z*) − Σ_{i=1}^{n} ln(x_i),    (10)

where z* is the optimal value of the objective function.

Lemma 4. If y_{k+1} is the optimal solution of Problem (5), then g(y_{k+1}) < g(y_0).

Proof. Since g(y_{k+1}) = g(y_0) + ∇g(y_0)^T (y_{k+1} − y_0) and y_{k+1} = y_0 − λαr(C_P/‖C_P‖), then

    g(y_{k+1}) − g(y_0) = ∇g(y_0)^T (−λαr C_P/‖C_P‖)
                        = −(λαr/‖C_P‖)(∇g(y_0)^T C_P)    (11)
                        = −(λαr/‖C_P‖)‖C_P‖^2 = −λαr ‖C_P‖ < 0.

Thus g(y_{k+1}) < g(y_0) and a reduction is obtained in each iteration.

Lemma 5. If y_{k+1} is the optimal solution of Problem (5), then

    g(y_{k+1}) ≤ (1 − λα/(n+1)) g(y_0).    (12)

Proof. Let x* be the optimal solution of Problem (1); we can write x* = T_k^{-1}(y*). B(y_0, λαr) is a ball of center y_0 with radius λαr. There are two cases:
Table 1: Computed solutions using four different algorithms for Examples 8, 9, 10, and 11.
Table 2: Comparison between four different algorithms for Examples 8, 9, 10, and 11.
According to Theorem 6, after k iterations we have φ(x_k) − φ(x_0) ≤ −kδ. Thus

    f(x_k) − z* ≤ 2^{2L} (f(x_0) − z*) exp(−kδ/(n+1)) ≤ 2^{5L} exp(−kδ/(n+1)) ≤ 2^{−4L}.    (25)

Therefore, f(x_k) − z* ≤ 2^{−4L} for k ≥ 45L(n+1).

In the next section the MKKY algorithm is combined with the algorithm of Schrijver [6, 7] and Malek-Naseri's algorithm [8] to propose novel hybrid algorithms called Sch-MKKY (Schrijver-Modified Kebiche-Keraghel-Yassine) and MN-MKKY (Malek-Naseri-Modified Kebiche-Keraghel-Yassine). These algorithms differ from MKKY in the use of the optimal step length in each iteration.

4.1. Hybrid Algorithms. Let us assume that r and y_0 in the modified algorithm are expressed as follows: r = √((n+2)/(n+1)) and y_0 = (1, 1, ..., 1)^T. In the Sch-MKKY and MN-MKKY algorithms, choose α = 1/(1+r) and α = 1 − 1/((n+2)^4 (1+√(n(n+1)))), respectively. It is easy to prove that the theorems in Section 4 remain valid for the Sch-MKKY and MN-MKKY algorithms. Thus with these step lengths the convergence is guaranteed.
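The projection step shared by the MKKY, Sch-MKKY, and MN-MKKY algorithms (the hybrids change only α) can be sketched numerically. The following is a minimal sketch, not the paper's code: the constraint data is borrowed from Example 1, while the strictly feasible point x_k and the vector used as a stand-in for ∇g(y_0) are hypothetical (the projection identities checked here hold for any vector); NumPy is assumed. The checks mirror Lemma 3 (the move stays in the null space of P and has length exactly λαr) and Lemma 4 (the linearized objective decreases).

```python
import numpy as np

n = 4                                    # number of variables
A = np.array([[1.0, 4.0, 1.0, 0.0],
              [2.0, 3.0, 0.0, 1.0]])     # constraint matrix of Example 1
b = np.array([5.0, 6.0])
x_k = np.array([0.5, 0.5, 2.5, 3.5])     # hypothetical strictly feasible point: A @ x_k == b

# Step 1 quantities (MKKY choices; Sch-MKKY and MN-MKKY only change alpha).
y0 = np.full(n + 1, 1.0 / (n + 1))
r = 1.0 / np.sqrt((n + 1) * (n + 2))
alpha = (n + 1) / (3.0 * (n + 2))
lam = 1.0

# Step 2: D_k = diag{x_k}, A_k = [A D_k, -b], P = (A_k; e^T).
D_k = np.diag(x_k)
A_k = np.hstack([A @ D_k, -b.reshape(-1, 1)])
P = np.vstack([A_k, np.ones(n + 1)])

grad_g = np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # stand-in for grad g(y0)

# Projected gradient: C_p = [I - P^T (P P^T)^{-1} P] grad_g.
C_p = grad_g - P.T @ np.linalg.solve(P @ P.T, P @ grad_g)

# The move of Lemma 3: y_{k+1} = y0 - lam*alpha*r * C_p / ||C_p||.
y1 = y0 - lam * alpha * r * C_p / np.linalg.norm(C_p)

print(np.linalg.norm(P @ C_p))                     # ~0: C_p is in the null space of P
print(np.linalg.norm(y1 - y0) - lam * alpha * r)   # ~0: step length is exactly lam*alpha*r
print(grad_g @ (y1 - y0) < 0)                      # True: linearized objective decreases (Lemma 4)
```

Because P y_0 already satisfies the constraints and P C_p = 0, the new point y_{k+1} remains on the constraint set, which is exactly why feasibility is preserved in each iteration.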
Figure 1: The number of iterations against log(tolerance) is compared for the MKKY, Sch-MKKY, and MN-MKKY algorithms; panels (a), (b), (c), and (d) correspond to Examples 8, 9, 10, and 11.
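The failure mode that motivates these comparisons, the infeasible terminal point returned by KKY in Example 1, can be reproduced with a short check. This is our own verification sketch (NumPy assumed), not part of the paper's algorithms; in particular, the KKT point x* = (13/17, 18/17, 0, 22/17) and its multipliers are our hand computation for Example 1's data, not a value taken from the paper.

```python
# Example 1: min x1^2 + x2^2 - 2*x1 - 4*x2
#            s.t. x1 + 4*x2 + x3 = 5, 2*x1 + 3*x2 + x4 = 6, x >= 0.
import numpy as np

A = np.array([[1.0, 4.0, 1.0, 0.0],
              [2.0, 3.0, 0.0, 1.0]])
b = np.array([5.0, 6.0])
f = lambda x: x[0]**2 + x[1]**2 - 2*x[0] - 4*x[1]

def feasible(x, tol=1e-3):
    """Equality constraints hold within tol and all components are nonnegative."""
    return bool(np.allclose(A @ x, b, atol=tol) and np.all(x >= -tol))

# Terminal point reported for KKY after 161 iterations (quoted from the paper):
x_kky = np.array([0.0, 2.00005934, -3.00008255, 0.0])
print(feasible(x_kky))        # False: the equalities nearly hold, but x3 < 0

# Hand-computed KKT point (our calculation): the first constraint is active
# with multiplier 8/17, the second is slack (lambda2 = 0), and mu3 = 8/17
# is the multiplier of the active bound x3 >= 0.
x_star = np.array([13/17, 18/17, 0.0, 22/17])
lam_eq = np.array([8/17, 0.0])             # equality multipliers
mu = np.array([0.0, 0.0, 8/17, 0.0])       # bound multipliers, mu >= 0
grad_f = np.array([2*x_star[0] - 2, 2*x_star[1] - 4, 0.0, 0.0])
stationarity = grad_f + A.T @ lam_eq - mu

print(feasible(x_star))                    # True
print(np.allclose(stationarity, 0.0))      # True: stationarity holds
print(x_star @ mu)                         # 0.0: complementary slackness
print(f(x_star))                           # -69/17, below the -4.0 reported for the KKY run
```

Since the problem is convex, a feasible point satisfying the KKT conditions is a global minimizer, so the stopping test of KKY accepted a point that is both infeasible and not optimal.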
it is obvious that the MN-MKKY algorithm is the best for this example.

Example 10. Consider the quadratic convex problem:

    min  f(x) = x_1^2 + x_2^2 − 8x_2 + 8
    s.t. x_1 + 2x_2 + x_3 = 4,             (27)
         x_1, x_2, x_3 ≥ 0.

Example 11. Consider the nonlinear convex problem:

    min  f(x) = x_1^2 + 2x_2^5 − x_1
    s.t. x_1 + 2x_2 − x_3 = 1,
         −3x_1 + x_2 + x_4 = −2,           (28)
         x_1, x_2, x_3, x_4 ≥ 0.

6. Conclusion

With two ideas in mind, (i) calculation of a feasible solution in each iteration and (ii) the fact that the objective function value must decrease in each iteration within the fixed desired tolerance, this paper proposed three hybrid algorithms for solving nonlinear convex programming problems with linear constraints, based on the interior point idea and using the various β's of the Karmarkar, Schrijver, and Malek-Naseri techniques. These methods perform better than the standard Karmarkar algorithm, since in the latter one may not check the feasibility of the solution in each iteration.

Our numerical simulation shows that the MN-MKKY algorithm has the best performance among these algorithms. It uses the smallest number of iterations to solve general nonlinear optimization problems with linear constraints, since it uses the step length β of Malek-Naseri type.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] N. Karmarkar, "A new polynomial-time algorithm for linear programming," Combinatorica, vol. 4, no. 4, pp. 373–395, 1984.
[2] H. Navidi, A. Malek, and P. Khosravi, "Efficient hybrid algorithm for solving large scale constrained linear programming problems," Journal of Applied Sciences, vol. 9, no. 18, pp. 3402–3406, 2009.
[3] M. S. Bazaraa, J. J. Jarvis, and H. D. Sherali, Linear Programming and Network Flows, John Wiley & Sons, New York, NY, USA, 1984.
[4] H. A. Taha, Operations Research, Macmillan, New York, NY, USA, 1992.
[5] R. M. Karp, "George Dantzig's impact on the theory of computation," Discrete Optimization, vol. 5, no. 2, pp. 174–185, 2008.
[6] C. Roos, T. Terlaky, and J. Vial, Theory and Algorithms for Linear Optimization, Princeton University, 2001.
[7] A. Schrijver, Theory of Linear and Integer Programming, John Wiley & Sons, New York, NY, USA, 1986.
[8] A. Malek and R. Naseri, "A new fast algorithm based on Karmarkar's gradient projected method for solving linear programming problem," Advanced Modeling and Optimization, vol. 6, no. 2, pp. 43–51, 2004.
[9] E. Tse and Y. Ye, "An extension of Karmarkar's projective algorithm for convex quadratic programming," Mathematical Programming, vol. 44, no. 2, pp. 157–179, 1989.
[10] Z. Kebbiche and D. Benterki, "A weighted path-following method for linearly constrained convex programming," Revue Roumaine de Mathématiques Pures et Appliquées, vol. 57, no. 3, pp. 245–256, 2012.
[11] P. Fei and Y. Wang, "A primal infeasible interior point algorithm for linearly constrained convex programming," Control and Cybernetics, vol. 38, no. 3, pp. 687–704, 2009.
[12] Z. Kebbiche, A. Keraghel, and A. Yassine, "Extension of a projective interior point method for linearly constrained convex programming," Applied Mathematics and Computation, vol. 193, no. 2, pp. 553–559, 2007.