



Assoc. Prof. Dr. Y. İlker Topcu

I would like to acknowledge Prof. W.L. Winston's "Operations Research: Applications and Algorithms"
(slides submitted by Brooks/Cole, a division of Thomson Learning, Inc.) and Prof. J.E. Beasley's
lecture notes, which greatly influenced these notes.
I retain responsibility for all errors and would love to hear from readers.


Assoc. Prof. Dr. Y. İlker Topcu (www.ilkertopcu.net)


Interior Point Methods (IPM) still follow the improving search paradigm for LP, but they
employ moves quite different from those of the simplex method.
Much more effort turns out to be required per move (iteration) with IPM but the number
of moves decreases dramatically.
The simplex algorithm is an exponential time algorithm for solving LPs.
If an LP of size n is solved by the simplex method, then there exists a positive number a
such that for any n, the simplex algorithm will find the optimal solution in a time of
at most a·2^n.
IPM, on the other hand, are polynomial time algorithms.
This implies that if an LP of size n is solved by an IPM, then there exist positive
numbers b and c such that for any n, the LP can be solved in a time of at most b·n^c.
Instead of staying on the boundary of the feasible region and passing from extreme point
to extreme point, IPM proceed directly across the interior.
IPM begin at an interior feasible solution and move through a sequence of such solutions,
converging to the boundary of the feasible region only at an optimal solution.

Popular methods
• Karmarkar’s Projective Transformation
• Affine Scaling
• Log Barrier

IPM are applied to an LP in the following standard form:

max c · x
s.t. Ax = b
     x ≥ 0


Interior in LP Standard Form
A feasible solution for an LP in standard form is an interior point if every component
(variable) of the solution that can be positive in any feasible solution is strictly positive in
the given point.

Projecting to Deal with Equality Constraints

A move direction ∆x is feasible for equality constraints Ax=b if it satisfies A∆x=0.
The projection of a move vector d on a given system of equalities is a direction
preserving those constraints and minimizing the total squared difference between its
components and those of d.
The projection of direction d onto the conditions A∆x = 0, preserving the linear equalities Ax = b,
can be computed as ∆x = Pd, where the projection matrix P = I – A^T(AA^T)^(–1)A
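As a quick numerical sketch (not part of the original notes), the projection matrix and its constraint-preserving property can be checked with NumPy; the matrix A and direction d below are made-up illustration data:

```python
import numpy as np

# Made-up equality-constraint matrix A (2 constraints, 4 variables)
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])

# Projection matrix P = I - A^T (A A^T)^(-1) A
P = np.eye(A.shape[1]) - A.T @ np.linalg.inv(A @ A.T) @ A

# Project an arbitrary move vector d onto the conditions A dx = 0
d = np.array([4.0, -1.0, 2.0, 0.5])
dx = P @ d

print(np.allclose(A @ dx, 0))  # True: the projected move preserves Ax = b
```

Because P is symmetric and idempotent, Pd is also the closest direction to d (in the squared-difference sense) among all directions satisfying A∆x = 0.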

Improvement with Projected Directions

The projection ∆x = ±Pc of a (nonzero) objective function vector c onto the equality constraints
Ax = b is an improving direction at every x: +Pc for a maximize objective and –Pc for a minimize
objective (whenever Pc ≠ 0).
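A small continuation of the same sketch (again with made-up data) confirms the improvement claim: c·(Pc) = ‖Pc‖², which is nonnegative because P is symmetric and idempotent:

```python
import numpy as np

# Made-up data: one equality constraint and an objective vector
A = np.array([[1.0, 1.0, 2.0]])
c = np.array([3.0, -1.0, 0.5])

P = np.eye(3) - A.T @ np.linalg.inv(A @ A.T) @ A
dx = P @ c  # projected objective direction: +Pc for a max problem

# c . dx equals ||Pc||^2, so the objective cannot get worse along dx
print(c @ dx, np.linalg.norm(dx) ** 2)  # both equal 8.75 here
```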

Affine Scaling Approach

(x1′, x2′, x3′): initial solution →

Diagonal matrix D = diag(x1′, x2′, x3′) =

⎡ x1′  0    0  ⎤
⎢  0  x2′   0  ⎥
⎣  0   0   x3′ ⎦

such that x = Dx̃
Rescaled variable: x̃ = D⁻¹x [centering scheme: the current solution becomes (1, 1, 1)]
For the new coordinates:
Ã = AD
c̃ = Dc
Projection matrix: P̃ = I – Ã^T(ÃÃ^T)^(–1)Ã
Projected gradient: c̃p = ±P̃c̃


Update → x̃new = x̃ + (α/v)·c̃p
where v is the absolute value of the negative component of c̃p having the largest
absolute value, and α is arbitrarily chosen as 0.8.
α measures the fraction used of the distance that could be moved before the
feasible region is left. An α value close to its upper bound of 1 is good for taking a
relatively large step toward optimality on the current iteration. However, the
problem with a value too close to 1 is that the next trial solution is then jammed
against a constraint boundary, making it difficult to take large improving
steps during subsequent iterations.

In the original coordinates: xnew = Dx̃new

When the solution value is no longer changing very much, the algorithm stops.
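The rescale–project–step recipe above can be sketched as a single iteration in NumPy; the function and the small LP below are my own illustration, not from the notes:

```python
import numpy as np

def affine_scaling_step(A, c, x, alpha=0.8):
    """One affine-scaling iteration for max c.x s.t. Ax = b, x >= 0,
    starting from a strictly positive feasible point x."""
    D = np.diag(x)                  # D = diag(x1', x2', ...)
    A_t = A @ D                     # A~ = AD   (rescaled constraints)
    c_t = D @ c                     # c~ = Dc   (rescaled objective)
    P = np.eye(len(x)) - A_t.T @ np.linalg.inv(A_t @ A_t.T) @ A_t
    cp = P @ c_t                    # projected gradient (+ for max)
    neg = cp[cp < 0]
    if neg.size == 0:               # no component decreases
        raise ValueError("LP appears unbounded in this direction")
    v = np.abs(neg).max()           # largest-magnitude negative component
    x_tilde_new = np.ones(len(x)) + (alpha / v) * cp
    return D @ x_tilde_new          # map back to original coordinates

# Made-up LP: max 2x1 + 3x2  s.t.  x1 + x2 + x3 = 4 (x3 slack), x >= 0
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([2.0, 3.0, 0.0])
x = np.array([1.0, 1.0, 2.0])      # strictly positive, Ax = 4

x_new = affine_scaling_step(A, c, x)
print(x_new, c @ x_new)            # still feasible, better objective
```

One call maps the current strictly positive point to (1, 1, …, 1), steps a fraction α of the way to the rescaled boundary, and maps back; repeating until the objective stalls gives the full method.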

Example 1. Frannie’s Firewood

(Rardin 6.1., p. 274)
max 90 x1 + 150 x2
s.t. 0.5 x1 + x2 ≤ 3
x1, x2 ≥ 0
Please refer to www.isl.itu.edu.tr/ya/interior.htm for the answer
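The worked answer lives at the URL above; as an unofficial cross-check only, the affine-scaling iteration can be run on this LP after adding a slack variable x3 to reach the equality standard form (the starting point, α = 0.8, and the stopping tolerance are my own choices):

```python
import numpy as np

# Frannie's Firewood in standard form: slack x3 added to the constraint
# max 90 x1 + 150 x2   s.t.  0.5 x1 + x2 + x3 = 3,  x >= 0
A = np.array([[0.5, 1.0, 1.0]])
b = np.array([3.0])
c = np.array([90.0, 150.0, 0.0])

x = np.array([1.0, 1.0, 1.5])      # strictly positive, satisfies Ax = b
alpha = 0.8

for _ in range(100):
    D = np.diag(x)
    A_t, c_t = A @ D, D @ c
    P = np.eye(3) - A_t.T @ np.linalg.inv(A_t @ A_t.T) @ A_t
    cp = P @ c_t                    # projected gradient (max problem)
    neg = cp[cp < 0]
    if neg.size == 0:
        break                       # no blocking component
    prev_val = c @ x
    x = D @ (np.ones(3) + (alpha / np.abs(neg).max()) * cp)
    if c @ x - prev_val < 1e-9:     # objective no longer changing much
        break

print(np.round(x, 4), round(c @ x, 2))
```

By hand, the candidate vertices give 90·6 = 540 at (x1, x2) = (6, 0) versus 150·3 = 450 at (0, 3), so the iterates should approach x ≈ (6, 0, 0) with objective value near 540.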
