
ECEN 687

VLSI Design Automation

Lecture 2: Nonlinear Programming

ECEN 687 Lecture 2 1


Nonlinear Programming
Min  f(x)
s.t. gi(x) ≤ 0,  i = 1, 2, ..., m
     hj(x) = 0,  j = 1, 2, ..., l
     x ∈ X ⊆ En

En is the n-dimensional real Euclidean space
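A minimal numerical sketch of this form, added for illustration (assuming NumPy and SciPy are available; the particular f, g, h below are made-up examples, not from the lecture):

```python
# Solve a tiny NLP of the form above with SciPy's SLSQP solver.
# f, g, h here are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 3)**2 + (x[1] - 2)**2        # objective f(x)
g = lambda x: x[0]**2 + x[1]**2 - 5                # inequality constraint g(x) <= 0
h = lambda x: x[0] - x[1] - 1                      # equality constraint   h(x) = 0

constraints = [
    {"type": "ineq", "fun": lambda x: -g(x)},      # SciPy expects fun(x) >= 0, so negate g
    {"type": "eq",   "fun": h},
]
res = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(res.x, res.fun)
```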

ECEN 687 Lecture 2 2


Convex Set
A set S in En is convex if the line segment joining any
two points of the set also belongs to the set
If x1 and x2 are in S, then λx1 + (1 - λ)x2 must also
belong to S for each λ in [0, 1].

[Figure: a convex set (left) and a non-convex set (right)]

ECEN 687 Lecture 2 3


Convex Function
Let f : S E1, where S is a nonempty convex set in En.
The function f is said to be convex on S if
f(λx1 + (1 - λ)x2) ≤ λ f(x1) + (1 - λ) f(x2)
for each x1, x2 ∈ S and for each λ ∈ (0, 1).

[Figure: a convex function; the chord from (x1, f(x1)) to (x2, f(x2)) lies above the graph at λx1 + (1 - λ)x2]
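A one-line worked check (added here for illustration) that f(x) = x² satisfies this definition:

```latex
\lambda f(x_1) + (1-\lambda) f(x_2) - f\bigl(\lambda x_1 + (1-\lambda) x_2\bigr)
  = \lambda x_1^2 + (1-\lambda) x_2^2 - \bigl(\lambda x_1 + (1-\lambda) x_2\bigr)^2
  = \lambda (1-\lambda) (x_1 - x_2)^2 \;\ge\; 0 ,
\qquad \text{so } f(x) = x^2 \text{ is convex.}
```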

ECEN 687 Lecture 2 4


Concave Function
Function f is concave if -f is convex

[Figure: a concave function f(x)]

ECEN 687 Lecture 2 5


Subgradient
Let S be a nonempty convex set in En, and let f: S -> E1 be
convex. Then ξ is called a subgradient of f at x̄ ∈ S if
f(x) ≥ f(x̄) + ξt(x - x̄)   for all x ∈ S

[Figure: a convex f(x) and the supporting line f(x̄) + ξt(x - x̄), which touches the graph at x̄ and lies below it elsewhere]
ECEN 687 Lecture 2 6
Subgradient for Concave Function
Let S be a nonempty convex set in En, and let f: S -> E1 be
concave. Then ξ is called a subgradient of f at x̄ ∈ S if
f(x) ≤ f(x̄) + ξt(x - x̄)   for all x ∈ S

[Figure: a concave f(x) and the line f(x̄) + ξt(x - x̄), which touches the graph at x̄ and lies above it elsewhere]
ECEN 687 Lecture 2 7
Example of Subgradient
f(x) = |x|

The subgradient is not uniquely defined at x = 0: any ξ ∈ [-1, 1] is a subgradient there


Subgradients are useful for non-differentiable functions
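A small numeric sketch (illustrative, assuming NumPy) of the subgradient inequality for f(x) = |x| at x̄ = 0:

```python
# Any xi in [-1, 1] is a subgradient of f(x) = |x| at xbar = 0:
# f(x) >= f(xbar) + xi * (x - xbar) for all x.
import numpy as np

f = np.abs
xbar = 0.0
xs = np.linspace(-5.0, 5.0, 1001)

for xi in (-1.0, -0.3, 0.0, 0.5, 1.0):             # candidate subgradients at 0
    holds = np.all(f(xs) >= f(xbar) + xi * (xs - xbar))
    print(f"xi = {xi:+.1f}: subgradient inequality holds = {holds}")

# A value such as xi = 1.5 fails the inequality (e.g. at x = -1), so it is not a subgradient.
```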

ECEN 687 Lecture 2 8


Criterion for Convexity
Let S be a nonempty convex set in En, and let f: S -> E1 be
differentiable on S. Then f is convex iff for any x̄ ∈ S we have
f(x) ≥ f(x̄) + ∇f(x̄)t(x - x̄)   for all x ∈ S

ECEN 687 Lecture 2 9


Hessian Matrix
Let S be a nonempty set in En and let f: S -> E1. If f is
twice differentiable at x̄ ∈ S, the Hessian matrix is given by

H(x̄) =
[ ∂²f(x̄)/∂x1²     ∂²f(x̄)/∂x1∂x2   ...   ∂²f(x̄)/∂x1∂xn ]
[ ∂²f(x̄)/∂x2∂x1   ∂²f(x̄)/∂x2²     ...   ∂²f(x̄)/∂x2∂xn ]
[ ...              ...              ...   ...            ]
[ ∂²f(x̄)/∂xn∂x1   ∂²f(x̄)/∂xn∂x2   ...   ∂²f(x̄)/∂xn²    ]
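A small numerical sketch (not from the slides) that approximates this matrix by central finite differences, assuming NumPy; the example f is arbitrary:

```python
# Approximate the Hessian of f at xbar by central finite differences.
import numpy as np

def hessian(f, xbar, h=1e-4):
    n = len(xbar)
    I = np.eye(n)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(xbar + h*I[i] + h*I[j]) - f(xbar + h*I[i] - h*I[j])
                       - f(xbar - h*I[i] + h*I[j]) + f(xbar - h*I[i] - h*I[j])) / (4*h*h)
    return H

f = lambda x: x[0]**2 + x[0]*x[1] + 2*x[1]**2
print(hessian(f, np.array([1.0, 1.0])))            # approximately [[2, 1], [1, 4]]
```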

ECEN 687 Lecture 2 10


Another Criterion for Convexity
Let S be a nonempty convex set in En, and let f: S -> E1
be twice differentiable on S. Then f is convex iff the
Hessian matrix is positive semidefinite at each point in
S
A symmetric matrix H is positive semidefinite if
xTHx ≥ 0 for any x,
or equivalently, all eigenvalues of H are non-negative
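A quick eigenvalue check of this criterion (illustrative, assuming NumPy), using the Hessian of f(x) = x1² + x1 x2 + 2 x2²:

```python
# Positive semidefiniteness test via eigenvalues of a symmetric Hessian.
import numpy as np

H = np.array([[2.0, 1.0],
              [1.0, 4.0]])                  # Hessian of f(x) = x1^2 + x1*x2 + 2*x2^2
eigvals = np.linalg.eigvalsh(H)             # eigvalsh is intended for symmetric matrices
print(eigvals, "PSD:", bool(np.all(eigvals >= -1e-12)))   # small tolerance for rounding
```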

ECEN 687 Lecture 2 11


Do It Yourself
Please derive the criterion for a positive semidefinite
matrix when n = 2

H = [ a  b ]
    [ b  c ]
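For comparison with your own derivation, a sketch of the standard answer (added here; it is not on the original slide):

```latex
% Expand x^t H x for x = (x_1, x_2)^t and H = [a, b; b, c]:
x^{t} H x = a x_1^2 + 2 b x_1 x_2 + c x_2^2 .
% Requiring this to be nonnegative for every x gives
H \succeq 0 \iff a \ge 0, \; c \ge 0, \; ac - b^2 \ge 0 ,
% equivalently, both eigenvalues \tfrac{1}{2}\bigl(a + c \pm \sqrt{(a-c)^2 + 4b^2}\bigr) are nonnegative.
```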

ECEN 687 Lecture 2 12


Why Convexity Matters
If f is convex, a local minimum is also a global minimum
In mathematical programming, convexity is much more
important than linearity

ECEN 687 Lecture 2 13


Solve Unconstrained NLP
Heuristic
Start from an initial solution
Iteratively move the solution toward the optimum

Move
Direction
Step size

ECEN 687 Lecture 2 14


Line Search
Min f(x)
f(x) is convex

[Figure: two stages of interval reduction on a one-dimensional convex f]
Left: test at x = b and x = c inside interval (a, d); as f(b) > f(c), interval (a, b) is
excluded from further search
Right: test at x = e; as f(c) < f(e), interval (e, d) is
excluded from further search
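A compact sketch of this interval-reduction idea, written as a golden-section search (a standard instance of the scheme above, not necessarily the lecture's exact procedure):

```python
# Golden-section search: repeatedly shrink [a, d] by comparing f at two interior
# points and discarding the sub-interval that cannot contain the minimizer.
import math

def golden_section(f, a, d, tol=1e-6):
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0        # ~0.618
    b = d - inv_phi * (d - a)                     # left interior point
    c = a + inv_phi * (d - a)                     # right interior point
    while d - a > tol:
        if f(b) > f(c):                           # minimum cannot lie in (a, b)
            a, b = b, c
            c = a + inv_phi * (d - a)
        else:                                     # minimum cannot lie in (c, d)
            d, c = c, b
            b = d - inv_phi * (d - a)
    return (a + d) / 2.0

print(golden_section(lambda x: (x - 1.7)**2, 0.0, 5.0))   # ~1.7
```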

ECEN 687 Lecture 2 15


Steepest Descent Heuristic
Initialization Step: Let ε > 0 be the termination
scalar. Choose a starting point x1, let k = 1, and go
to the main step
Main Step:
1. If ||∇f(xk)|| < ε, stop
2. Otherwise, let dk = -∇f(xk), and let λk be an optimal solution
to minimize f(xk + λdk) subject to λ ≥ 0
3. Let xk+1 = xk + λk dk
4. k = k + 1
5. Go to 1
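A Python sketch of these steps (illustrative; it uses SciPy's 1-D minimizer for the line search in step 2, but any minimizer over λ ≥ 0 would do):

```python
# Steepest descent with an exact line search, following the steps above.
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x1, eps=1e-6, max_iter=1000):
    x = np.asarray(x1, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:                  # step 1: termination test
            break
        d = -g                                       # step 2: steepest-descent direction
        lam = minimize_scalar(lambda t: f(x + t * d),
                              bounds=(0.0, 1e3), method="bounded").x
        x = x + lam * d                              # step 3: take the step
    return x

f = lambda x: (x[0] - 3)**2 + 10.0 * (x[1] - 2)**2
grad = lambda x: np.array([2.0 * (x[0] - 3), 20.0 * (x[1] - 2)])
print(steepest_descent(f, grad, [0.0, 0.0]))         # converges near (3, 2)
```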

ECEN 687 Lecture 2 16


Solve Constrained NLP
Penalty function: replace

Min  f(x)
s.t. g(x) = 0
     x ∈ En

by the unconstrained problem

Min  f(x) + μ [g(x)]²,   x ∈ En,   with penalty weight μ ≥ 0

Constraint g(x) = 0 is satisfied
if μ is sufficiently large
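A short sketch of the penalty idea in code (illustrative f and g, assuming SciPy): solve a sequence of unconstrained problems with increasing μ, warm-starting each from the previous solution:

```python
# Quadratic penalty method: minimize f(x) + mu * g(x)**2 for growing mu.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 3)**2 + (x[1] - 2)**2          # objective
g = lambda x: x[0] + x[1] - 1                        # constraint g(x) = 0

x = np.zeros(2)
for mu in (1.0, 10.0, 100.0, 1000.0):
    penalized = lambda x, mu=mu: f(x) + mu * g(x)**2
    x = minimize(penalized, x).x                     # warm start from previous x
    print(f"mu = {mu:7.1f}  x = {x}  g(x) = {g(x):+.5f}")
# As mu grows, g(x) -> 0 and x approaches the constrained optimum (1, 0).
```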

ECEN 687 Lecture 2 17


Karush-Kuhn-Tucker (KKT)
Necessary Conditions
Let X be a nonempty set in En. Let f: En -> E1 and gi: En -> E1, for i =
1, 2, ..., m. Consider the problem
P:  Min  f(x)
    s.t. gi(x) ≤ 0, for i = 1, 2, ..., m
         x ∈ X
Let x* be a feasible solution. Denote by I = {i : gi(x*) = 0} the active
constraints. Suppose f and gi for i in I are differentiable at x* and
that gi for i not in I are continuous at x*, and that ∇gi(x*) for i in I are
linearly independent. If x* locally solves P, then there exist
scalars ui for i in I such that

∇f(x*) + Σ_{i ∈ I} ui ∇gi(x*) = 0
ui ≥ 0  for i ∈ I
ECEN 687 Lecture 2 18
Karush-Kuhn-Tucker (KKT)
Necessary Conditions
Let X be a nonempty set in En. Let f: En -> E1 and gi: En -> E1, for i =
1, 2, ..., m. Consider the problem
P:  Min  f(x)
    s.t. gi(x) ≤ 0, for i = 1, 2, ..., m      <- Primal Feasibility (PF)
         x ∈ X
Let x* be a feasible solution. Denote by I = {i : gi(x*) = 0} the active
constraints. Suppose f and gi for i in I are differentiable at x* and
that gi for i not in I are continuous at x*, and that ∇gi(x*) for i in I are
linearly independent. If x* locally solves P, then there exist
scalars ui (Lagrangian multipliers), i = 1, 2, ..., m, such that

∇f(x*) + Σ_{i=1}^{m} ui ∇gi(x*) = 0
ui ≥ 0        for i = 1, 2, ..., m      <- Dual Feasibility (DF)
ui gi(x*) = 0 for i = 1, 2, ..., m      <- Complementary Slackness (CS)
ECEN 687 Lecture 2 19
Example
Min  (x1 - 3)² + (x2 - 2)²
s.t. x1² + x2² ≤ 5    (1)
     x1 + 2x2 ≤ 4     (2)
     x1 ≥ 0           (3)
     x2 ≥ 0           (4)

Consider solution x* = (2, 1)t

∇f(x*)  = (-2, -2)t
∇g1(x*) = (4, 2)t
∇g2(x*) = (1, 2)t

[Figure: the feasible region in the (x1, x2)-plane with x* = (2, 1) and the vectors ∇f(x*), ∇g1(x*), ∇g2(x*) drawn at x*]
ECEN 687 Lecture 2 20
Example of KKT Conditions
Min  (x1 - 3)² + (x2 - 2)²
s.t. x1² + x2² ≤ 5    (1)
     x1 + 2x2 ≤ 4     (2)
     x1 ≥ 0           (3)
     x2 ≥ 0           (4)

Solution x* = (2, 1)t
∇f(x*)  = (-2, -2)t
∇g1(x*) = (4, 2)t
∇g2(x*) = (1, 2)t

Constraints (3) and (4) are not tight at x* => u3 = u4 = 0

Solve
(-2, -2)t + u1 (4, 2)t + u2 (1, 2)t = (0, 0)t

u1 = 1/3, u2 = 2/3

KKT conditions are satisfied at x* !
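The same check in a few lines of NumPy (added for illustration):

```python
# Verify the KKT stationarity system at x* = (2, 1) for this example.
import numpy as np

grad_f  = np.array([-2.0, -2.0])                    # gradient of f  at x*
grad_g1 = np.array([ 4.0,  2.0])                    # gradient of g1 at x*
grad_g2 = np.array([ 1.0,  2.0])                    # gradient of g2 at x*

# Solve grad_f + u1*grad_g1 + u2*grad_g2 = 0 for (u1, u2).
A = np.column_stack([grad_g1, grad_g2])
u = np.linalg.solve(A, -grad_f)
print(u)                                             # [0.3333..., 0.6666...]
print("dual feasible:", bool(np.all(u >= 0)))        # u1, u2 >= 0, so KKT holds
```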

ECEN 687 Lecture 2 21


Cases of NLP
Quadratic programming
Quadratic objective function (typically with linear constraints)
Convex programming
Convex objective function over a convex feasible set; nearly as tractable as LP
Geometric programming
Can be transformed to convex programming
Semidefinite programming
Linear objective function; constraints require a matrix of variables to be
positive semidefinite
Stochastic programming
Involves random variables

ECEN 687 Lecture 2 22


Geometric Programming
Monomial: c x1^a1 x2^a2 ... xn^an

where c and the xi are positive reals, and the ai are real numbers


Posynomial: a sum of monomials

f(x) = Σ_{k=1}^{K} ck x1^a1k x2^a2k ... xn^ank


Geometric program (GP)


Min  f0(x)
s.t. fi(x) ≤ 1,  i = 1, 2, ..., m
     gi(x) = 1,  i = 1, 2, ..., p
where the fi(x) are posynomial functions and the gi(x) are monomials

ECEN 687 Lecture 2 23


Example of GP
Min  x^-1 y^-0.5 z^-1 + 2.3 x z + 4 x y z

s.t. (1/3) x^-2 y^-2 + (4/3) y^0.5 z^-1 ≤ 1
     x + 2 y + 3 z ≤ 1
     (1/2) x y = 1
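A sketch of this GP in CVXPY's geometric-programming mode (assuming CVXPY with DGP support is installed). Note that this textbook instance only illustrates the GP format; it is in fact infeasible, since (1/2)xy = 1 forces x + 2y ≥ 4 > 1, so the solver will report that:

```python
# The example GP above, expressed in CVXPY; solve(gp=True) applies the
# log-log (geometric programming) transformation internally.
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

objective = x**-1 * y**-0.5 * z**-1 + 2.3 * x * z + 4 * x * y * z
constraints = [
    (1/3) * x**-2 * y**-2 + (4/3) * y**0.5 * z**-1 <= 1,
    x + 2 * y + 3 * z <= 1,
    0.5 * x * y == 1,
]
prob = cp.Problem(cp.Minimize(objective), constraints)
prob.solve(gp=True)
print(prob.status)          # expected: infeasible for this illustrative instance
```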

ECEN 687 Lecture 2 24


GP => Convex Programming
New variable yi = log xi
New and equivalent formulation
Min  log f0(e^y)
s.t. log fi(e^y) ≤ 0,  i = 1, ..., m
     log gi(e^y) = 0,  i = 1, ..., p
For each monomial
g(x) = c x1^a1 x2^a2 ... xn^an
log g(e^y) = log c + a1 log x1 + ... + an log xn
           = log c + a1 y1 + ... + an yn
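The posynomial case, filled in here for completeness (the slide shows only the monomial case): each log fi(e^y) becomes a log-sum-exp of affine functions of y, which is convex:

```latex
% For a posynomial f(x) = \sum_{k=1}^{K} c_k x_1^{a_{1k}} \cdots x_n^{a_{nk}},
% substituting x_i = e^{y_i} gives
\log f(e^{y})
  = \log \sum_{k=1}^{K} \exp\bigl( a_{1k} y_1 + \cdots + a_{nk} y_n + \log c_k \bigr) ,
% a log-sum-exp of affine functions of y, hence a convex function of y.
```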

ECEN 687 Lecture 2 25


Optimization Tree

ECEN 687 Lecture 2 26