
Chapter 1

Exercise 1.1: Prove that if f, g are convex, positive, and either both nondecreasing or both nonincreasing, then the product fg is convex. Similarly, if f is positive, convex, and increasing while g is positive, concave, and decreasing, then f/g is convex.
Exercise 1.2: Let f : R → R be a twice differentiable function. Prove that f is convex if and only if either of the following conditions holds:
(First-order condition) f(y) ≥ f(x) + f'(x)(y − x), ∀x, y ∈ R
(Second-order condition) f''(x) ≥ 0, ∀x ∈ R
Hint: For the first-order condition, apply the definition of convexity to z = x + t(y − x). For the second-order condition, use Taylor's theorem.
Exercise 1.3: Prove that if a function f : R^n → R is strongly convex on its domain, i.e. there exists m > 0 such that ∇²f(x) ⪰ mI for all x, then we have:

    f(y) ≥ f(x) + ∇f(x)^T (y − x) + (m/2) ||y − x||_2^2,  ∀x, y

Use this result to prove the stopping condition of the gradient descent method, i.e.

    ||∇f(x)||_2 ≤ (2mε)^{1/2}  ⟹  f(x) − p* ≤ ε

Hint: Use Taylor's theorem to prove the first inequality, then minimize its right-hand side with respect to y.
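The stopping rule in Exercise 1.3 can be checked numerically. The sketch below uses an illustrative strongly convex quadratic f(x) = (1/2)(x_1^2 + 4x_2^2), so m = 1, M = 4, and p* = 0 (all of these choices are assumptions made for the demo): gradient descent with fixed step 1/M stops once ||∇f(x)||_2 ≤ (2mε)^{1/2}, after which the bound guarantees f(x) − p* ≤ ε.

```python
import numpy as np

# Illustrative strongly convex quadratic: f(x) = 0.5*(x1^2 + 4*x2^2)
P = np.diag([1.0, 4.0])          # Hessian; m = 1 (smallest eigenvalue), M = 4
m, M = 1.0, 4.0
f = lambda x: 0.5 * x @ P @ x    # optimal value p* = 0, attained at x = 0
grad = lambda x: P @ x

eps = 1e-6
x = np.array([10.0, -7.0])       # arbitrary starting point
while np.linalg.norm(grad(x)) > np.sqrt(2 * m * eps):
    x = x - (1.0 / M) * grad(x)  # fixed step 1/M guarantees descent

# strong convexity gives f(x) - p* <= ||grad f(x)||^2 / (2m) <= eps
assert f(x) <= eps
```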
Exercise 1.4: Using gradient descent with exact line search, find the optimal value of the quadratic function:

    min_x f(x) = (1/2)(x_1^2 + γx_2^2),  γ > 0

Prove that the optimal solution is x_1* = 0, x_2* = 0.
Hint: Starting from x^0 = (γ, 1), prove that at the k-th iteration

    x_1^k = γ((γ − 1)/(γ + 1))^k,   x_2^k = (−1)^k ((γ − 1)/(γ + 1))^k
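The geometric decay in the hint can be reproduced numerically. The sketch below assumes the γ-scaled objective f(x) = (1/2)(x_1^2 + γx_2^2) with γ = 10 and the starting point (γ, 1) (both assumptions, chosen to match the classical exact-line-search example); for a quadratic, the exact line-search step has the closed form t = gᵀg / (gᵀPg).

```python
import numpy as np

gamma = 10.0                               # illustrative conditioning parameter
P = np.diag([1.0, gamma])
f = lambda x: 0.5 * x @ P @ x
grad = lambda x: P @ x

r = (gamma - 1) / (gamma + 1)              # per-step contraction factor
x = np.array([gamma, 1.0])                 # starting point (gamma, 1)
for k in range(1, 41):
    g = grad(x)
    t = (g @ g) / (g @ P @ g)              # exact line search step for a quadratic
    x = x - t * g
    # closed-form iterate: x1 = gamma*r^k, x2 = (-r)^k
    assert np.allclose(x, [gamma * r**k, (-r)**k])

assert f(x) < 1e-3                         # converges to the optimum (0, 0)
```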
Exercise 1.5: Prove that if x* is a stationary point of a convex differentiable function f, then x* is a global minimizer of f.
Exercise 1.6: Prove that if f is strongly convex and twice differentiable with ∇²f(x) ⪯ MI, then gradient descent with exact line search always converges, i.e. ∀ε > 0, ∃k_0 such that f(x^k) − p* ≤ ε for all k > k_0.
Hint: Use Taylor's theorem to prove that f(y) ≤ f(x) + ∇f(x)^T (y − x) + (M/2) ||y − x||_2^2, ∀x, y. At each step, substitute x^{k+1} = x^k − t∇f(x^k) into this inequality. Use Exercise 1.3.
Exercise 1.7: Let f : R → R, f(x) = (x − a)^4 where a ∈ R. Apply Newton's method to find the minimizer of f.

1. Write the update equation.

2. Let y_k = |x_k − a|, where x_k is the k-th iterate of Newton's method. Show that the sequence satisfies y_{k+1} = (2/3) y_k. From that, find the optimal point of f.
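A quick numerical check of part 2 (the values a = 2 and x_0 = 5 are arbitrary choices for the demo): for f(x) = (x − a)^4 the Newton update is x_{k+1} = x_k − f'(x_k)/f''(x_k) = x_k − (x_k − a)/3, so the distance to a shrinks by exactly 2/3 per iteration.

```python
a = 2.0                      # illustrative minimizer location
x = 5.0                      # arbitrary starting point

for _ in range(10):
    fp = 4 * (x - a) ** 3    # f'(x)
    fpp = 12 * (x - a) ** 2  # f''(x)
    x_new = x - fp / fpp     # Newton update: x - (x - a)/3
    # the error contracts by exactly 2/3 each step
    assert abs(abs(x_new - a) - (2 / 3) * abs(x - a)) < 1e-12
    x = x_new

assert abs(x - a) < 0.06     # (2/3)^10 * 3 ~ 0.052
```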

Chapter 2
Exercise 2.1: Consider the problem:

    min  ||Ax − b||          (1)
    s.t. x^T 1 = 1           (2)
         x_i ≥ 0             (3)

Prove that this problem is convex.


Exercise 2.2: Consider the problem:

    min_x  E_u[f_0(x, u)]                       (4)
    s.t.   E_u[f_i(x, u)] ≤ 0,  i = 1, ..., m   (5)

If f_0 and f_i are convex in x for each u, prove that this problem is also convex.


Exercise 2.3: Consider the quadratic problem min_x f(x) = (1/2)x^T Px + q^T x, where P ∈ R^{n×n} is a symmetric matrix.
1. Show that if P is not positive semidefinite, then the problem is unbounded below.
2. Suppose P is positive semidefinite; show that if Px = −q has no solution, then the problem is unbounded below.
Hint: if P is not positive semidefinite, then ∃v such that v^T Pv < 0.
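Part 1 can be seen concretely (the matrix and vector below are made up for illustration): pick a direction v with v^T P v < 0 and follow f along tv; the value goes to −∞ quadratically.

```python
import numpy as np

P = np.array([[1.0, 0.0], [0.0, -2.0]])   # symmetric, not PSD (eigenvalue -2)
q = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ P @ x + q @ x

v = np.array([0.0, 1.0])                   # v^T P v = -2 < 0
assert v @ P @ v < 0
vals = [f(t * v) for t in (1, 10, 100, 1000)]
# f(t*v) = -t^2 + t -> -inf, so the problem is unbounded below
assert vals == sorted(vals, reverse=True) and vals[-1] < -9e5
```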
Exercise 2.4: Consider the following convex problem:

    min_x  x_1^2 + x_2^2 + x_3^2 + x_4^2
    s.t.   x_1 + x_2 + x_3 + x_4 = 1

Use the KKT conditions to find the optimal solution. Hint: the solution is (1/4, 1/4, 1/4, 1/4).
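The KKT system here is linear: stationarity gives 2x_i + ν = 0, so all x_i are equal, and the constraint forces x_i = 1/4 with ν = −1/2. A sketch solving that same linear system numerically:

```python
import numpy as np

# KKT system for min ||x||^2 s.t. 1^T x = 1:
#   2*x + nu*1 = 0   (stationarity)
#   1^T x      = 1   (primal feasibility)
# Stack into one linear system in (x, nu).
n = 4
K = np.zeros((n + 1, n + 1))
K[:n, :n] = 2 * np.eye(n)
K[:n, n] = 1.0          # nu column
K[n, :n] = 1.0          # constraint row
rhs = np.zeros(n + 1)
rhs[n] = 1.0

sol = np.linalg.solve(K, rhs)
x, nu = sol[:n], sol[n]
assert np.allclose(x, 0.25)   # optimal solution (1/4, 1/4, 1/4, 1/4)
assert np.isclose(nu, -0.5)   # multiplier: 2*(1/4) + nu = 0
```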
Exercise 2.5: Consider the following problem:

    minimize    t f_0(x) − Σ_{i=1}^m log(−f_i(x))
    subject to  Ax = b

with the central path x*(t). Given u > p*, let z*(u) be the solution of

    min_x  −log(u − f_0(x)) − Σ_{i=1}^m log(−f_i(x))
    s.t.   Ax = b

Show that the set of z*(u) is also the central path of the first problem. Hint: for any u > p*, find t such that x*(t) = z*(u).
Exercise 2.6: Consider the log-barrier problem:

    minimize    t f_0(x) + φ(x)
    subject to  Ax = b

where φ(x) = −Σ_{i=1}^m log(−f_i(x)). Denote by x* the optimal solution of the log-barrier problem and by p* the optimal value of the original problem. Prove that f_0(x*) − p* ≤ m/t.
Hint: Write the KKT conditions for this problem, define λ_i* = −1/(t f_i(x*)), and show that (λ*, ν*) is a feasible point of the dual of the original problem.
Exercise 2.7: Consider the convex optimization problem:

    minimize    f(x)
    subject to  Ax = b

Prove that, for the projected subgradient method, if the starting point x^0 is feasible then for any k ≥ 0, x^k is also feasible, i.e. Ax^k = b. Hint: proof by induction.
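A numerical illustration of the induction argument (the problem data below is made up, and f(x) = ||x||_1 is an arbitrary choice of objective): projecting each subgradient step back onto {x : Ax = b} via P(y) = y − A^T(AA^T)^{-1}(Ay − b) keeps every iterate feasible.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
b = rng.standard_normal(2)
f_grad = lambda x: np.sign(x)            # a subgradient of f(x) = ||x||_1

# Euclidean projection onto the affine set {x : Ax = b}
AAt_inv = np.linalg.inv(A @ A.T)
proj = lambda y: y - A.T @ (AAt_inv @ (A @ y - b))

x = proj(np.zeros(5))                    # feasible starting point
for k in range(50):
    x = proj(x - 0.1 / (k + 1) * f_grad(x))
    assert np.allclose(A @ x, b)         # every iterate stays feasible
```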

Chapter 3
Branch and Bound Method
Exercise 3.1: Solve this integer programming problem:

    max   U = 5x_1 + 4x_2
    s.t.  x_1 + x_2 ≤ 5
          10x_1 + 6x_2 ≤ 45
          x_1, x_2 ∈ N

Hint: First assume x_1 and x_2 can take real values and apply linear programming to obtain a solution. From that, choose a variable whose value is not an integer and branch on it, i.e. add constraints using the nearest smaller integer and nearest larger integer of this variable.
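For a problem this small, the branch-and-bound answer can be cross-checked by brute-force enumeration of the feasible integer points (the bounds 0..5 on each variable follow from the constraints):

```python
best = max(
    (5 * x1 + 4 * x2, x1, x2)
    for x1 in range(6) for x2 in range(6)
    if x1 + x2 <= 5 and 10 * x1 + 6 * x2 <= 45
)
# the LP relaxation gives (3.75, 1.25) with U = 23.75; branching lands on:
assert best == (23, 3, 2)
```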

Exercise 3.2: Solve this integer problem using branch and bound:

    min   x_1^2 + (x_3 − 1)^2 + (x_2 − x_3 + 2)^2
    s.t.  x_1 + x_2 + x_3 = 5
          x_i ≥ 0 and x_i ∈ Z

Hint: Relax the problem by letting x_i take real values; the relaxation is a convex quadratic optimization. You can use the quadprog function in MATLAB to solve the relaxed version efficiently and obtain the lower bound. The answer is (1, 1, 3).
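The quoted answer can be verified by enumerating all integer points on the constraint surface; enumeration finds the minimum value 5, attained at (1, 1, 3) and also at (0, 2, 3):

```python
from itertools import product

U = lambda x1, x2, x3: x1**2 + (x3 - 1)**2 + (x2 - x3 + 2)**2
feas = [p for p in product(range(6), repeat=3) if sum(p) == 5]
best_val = min(U(*p) for p in feas)
minimizers = [p for p in feas if U(*p) == best_val]

assert best_val == 5
assert (1, 1, 3) in minimizers       # the answer quoted in the hint
assert (0, 2, 3) in minimizers       # an alternative optimum with the same value
```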
Cutting Plane Method:
Exercise 3.3: Prove that the extra cut x_1 + 2x_2 ≤ 5 is a valid constraint for the integer linear programming problem:

    min   x_1 + 2x_2
    s.t.  x_1 + 5x_2 ≤ 5
          x_1 + x_2 ≤ 4
          x_1, x_2 ≥ 0, x_1, x_2 ∈ Z

From here, can you guess the optimal solution?
Exercise 3.4: Apply the cutting plane method to solve the following integer linear programming problem:

    max   f(x_1, x_2) = 7x_1 + 10x_2
    s.t.  −x_1 + 3x_2 ≤ 6
          7x_1 + x_2 ≤ 35
          x_1, x_2 ≥ 0, x_1, x_2 ∈ Z

Hint: Denote by x_3 and x_4 the slack variables. Solve the linear programming relaxation using the simplex method. Keep adding new constraints based on the solutions using the Gomory method until an integer solution is obtained. The solution is (x_1, x_2, x_3, x_4) = (4, 3, 1, 4).

Chapter 4: Stochastic Optimization


Exercise 4.1 (Lobo 1998): Consider the stochastic linear programming problem:

    min   c^T x
    s.t.  Pr(a^T x ≤ b) ≥ p

Assume the vector a is a Gaussian random vector with mean ā, covariance matrix Σ, and p ≥ 0.5. Prove that this problem is equivalent to a second-order cone programming problem:

    min   c^T x
    s.t.  ā^T x + Φ^{-1}(p) ||Σ^{1/2} x||_2 ≤ b

where Φ^{-1} is the inverse of the normal cumulative distribution function, i.e. Φ(z) = (1/√(2π)) ∫_{−∞}^{z} e^{−t²/2} dt.
Hint: Denote u = a^T x; this is also a Gaussian random variable. Using the definition of the normal distribution we can obtain the closed form of the probability. Since p ≥ 0.5 we have Φ^{-1}(p) ≥ 0.
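A quick Monte Carlo check of the equivalence for a fixed x (all the data below is made up for the demo): if b is chosen so the second-order cone constraint holds with equality, the empirical probability Pr(a^T x ≤ b) should come out close to p.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
abar = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])   # covariance of a
x = np.array([0.7, -0.3])
p = 0.9

# choose b so the SOC constraint holds with equality
sigma_u = np.sqrt(x @ Sigma @ x)             # = ||Sigma^{1/2} x||_2
b = abar @ x + norm.ppf(p) * sigma_u

# empirical Pr(a^T x <= b) with a ~ N(abar, Sigma)
a = rng.multivariate_normal(abar, Sigma, size=200_000)
emp = np.mean(a @ x <= b)
assert abs(emp - p) < 0.005
```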

Exercise 4.2: Prove that if x_1, x_2, ..., x_N are independent exponentially distributed random variables with means E[x_i] = 1/λ_i, then

    Pr( x_1 ≥ Σ_{i=2}^N x_i ) = Π_{i=2}^N 1/(1 + λ_1/λ_i)

Hint: Use the formula Pr(x > y) = ∫_0^∞ Pr(x > t) f_y(t) dt. This formula can be used to calculate the outage probability at a base station where fading follows a Rayleigh distribution.
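A Monte Carlo sanity check of the closed form (the rates are chosen arbitrarily): with λ = (1, 2, 3) the formula gives (1/(1 + 1/2)) · (1/(1 + 1/3)) = 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0, 3.0])            # rates; E[x_i] = 1/lam_i

# closed-form probability from the exercise
closed = np.prod(1.0 / (1.0 + lam[0] / lam[1:]))
assert np.isclose(closed, 0.5)

# empirical estimate of Pr(x1 >= x2 + ... + xN)
x = rng.exponential(1.0 / lam, size=(500_000, 3))
emp = np.mean(x[:, 0] >= x[:, 1:].sum(axis=1))
assert abs(emp - closed) < 0.005
```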

Exercise 4.3: Consider the following inequality:

    f ≤ Σ_{m=1}^M (a_m^T c) p_m

The vector c = (c_1, ..., c_N) is a random vector with covariance matrix G and mean c̄. Prove that if c belongs to the uncertainty set U = {c̄ + G^{1/2} u : ||u||_2 ≤ ε}, then the robust solution satisfies

    f ≤ Σ_{m=1}^M ( a_m^T c̄ − ε √(a_m^T G a_m) ) p_m

Hint: Use the Cauchy–Schwarz inequality. Note that a_m^T c̄ and √(a_m^T G a_m) are actually the mean and standard deviation of a_m^T c. This problem shows that an ellipsoid is a natural way to model the uncertainty set.

Chapter 5
Exercise 5.1: Using dynamic programming, write down the steps and the recursive formula required to solve the knapsack problem:

    min_m  z = r_1 m_1 + r_2 m_2 + ... + r_n m_n     (6)
    s.t.   w_1 m_1 + ... + w_n m_n ≤ W
           m_i ∈ N

where m_i is the number of units and r_i is the return of each unit of item i.
Hint: Define the stages as the item indices and the state x_i as the total weight available for items i through n. The contribution at stage i is the function f_i(x_i), the optimal total return for items i through n. Show that the return at stage i is a function of x_i only.
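The recursion can be sketched as follows, using the usual knapsack convention of maximizing total return (an assumption, since the statement above is ambiguous on the direction) and made-up data. The state is the remaining capacity c, with f(c) = max_i { r_i + f(c − w_i) }:

```python
def knapsack(values, weights, W):
    """Unbounded knapsack: best total return within capacity W."""
    f = [0] * (W + 1)                      # f[c] = optimal return with capacity c
    for c in range(1, W + 1):
        f[c] = max(
            [f[c - w] + r for r, w in zip(values, weights) if w <= c],
            default=0,
        )
    return f[W]

# made-up data: returns r_i, weights w_i, capacity W
assert knapsack([60, 100, 120], [10, 20, 30], 50) == 300   # five units of item 1
```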

Exercise 5.2: A traveling salesman needs to travel from location A to another location H by passing through some other locations in between. Given the duration of travel between each pair of locations, find his minimum travel time from A to H.

[Travel-time table between locations A, B, C, D, E, F, H; the entries were garbled in extraction and are omitted here.]
Exercise 5.3: Solve the following nonlinear programming problem using dynamic programming:

    max   U = x_1 (1 − x_2) x_3     (7)
    s.t.  x_1 − x_2 + x_3 ≤ 1       (8)
          x_i ≥ 0                   (9)

Hint: Define the state of the problem as the remaining resource in the constraint as each variable is introduced, i.e. S_1 = 1, S_2 = 1 − x_1, S_3 = 1 − x_1 + x_2. Apply the backward recursion to the objective function. The optimal solution is (2/3, 1/3, 2/3).
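A coarse grid search can be used to check the optimum of Exercise 5.3 numerically; it lands on (2/3, 1/3, 2/3) with value 8/27:

```python
import itertools

grid = [i / 30 for i in range(31)]         # [0, 1] in steps of 1/30
best = max(
    (x1 * (1 - x2) * x3, x1, x2, x3)
    for x1, x2, x3 in itertools.product(grid, repeat=3)
    if x1 - x2 + x3 <= 1
)
val, x1, x2, x3 = best
assert abs(val - 8 / 27) < 1e-9
assert (x1, x2, x3) == (2 / 3, 1 / 3, 2 / 3)
```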
Exercise 5.4: Using dynamic programming, solve the nonlinear integer problem:

    min   U = (x_1 + 2)^2 + x_2 x_3 + (x_4 − 5)^2     (10)
    s.t.  x_1 + x_2 + x_3 + x_4 ≤ 5                   (11)
          x_i ≥ 0, x_i ∈ Z                            (12)

Hint: Define the state as in the previous exercise. Group x_2 and x_3 into one state. The solution is (0, 0, 0, 5).
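Brute force over the small feasible set of Exercise 5.4 confirms the quoted solution:

```python
from itertools import product

U = lambda x: (x[0] + 2) ** 2 + x[1] * x[2] + (x[3] - 5) ** 2
feas = [x for x in product(range(6), repeat=4) if sum(x) <= 5]
best = min(feas, key=U)
assert best == (0, 0, 0, 5)
assert U(best) == 4
```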
