
Optimization — Introduction

Optimization in Systems and Control Optimization — Introduction 1 / 30


Optimization deals with how to do things in the best possible
manner:
◮ Design of multi-criteria controllers
◮ Clustering in fuzzy modeling
◮ Trajectory planning of robots
◮ Scheduling in process industry
◮ Estimation of system parameters
◮ Simulation of continuous-time systems on digital computers
◮ Design of predictive controllers with input saturation

Related courses:
◮ SC4060: Model predictive control
◮ SC4040: Filtering & identification
◮ WI4217: Optimal control theory: Fundamentals & related topics



Overview

Three subproblems:
Formulation:
    Translation of engineering demands and requirements into a well-defined mathematical optimization problem
Initialization:
    Choice of the right algorithm
    Choice of initial values for the parameters
Optimization procedure:
    Various optimization techniques
    Various computer platforms



Teaching goals

I. Optimization problem → most efficient and best-suited optimization algorithm
II. Engineering problem → optimization problem:
◮ specifications → mathematical formulation
◮ simplifications/approximations?
◮ efficiency
◮ implementation



Contents

Part I. Optimization Techniques


Part II. Formulating the Controller Design Problem as an
Optimization Problem



Contents of Part I

I. Optimization Techniques
1. Introduction
2. Linear Programming
3. Quadratic Programming
4. Nonlinear Optimization
5. Constraints in Nonlinear Optimization
6. Convex Optimization
7. Global Optimization
8. Summary
9. Matlab Optimization Toolbox
10. Multi-Objective Optimization
Appendix E. Integer Optimization



Mathematical framework

    min_x f(x)
    s.t. h(x) = 0
         g(x) ≤ 0

◮ f : objective function
◮ x : parameter vector
◮ h(x) = 0 : equality constraints
◮ g(x) ≤ 0 : inequality constraints

f(x) is a scalar
g(x) and h(x) may be vectors
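As an illustration, the standard form maps directly onto `scipy.optimize.minimize`; the objective f and constraints h, g below are made-up examples, not taken from the course:

```python
# Sketch: solving min_x f(x) s.t. h(x) = 0, g(x) <= 0 with SciPy's SLSQP.
from scipy.optimize import minimize

f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2   # scalar objective f(x)
h = lambda x: x[0] + x[1] - 2                     # equality constraint h(x) = 0
g = lambda x: x[0] - 1.5                          # inequality constraint g(x) <= 0

res = minimize(f, x0=[0.0, 0.0], method="SLSQP",
               constraints=[{"type": "eq", "fun": h},
                            # SLSQP expects "ineq" as fun(x) >= 0, so pass -g
                            {"type": "ineq", "fun": lambda x: -g(x)}])
print(res.x)  # minimizer on the line x1 + x2 = 2, here (0.5, 1.5)
```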



◮ Unconstrained optimization:

      f(x*) = min_x f(x)

  where

      x* = arg min_x f(x)

◮ Constrained optimization:

      f(x*) = min_x f(x)
      h(x*) = 0
      g(x*) ≤ 0

  where

      x* = arg min_x { f(x) : h(x) = 0, g(x) ≤ 0 }



Maximization = Minimization

    max_x f(x) = − min_x (−f(x))

[Figure: f and −f plotted together; the maximizer x* of f is the minimizer of −f]
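A quick numeric sanity check of this identity, using an assumed concave toy function:

```python
# max_x f(x) computed as -min_x(-f(x)); f is an assumed toy example.
from scipy.optimize import minimize_scalar

f = lambda x: -(x - 3) ** 2 + 5          # concave, maximum f(3) = 5
res = minimize_scalar(lambda x: -f(x))   # minimize -f instead
max_value = -res.fun                     # flip the sign back
print(res.x, max_value)                  # maximizer 3, maximum 5
```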



Classes of optimization problems
◮ Linear programming

      min_x cᵀx ,  Ax = b ,  x ≥ 0
      min_x cᵀx ,  Ax ≤ b ,  x ≥ 0

◮ Quadratic programming

      min_x xᵀHx + cᵀx ,  Ax = b ,  x ≥ 0
      min_x xᵀHx + cᵀx ,  Ax ≤ b ,  x ≥ 0

◮ Convex optimization

      min_x f(x) ,  g(x) ≤ 0   where f and g are convex

◮ Nonlinear optimization

      min_x f(x) ,  h(x) = 0 ,  g(x) ≤ 0

  where f, h, and g are non-convex and nonlinear
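The linear-programming form min_x cᵀx, Ax ≤ b, x ≥ 0 can be handed directly to `scipy.optimize.linprog`; the data below are made up for illustration:

```python
# LP sketch: min c^T x  s.t.  A x <= b, x >= 0 (illustrative data).
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])              # objective: minimize -x1 - 2 x2
A = np.array([[1.0, 1.0]])              # single constraint x1 + x2 <= 4
b = np.array([4.0])
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x, res.fun)                   # optimum at a vertex: (0, 4), value -8
```

As is typical for LPs, the solver returns a vertex of the feasible polyhedron.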
Convex set
A set C in ℝⁿ is convex if for all x, y ∈ C and λ ∈ [0, 1]:

    (1 − λ)x + λy ∈ C

[Figure: points x (λ = 0) and y (λ = 1) in C with the segment point (1 − λ)x + λy between them]



Unimodal function

A function f is unimodal if
a) the domain dom(f) is a convex set,
b) ∃ x* ∈ dom(f) such that

      f(x*) ≤ f(x)   ∀ x ∈ dom(f)

c) for all x0 ∈ dom(f) there is a trajectory x(λ) ∈ dom(f) with x(0) = x0 and x(1) = x* such that

      f(x(λ)) ≤ f(x0)   ∀ λ ∈ [0, 1]



Inverted Mexican hat

    f(x) = xᵀx / (1 + xᵀx) ,   x ∈ ℝ²

[Figure: surface plot of f and its contour lines for x1, x2 ∈ [−4, 4]]



Rosenbrock function

    f(x1, x2) = 100 (x2 − x1²)² + (1 − x1)²

[Figure: surface plot of f and its contour lines]
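The Rosenbrock function is a standard test problem; its minimum is at (1, 1) with value 0. A minimal check with a quasi-Newton solver (the starting point is an arbitrary choice):

```python
# Minimizing the Rosenbrock function above with BFGS.
from scipy.optimize import minimize

rosen = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
res = minimize(rosen, x0=[-1.5, 2.0], method="BFGS")
print(res.x)   # converges to (1, 1)
```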



Quasiconvex function

A function f is quasiconvex if
a) the domain dom(f) is a convex set,
b) for all x, y ∈ dom(f) and 0 ≤ λ ≤ 1 there holds

      f((1 − λ)x + λy) ≤ max(f(x), f(y))



Quasiconvex function

Alternative definition:
A function f is quasiconvex if the sublevel set

      L(α) = { x ∈ dom(f) : f(x) ≤ α }

is convex for every real number α



Convex function

A function f is convex if
a) the domain dom(f) is a convex set,
b) for all x, y ∈ dom(f) and 0 ≤ λ ≤ 1 there holds

      f((1 − λ)x + λy) ≤ (1 − λ)f(x) + λf(y)



Convex function

[Figure: graph of a convex function f; for x (λ = 0) and y (λ = 1), the chord value (1 − λ)f(x) + λf(y) lies above the function value f((1 − λ)x + λy)]
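The defining inequality can be spot-checked numerically; f(x) = x² here is an assumed example of a convex function:

```python
# Random spot-check of f((1-λ)x + λy) <= (1-λ)f(x) + λf(y) for f(x) = x^2.
import numpy as np

f = lambda x: x ** 2
rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.uniform(-10, 10, size=2)
    lam = rng.uniform(0, 1)
    chord = (1 - lam) * f(x) + lam * f(y)               # right-hand side
    ok = ok and f((1 - lam) * x + lam * y) <= chord + 1e-12
print(ok)
```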
Test: Unimodal, quasiconvex, convex
Given: Function f with graph [figure not reproduced]

Question: f is (check all that apply)

☐ unimodal
☐ quasiconvex
☐ convex
Gradient and Hessian

Gradient of f:

    ∇f(x) = [ ∂f/∂x1  ∂f/∂x2  …  ∂f/∂xn ]ᵀ

Hessian of f:

           ⎡ ∂²f/∂x1²     ∂²f/∂x1∂x2   …   ∂²f/∂x1∂xn ⎤
           ⎢ ∂²f/∂x1∂x2   ∂²f/∂x2²     …   ∂²f/∂x2∂xn ⎥
    H(x) = ⎢     ⋮             ⋮        ⋱       ⋮      ⎥
           ⎣ ∂²f/∂x1∂xn   ∂²f/∂x2∂xn   …   ∂²f/∂xn²   ⎦
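The gradient and Hessian above can be approximated by central finite differences; the quadratic test function is an assumed example, so its derivatives are known exactly:

```python
# Central-difference gradient and Hessian matching the definitions above.
import numpy as np

def grad(f, x, eps=1e-6):
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)      # ∂f/∂x_i
    return g

def hess(f, x, eps=1e-4):
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            # four-point formula for the mixed second derivative
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps ** 2)
    return H

f = lambda x: x[0] ** 2 + 3 * x[0] * x[1]             # toy function
print(grad(f, [1.0, 2.0]))                            # analytic: [8, 3]
print(hess(f, [1.0, 2.0]))                            # analytic: [[2, 3], [3, 0]]
```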



Jacobian

x: vector
h: vector-valued

            ⎡ ∂h1/∂x1   ∂h2/∂x1   …   ∂hm/∂x1 ⎤
            ⎢ ∂h1/∂x2   ∂h2/∂x2   …   ∂hm/∂x2 ⎥
    ∇h(x) = ⎢    ⋮         ⋮      ⋱      ⋮    ⎥
            ⎣ ∂h1/∂xn   ∂h2/∂xn   …   ∂hm/∂xn ⎦


Graphical interpretation of gradient

◮ Directional derivative of a function f in x0 in the direction of a unit vector β:

      Dβ f(x0) = ∇ᵀf(x0) · β = ‖∇f(x0)‖₂ cos θ

  with θ the angle between ∇f(x0) and β

◮ Dβ f(x0) is maximal if ∇f(x0) and β are parallel
  → function values exhibit the largest increase in the direction of ∇f(x0)
  → function values exhibit the largest decrease in the direction of −∇f(x0)
◮ −∇f(x0) is called the steepest-descent direction
◮ Dβ f(x0) is equal to 0 (i.e., the function values do not change) if ∇f(x0) ⊥ β
  → ∇f(x0) is perpendicular to the contour line through x0
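A numeric check of Dβ f(x0) = ∇ᵀf(x0) · β, with an assumed f and point x0; along the normalized gradient the directional derivative equals ‖∇f(x0)‖₂:

```python
# Directional derivative via a central difference, compared with ||∇f(x0)||.
import numpy as np

f = lambda x: x[0] ** 2 + x[1] ** 2
x0 = np.array([1.0, 2.0])
gradient = np.array([2 * x0[0], 2 * x0[1]])       # analytic ∇f(x0) = [2, 4]

beta = gradient / np.linalg.norm(gradient)        # unit vector along ∇f(x0)
eps = 1e-6
d_beta = (f(x0 + eps * beta) - f(x0 - eps * beta)) / (2 * eps)
print(d_beta, np.linalg.norm(gradient))           # both ≈ ‖∇f(x0)‖ = √20
```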



[Figure: tan α = Dβ f(x0); graph of f above the (x1, x2)-plane showing f(x0), the gradient ∇f(x0), the unit vector β, and the angle θ between them]


Subgradient
Let f be a convex function.
∇f(x0) is a subgradient of f in x0 if

    f(x) ≥ f(x0) + ∇ᵀf(x0)(x − x0)

for all x ∈ ℝⁿ

[Figure: graph of f with the supporting line f(x0) + ∇ᵀf(x0)(x − x0); points x̃0 and x0 marked on the x-axis]
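At a kink of a convex function the subgradient is not unique; a sketch with the assumed example f(x) = |x| at x0 = 0, where any slope d ∈ [−1, 1] satisfies the subgradient inequality (d = 0.3 is an arbitrary pick):

```python
# Check f(x) >= f(x0) + d (x - x0) for f(x) = |x|, x0 = 0, d = 0.3.
import numpy as np

f = lambda x: abs(x)
x0, d = 0.0, 0.3
xs = np.linspace(-5.0, 5.0, 101)
ok = all(f(x) >= f(x0) + d * (x - x0) - 1e-12 for x in xs)
print(ok)
```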
Conditions for extremum → learn by heart!
◮ Unconstrained optimization problem:  min_x f(x)
  Zero-gradient condition:
      ∇f(x) = 0
◮ Equality-constrained optimization problem:  min_x f(x)  s.t.  h(x) = 0
  Lagrange conditions:
      ∇f(x) + ∇h(x) λ = 0
      h(x) = 0
◮ Inequality-constrained optimization problem:  min_x f(x)  s.t.  h(x) = 0, g(x) ≤ 0
  Kuhn-Tucker conditions:
      ∇f(x) + ∇g(x) μ + ∇h(x) λ = 0
      μᵀ g(x) = 0
      μ ≥ 0
      h(x) = 0
      g(x) ≤ 0
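A hand check of the Kuhn-Tucker conditions on a one-dimensional toy problem (made up for illustration): min (x − 2)² s.t. g(x) = x − 1 ≤ 0. The constraint is active at x* = 1, and stationarity 2(x* − 2) + μ = 0 gives μ = 2:

```python
# Verifying the Kuhn-Tucker conditions at x* = 1, μ = 2 for the toy problem.
x_star, mu = 1.0, 2.0
grad_f = 2 * (x_star - 2)            # ∇f(x*) = -2
grad_g = 1.0                         # ∇g(x*)
g = x_star - 1                       # g(x*) = 0 (constraint active)

print(grad_f + mu * grad_g)          # stationarity: 0
print(mu * g)                        # complementary slackness: 0
print(mu >= 0 and g <= 0)            # dual and primal feasibility: True
```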
Convergence

    β1 = lim_{k→∞} ‖x_{k+1} − x*‖ / ‖x_k − x*‖

Linearly convergent if 0 < β1 < 1
Superlinearly convergent if β1 = 0

    β2 = lim_{k→∞} ‖x_{k+1} − x*‖ / ‖x_k − x*‖²

Quadratically convergent if 0 < β2 < 1
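The ratio β1 can be observed directly on a toy iteration (assumed example): gradient descent with step α on f(x) = x² gives x_{k+1} = (1 − 2α)x_k, i.e. linear convergence with factor |1 − 2α|:

```python
# Observing linear convergence: ratios |x_{k+1}| / |x_k| stay constant.
alpha = 0.4
x = 1.0                        # x* = 0 for f(x) = x^2
ratios = []
for _ in range(20):
    x_new = x - alpha * 2 * x  # gradient step: x - α f'(x)
    ratios.append(abs(x_new) / abs(x))
    x = x_new
print(ratios[-1])              # ≈ |1 - 2·0.4| = 0.2, so β1 = 0.2
```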



Stopping criteria
◮ Linear and quadratic programming: finite number of steps
◮ Convex optimization: ‖f(x_k) − f(x*)‖ ≤ ε_f , g(x_k) ≤ ε_g , and for the ellipsoid algorithm: ‖x_k − x*‖ ≤ ε_x
◮ Unconstrained nonlinear optimization: ‖∇f(x_k)‖ ≤ ε_∇
◮ Constrained nonlinear optimization:

      ‖∇f(x_k) + ∇g(x_k) μ + ∇h(x_k) λ‖ ≤ ε_KT1
      ‖μᵀ g(x_k)‖ ≤ ε_KT2
      μ ≥ −ε_KT3
      ‖h(x_k)‖ ≤ ε_KT4
      g(x_k) ≤ ε_KT5

◮ Maximum number of steps
◮ Heuristic stopping criteria (last resort):
  ‖x_{k+1} − x_k‖ ≤ ε_x or ‖f(x_k) − f(x_{k+1})‖ ≤ ε_f
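The unconstrained criterion ‖∇f(x_k)‖ ≤ ε_∇ in a bare steepest-descent loop (toy objective; step size and tolerance are arbitrary choices):

```python
# Steepest descent stopped by the gradient-norm criterion.
import numpy as np

f_grad = lambda x: 2 * x                         # ∇f for f(x) = ||x||^2
x = np.array([3.0, -4.0])
eps, alpha, steps = 1e-6, 0.25, 0
while np.linalg.norm(f_grad(x)) > eps and steps < 1000:
    x = x - alpha * f_grad(x)                    # steepest-descent step
    steps += 1
print(steps, np.linalg.norm(f_grad(x)) <= eps)   # converged in a few dozen steps
```

The step cap guards against the criterion never firing, which is the role of the "maximum number of steps" criterion above.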
Summary

◮ Standard form of an optimization problem:

      min_x f(x)  s.t.  h(x) = 0, g(x) ≤ 0

◮ Classes of optimization problems: linear, quadratic, convex, nonlinear
◮ Convex sets & functions
◮ Gradient, subgradient, and Hessian
◮ Conditions for extremum
◮ Stopping criteria



Test: Gradient
Given: Level lines of a unimodal function f with minimum x*, a point x0, and vectors v1, v2, v3, v4, v5, one of which is equal to ∇f(x0).
Question: Which vector vi is equal to ∇f(x0)?

[Figure: contour plot showing x*, the point x0, and the candidate vectors v1–v5 drawn at x0]

