
WMS

# MA252 Combinatorial Optimisation

## Revision Guide

Written by Lewis Woodgate


## Contents

Algorithms

1 Definitions and Notation
2 Minimum Spanning Tree
3 Shortest Path
4 Maximum Flow
5 Matchings

Linear Programming

6 Standard Forms
7 Simplex Algorithm
8 Tableau Method
9 Linear Duality

## Introduction
This revision guide for MA252 Combinatorial Optimisation has been designed as an aid to revision, not
a substitute for it. To begin with, it contains very few proofs and examples. The best way to revise is to
use this revision guide as a quick reference and just keep trying example sheets and past exam questions.
This course is more applied than many others, with the first half being dedicated to specific examples
of linear programs and the latter part covering a more general case. You should understand the ideas
behind why these algorithms work, as well as being able to apply them. Good luck.

Disclaimer: Use at your own risk. No guarantee is made that this revision guide is accurate or
complete, or that it will improve your exam performance, or make you 20% cooler. Use of this guide
will increase entropy, contributing to the heat death of the universe. Contains no GM ingredients. Your
mileage may vary. All your base are belong to us.

## Authors
Written in 2013 by L.S.Woodgate (l.s.woodgate@warwick.ac.uk).
Based upon lectures given by Diane Maclagan at the University of Warwick, 2013.
Any corrections or improvements should be entered into our feedback form at http://tinyurl.com/WMSGuides (alternatively email revision.guides@warwickmaths.org).

## History
First Edition: June 15, 2013
Second Edition (first printed): February 21, 2016

# Algorithms

## 1 Definitions and Notation
Definition 1.1 (Algorithm). An algorithm is a finite list of instructions that solves a problem. An algorithm must be guaranteed to terminate.
You may be required to write an algorithm in pseudo-code to solve a particular problem; remember to handle any cases that could cause infinite loops. Your algorithm must terminate!
Definition 1.2 (Big-O). Let f, g : R → R. Then we say f (x) is O(g(x)) as x → ∞ if ∃x0 , M > 0 s.t.
∀x > x0 we have |f (x)| < M |g(x)|.
In words, f (x) is big-O of g(x) if it is eventually bounded by a constant multiple of g(x).
Big-O can be used to give an indication of how long an algorithm takes for different input sizes. For
example,
• If our algorithm is O(n) then doubling the input will double the running time;
• If our algorithm is O(n^2) then doubling the input will quadruple the running time;
• If our algorithm is O(2^n) then doubling the input will square the running time.
Sometimes we need to use bit size to decide how long an algorithm takes. A number n ≥ 1 has bit size k if
2^(k−1) ≤ n < 2^k.
This can make a huge difference: if our algorithm is O(n) in the value of n, then in terms of bit size it is O(2^k).
If, in our algorithm, n counts something, then use n directly when using big-O notation. If n is a
number then use bit size.
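The definition can be checked mechanically; below is a small Python sketch of the bit-size computation (Python's built-in `int.bit_length` computes the same quantity).

```python
def bit_size(n):
    """Bit size k of n >= 1, i.e. the unique k with 2**(k-1) <= n < 2**k."""
    k = 0
    while n > 0:
        n //= 2   # each halving strips one binary digit
        k += 1
    return k
```

For example, 8 = 1000 in binary has bit size 4, since 2^3 ≤ 8 < 2^4.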
Definition 1.3 (Decision Problem). A problem is a decision problem if it has a yes/no answer. We say a decision problem runs in polynomial time (“P”) if, for an input of size n, it can be solved in time O(n^k) for some k ∈ N. We say it runs in non-deterministic polynomial time (“NP”) if a “yes” answer can be checked in polynomial time.
Definition 1.4 (Graph). A graph (V, E) is a pair consisting of a set V and collection E of unordered
pairs of elements of V , called edges.
Definition 1.5 (Degree). The degree of a vertex v ∈ V is the number of edges incident to it, i.e. deg(v) = |{(u, v) ∈ E : u ∈ V }|.
Lemma 1.6 (Handshake lemma). Σ_{v ∈ V} deg(v) = 2|E|.

Definition 1.7 (Path). For v, w ∈ V, a path from v to w is a sequence v = v0, v1, v2, . . . , vk = w with (vi, vi+1) ∈ E for all i = 0, 1, . . . , k − 1. A path is simple if vi ≠ vj for 1 ≤ i < j ≤ k − 1. A simple path from a vertex v to itself is a circuit.
Definition 1.8 (Properties of graphs).

• A graph is simple if it contains no loops or multiple edges (i.e. if E is a set);

• A graph is connected if ∀u, v ∈ V , there is a path from u to v;
• A graph is weighted if there is a function ω : E → R associated with it. ω(e) is referred to as the
weight (sometimes cost) of an edge.
Definition 1.9 (Subgraph). A subgraph G′ = (V′, E′) of a graph G = (V, E) is a graph such that V′ ⊆ V and E′ ⊆ E. A subgraph is a spanning subgraph if V′ = V.
Definition 1.10 (Trees and Forests). A forest is a graph without circuits. A tree is a connected forest.
In a tree, |E| = |V| − 1; this can be shown by induction.
Definition 1.11 (Cut). A cut of G = (V, E) is a decomposition V = V1 ∪ V2 with V1 ∩ V2 = ∅. We
denote δ(V1 ) = {(u, v) ∈ E : u ∈ V1 , v ∈ V2 }. Sometimes denoted c(V1 , V2 ).

## 2 Minimum Spanning Tree

Given a connected weighted graph G = (V, E) with weight function ω such that ω(e) ≥ 0 for all e ∈ E, we want to find a minimum-cost connected spanning subgraph. Given these conditions, the subgraph can be taken to be a tree.

Kruskal’s Algorithm
Input Connected weighted graph G = (V, E)
Output Spanning tree H = (V, F )
1) F = ∅, H = (V, F )
2) While |F | < |V | − 1
Find the lowest cost edge e ∈ E\F such that (V, F ∪ {e}) is still a forest. F = F ∪ {e}.
3) Output H = (V, F )

Prim’s Algorithm
Input Connected weighted graph G = (V, E)
Output Spanning tree H = (V, F )
1) F = ∅, H = (V, F )
2) While |F | < |V | − 1
Find the lowest cost edge e ∈ E\F such that the edges of F ∪ {e} form a tree (so e must join the tree grown so far to a new vertex). F = F ∪ {e}.
3) Output H = (V, F )
These must terminate (meaning they are algorithms) as long as the set E is finite, as the number of
edges in E\F decreases by one in every step. To demonstrate they are correct a definition and theorem
are required.
Definition 2.1. Let G = (V, E) be a graph. We say Ê ⊆ E is extendible to an MST if there is a minimum spanning tree (V, T) of G such that Ê ⊆ T.
Theorem 2.2. Suppose Ê ⊆ E with Ê extendible to an MST. Let there be a cut V = V1 ∪ V2 with δ(V1) ∩ Ê = ∅. Choose e ∈ δ(V1) with ω(e) ≤ ω(e′) for all e′ ∈ δ(V1). Then Ê ∪ {e} is extendible to an MST.
Proof. If e ∈ T then we are done. If not, consider f ∈ δ(V1) ∩ T ≠ ∅ and show that (V, T\{f} ∪ {e}) is an MST.
With this theorem we can show that at each stage in the algorithm, F is extendible to an MST. This is because we can consider the set S of vertices v with deg(v) > 0 in (V, F), and then apply the above theorem to the cut V = S ∪ (V\S).
Thus the output H = (V, F) is extendible to an MST, but |F| = |V| − 1, so it must itself be a minimum spanning tree. We have shown that both algorithms terminate and are correct.

## 3 Shortest Path
Given a weighted digraph G = (V, E) and a vertex u ∈ V, find the shortest path from u to all other vertices.
Definition 3.1 (Digraph). A digraph (directed graph) (V, E) is a pair consisting of a set V and collection
E of ordered pairs of elements of V , called edges. Most definitions from standard graphs carry over to
digraphs.
Definition 3.2. A feasible potential is a vector y ∈ R^|V| such that
• yu = 0,
• yv ≤ yv′ + ω((v′, v)) for all (v′, v) ∈ E.

Ford’s Algorithm
Input Weighted digraph G = (V, E), vertex u ∈ V
Output Feasible potential y and predecessor vector p, or “Negative cost circuit exists”

1) Set yu = 0 and yv = ∞ for all v ≠ u

2) Fix an ordering of E, set i = 1

3) Cycle through the edges in order. If for e = (v, w) we have yw > yv + ω(e), then set yw = yv + ω(e), pw = v.

4) If i < |V| then set i = i + 1 and go to step 3

5) If y is a feasible potential, output y, p. Else, output “Negative cost circuit exists”
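The steps above (better known as the Bellman-Ford algorithm) can be sketched in Python; the edge-list format (v, w, weight) is an assumption of this sketch:

```python
def ford(vertices, edges, u):
    """Shortest paths from u by Ford's (Bellman-Ford) algorithm.

    edges is a list of (v, w, weight) directed edges. Returns (y, p) with
    distance labels y and predecessors p, or None if a negative-cost
    circuit is reachable from u.
    """
    INF = float('inf')
    y = {v: INF for v in vertices}
    y[u] = 0
    p = {}
    for _ in range(len(vertices) - 1):   # |V| - 1 passes over the edge list
        for v, w, cost in edges:
            if y[v] + cost < y[w]:
                y[w] = y[v] + cost
                p[w] = v
    for v, w, cost in edges:             # final feasibility check on y
        if y[v] + cost < y[w]:
            return None                  # negative-cost circuit exists
    return y, p
```

Note that negative edge weights are allowed, unlike in Dijkstra's algorithm below.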

Dijkstra’s Algorithm
Input Weighted digraph G = (V, E) with ω(e) ≥ 0 for all e ∈ E, vertex u ∈ V
Output Feasible potential y and predecessor vector p

1) Set yu = 0 and yv = ∞ for all v ≠ u

2) Set S = V (S is the set of vertices still to look at)

3) While S ≠ ∅
Choose v ∈ S with yv smallest. Delete v from S and, for all e = (v, w) ∈ E with yw > yv + ω(e), set yw = yv + ω(e) and pw = v.

4) Output y, p
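In practice "choose v ∈ S with yv smallest" is done with a priority queue rather than a linear scan. A heap-based sketch, with an adjacency-dict input format chosen for illustration:

```python
import heapq

def dijkstra(vertices, edges, u):
    """Dijkstra's algorithm with a binary heap; edge weights must be >= 0.

    edges maps each vertex to a list of (neighbour, weight) pairs.
    Returns (y, p): distance labels and predecessors.
    """
    INF = float('inf')
    y = {v: INF for v in vertices}
    y[u] = 0
    p = {}
    heap = [(0, u)]
    done = set()                          # vertices already removed from S
    while heap:
        d, v = heapq.heappop(heap)
        if v in done:
            continue                      # stale heap entry; v already finalised
        done.add(v)
        for w, cost in edges.get(v, []):
            if y[v] + cost < y[w]:
                y[w] = y[v] + cost
                p[w] = v
                heapq.heappush(heap, (y[w], w))
    return y, p
```

The `done` set plays the role of V \ S: once a vertex leaves S its label is final, which is exactly where the assumption ω(e) ≥ 0 is needed.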

## 4 Maximum Flow
Given a network N = (V, E, s, t, c), what is the maximum value of the flow on N ?

Definition 4.1 (Network). A network (V, E, s, t, c) is a digraph (V, E) with a source vertex s ∈ V, a terminal vertex t ∈ V and a capacity function c : E → R≥0 such that
• ∄v ∈ V with (v, s) ∈ E (no edges into the source);
• ∄v ∈ V with (t, v) ∈ E (no edges out of the terminal).

Definition 4.2 (Flow). A flow on N is a vector f ∈ R^|E| with
• 0 ≤ fe ≤ c(e) for all e ∈ E;
• Σ_{(u,v) ∈ E} f(u,v) = Σ_{(v,u′) ∈ E} f(v,u′) for all v ∈ V \ {s, t}.
The value of the flow is |f| = Σ_{(s,v) ∈ E} f(s,v).

Ford-Fulkerson Algorithm
Input Network (V, E, s, t, c) with c(e) ∈ N for all e ∈ E

1) f = 0

2) While an augmenting path from s to t exists, augment it.¹

¹ For an unordered path from s to t, let Δ be the minimum of (c(e) − fe) over edges traversed forwards and of fe over edges traversed backwards. Add Δ to all forward edges on the path and subtract it from all backward edges. Stop when Δ = 0 for all paths from s to t.

3) Set S = {v ∈ V : there exists an augmenting path² from s to v}, T = V\S.

4) Output f, c(S, T).

The output cut certifies optimality, by the max-flow min-cut theorem:

max{|f| : f a flow on N} = min{c(S, T) : (S, T) a cut of N}
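A sketch of the algorithm with breadth-first search used to find augmenting paths (the Edmonds-Karp variant, which guarantees a polynomial number of augmentations); the capacity-dict input format is an assumption of this sketch:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Ford-Fulkerson with BFS path selection (Edmonds-Karp).

    capacity is a dict {(u, v): c} of directed edge capacities.
    Returns the value of a maximum flow, which equals the capacity
    of a minimum s-t cut.
    """
    # Residual capacities: forward edges start full, backward edges empty.
    residual = dict(capacity)
    adj = {}
    for (u, v) in capacity:
        residual.setdefault((v, u), 0)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)

    value = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        pred = {s: None}
        q = deque([s])
        while q and t not in pred:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in pred and residual[(u, v)] > 0:
                    pred[v] = u
                    q.append(v)
        if t not in pred:
            return value                 # no augmenting path: flow is maximum
        # Bottleneck Delta along the path, then augment.
        delta = float('inf')
        v = t
        while pred[v] is not None:
            delta = min(delta, residual[(pred[v], v)])
            v = pred[v]
        v = t
        while pred[v] is not None:
            residual[(pred[v], v)] -= delta
            residual[(v, pred[v])] += delta
            v = pred[v]
        value += delta
```

Subtracting Δ from a backward edge in the text corresponds here to using the reverse residual edge, so only one augmentation rule is needed.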

## 5 Matchings
Definition 5.1. A graph G is bipartite if
• V = A ∪ B with A ∩ B = ∅
• (u, v) ∈ E ⇒ u ∈ A, v ∈ B
Definition 5.2. A matching on a bipartite graph is a subset M ⊆ E with no two edges adjacent to the same vertex. A matching M is maximal if for any matching M′, |M| ≥ |M′| (i.e. it is of maximum size). A matching is complete if for every a ∈ A there exists b ∈ B such that (a, b) ∈ M.
To find a maximal matching the Ford-Fulkerson algorithm can be used. To do this:
• Extend G = (A ∪ B, E) to a network N by adding two vertices, s and t, and directing each original edge from A to B.
• Add edges (s, a) for all a ∈ A and (b, t) for all b ∈ B.
• Set c((s, a)) = 1, c((b, t)) = 1 for all a ∈ A, b ∈ B.
• For all other edges e ∈ E, set c(e) = |A| + 1 (or anything larger).
Now apply the algorithm to this network: the value of the maximum flow is the size of a maximal matching, and the edges of E carrying flow form one.
Definition 5.3. A vertex cover for a graph G is a subset W ⊆ V such that for any (u, v) ∈ E, u ∈ W
or v ∈ W .
Theorem 5.4 (Kőnig’s Theorem). Let G = (A ∪ B, E) be a bipartite graph. The number of edges in a
maximal matching equals the number of vertices in a minimal vertex cover.
Definition 5.5. Let G = (A ∪ B, E) be a bipartite graph. For X ⊆ A we define the neighbours of X,
nbrs(X) = {b ∈ B : ∃a ∈ X with (a, b) ∈ E}.
Theorem 5.6 (Hall’s Theorem). A bipartite graph G = (A ∪ B, E) has a complete matching iff

∀X ⊆ A, | nbrs(X)| ≥ |X|
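Hall's condition can be tested directly by brute force over the subsets of A, which is fine for small examples (though exponential in |A| in general); a sketch:

```python
from itertools import combinations

def has_complete_matching(A, B, E):
    """Decide whether a complete matching exists, via Hall's theorem.

    E is a set of (a, b) pairs. By Hall's theorem, checking
    |nbrs(X)| >= |X| for every X subset of A suffices: no matching
    needs to be constructed.
    """
    for k in range(1, len(A) + 1):
        for X in combinations(A, k):
            nbrs = {b for (a, b) in E if a in X}
            if len(nbrs) < len(X):
                return False       # Hall's condition fails for this X
    return True
```

When the answer is False, the offending subset X is itself a certificate that no complete matching exists.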

Definition 5.7. Let G = (A ∪ B, E) be a bipartite graph, with every a ∈ A having an ordering on nbrs(a) and with every b ∈ B having an ordering on nbrs(b). Then a matching M is stable if ∄a ∈ A, b ∈ B with (a, b) ∈ E\M such that (a, b′), (a′, b) ∈ M with b′ <a b and a′ <b a.

Gale and Shapley Algorithm
Input Complete bipartite graph G = (A ∪ B, E) with orderings on nbrs(a), nbrs(b) for all a ∈ A, b ∈ B
Output Stable matching M ⊆ E
1) Set M = ∅
2) Each b ∈ B not matched in M proposes: add to M the edge (a, b), where a is the first vertex in b’s ordering to which b has not yet proposed
3) For each a ∈ A, keep the edge (a, b) ∈ M with b >a b′ for all other (a, b′) ∈ M and remove the others
4) If the matching is incomplete, go to step 2
5) Output M
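A Python sketch of the proposal process, phrased with explicit preference lists (the dict-of-lists interface is chosen here for illustration; it assumes a complete bipartite graph with |A| = |B|, so the process terminates):

```python
def gale_shapley(prefs_a, prefs_b):
    """Stable matching: each b proposes down its preference list,
    each a keeps the best proposal seen so far.

    prefs_b[b] lists the a-vertices in b's preference order (best first);
    prefs_a[a] likewise. Returns a dict mapping each b to its partner a.
    """
    # rank[a][b] = position of b in a's preference list (lower is better).
    rank = {a: {b: i for i, b in enumerate(prefs_a[a])} for a in prefs_a}
    match_a = {}                       # a -> b currently held by a
    nxt = {b: 0 for b in prefs_b}      # next position b will propose to
    free = list(prefs_b)
    while free:
        b = free.pop()
        a = prefs_b[b][nxt[b]]         # b's best a not yet proposed to
        nxt[b] += 1
        if a not in match_a:
            match_a[a] = b
        elif rank[a][b] < rank[a][match_a[a]]:
            free.append(match_a[a])    # a trades up; old partner is free again
            match_a[a] = b
        else:
            free.append(b)             # a rejects b; b will propose again
    return {b: a for a, b in match_a.items()}
```

Since each b proposes to each a at most once, the loop runs at most |A|·|B| times.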

² By an augmenting path we mean a path in which no edge at capacity is traversed forwards and no empty edge is traversed backwards.

# Linear Programming

All of the problems above are examples of linear programs, i.e. minimising a linear function subject to linear inequalities.

## 6 Standard Forms
Definition 6.1. A polyhedron is a set in R^n of the form {x ∈ R^n : Ax ≤ b} for A ∈ R^{r×n} and b ∈ R^r for some r ∈ N. A polytope is a bounded polyhedron.
Definition 6.2. The face of a polyhedron p = {x ∈ R^n : Ax ≤ b} minimising c is face_c(p) = {x ∈ p : c · x ≤ c · y ∀y ∈ p}.
Definition 6.3. A linear program is in standard form if it is of the form min(c · x) for x ∈ Rn satisfying
Ax = b and x ≥ 0.
Every linear program is equivalent to one in standard form; to convert one to standard form apply the following steps.
• Make x ≥ 0:
If no inequality xi ≥ 0 or xi ≤ 0 exists for some i, then replace xi with xi⁺ − xi⁻, where xi⁺, xi⁻ ≥ 0.
If an inequality xi ≤ 0 exists for some i, replace xi with −xi.
• Add slack variables: Ax ≤ b is equivalent to Ax + s = b for s ∈ R^r with s ≥ 0. We do this by appending an identity matrix to A and extending x to a vector in R^{n+r}.
• Update c so that the objective is equivalent to the original.
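The slack-variable step can be sketched with NumPy; this sketch assumes x ≥ 0 already holds, so only the last two bullets are performed:

```python
import numpy as np

def add_slacks(A, b, c):
    """Convert min c.x s.t. Ax <= b, x >= 0 into standard form
    min c'.x' s.t. A'x' = b, x' >= 0 by appending slack variables.

    (Free variables would first be split as x = x+ - x-; this sketch
    assumes all variables already satisfy x >= 0.)
    """
    r, n = A.shape
    A_std = np.hstack([A, np.eye(r)])         # identity block for the slacks
    c_std = np.concatenate([c, np.zeros(r)])  # slacks cost nothing
    return A_std, b, c_std
```

Any feasible x of the original program extends to a feasible (x, s) of the standard-form program with the same objective value, so nothing is lost in the translation.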
Notation 6.4. The following notation is used frequently over the coming sections, for a matrix B ∈ R^{r×n}, vector c ∈ R^n and subset I ⊆ {1, 2, 3, . . . , n}.
• Bi is the vector corresponding to the i’th column of B;
• BI is the r × |I| matrix consisting of the columns of B indexed by I;
• cI is the vector in R^|I| with the components of c indexed by I.
Definition 6.5. A vector y ∈ R^n is a basic solution to a linear program if Ay = b and {Ai : yi ≠ 0} is linearly independent. y is a basic feasible solution if it is a basic solution and y ≥ 0. A basic feasible solution is a vertex of the polyhedron p = {x ∈ R^n : Ax = b, x ≥ 0}. A basic feasible solution is degenerate if |{i : yi ≠ 0}| < r.
Definition 6.6. Let x ∈ p where p is a polyhedron. A vector d ∈ R^n is a feasible direction at x if ∃θ ∈ R>0 such that x + θd ∈ p.

## 7 Simplex Algorithm
Input Linear program min(c · x) with Ax = b and x ≥ 0
Output Vector x ∈ face_c(p) or “min(c · x) = −∞”

1) Find a basic feasible solution x (more on this later), with corresponding index set I

2) Compute c̄ = c − cI^T BI^{−1} A ³

3) If c̄ ≥ 0 then x is optimal. Stop.

4) Choose i such that c̄i < 0. Compute d.⁴

5) If d ≥ 0 then min(c · x) = −∞. Stop.

6) Let θ* = min over j with dj < 0 of xj/(−dj), and choose l ∈ I with θ* = xl/(−dl)

³ When you just need a few components, they are c̄j = cj − cI^T BI^{−1} Aj.
⁴ di = 1, dI = −BI^{−1} Ai, and dj = 0 for j ∉ I ∪ {i}.

7) Set x = x + θ*d, I = (I\{l}) ∪ {i} and go to step 2.

The algorithm is guaranteed to terminate provided there are no degenerate basic feasible solutions. Even if there are, we can still ensure termination by choosing our pivoting rules carefully.
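Steps 2-7 can be sketched directly with NumPy, given a starting basic feasible solution as in step 1 (this naive version inverts BI afresh on every iteration and uses a small tolerance in place of exact comparisons):

```python
import numpy as np

def simplex(A, b, c, I):
    """Simplex method following steps 2-7 above.

    I is the index set of a starting basic feasible solution.
    Returns an optimal x, or None if min(c.x) = -infinity.
    """
    r, n = A.shape
    I = list(I)
    while True:
        B_inv = np.linalg.inv(A[:, I])
        x = np.zeros(n)
        x[I] = B_inv @ b                      # current basic feasible solution
        cbar = c - c[I] @ B_inv @ A           # step 2: reduced costs
        if np.all(cbar >= -1e-9):
            return x                          # step 3: optimal
        i = int(np.argmax(cbar < -1e-9))      # step 4: first entering index
        d = np.zeros(n)
        d[i] = 1.0
        d[I] = -B_inv @ A[:, i]               # footnote 4
        if np.all(d >= -1e-9):
            return None                       # step 5: unbounded
        # Step 6: ratio test over basic j with d_j < 0.
        ratios = [(x[j] / -d[j], k) for k, j in enumerate(I) if d[j] < -1e-9]
        theta, k = min(ratios)
        I[k] = i                              # step 7: swap l out, i in;
        # x is recomputed from the new basis at the top of the loop.
```

On min(−x1 − x2) subject to x1 + x3 = 1, x2 + x4 = 1, x ≥ 0 (the slacked form of x1 ≤ 1, x2 ≤ 1), starting from the slack basis I = {3, 4}, the method pivots twice and stops at (1, 1, 0, 0).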

Definition 7.1. The choices we make in steps 4 and 6 are called pivoting rules; there are many different ones.
### 7.1 Initial Solutions

There is a clever trick to obtain an initial basic feasible solution which you should be aware of. Given a linear program min(c · x) with Ax = b and x ≥ 0 (where we may assume b ≥ 0, multiplying rows by −1 if necessary):
• Consider the new linear program min(y · (1, 1, . . . , 1)) with Ax + y = b and (x, y) ≥ 0.⁵
• This new LP has the basic feasible solution (x, y) = (0, b). Now solve the new LP.
• If the new LP has optimal value 0, then the solution (x, 0) gives a basic feasible solution to the original LP. If not, then the original LP has no feasible solution at all.
Sometimes you may be able to spot a basic feasible solution, in which case use it! This trick is there to make the hard cases easier, not to make the easy cases harder...

## 8 Tableau Method
The full simplex tableau at any basic feasible solution x, with basis index set I, is the (r + 1) × (n + 1) matrix T laid out as follows: the top-left entry is −cI^T BI^{−1} b = −c · x; the rest of row 0 is c̄ = c − cI^T BI^{−1} A; the rest of column 0 is BI^{−1} b = xI; and the remaining r × n block is BI^{−1} A.
This contains all the information we need to solve the linear program. Note that the indexing of rows and columns starts at 0, and T0,0 = −c · x.

1) Find a basic feasible solution x, with corresponding index set I

2) Compute the tableau corresponding to I

3) If c̄ ≥ 0 then x is optimal. Stop.⁶

4) Choose i such that c̄i < 0. Consider −dI = BI^{−1} Ai, the i’th column of T.

5) If this column is non-positive then d ≥ 0, so min(c · x) = −∞. Stop.

6) Let s be the index of the row that achieves min over k with Tk,i > 0 of Tk,0/Tk,i.

7) Add multiples of the s’th row to all other rows⁷ so that the i’th column becomes es, and go to step 2.

Whilst this may look more complicated, it is in general faster than the original version, with less calculation needed; give it a try on some examples.
When reading off the x value from the tableau we have xi = Tj,0 if the i’th column of the tableau is ej, or xi = 0 if there is no basis vector in column i.
⁵ This is equivalent to (A | Ir)(x, y)^T = b.
⁶ c̄ is the top row of T, apart from the top-left entry.
⁷ Including row zero.
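The row operation in step 7 is the only computation the tableau method repeats; a NumPy sketch of a single pivot on row s and column i:

```python
import numpy as np

def pivot(T, s, i):
    """One tableau update (step 7): scale row s so that T[s, i] = 1, then
    add multiples of it to every other row (including row 0) so that
    column i becomes the unit vector e_s.
    """
    T = T.astype(float).copy()
    T[s] /= T[s, i]                    # make the pivot entry 1
    for k in range(T.shape[0]):
        if k != s:
            T[k] -= T[k, i] * T[s]     # clear column i in every other row
    return T
```

Repeatedly applying this with the pivot choices of steps 4 and 6 carries out the whole method without ever forming BI^{−1} explicitly.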

### 8.1 Lexicographic Pivot Rule

Definition 8.1 (Lexicographic ordering). We say that, for vectors a, b ∈ R^n, a >lex b if the first non-zero component of a − b is positive.
To apply this pivot rule we choose any i in step 4, but in step 6 we choose s such that (1/Ts,i) · (row s) is smallest with respect to the lex ordering, among rows with Ts,i > 0. Under this pivot rule the simplex algorithm is guaranteed to terminate.

## 9 Linear Duality
Theorem 9.1. If a linear program min(c · x), x ∈ R^n, for Ax = b, x ≥ 0 has an optimal solution, then so does the dual program max(b · p), p ∈ R^r, for A^T p ≤ c (with p unrestricted in sign). Furthermore,

min{c · x : Ax = b, x ≥ 0} = max{b · p : A^T p ≤ c}

The original program is called the primal program, and the new one is called the dual program.
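The theorem can be checked numerically on a small example: exhibiting a primal-feasible x and dual-feasible p with c · x = b · p certifies that both are optimal. The instance below is a toy example made up for illustration, with the optima found by hand:

```python
import numpy as np

# Primal: min c.x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])
c = np.array([3.0, 1.0, 2.0])

x_star = np.array([0.0, 2.0, 0.0])   # claimed primal optimum
p_star = np.array([1.0])             # claimed dual optimum

# Primal feasibility: Ax = b and x >= 0.
assert np.allclose(A @ x_star, b) and np.all(x_star >= 0)
# Dual feasibility: A^T p <= c (p is free).
assert np.all(A.T @ p_star <= c + 1e-9)
# Strong duality: the two objective values coincide.
assert np.isclose(c @ x_star, b @ p_star)
```

Weak duality gives b · p ≤ c · x for any feasible pair, so equality here proves optimality of both vectors at once.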