
Algorithm types
Simple recursive algorithms
Divide and conquer algorithms
Dynamic programming algorithms
Greedy algorithms
Backtracking algorithms
Branch and bound algorithms
Randomized algorithms
Recursive Algorithm

Definition
An algorithm that calls itself
Approach
1. Solve the small problem directly
2. Simplify the large problem into one or more smaller subproblem(s)
and solve recursively
3. Calculate the solution from the solution(s) to the subproblem(s)
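The three steps can be sketched with a classic illustrative example (factorial is our choice here, not one taken from the slides):

```python
def factorial(n: int) -> int:
    """Recursive factorial, following the three-step approach."""
    # Step 1: solve the small problem directly (base case).
    if n <= 1:
        return 1
    # Step 2: simplify into a smaller subproblem and solve recursively.
    sub = factorial(n - 1)
    # Step 3: calculate the solution from the subproblem's solution.
    return n * sub

print(factorial(5))  # → 120
```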
Divide-and-Conquer
The most well-known algorithm design strategy:
1. Divide an instance of the problem into two or more independent
smaller instances
2. Solve the smaller instances recursively
3. Obtain the solution to the original (larger) instance by combining
these solutions

Divide-and-Conquer
Binary Search
Merge Sort
Quick Sort
Strassen's Matrix Multiplication
Convex Hull
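A minimal merge sort makes the divide/recurse/combine pattern concrete (an illustrative sketch, not the slides' own code):

```python
def merge_sort(a):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(a) <= 1:
        return a
    # Divide: split into two independent smaller instances.
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # → [1, 2, 2, 3, 4, 5, 6, 7]
```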

Dynamic Programming
The principle of optimality applies if the optimal solution to a
problem always contains optimal solutions to all subproblems
Dynamic Programming is an algorithm design technique for
optimization problems: often minimizing or maximizing.
Like divide and conquer, DP solves problems by combining
solutions to subproblems.
Unlike divide and conquer, the subproblems are not independent:
subproblems may share subsubproblems.
However, the solution to one subproblem does not affect the
solutions to the other subproblems of the same problem.
Dynamic Programming
DP reduces computation by
Solving subproblems in a bottom-up fashion.
Storing solution to a subproblem the first time it is solved.
Looking up the solution when subproblem is encountered
again.
Examples:
Assembly line scheduling
Matrix chain multiplication
Longest common subsequence
Optimal binary search trees
Knapsack Problem
Shortest Paths
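The store-and-look-up idea can be sketched on Fibonacci numbers (an illustrative example, not one of the problems listed above): both a top-down memoized version and the equivalent bottom-up table.

```python
def fib(n: int, memo=None) -> int:
    """Top-down DP: store each subproblem result the first time it is solved."""
    if memo is None:
        memo = {}
    if n in memo:                 # look up the solution when encountered again
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)   # store the first time
    return memo[n]

def fib_bottom_up(n: int) -> int:
    """Bottom-up DP: fill a table from smaller to larger subproblems."""
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(30), fib_bottom_up(30))  # → 832040 832040
```

Without the memo, the naive recursion recomputes shared subsubproblems exponentially many times; with it, each subproblem is solved once.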

Steps in Dynamic Programming
1. Characterize structure of an optimal solution.
2. Define value of optimal solution recursively.
3. Compute optimal solution values either top-down with
caching or bottom-up in a table.
4. Construct an optimal solution from computed values.
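The four steps applied to longest common subsequence (one of the examples above); a minimal sketch:

```python
def lcs(x: str, y: str) -> str:
    m, n = len(x), len(y)
    # Steps 1-2: an optimal LCS of two prefixes is built from optimal LCSs
    # of shorter prefixes, giving the recurrence
    #   c[i][j] = c[i-1][j-1] + 1            if x[i-1] == y[j-1]
    #           = max(c[i-1][j], c[i][j-1])  otherwise
    # Step 3: compute the values bottom-up in a table.
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Step 4: construct an optimal solution by walking back through the table.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(len(lcs("ABCBDAB", "BDCABA")))  # → 4
```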

Comparison with divide-and-conquer
Divide-and-conquer algorithms split a problem into separate
subproblems, solve the subproblems, and combine the results
for a solution to the original problem
Example: Quicksort
Example: Mergesort
Example: Binary search
Divide-and-conquer algorithms can be thought of as top-down
algorithms
In contrast, a dynamic programming algorithm proceeds by
solving small problems, then combining them to find the
solution to larger problems
Dynamic programming can be thought of as bottom-up
Greedy Algorithms
Similar to dynamic programming, but simpler approach
Also used for optimization problems
Idea: When we have a choice to make, make the one that looks
best right now
Make a locally optimal choice in hope of getting a globally
optimal solution
Greedy algorithms don't always yield an optimal solution
When the problem has certain general characteristics, greedy
algorithms give optimal solutions
Greedy Algorithms
Minimum spanning trees
Shortest path
Knapsack problem
Task Scheduling
Activity selection problem
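Activity selection (from the list above) shows the greedy pattern: sort by finish time, then repeatedly take the earliest-finishing compatible activity. A minimal sketch:

```python
def select_activities(activities):
    """activities: list of (start, finish) pairs.

    Greedy choice: among the remaining compatible activities,
    always pick the one that finishes first.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        # Locally optimal choice: take the activity if it is compatible
        # with the ones already chosen.
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))  # → [(1, 4), (5, 7), (8, 11)]
```

Activity selection is one of the problems where this locally optimal choice provably leads to a globally optimal solution.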

Dynamic Programming vs. Greedy Algorithms
Dynamic programming
We make a choice at each step
The choice depends on solutions to subproblems
Bottom up solution, from smaller to larger subproblems
Greedy algorithm
Make the greedy choice and THEN
Solve the subproblem arising after the choice is made
The choice we make may depend on previous choices, but
not on solutions to subproblems
Top down solution, problems decrease in size

Backtracking Algorithm
Is used to solve problems for which a sequence of objects is to
be selected from a set such that the sequence satisfies some
constraint.
Traverses the state tree using a depth-first search with pruning.
Performs a depth-first traversal of a tree.
Continues until it reaches a node that is non-viable or
non-promising.
Prunes the subtree rooted at this node and continues the
depth-first traversal of the tree.
This gives a significant advantage over an exhaustive search of
the tree for the average problem.
Worst case: Algorithm tries every path, traversing the entire
search space as in an exhaustive search.


Backtracking Algorithm
Travelling Salesman Problem
Graph Coloring
n-Queen Problem
Hamiltonian Cycles
Sum of subsets
Knapsack problem
Branch and Bound
Branch and bound (BB) is a general search method, used especially
in discrete optimization.
There is a way to split the solution space (branch).
There is a way to predict a lower bound for a class of solutions.
There is also a way to find an upper bound on an optimal
solution. If the lower bound of a solution exceeds the upper
bound, this solution cannot be optimal, and so we should
terminate the branching associated with this solution.
The solution space is divided into subproblems, and the algorithm
is applied recursively to the subproblems. The search proceeds
until all nodes have been solved or pruned.
Branch and Bound
Backtracking uses a depth-first search with pruning, whereas the
branch and bound algorithm uses a breadth-first search with
pruning
Branch and bound uses a queue as an auxiliary data structure
In many types of problems, branch and bound is faster than
backtracking, due to the use of a breadth-first search instead of
a depth-first search
The worst case scenario is the same, as it will still visit every
node in the tree


Branch and Bound
Travelling Salesman Problem
Graph Coloring
n-Queen Problem
Hamiltonian Cycles
Sum of subsets
Knapsack problem
Assignment problem

Randomized algorithms

A randomized algorithm is just one that depends on random
numbers for its operation
These are randomized algorithms:
Using random numbers to help find a solution to a problem
Using random numbers to improve a solution to a problem
These are related topics:
Getting or generating random numbers
Generating random data for testing (or other) purposes
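A randomized algorithm of the first kind (random numbers helping find a solution) is randomized quickselect; a sketch:

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-based) using a random pivot.

    The random pivot makes the *expected* running time linear regardless
    of the input order -- the randomness is part of the algorithm itself.
    """
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    if k < len(less):
        return quickselect(less, k)
    if k < len(less) + len(equal):
        return pivot
    return quickselect([x for x in a if x > pivot],
                       k - len(less) - len(equal))

data = [7, 1, 9, 3, 3, 8, 2]
print(quickselect(data, 3))  # → 3  (4th smallest element)
```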

The 0/1 knapsack problem using
Backtracking Approach
Suppose that K = 16 and n = 4, and we have the following set
of objects ordered by their value density.

i   p_i   w_i   p_i/w_i
1   $45   3     $15
2   $30   5     $6
3   $45   9     $5
4   $10   5     $2

TotalSize = currentSize + size of remaining objects that can be
fully placed

bound (maximum potential value) = currentValue + value
of remaining objects fully placed +(K - totalSize) * (value
density of item that is partially placed)
For a node at level i in the state space tree (the first i items have
been considered for selection), and with the kth object as the one
that will not completely fit into the remaining space in the
knapsack, these formulae can be written:
When the bound of a node is less than or equal to the current
maximum value, or adding the current item to the node causes
the size of the contents to exceed the capacity of the knapsack,
the subtrees rooted at that node are pruned, and the traversal
backtracks to the previous parent in the state space tree.

totalSize = currentSize + Σ_{j=i+1}^{k-1} w_j

bound = currentValue + Σ_{j=i+1}^{k-1} p_j + (K - totalSize) * (p_k / w_k)

For the root node, currentSize = 0, currentValue = 0:

totalSize = 0 + w_1 + w_2 = 0 + 3 + 5 = 8
bound = 0 + p_1 + p_2 + (K - totalSize) * (p_3 / w_3)
      = $45 + $30 + (16 - 8) * $5 = $75 + $40 = $115
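The root bound can be checked with a short computation; this sketch hard-codes the instance above (K = 16, items density-sorted):

```python
def bound(current_value, current_size, i, items, K):
    """Upper bound on achievable profit from a node at level i.

    items: list of (profit, weight) pairs, sorted by value density.
    Whole remaining items are added while they fit; then the fraction
    of the first item k that does not fit, as in the formulae above.
    """
    total_size, value = current_size, current_value
    for p, w in items[i:]:
        if total_size + w <= K:        # item fits completely
            total_size += w
            value += p
        else:                          # item k: add only the fractional part
            return value + (K - total_size) * (p / w)
    return value

items = [(45, 3), (30, 5), (45, 9), (10, 5)]   # (p_i, w_i) from the table
print(bound(0, 0, 0, items, 16))  # → 115.0 (the root bound computed above)
```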

The 0/1 knapsack problem using B&B
Positive integers P_1, P_2, ..., P_n (profit)
W_1, W_2, ..., W_n (weight)
M (capacity)

maximize Σ_{i=1}^{n} P_i X_i
subject to Σ_{i=1}^{n} W_i X_i ≤ M
X_i = 0 or 1, i = 1, ..., n.

The problem is modified:
minimize -Σ_{i=1}^{n} P_i X_i
How to find the upper bound?
Ans: by quickly finding a feasible solution in a
greedy manner: starting from the smallest
available i and scanning towards the largest i
until M is exceeded. The upper bound can then be
calculated.
The 0/1 knapsack problem
E.g. n = 6, M = 34

i      1    2    3    4    5    6
P_i    6    10   4    5    6    4
W_i    10   19   8    10   12   8

(P_i/W_i ≥ P_{i+1}/W_{i+1})

A feasible solution: X_1 = 1, X_2 = 1, X_3 = 0, X_4 = 0,
X_5 = 0, X_6 = 0
-(P_1 + P_2) = -16 (upper bound)
Any solution higher than -16 cannot be an optimal solution.

How to find the lower bound?
Ans: by relaxing the restriction from X_i = 0 or 1 to
0 ≤ X_i ≤ 1 (the fractional knapsack problem).
Let Σ_{i=1}^{n} P_i X_i be an optimal solution for the 0/1
knapsack problem and Σ_{i=1}^{n} P_i X'_i be an optimal
solution for the fractional knapsack problem.
Let Y = -Σ_{i=1}^{n} P_i X_i and Y' = -Σ_{i=1}^{n} P_i X'_i.
Then Y' ≤ Y.
The knapsack problem
We can use the greedy method to find an optimal solution for the
knapsack problem.
For example, for the state of X_1 = 1 and X_2 = 1, we have
X_1 = 1, X_2 = 1, X_3 = (34-10-19)/8 = 5/8, X_4 = 0, X_5 = 0, X_6 = 0
-(P_1 + P_2 + (5/8)P_3) = -18.5 (lower bound)
-18 is our lower bound (considering only integers).
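The relaxed (fractional) value behind this lower bound can be computed greedily; a sketch on the slide's instance, with the first items fixed by the node's state:

```python
def fractional_bound(P, W, M, prefix):
    """Lower bound for the node where X_1..X_k are fixed to `prefix`.

    P, W are density-sorted profits/weights; the remaining items are
    added greedily, the last one fractionally (the relaxed knapsack).
    The fixed prefix is assumed feasible. Returns the negated value,
    matching the minimization formulation.
    """
    value, room = 0.0, M
    for x, p, w in zip(prefix, P, W):            # fixed part of the solution
        value += x * p
        room -= x * w
    for p, w in list(zip(P, W))[len(prefix):]:   # greedy fractional fill
        if w <= room:
            value += p
            room -= w
        else:
            value += p * room / w                # fractional last item
            break
    return -value

P = [6, 10, 4, 5, 6, 4]
W = [10, 19, 8, 10, 12, 8]
print(fractional_bound(P, W, 34, [1, 1]))  # → -18.5
```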
How to expand the tree?
By the best-first search scheme
That is, by expanding the node with the best
lower bound. If two nodes have the same
lower bounds, expand the node with the
lower upper bound.
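A best-first branch and bound for this instance can be sketched with a priority queue keyed on the optimistic (fractional) bound; this sketch works in the maximization form rather than the slides' negated minimization, so the pruning test is reversed accordingly:

```python
import heapq

def knapsack_bb(P, W, M):
    """Best-first branch and bound for 0/1 knapsack (maximization form).

    P, W must be sorted by non-increasing density P_i/W_i.
    Nodes are expanded in order of their optimistic fractional bound;
    a node is pruned when its bound cannot beat the best value found.
    """
    n = len(P)

    def bound(i, value, room):
        # Optimistic bound: fill greedily, last item fractionally.
        for p, w in zip(P[i:], W[i:]):
            if w <= room:
                value += p; room -= w
            else:
                return value + p * room / w
        return value

    best = 0
    # heap of (-bound, level, value, room); heapq pops the smallest key,
    # i.e. the node with the largest optimistic bound.
    heap = [(-bound(0, 0, M), 0, 0, M)]
    while heap:
        neg_b, i, value, room = heapq.heappop(heap)
        if -neg_b <= best or i == n:   # prune: cannot beat the incumbent
            continue
        # Branch on item i: include it (if it fits), or exclude it.
        if W[i] <= room:
            best = max(best, value + P[i])
            heapq.heappush(heap, (-bound(i + 1, value + P[i], room - W[i]),
                                  i + 1, value + P[i], room - W[i]))
        heapq.heappush(heap, (-bound(i + 1, value, room), i + 1, value, room))
    return best

P = [6, 10, 4, 5, 6, 4]
W = [10, 19, 8, 10, 12, 8]
print(knapsack_bb(P, W, 34))  # → 17  (items 1, 4 and 5)
```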
0/1 Knapsack Problem Solved by Branch-and-Bound Strategy
Node 2 is terminated because its lower bound
is equal to the upper bound of node 14.
Nodes 16, 18 and others are terminated
because the local lower bound is equal to the
local upper bound.
(lower bound ≤ optimal solution ≤ upper bound)
The assignment problem
We want to assign n people to n jobs so that
the total cost of the assignment is as small as
possible (lower bound)
Select one element in each row of the cost matrix C so that:
no two selected elements are in the same column; and
the sum is minimized
For example:
Job 1 Job 2 Job 3 Job 4
Person a 9 2 7 8
Person b 6 4 3 7
Person c 5 8 1 8
Person d 7 6 9 4
Lower bound: Any solution to this problem will have
total cost of at least:
Example: The assignment problem
Sum of the smallest element in each row = 2 + 3 + 1 + 4 = 10
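For this 4x4 instance, both the row-minimum lower bound and the true optimum are small enough to check by brute force; a sketch:

```python
from itertools import permutations

C = [[9, 2, 7, 8],   # person a
     [6, 4, 3, 7],   # person b
     [5, 8, 1, 8],   # person c
     [7, 6, 9, 4]]   # person d

# Lower bound: sum of the smallest element in each row.
lower = sum(min(row) for row in C)

# Exact optimum: try every one-job-per-person assignment (n! of them).
best = min(sum(C[i][job] for i, job in enumerate(perm))
           for perm in permutations(range(4)))

print(lower, best)  # → 10 13
```

Branch and bound would explore this same space, but use the row-minimum bound at each node to prune most of the 4! = 24 assignments.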
Assignment problem: lower bounds
State-space levels 0, 1, 2
Complete state-space
Basic Matrix Multiplication

Suppose we want to multiply two matrices of size N
x N: for example A x B = C.
C_11 = a_11 b_11 + a_12 b_21
C_12 = a_11 b_12 + a_12 b_22
C_21 = a_21 b_11 + a_22 b_21
C_22 = a_21 b_12 + a_22 b_22

2x2 matrix multiplication can be accomplished in 8
multiplications. (N^(log_2 8) = N^3)
Strassen's Matrix Multiplication

Strassen showed that 2x2 matrix multiplication can be
accomplished in 7 multiplications and 18 additions or
subtractions. (N^(log_2 7) = N^2.807)

This reduction can be achieved by a Divide and Conquer
approach.
Divide-and-Conquer
Divide-and-conquer is a general algorithm design
paradigm:
Divide: divide the input data S into two or more disjoint
subsets S_1, S_2, ...
Recur: solve the subproblems recursively
Conquer: combine the solutions for S_1, S_2, ..., into a
solution for S
The base cases for the recursion are subproblems of
constant size
Analysis can be done using recurrence equations

Strassen's Matrix Multiplication
P_1 = (A_11 + A_22)(B_11 + B_22)
P_2 = (A_21 + A_22) * B_11
P_3 = A_11 * (B_12 - B_22)
P_4 = A_22 * (B_21 - B_11)
P_5 = (A_11 + A_12) * B_22
P_6 = (A_21 - A_11) * (B_11 + B_12)
P_7 = (A_12 - A_22) * (B_21 + B_22)

C_11 = P_1 + P_4 - P_5 + P_7
C_12 = P_3 + P_5
C_21 = P_2 + P_4
C_22 = P_1 + P_3 - P_2 + P_6
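For 2x2 matrices the seven products can be checked directly; this sketch treats the entries as numbers (in the full algorithm the same formulas apply recursively to N/2 x N/2 submatrices):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen's formulas)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    p1 = (a11 + a22) * (b11 + b22)
    p2 = (a21 + a22) * b11
    p3 = a11 * (b12 - b22)
    p4 = a22 * (b21 - b11)
    p5 = (a11 + a12) * b22
    p6 = (a21 - a11) * (b11 + b12)
    p7 = (a12 - a22) * (b21 + b22)
    return [[p1 + p4 - p5 + p7, p3 + p5],
            [p2 + p4, p1 + p3 - p2 + p6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # → [[19, 22], [43, 50]]
```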


Convex Hull
The ever-present structure in computational geometry
Used to construct other structures
Useful in many applications: robot motion planning, shape
analysis, etc.
One of the early success stories in computational geometry
that sparked interest among computer scientists, through the
invention of an O(n log n) algorithm rather than an O(n^3)
algorithm.
Convex hulls
Preliminaries and definitions
Intuitive definition
Given a set S = {P0, P1, ..., Pn-1} of n points in the plane,
the convex hull H(S) is the smallest convex polygon in the plane
that contains all of the points of S.
Imagine nails pounded halfway into the plane at the points of S.
The convex hull corresponds to a rubber band stretched around them.
Convex hulls
Preliminaries and definitions
Convex polygon
A polygon is convex iff for any two points in the polygon
(interior or boundary) the segment connecting the points is
entirely within the polygon.

convex
not convex
Convex hulls
Quickhull
Quicksort
The Quickhull algorithm is based on the Quicksort algorithm.
Recall how quicksort operates: at each level of recursion,
an array of numbers to be sorted is partitioned into two subarrays,
such that each term of the first (left) subarray is not larger
than each term of the second (right) subarray.
LEFT RIGHT
Two pointers to the array cells (LEFT and RIGHT)
initially point to the opposite extreme ends of the array.
LEFT and RIGHT move towards each other, one cell at a time.
At any given time, one pointer is moving and one is not.
If the numbers pointed to by LEFT and RIGHT violate
the desired sort order, they are swapped, then the moving pointer
is halted and the halted pointer becomes the moving pointer.
When the two pointers point to the same cell, the array is split
at that cell and the process recurses on the subarrays.

Quicksort runs in expected O(N log N) time,
if the subarrays are well balanced,
but can require as much as O(N^2) time in the worst case.
Convex hulls
Quickhull
Quickhull overview

Quickhull operates in a similar manner.
It recursively partitions the point set S,
so as to find the convex hull for each subset.
The hull at each level of the recursion is formed by
concatenating the hulls found at the next level down.
S
Convex hulls
Quickhull
Initial partition
The initial partition of S is determined by a line L
through the points l, r ∈ S with the smallest and largest abscissa.
S^(1) ⊆ S is the subset of S on or above L.
S^(2) ⊆ S is the subset of S on or below L.
Note that {S^(1), S^(2)} is not a strict partition of S,
as S^(1) ∩ S^(2) ⊇ {l, r}. This is not a difficulty.

The idea now is to construct hulls H(S^(1)) and H(S^(2)),
then concatenate them to get H(S).
The process is the same for S^(1) and S^(2); we consider S^(1).
Convex hulls
Quickhull
Finding the apex
Find the point h ∈ S^(1) such that
(1) triangle hlr has the maximum area of all triangles {plr : p ∈ S^(1)},
and if there are > 1 triangles with maximum area,
(2) the one where angle hlr is maximum.

This condition ensures that h ∈ H(S). Why?
Construct a line parallel to line L through h, call it L'.
There will be no points of S^(1) (or S) above L', by condition (1).
There may be other points on L', but h will be the leftmost,
by condition (2),
hence it is not a convex combination of any two points of S.
So h ∈ H(S).
Apex h can be found in O(N) time by checking each point of S^(1).
Convex hulls
Quickhull
Partitioning the point set
Construct two directed lines, L_1 from l to h, and L_2 from h to r.
Each point of S^(1) is classified relative to L_1 and L_2
(e.g., point-line classification).
No point can be to the left of both L_1 and L_2.
Points to the right of both are not in H(S),
as they are within triangle hlr,
and are eliminated from further consideration.
Points left of L_1 are S^(1,1).
Points left of L_2 are S^(1,2).
Convex hulls
Quickhull
Recursion
The process recurses on S^(1,1) and S^(1,2).

Each call is described by (set, left endpoint, right endpoint):

(S^(·), l, r)
(S^(·,1), l, h)   (S^(·,2), h, r)

The recursion continues until S^(·) has 0 points,
i.e., all internal points have been eliminated,
which implies that segment lr is an edge of H(S).
Convex hulls
Quickhull
Geometric primitives

The geometric primitives used by this algorithm are:
1. Point-line classification
2. Area of a triangle
Both of these require O(1) time.
Convex hulls
Quickhull
General function
S is assumed to have at least 2 elements
(the recursion ends otherwise).
FURTHEST(S, l, r) is a function, not given here,
that finds the apex point h as previously defined.
The operator || denotes list concatenation.
Procedure QUICKHULL returns an ordered list of points.

1  procedure QUICKHULL(S, l, r)
2  begin
3    if S = {l, r} then
4      return (l, r)  /* lr is an edge of H(S) */
5    else
6      h = FURTHEST(S, l, r)
7      S^(1) = { p ∈ S : p is on or left of line lh }
8      S^(2) = { p ∈ S : p is on or left of line hr }
9      return QUICKHULL(S^(1), l, h) || (QUICKHULL(S^(2), h, r) - h)
10   end
11 end

Initial call
1  begin
2    l_0 = (x_0, y_0)  /* point of S with smallest abscissa */
3    r_0 = (x_0, y_0 - c)
4    result = QUICKHULL(S, l_0, r_0) - r_0
     /* the point r_0 is eliminated from the final list */
5  end
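The pseudocode translates into a short Python sketch. Two simplifications relative to the slides are labeled here: the apex is chosen by maximum cross product (which is twice the triangle area, so it matches condition (1); the angle tie-break of condition (2) is not needed for strict left-of tests), and the dummy point r_0 is avoided by recursing on both sides of the leftmost-rightmost line directly.

```python
def cross(o, a, b):
    """Twice the signed area of triangle oab; > 0 iff b is left of line o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def quickhull(points):
    """Return the convex hull vertices of a set of 2-D points, in clockwise
    order starting from the point with the smallest abscissa."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    l, r = pts[0], pts[-1]           # smallest and largest abscissa

    def hull_side(s, l, r):
        # Keep only points strictly left of the directed line l->r.
        s = [p for p in s if cross(l, r, p) > 0]
        if not s:
            return [l]               # lr is an edge of the hull
        # Apex: the point of maximum triangle area with l and r.
        h = max(s, key=lambda p: cross(l, r, p))
        # Recurse on the two outer subsets; points inside hlr are dropped.
        return hull_side(s, l, h) + hull_side(s, h, r)

    return hull_side(pts, l, r) + hull_side(pts, r, l)

print(quickhull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# → [(0, 0), (0, 2), (2, 2), (2, 0)]
```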
Convex hulls
Quickhull
Analysis

Worst case time: O(N^2)
Expected time: O(N log N)
Storage: O(N^2)

At each level of the recursion, partitioning S into S^(1) and S^(2)
requires O(N) time. If S^(1) and S^(2) were guaranteed to have
a size equal to a fixed fraction of S, and this held at each level,
the worst case time would be O(N log N).

However, those criteria do not apply;
S^(1) and S^(2) may have size in O(N) (they are not balanced).
Hence the worst case time is O(N^2),
O(N) at each of O(N) levels of recursion.
The same applies to storage.
