
• Not a specific algorithm, but a technique (like divide-and-conquer).
• Developed back in the day when "programming" meant "tabular method" (like linear programming).
• Doesn't really refer to computer programming.
  - Used for optimization problems: find a solution with optimal value.
  - Minimization or maximization. (We'll see both.)
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.
• How to cut steel rods into pieces in order to maximize the revenue you can get? Each cut is free. Rod lengths are always an integral number of inches.
• Input: a rod of length n and a table of prices p_i for i = 1, 2, ..., n.
• Output: the maximum revenue obtainable for rods whose lengths sum to n, computed as the sum of the prices for the individual rods.
  - If n is large enough, an optimal solution might require no cuts, i.e., just leave the rod as n inches long.
• length i :  1  2  3  4   5   6   7   8
• price p_i:  1  5  8  9  10  17  17  20
  - Can cut up a rod in 2^(n-1) different ways, because we can choose to cut or not cut after each of the first n - 1 inches.
  - Here are all 8 ways to cut a rod of length 4, with the costs from the example:
• The best way is to cut it into two 2-inch pieces, getting a revenue of p_2 + p_2 = 5 + 5 = 10.
• Let r_n be the maximum revenue for a rod of length n. Can express a solution as a sum of individual rod lengths.
• To solve the original problem of size n, solve subproblems on smaller sizes. After making a cut, we have two subproblems. The optimal solution to the original problem incorporates optimal solutions to the subproblems. We may solve the subproblems independently.

• Example: For n = 7, one of the optimal solutions makes a cut at 3 inches, giving two subproblems, of lengths 3 and 4. We need to solve both of them optimally. The optimal solution for the problem of length 4, cutting it into 2 pieces, each of length 2, is used in the optimal solution to the original problem with length 7.
• Every optimal solution has a leftmost cut. In other words, there's some cut that gives a first piece of length i cut off the left end, and a remaining piece of length n - i on the right.
  - Need to divide only the remainder, not the first piece.
  - Leaves only one subproblem to solve, rather than two subproblems.
  - Say that the solution with no cuts has first piece size i = n with revenue p_n, and remainder size 0 with revenue r_0 = 0.
  - Gives a simpler version of the equation for r_n:

        r_n = max_{1 ≤ i ≤ n} (p_i + r_{n-i})
• Direct implementation of the simpler equation for r_n. The call CUT-ROD(p, n) returns the optimal revenue r_n:
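A minimal Python sketch of this direct implementation, following the CLRS pseudocode (the list p, with p[i] the price of a length-i rod and p[0] = 0, is this sketch's calling convention):

    def cut_rod(p, n):
        # Naive recursive CUT-ROD: returns the maximum revenue r_n.
        if n == 0:
            return 0
        q = float("-inf")
        for i in range(1, n + 1):                 # size i of the first piece
            q = max(q, p[i] + cut_rod(p, n - i))  # its price + best cut of the remainder
        return q

    p = [0, 1, 5, 8, 9, 10, 17, 17, 20]  # the example price table
    print(cut_rod(p, 4))                 # 10: two 2-inch pieces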
• This procedure works, but it is terribly inefficient. If you code it up and run it, it could take more than an hour for n = 40. Running time almost doubles each time n increases by 1.

• Why so inefficient? CUT-ROD calls itself repeatedly, even on subproblems it has already solved. Here's a tree of recursive calls for n = 4. Inside each node is the value of n for the call represented by the node:
• Lots of repeated subproblems. Solve the subproblem for size 2 twice, for size 1 four times, and for size 0 eight times.
• Exponential growth: Let T(n) equal the number of calls to CUT-ROD with second parameter equal to n. Then

        T(0) = 1,   T(n) = 1 + Σ_{j=0}^{n-1} T(j),

  whose solution is T(n) = 2^n.
• Instead of solving the same subproblems repeatedly, arrange to solve each subproblem just once.
• Save the solution to a subproblem in a table, and refer back to the table whenever we revisit the subproblem.
• "Store, don't recompute" ⇒ time-memory trade-off.
• Can turn an exponential-time solution into a polynomial-time solution.
• Two basic approaches: top-down with memoization, and bottom-up.
• Solve recursively, but store each result in a table.
• To find the solution to a subproblem, first look in the table. If the answer is there, use it. Otherwise, compute the solution to the subproblem and then store the solution in the table for future use.
• Memoizing is remembering what we have computed previously.
• Memoized version of the recursive solution, storing the solution to the subproblem of length i in array entry r[i]:
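A Python sketch of the memoized procedure (mirroring CLRS's MEMOIZED-CUT-ROD; -inf marks a table entry that has not been computed yet):

    def memoized_cut_rod(p, n):
        # Top-down with memoization: r[i] caches the solution to the
        # subproblem of length i.
        r = [float("-inf")] * (n + 1)
        return memoized_cut_rod_aux(p, n, r)

    def memoized_cut_rod_aux(p, n, r):
        if r[n] >= 0:                 # answer already in the table: reuse it
            return r[n]
        if n == 0:
            q = 0
        else:
            q = float("-inf")
            for i in range(1, n + 1):
                q = max(q, p[i] + memoized_cut_rod_aux(p, n - i, r))
        r[n] = q                      # store the solution before returning
        return q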
• Sort the subproblems by size and solve the smaller ones first. That way, when solving a subproblem, we have already solved the smaller subproblems we need.
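A Python sketch of the bottom-up version (mirroring CLRS's BOTTOM-UP-CUT-ROD):

    def bottom_up_cut_rod(p, n):
        # Solve subproblems in increasing order of size, so each smaller
        # subproblem is already in r[] when it is needed.
        r = [0] * (n + 1)             # r[0] = 0: a length-0 rod earns nothing
        for j in range(1, n + 1):     # subproblem: rod of length j
            q = float("-inf")
            for i in range(1, j + 1):
                q = max(q, p[i] + r[j - i])
            r[j] = q
        return r[n]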
• Both the top-down and bottom-up versions run in Θ(n^2) time.
  - Bottom-up: doubly nested loops. Number of iterations of the inner for loop forms an arithmetic series.
  - Top-down: MEMOIZED-CUT-ROD solves each subproblem just once, and it solves subproblems for sizes 0, 1, ..., n. To solve a subproblem of size n, the for loop iterates n times ⇒ over all recursive calls, the total number of iterations forms an arithmetic series.
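  - In both cases the series in question is 1 + 2 + ... + n = n(n + 1)/2 = Θ(n^2).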
• How to understand the subproblems involved and how they depend on each other.
• Directed graph:
  - One vertex for each distinct subproblem.
  - Has a directed edge (x, y) if computing an optimal solution to subproblem x requires knowing an optimal solution to subproblem y.
• Example: For the rod-cutting problem with n = 4:
• Can think of the subproblem graph as a collapsed version of the tree of recursive calls, where all nodes for the same subproblem are collapsed into a single vertex, and all edges go from parent to child.
• The subproblem graph can help determine running time. Because we solve each subproblem just once, running time is the sum of the times needed to solve each subproblem.
  - Time to compute the solution to a subproblem is typically linear in the out-degree (number of outgoing edges) of its vertex.
  - Number of subproblems equals number of vertices.
• When these conditions hold, running time is linear in the number of vertices and edges.
• So far, have focused on computing the value of an optimal solution, rather than the choices that produced an optimal solution.
• Extend the bottom-up approach to record not just optimal values, but optimal choices. Save the optimal choices in a separate table. Then use a separate procedure to print the optimal choices.
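A Python sketch of this extension for rod cutting (following CLRS's EXTENDED-BOTTOM-UP-CUT-ROD and PRINT-CUT-ROD-SOLUTION; the separate table s[j] records the optimal size of the first piece for a rod of length j):

    def extended_bottom_up_cut_rod(p, n):
        # Same as the bottom-up version, but also record optimal choices.
        r = [0] * (n + 1)
        s = [0] * (n + 1)
        for j in range(1, n + 1):
            q = float("-inf")
            for i in range(1, j + 1):
                if q < p[i] + r[j - i]:
                    q = p[i] + r[j - i]
                    s[j] = i          # remember the choice, not just its value
            r[j] = q
        return r, s

    def print_cut_rod_solution(p, n):
        # Print the piece sizes of one optimal solution.
        r, s = extended_bottom_up_cut_rod(p, n)
        while n > 0:
            print(s[n])               # cut off the recorded first piece...
            n = n - s[n]              # ...and repeat on the remainder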
• Problem: Given 2 sequences, X = <x_1, ..., x_m> and Y = <y_1, ..., y_n>, find a subsequence common to both whose length is longest. A subsequence doesn't have to be consecutive, but it has to be in order.
• Brute-force algorithm: For every subsequence of X, check whether it's a subsequence of Y.
• Time: Θ(n 2^m).
  - 2^m subsequences of X to check.
  - Each subsequence takes Θ(n) time to check: scan Y for the first letter, from there scan for the second, and so on.
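A Python sketch of that Θ(n) check (an illustration; this helper is not from the slides):

    def is_subsequence(z, y):
        # Greedy scan of y, matching the letters of z in order.
        j = 0
        for ch in y:
            if j < len(z) and ch == z[j]:
                j += 1                 # matched z[j]; look for the next letter
        return j == len(z)

    print(is_subsequence("bo", "bozo"))  # True
    print(is_subsequence("oz", "bat"))   # False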
• Notation:
  - X_i = prefix <x_1, ..., x_i>
  - Y_i = prefix <y_1, ..., y_i>

Π 
m Det  =
   be any DCS of X and Y.
. f xm = yn, then  = xm = yn and k- is an DCS of Xm- and Yn-
.
2. f xm Ë yn, then  Ë xm â  is an DCS of Xm- and Y.
3. f xm Ë yn, then  Ë yn â  is an DCS of X and Yn-.
• Proof (1): First show that z_k = x_m = y_n. Suppose not. Then make a subsequence Z' = <z_1, ..., z_k, x_m>. It's a common subsequence of X and Y and has length k + 1 ⇒ Z' is a longer common subsequence than Z ⇒ contradicts Z being an LCS.
• Now show Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}. Clearly, it's a common subsequence.
• Now suppose there exists a common subsequence W of X_{m-1} and Y_{n-1} that's longer than Z_{k-1} ⇒ length of W ≥ k. Make subsequence W' by appending x_m to W. W' is a common subsequence of X and Y and has length ≥ k + 1 ⇒ contradicts Z being an LCS.
• Proof (2): If z_k ≠ x_m, then Z is a common subsequence of X_{m-1} and Y. Suppose there exists a subsequence W of X_{m-1} and Y with length > k. Then W is a common subsequence of X and Y ⇒ contradicts Z being an LCS.
• Proof (3): Symmetric to (2).

• Therefore, an LCS of two sequences contains as a prefix an LCS of prefixes of the sequences.
• Again, we could write a recursive algorithm based on this formulation.
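The recurrence such an algorithm would implement (the standard CLRS formulation, with c[i, j] = length of an LCS of X_i and Y_j):

    c[i, j] = 0                             if i = 0 or j = 0
    c[i, j] = c[i-1, j-1] + 1               if i, j > 0 and x_i = y_j
    c[i, j] = max(c[i-1, j], c[i, j-1])     if i, j > 0 and x_i ≠ y_j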
• Try with bozo, bat.
• Lots of repeated subproblems.
• Instead of recomputing, store in a table.
• Initial call is PRINT-LCS(b, X, m, n).
• b[i, j] points to the table entry whose subproblem we used in solving the LCS of X_i and Y_j.
• When b[i, j] = ↖, we have extended the LCS by one character. So the longest common subsequence = entries with ↖ in them.
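A Python sketch of the table-filling procedure and PRINT-LCS (following CLRS's LCS-LENGTH; the arrows stored in b say which subproblem's solution was used, and the usage example uses the word pair reconstructed on the next slide):

    def lcs_length(x, y):
        # c[i][j] = LCS length of prefixes x[:i] and y[:j];
        # b[i][j] records the subproblem whose solution was used.
        m, n = len(x), len(y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        b = [[None] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                    b[i][j] = "↖"     # extended the LCS by one character
                elif c[i - 1][j] >= c[i][j - 1]:
                    c[i][j] = c[i - 1][j]
                    b[i][j] = "↑"
                else:
                    c[i][j] = c[i][j - 1]
                    b[i][j] = "←"
        return c, b

    def print_lcs(b, x, i, j):
        # Walk b backwards from (i, j), printing the LCS left to right.
        if i == 0 or j == 0:
            return
        if b[i][j] == "↖":
            print_lcs(b, x, i - 1, j - 1)
            print(x[i - 1], end="")
        elif b[i][j] == "↑":
            print_lcs(b, x, i - 1, j)
        else:
            print_lcs(b, x, i, j - 1)

    c, b = lcs_length("amputation", "spanking")
    print_lcs(b, "amputation", 10, 8)   # prints: pain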
• What do amputation and spanking have in common? (Show only c[i, j].)
• Answer: pain.
• Time: Θ(mn).
• Mentioned already:
  - optimal substructure
  - overlapping subproblems
• Show that a solution to a problem consists of making a choice, which leaves one or more subproblems to solve.
• Suppose that you are given the choice that leads to an optimal solution.
• Given this choice, determine which subproblems arise and how to characterize the resulting space of subproblems.
• Show that the solutions to the subproblems used within the optimal solution must themselves be optimal. Usually use cut-and-paste:
  - Suppose that one of the subproblem solutions is not optimal.
  - Cut it out.
  - Paste in an optimal solution.
  - Get a better solution to the original problem. Contradicts optimality of the original solution.
• That was optimal substructure.
• Need to ensure that you consider a wide enough range of choices and subproblems that you get them all. Try all the choices, solve all the subproblems resulting from each choice, and pick the choice whose solution, along with the subproblem solutions, is best.
• How to characterize the space of subproblems?
  - Keep the space as simple as possible.
  - Expand it as necessary.
• Rod cutting:
  - Space of subproblems was rods of length n - i, for 1 ≤ i ≤ n.
  - No need to try a more general space of subproblems.
• Optimal substructure varies across problem domains:
  1. How many subproblems are used in an optimal solution.
  2. How many choices there are in determining which subproblem(s) to use.
• Rod cutting:
  - 1 subproblem (of size n - i)
  - n choices
• Longest common subsequence:
  - 1 subproblem
  - Either
    * 1 choice (if x_i = y_j, the LCS of X_{i-1} and Y_{j-1}), or
    * 2 choices (if x_i ≠ y_j, the LCS of X_{i-1} and Y, and the LCS of X and Y_{j-1})
• Informally, running time depends on (# of subproblems overall) × (# of choices).
  - Rod cutting: Θ(n) subproblems, ≤ n choices for each ⇒ O(n^2) running time.
  - Longest common subsequence: Θ(mn) subproblems, ≤ 2 choices for each ⇒ O(mn) running time.
• Dynamic programming uses optimal substructure bottom up:
  - First find optimal solutions to subproblems.
  - Then choose which to use in an optimal solution to the problem.
• When we look at greedy algorithms, we'll see that they work top down: first make a choice that looks best, then solve the resulting subproblem.
• Don't be fooled into thinking optimal substructure applies to all optimization problems. It doesn't.
• We also need to have overlapping subproblems.
  - These occur when a recursive algorithm revisits the same problem over and over.
  - Good divide-and-conquer algorithms usually generate a brand-new problem at each stage of recursion.
  - Example: merge sort.
• Alternative approach to dynamic programming: memoization.
  - "Store, don't recompute."
  - Make a table indexed by subproblem.
  - When solving a subproblem:
    * Look up in the table.
    * If the answer is there, use it.
    * Else, compute the answer, then store it.
  - In bottom-up dynamic programming, we go one step further. We determine in what order we'd want to access the table, and fill it in that way.
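One way to package that table lookup in Python (a sketch; the decorator and the tuple-of-prices calling convention are illustration choices, not from the slides):

    def memoize(f):
        # "Store, don't recompute": a table indexed by subproblem
        # (the argument tuple); compute and store only on a miss.
        table = {}
        def wrapper(*args):
            if args not in table:
                table[args] = f(*args)
            return table[args]
        return wrapper

    @memoize
    def cut_rod(p, n):
        # p is a tuple of prices, hashable so it can index the table
        if n == 0:
            return 0
        return max(p[i] + cut_rod(p, n - i) for i in range(1, n + 1))

    print(cut_rod((0, 1, 5, 8, 9, 10, 17, 17, 20), 8))  # 22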