
By: Sourav Dey

Introduction
Optimization problem: there can be many possible solutions; each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value.
Greedy algorithm: an algorithmic technique for solving optimization problems. It always makes the choice that looks best at the moment, i.e., it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.

Making Change
Someone comes to your store and makes a purchase of $98.67. He/she gives you $100. You want to give back change using the fewest coins possible.
INPUT: the values of the coins C1, C2, . . . , Ck, and an integer N. Assume that some coin has value 1.

GOAL: find a multiset of coins S whose sum is N such that the total number of coins is minimized.

A greedy approach is to repeatedly add the highest-value coin possible.

Example: Counting money


Suppose you want to count out a certain amount of money using the fewest possible bills and coins. A greedy algorithm would do this as follows: at each step, take the largest possible bill or coin that does not overshoot.

Example: to make $6.39, you can choose:

a $5 bill
a $1 bill, to make $6
a 25¢ coin, to make $6.25
a 10¢ coin, to make $6.35
four 1¢ coins, to make $6.39

For US money, the greedy algorithm always gives the optimum solution

The algorithm
Greedy-Algorithm (C, N)
1. sort coins so that C1 ≥ C2 ≥ . . . ≥ Ck
2. S = ∅
3. Change = 0
4. i = 1
5. while Change ≠ N do          \\ add the most valuable coins first
6.     if Change + Ci ≤ N then
7.         Change = Change + Ci
8.         S = S ∪ {Ci}
9.     else i = i + 1            \\ check the next (smaller) coin
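To make the idea concrete, here is a minimal runnable sketch of the pseudocode above in Python. It assumes the denominations are integers (e.g., US money expressed in cents, so $6.39 becomes 639); the function name greedy_change is illustrative and not part of the original slides.

```python
# A minimal sketch of the greedy change-making rule above, assuming integer
# denominations (cents) to avoid floating-point issues. Names are illustrative.

def greedy_change(denominations, amount):
    """Return a list of coins/bills (largest first) whose sum is `amount`."""
    chosen = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:          # take the largest coin that does not overshoot
            chosen.append(coin)
            amount -= coin
    return chosen

# US denominations in cents: $5, $1, 25c, 10c, 5c, 1c
us = [500, 100, 25, 10, 5, 1]
print(greedy_change(us, 639))
# [500, 100, 25, 10, 1, 1, 1, 1] -> the $6.39 example worked out above
```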

A failure of the greedy algorithm


In some (fictional) monetary system, krons come in 1 kron, 7 kron, and 10 kron coins. Using a greedy algorithm to count out 15 krons, you would get:

a 10 kron piece
five 1 kron pieces

for a total of 15 krons. This requires six coins. A better solution would be to use two 7 kron pieces and one 1 kron piece, which requires only three coins. The greedy algorithm results in a solution, but not in an optimal solution.
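As a quick check, the same greedy rule can be run on the kron system; this small self-contained snippet (names are illustrative) reproduces the six-coin result.

```python
# A self-contained check of the kron example, using the same greedy rule
# (always take the largest coin that still fits).

def greedy_change(denominations, amount):
    chosen = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            chosen.append(coin)
            amount -= coin
    return chosen

print(greedy_change([1, 7, 10], 15))   # [10, 1, 1, 1, 1, 1] -> 6 coins
# The optimal answer, [7, 7, 1], uses only 3 coins, so greedy is not optimal here.
```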

General characteristics of Greedy algorithms


Greedy algorithms solve a problem in an optimal way. We start with a set of candidates; as the algorithm proceeds, we accumulate two other sets: one contains candidates that have already been considered and chosen, while the other contains candidates that have been considered and rejected.

There is a function that checks whether a particular set of candidates provides a solution to our problem, ignoring questions of optimality for the time being.
A second function checks whether a set of candidates is feasible, that is, whether or not it is possible to complete the set by adding further candidates so as to obtain at least one solution to our problem.
Another function, the selection function, indicates at any time which of the remaining candidates (those that have neither been chosen nor rejected) is the most promising.
Finally, an objective function gives the value of a solution we have found.

Steps:
1. Initially, the set of chosen candidates is empty.
2. The best remaining untried candidate is considered for addition to this set.
3. If the enlarged set of chosen candidates would no longer be feasible, we reject the candidate we are currently considering.
4. A candidate that has been tried and rejected is never considered again.
5. If the enlarged set is still feasible, then we add the current candidate to the set of chosen candidates, where it will stay from now on.
6. Each time the set is enlarged, we check whether it now constitutes a solution to our problem.
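The characteristics and steps above can be collected into a generic schema. The sketch below is one possible Python rendering, under the assumption that the concrete problem supplies the three helper callables (is_solution, feasible, select); none of these names come from the slides.

```python
# A minimal sketch of the general greedy schema described above. The callables
# `is_solution`, `feasible`, and `select` are placeholders that a concrete
# problem (coin changing, knapsack, ...) must supply.

def greedy(candidates, is_solution, feasible, select):
    chosen = set()
    remaining = set(candidates)
    while remaining and not is_solution(chosen):
        best = select(remaining)          # step 2: most promising untried candidate
        remaining.remove(best)            # step 4: it is never reconsidered
        if feasible(chosen | {best}):     # steps 3 and 5: keep it only if still feasible
            chosen.add(best)
    return chosen if is_solution(chosen) else None   # step 6: did we reach a solution?
```

For the change-making example, select would return the largest remaining coin value and feasible would check that the running total does not exceed N (using a multiset rather than a set, since coins may repeat).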

The Fractional Knapsack Problem


Given: a set S of n items, with each item i having
  vi: a positive value
  wi: a positive weight

Goal: choose items with maximum total value but with total weight at most W. If we are allowed to take fractional amounts of the items, then this is the fractional knapsack problem.
In this case, we let $x_i$ denote the fraction of item $i$ that we take, so $0 \le x_i \le 1$.

Objective: maximize $\sum_{i \in S} v_i x_i$

Constraint: $\sum_{i \in S} w_i x_i \le W$, with $0 \le x_i \le 1$ for $1 \le i \le n$

Example
n = 5, W = 100

   i   :   1     2     3     4     5
   wi  :  10    20    30    40    50
   vi  :  20    30    66    40    60
 vi/wi : 2.0   1.5   2.2   1.0   1.2

Select by:      x1    x2    x3    x4    x5    Value
  Max vi         0     0     1    0.5    1     146
  Min wi         1     1     1     1     0     156
  Max vi/wi      1     1     1     0    0.8    164
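The three rows of the table can be reproduced mechanically. The following self-contained snippet (the helper name fractional_greedy is illustrative) runs the same filling loop with the three different selection orders and prints the resulting values 146, 156, and 164.

```python
# Reproduce the table above: run one greedy filling loop with three selection
# orders and report the total value each rule achieves.

def fractional_greedy(weights, values, W, order):
    x = [0.0] * len(weights)
    remaining = W
    for i in order:                      # indices, most promising first
        take = min(weights[i], remaining)
        x[i] = take / weights[i]
        remaining -= take
        if remaining == 0:
            break
    return x, sum(x[i] * values[i] for i in range(len(weights)))

w = [10, 20, 30, 40, 50]
v = [20, 30, 66, 40, 60]
n, W = len(w), 100

by_value  = sorted(range(n), key=lambda i: -v[i])          # Max vi
by_weight = sorted(range(n), key=lambda i: w[i])           # Min wi
by_ratio  = sorted(range(n), key=lambda i: -v[i] / w[i])   # Max vi/wi

for name, order in [("Max vi", by_value), ("Min wi", by_weight), ("Max vi/wi", by_ratio)]:
    x, val = fractional_greedy(w, v, W, order)
    print(name, x, val)   # values 146, 156, 164 as in the table
```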

Theorem: If objects are selected in order of decreasing $v_i / w_i$, then the greedy knapsack algorithm finds an optimal solution.

Proof: Let $X = (x_1, \ldots, x_n)$ be the greedy solution.

Case 1: If all $x_i = 1$, then the solution is clearly optimal.

Case 2: Let $j$ be the smallest index such that $x_j < 1$. Then $x_i = 1$ when $i < j$, $x_i = 0$ when $i > j$, and $\sum_{i=1}^{n} x_i w_i = W$. The value of the solution $X$ is
$$V(X) = \sum_{i=1}^{n} x_i v_i.$$

Now let $Y = (y_1, \ldots, y_n)$ be any feasible solution. Then
$$\sum_{i=1}^{n} y_i w_i \le W, \quad\text{and hence}\quad \sum_{i=1}^{n} (x_i - y_i) w_i \ge 0.$$
The value of the solution $Y$ is
$$V(Y) = \sum_{i=1}^{n} y_i v_i.$$

Now,
$$V(X) - V(Y) = \sum_{i=1}^{n} (x_i - y_i) v_i = \sum_{i=1}^{n} (x_i - y_i) w_i \frac{v_i}{w_i}.$$

When $i < j$: $x_i = 1$, so $x_i - y_i$ is positive or zero, while $v_i / w_i \ge v_j / w_j$.
When $i > j$: $x_i = 0$, so $x_i - y_i$ is negative or zero, while $v_i / w_i \le v_j / w_j$.
When $i = j$: $v_i / w_i = v_j / w_j$.

Thus, in every case, $(x_i - y_i)(v_i / w_i) \ge (x_i - y_i)(v_j / w_j)$. Hence,
$$V(X) - V(Y) \ge \frac{v_j}{w_j} \sum_{i=1}^{n} (x_i - y_i) w_i \ge 0.$$

So no feasible solution can have a value greater than $V(X)$, and the solution $X$ is optimal.

GREEDY-FRACTIONAL-KNAPSACK (w, v, W)
 1. for i ← 1 to n
 2.     do x[i] ← 0
 3. weight ← 0
 4. while weight < W
 5.     do i ← best remaining item       \\ largest vi / wi not yet considered
 6.        if weight + w[i] ≤ W
 7.            then x[i] ← 1
 8.                 weight ← weight + w[i]
 9.            else
10.                x[i] ← (W − weight) / w[i]
11.                weight ← W
12. return x
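A runnable Python version of this pseudocode is sketched below, under the assumption that "best remaining item" means the item with the largest vi / wi not yet considered; variable names mirror the pseudocode but are otherwise illustrative.

```python
# A runnable sketch of GREEDY-FRACTIONAL-KNAPSACK, considering items in order
# of decreasing value/weight ratio (the "best remaining item" rule).

def greedy_fractional_knapsack(w, v, W):
    n = len(w)
    x = [0.0] * n
    weight = 0.0
    for i in sorted(range(n), key=lambda i: v[i] / w[i], reverse=True):
        if weight >= W:
            break
        if weight + w[i] <= W:
            x[i] = 1.0                   # take the whole item
            weight += w[i]
        else:
            x[i] = (W - weight) / w[i]   # take only the fraction that still fits
            weight = W
    return x

# The example above: optimal value 164 with x = (1, 1, 1, 0, 0.8)
w, v = [10, 20, 30, 40, 50], [20, 30, 66, 40, 60]
x = greedy_fractional_knapsack(w, v, 100)
print(x, sum(xi * vi for xi, vi in zip(x, v)))
```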

Analysis
If the items are already sorted into decreasing order of vi / wi, then the while loop (line 4) takes O(n) time; therefore, the total time including the sort is O(n log n).
If instead we keep the items in a heap with the largest vi / wi at the root, then:
(a) creating the heap takes O(n) time;
(b) the while loop now takes O(n log n) time (since the heap property must be restored after each removal of the root).
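As a sketch of the heap-based variant, Python's heapq (a min-heap) can be used by storing negated ratios so that the root always holds the largest vi / wi; the function name below is illustrative.

```python
# Heap-based variant of the fractional knapsack greedy, using heapq (a min-heap)
# with negated ratios so the largest vi/wi is popped first.

import heapq

def greedy_fractional_knapsack_heap(w, v, W):
    x = [0.0] * len(w)
    heap = [(-v[i] / w[i], i) for i in range(len(w))]
    heapq.heapify(heap)                  # O(n) heap construction
    weight = 0.0
    while heap and weight < W:
        _, i = heapq.heappop(heap)       # O(log n) to restore the heap property
        if weight + w[i] <= W:
            x[i] = 1.0
            weight += w[i]
        else:
            x[i] = (W - weight) / w[i]
            weight = W
    return x

print(greedy_fractional_knapsack_heap([10, 20, 30, 40, 50], [20, 30, 66, 40, 60], 100))
# [1.0, 1.0, 1.0, 0.0, 0.8], i.e. the same optimal solution as before
```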
