
Homework 1 Solutions

Fall 2012 EECE 320 UBC


1. (Proofs.)

(a) Show that a + b can be less than min(a, b). (Using predicates and quantifiers, we can state this as: show that ∃(a, b) ∈ R² such that a + b < min(a, b), where R is the set of real numbers.)

(b) Show that a · b can be less than min(a, b).

(c) Prove by induction (on n ≥ 1) that for every x ≠ 1,

    ∑_{i=0}^{n} x^i = (x^(n+1) − 1) / (x − 1).

(This is the sum of a geometric progression. You must remember this result for future reference.)

(d) Prove by induction that for n ≥ 1,

    ∑_{i=1}^{n} 1/(i(i+1)) = n/(n+1).

Solution:

(a) Let a = −2 and b = −3. Then a + b = −5 < −3 = min(−2, −3).

(b) Let a = −2 and b = 3. Then a · b = −6 < −2 = min(−2, 3).

(c) For n = 1, the LHS is ∑_{i=0}^{1} x^i = 1 + x. The RHS evaluates to (x² − 1)/(x − 1) = 1 + x for n = 1; this simplification is valid because x ≠ 1. Since the RHS equals the LHS, we are done for n = 1. Suppose that

    ∑_{i=0}^{n} x^i = (x^(n+1) − 1)/(x − 1)

holds for some integer n ≥ 1. Then

    ∑_{i=0}^{n+1} x^i = ∑_{i=0}^{n} x^i + x^(n+1)
                      = (x^(n+1) − 1)/(x − 1) + x^(n+1)    [by the induction hypothesis]
                      = (x^(n+1) − 1 + x^(n+2) − x^(n+1))/(x − 1)
                      = (x^(n+2) − 1)/(x − 1).

Thus the result follows for all n ≥ 1.
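As a quick numerical sanity check of the closed form (not a substitute for the proof), both sides can be compared for a few sample values of x ≠ 1:

```python
# Check sum_{i=0}^{n} x^i == (x^(n+1) - 1)/(x - 1) for sample x != 1.
def geometric_sum(x, n):
    # Direct evaluation of the left-hand side.
    return sum(x**i for i in range(n + 1))

def closed_form(x, n):
    # Right-hand side; valid only when x != 1.
    return (x**(n + 1) - 1) / (x - 1)

for x in (2.0, 0.5, -3.0):
    for n in (1, 5, 10):
        assert abs(geometric_sum(x, n) - closed_form(x, n)) < 1e-9
```

For example, geometric_sum(2.0, 3) gives 1 + 2 + 4 + 8 = 15, and (2⁴ − 1)/(2 − 1) = 15 as well.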

(d) For n = 1, the LHS is ∑_{i=1}^{1} 1/(i(i+1)) = 1/2. The RHS of the given equation evaluates to 1/2 as well for n = 1, so we have shown the base case. Suppose that

    ∑_{i=1}^{n} 1/(i(i+1)) = n/(n+1)

holds for some integer n ≥ 1. Then

    ∑_{i=1}^{n+1} 1/(i(i+1)) = ∑_{i=1}^{n} 1/(i(i+1)) + 1/((n+1)(n+2))
                             = n/(n+1) + 1/((n+1)(n+2))    [by the induction hypothesis]
                             = (n(n+2) + 1)/((n+1)(n+2))
                             = (n² + 2n + 1)/((n+1)(n+2))
                             = (n+1)²/((n+1)(n+2))
                             = (n+1)/(n+2).

The induction hypothesis for any n ≥ 1 thus implies that the result holds for n + 1. Combined with the base case, we have shown that the result holds for all n ≥ 1.

Grading rubric. Parts (a) and (b) are worth 1 point each. Parts (c) and (d) are worth 4 points each. For the proofs by induction, the base case is worth 1 point, the induction hypothesis is worth 1 point, and the induction step is worth 2 points.

2. (Proofs of correctness.)

(a) Prove the correctness of the following sorting procedure (Algorithm 1).

    function sort(A : list[1 .. n])
        var int i, j
        for i ← n downto 1 do
            for j ← 1 to i − 1 do
                if A[j] > A[j + 1] then
                    swap the values in A[j] and A[j + 1]
                end if
            end for
        end for

Algorithm 1: A simple sorting algorithm.

(b) Prove the correctness of the following recursive algorithm (Algorithm 2) for integer multiplication.

Require: x ≥ 0, y ≥ 0

    function multiply(int x, int y)
        c ← 2
        if y = 0 then
            return 0
        else
            return multiply(c · x, ⌊y/c⌋) + x · (y mod c)
        end if

Comment: ⌊y⌋ is the greatest integer z such that z ≤ y.
Comment: y mod c is the remainder when y is divided by c.

Algorithm 2: Multiplying two natural numbers.
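Algorithm 2 translates almost line for line into Python (a minimal sketch: c = 2 as in the pseudocode, with `//` computing the floor of y/c for non-negative integers):

```python
def multiply(x, y):
    """Multiply two natural numbers by doubling x and halving y (Algorithm 2)."""
    c = 2
    if y == 0:
        return 0
    # y // c is floor(y/c) for non-negative integers;
    # x * (y % c) adds an extra x exactly when y is odd.
    return multiply(c * x, y // c) + x * (y % c)
```

For example, multiply(7, 9) unwinds as 7·1 + 14·0 + 28·0 + 56·1 = 63. The recursion depth is about log₂ y, matching the halving of y in each call.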

Solution:

(a) This algorithm is called Bubble Sort. The inner loop maintains the following invariant: after executing iteration j, A[j + 1] is the maximum of A[1] through A[j + 1]. Hence, for each value of i in the outer loop, the inner loop pushes the largest element among A[1] through A[i] to location A[i]. Notice that the outer loop runs from i = n down to i = 1, so every iteration ensures that the largest of A[1] through A[i], starting with the whole array, is pushed to location i, i.e., to the rightmost position of the subarray under consideration. Thus max{A[1], . . . , A[n]} is placed at location n after the first iteration of the outer loop. In the second iteration (i = n − 1), the subarray considered is A[1] through A[n − 1]. But there cannot possibly be an element in this subarray that is greater than the maximum of the elements of the whole array. So at the end of the second iteration, the second largest element of the whole array is placed at location n − 1. Continuing in this manner, at the end of the last iteration of the outer loop, the smallest element (or nth largest, in a dual sense) is placed at location 1. Thus after the algorithm terminates, the elements of A will be sorted in ascending order.

(b) We can prove that the multiply algorithm is correct using induction on y.

Base case. For y = 0, the algorithm correctly returns 0.

Induction hypothesis. Assume that the algorithm works correctly for all x ≥ 0 and y ≤ n.

Induction step. We need to show that the algorithm works correctly for x ≥ 0 and y = n + 1. The algorithm makes a recursive call multiply(2x, ⌊(n + 1)/2⌋) and adds x · ((n + 1) mod 2) to its result. Now ⌊(n + 1)/2⌋ ≤ n, and therefore, by the induction hypothesis, the recursive call to multiply returns the correct result, namely 2x · ⌊(n + 1)/2⌋. Using the definition of the floor function,

    2⌊(n + 1)/2⌋ = (n + 1) − ((n + 1) mod 2).

Combining this observation with the fact that the recursive call to multiply returns the correct product of 2x and ⌊(n + 1)/2⌋, and adding the term x · ((n + 1) mod 2) to this product, the algorithm correctly returns the product of x and n + 1.

Grading rubric. Each proof is worth 5 points. For part (a), stating the correct invariant is worth 2 points. Establishing that the invariant holds is worth 2 points. Drawing the conclusion is worth 1 point. For part (b), the base case is worth 1 point and the induction hypothesis is worth 1 point. The induction step is worth 3 points, with 1 point being for stating the useful fact about the floor function.
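The sorting procedure of Algorithm 1 can also be sketched in Python for experimentation (indices shifted to 0-based, so the invariant becomes: after outer iteration i, the largest of A[0..i] sits at A[i]):

```python
def bubble_sort(A):
    """In-place bubble sort following Algorithm 1, with 0-based indices."""
    n = len(A)
    for i in range(n - 1, -1, -1):      # outer loop: i = n-1 down to 0
        for j in range(i):              # inner loop: j = 0 .. i-1
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]
    return A
```

For instance, bubble_sort([5, 2, 4, 1]) returns [1, 2, 4, 5]: the first outer pass moves 5 to the end, the next pass moves 4 into place, and so on.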

3. (Asymptotic complexity.) Arrange the following functions in ascending order of growth rate. (If function g(n) immediately follows function f(n) in your list then it should be the case that f(n) is O(g(n)).)

    h1(n) = n^2.9
    h2(n) = √(2n) + 3
    h3(n) = n + 17
    h4(n) = 10 · 2^n
    h5(n) = 100^n
    h6(n) = n² log n

Solution: We know from the text that polynomials (i.e., sums of terms where n is raised to fixed powers, even if those powers are not integers) grow slower than exponentials. Thus, we will consider h1, h2, h3, h6 as a group, and then put h4 and h5 after them. For polynomials hi and hj, we know that hi and hj can be ordered by comparing the highest exponent on any term in hi to the highest exponent on any term in hj. Thus, we can put h2 before h3 before h1. Now, where to insert h6? It grows faster than n², and from the text we know that logarithms grow slower than polynomials, so h6 grows slower than n^c for any c > 2. Thus we can insert h6 in this order between h3 and h1. Finally come h4 and h5. We know that exponentials can be ordered by their bases, so we can put h4 before h5.

Grading rubric. The correct ordering is worth 6 points, and the explanations are worth 4 points. Any inversion in the ordering will result in a 1 point reduction. Although we have presented a simpler proof of the ordering, you should establish the ordering using the definition of the O() notation and by finding appropriate constants.

4. (Array operations.) For an integer array A of size n, compute a 2D array B such that B[i, j] = ∑_{k=i}^{j} A[k] for i < j. B[i, j] for i > j is undefined and does not matter. An elementary algorithm to compute B is as follows:

    for i = 1 to n do
        for j = i + 1 to n do
            Sum array entries A[i] through A[j]
            Store the result in B[i, j]
        end
    end

Algorithm 3: Computing array B.

(a) Give a bound of the form O(f(n)) on the running time of this algorithm on an input of size n.

(b) For this same function f, show that the running time of the algorithm on an input of size n is also Ω(f(n)). (This shows an asymptotically tight bound of Θ(f(n)) on the running time.)

(c) Although the algorithm you just analyzed is an elementary approach to the problem, it is inefficient. Describe an improved algorithm, with asymptotically better running time. In other words, you should design an algorithm with running time O(g(n)), where lim_{n→∞} g(n)/f(n) = 0.

Solution:

(a) We prove this for f(n) = n³. The outer loop of the given algorithm runs for exactly n iterations, and the inner loop runs for at most n iterations every time it is executed. Therefore, the line of code that adds up array entries A[i] through A[j] (for various i and j) is executed at most n² times. Adding up array entries A[i] through A[j] takes O(j − i + 1) operations, which is always at most O(n). Storing the result in B[i, j] requires only constant time. Therefore, the running time of the entire algorithm is at most n² · O(n), and so the algorithm runs in time O(n³).

(b) Consider the times during the execution of the algorithm when i ≤ n/4 and j ≥ 3n/4. In these cases, j − i + 1 ≥ 3n/4 − n/4 + 1 > n/2. Therefore, adding up the array entries A[i] through A[j] requires at least n/2 operations, since there are more than n/2 entries to add up. How many times during the execution of the algorithm do we encounter such cases? There are (n/4)² pairs (i, j) with i ≤ n/4 and j ≥ 3n/4. The given algorithm enumerates over all of them, and as shown above, it must perform at least n/2 operations for each such pair. Therefore, the algorithm must perform at least (n/2)(n/4)² = n³/32 operations. This is Ω(n³), as desired.

(c) Consider the following algorithm.

    for i = 1 to n − 1 do
        Set B[i, i + 1] to A[i] + A[i + 1]
    end
    for k = 2 to n − 1 do
        for i = 1 to n − k do
            Set j = i + k
            Set B[i, j] to be B[i, j − 1] + A[j]
        end
    end

The algorithm first computes B[i, i + 1] for all i by summing A[i] with A[i + 1]; this requires O(n) operations. For each k, it then computes all B[i, j] with j − i = k by setting B[i, j] = B[i, j − 1] + A[j]. The algorithm works since the values B[i, j − 1] were already computed in a previous iteration of the outer for loop, when k was j − 1 − i (note that j − 1 − i < j − i). For each k, this algorithm performs O(n) operations, since there are at most n entries B[i, j] with j − i = k. There are fewer than n values of k to iterate over, so this algorithm has running time O(n²).

Grading rubric. Part (a) is worth 2 points. No credit for an incorrect answer. Part (b) is worth 2 points. Part (c) is worth 6 points: 2 points for a clear description of the algorithm, 1 point for a proof of correctness, 2 points for deriving the runtime complexity, and 1 point for establishing the required limit.
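The improved algorithm of part (c) can be sketched in Python and cross-checked against the elementary O(n³) method of Algorithm 3 (a sketch with my own conventions: 0-based indices, and B stored as a dict keyed by (i, j)):

```python
def partial_sums_fast(A):
    """O(n^2): B[(i, j)] = A[i] + ... + A[j] for i < j, each sum obtained
    by extending the already-computed B[(i, j-1)] with one entry A[j]."""
    n = len(A)
    B = {}
    for i in range(n - 1):
        B[(i, i + 1)] = A[i] + A[i + 1]       # sums over gaps of size 1
    for k in range(2, n):                      # gap k = j - i
        for i in range(n - k):
            j = i + k
            B[(i, j)] = B[(i, j - 1)] + A[j]   # reuse the shorter sum
    return B

def partial_sums_slow(A):
    """O(n^3) elementary method of Algorithm 3, for comparison."""
    n = len(A)
    return {(i, j): sum(A[i:j + 1]) for i in range(n) for j in range(i + 1, n)}
```

For A = [3, 1, 4, 1, 5], both functions produce the same dictionary; for example, the full-array sum B[(0, 4)] is 14.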
