Merge sort recursively sorts the two halves of the array, then merges the two sorted
halves into a fully sorted array.
How to merge: both halves are sorted, so take the smallest element remaining in either half
(the global minimum) and append it to the output array. Continue until both halves are empty.
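The recursion and merge step above can be sketched in Python (function names are illustrative, not from the notes):

```python
def merge(left, right):
    """Merge two sorted lists by repeatedly taking the smaller front element."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # take the global minimum of the two fronts
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One half is exhausted; the remainder of the other is already sorted.
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def merge_sort(arr):
    if len(arr) <= 1:             # base case: 0 or 1 elements is already sorted
        return arr
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```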
Claim: merge sort requires <= 6n*log2(n) + 6n ops to sort n >= 1 numbers.
Recall: log2(n) = # of times you divide n by 2 until you get down to 1 (grows
much more slowly than n)
Proof of claim:
Assume n = power of 2 for simplicity
Total # ops at level j (not including actually making the next recursive call): <=
2^j * 6(n/(2^j)) = 6n per level
There are log2(n) + 1 levels (j = 0, 1, ..., log2(n)), so the total is
<= 6n(log2(n) + 1) = 6n*log2(n) + 6n. QED
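As a sanity check, the level-by-level count can be summed numerically (assuming n is a power of 2, with n = 1024 as an arbitrary example):

```python
import math

# Level j has 2**j subproblems of size n / 2**j, each charged <= 6 * (n / 2**j) ops.
n = 1024
levels = int(math.log2(n)) + 1  # j = 0, 1, ..., log2(n)
total = sum((2**j) * 6 * (n // 2**j) for j in range(levels))

# Summing 6n over all levels matches the closed form 6n*log2(n) + 6n.
print(total, 6 * n * int(math.log2(n)) + 6 * n)  # 67584 67584
```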
Comments/Takeaways:
1) Used worst-case analysis so that the runtime bound holds for every array you
might encounter. Alternatively, could test on benchmarks or “practical” inputs
a. Why worst case? “Real” data is difficult to define
b. Easier to analyze
c. Focus is on designing algorithms that always do well,
regardless of scenario
2) Won’t pay much attention to constants or lower-order terms
a. Way easier to ignore them
b. Lose very little predictive power by ignoring constants
c. Really only interested in big problems (large n)
3) Fast algorithm: worst-case runtime grows slowly with the
input size n
4) Usually want as close to linear (O(n)) time as possible (binary
search actually takes log2(n) time, faster than O(n))
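The binary search mentioned above, which halves the search range each step and hence runs in O(log2(n)) time, can be sketched as (names are illustrative):

```python
def binary_search(arr, target):
    """Return the index of target in sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # halve the search range each iteration
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                     # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```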
Asymptotic Notation
Ex #1: if
T(n) = ak*n^k + … + a1*n + a0
then T(n) = O(n^k)
Proof: choose n0 = 1 and
C = abs(ak) + abs(a(k-1)) + … + abs(a1) + abs(a0)
Then for all n >= 1:
T(n) <= abs(ak)*n^k + … + abs(a1)*n + abs(a0)
     <= abs(ak)*n^k + … + abs(a1)*n^k + abs(a0)*n^k
     = C*n^k  QED
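The choice of C in the proof can be checked numerically (the coefficients below are arbitrary examples):

```python
coeffs = [4, -3, 0, 2]           # T(n) = 4n^3 - 3n^2 + 0n + 2
k = len(coeffs) - 1              # degree k = 3
C = sum(abs(a) for a in coeffs)  # C = abs(ak) + ... + abs(a0) = 9

def T(n):
    return sum(a * n**(k - i) for i, a in enumerate(coeffs))

# With n0 = 1, T(n) <= C * n^k holds for every n >= 1.
assert all(T(n) <= C * n**k for n in range(1, 1000))
print("bound holds for n = 1..999")
```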
Ex #2 (non-example): n^k is not O(n^(k-1))
Proof by contradiction:
Suppose n^k = O(n^(k-1))
Then there exist constants C and n0 such that:
n^k <= C*n^(k-1) for all n >= n0
Cancel off n^(k-1) from both sides: n <= C for all n >= n0
We can choose arbitrarily large n, so this is false. Contradiction. QED
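The contradiction can be seen concretely: for any fixed C, picking n = C + 1 breaks the claimed inequality (the values of k and C below are arbitrary examples):

```python
k, C = 3, 100  # any fixed degree k >= 1 and constant C
n = C + 1      # choose n larger than C

# n^k <= C * n^(k-1) would force n <= C, which fails for this n.
assert n**k > C * n**(k - 1)
print(f"n = {n}: n^k = {n**k} > C*n^(k-1) = {C * n**(k - 1)}")
```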
Basically, n0 is the point where the two functions cross, beyond which
C*f(n) is always at least T(n), i.e., T(n) <= C*f(n) for all n >= n0