Moore's Law :
- The number of transistors on a CPU doubles every year
(well, more precisely, every 18 months)
- i.e. system performance doubles (well, almost) every 18 months
or so.
If this is so, and given how fast CPUs are these days, then why
bother worrying about how efficient our code is?
Sequence :
- A series of statements that do not alter the path within the algorithm
- A call to another method is considered a sequence statement also.
Selection :
- Our if-else statements
Loop :
- Our familiar for, while, do-while loops.
- Then, count how many steps are used for an input of that size:
+ A step is an elementary operation such as +, <, =, A[i]
n = 10 => 53 steps
n = 100 => 503 steps
n = 1,000 => 5003 steps
n = 1,000,000 => 5,000,003 steps
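These counts fit f(n) = 5n + 3. A hypothetical loop matching them can be sketched as below; the particular breakdown of steps per iteration (comparison, increment, array read, addition, assignment) is an illustrative assumption, not taken from the notes:

```java
public class StepCount {
    // Count elementary steps for a hypothetical loop whose cost is
    // f(n) = 5n + 3, matching the table above (n = 10 gives 53 steps).
    static long countSteps(int n) {
        long count = 0;
        count += 2;              // e.g. initialising sum and i: 2 steps
        for (int i = 0; i < n; i++) {
            count += 5;          // i < n, i++, A[i], +, = : 5 steps per pass
        }
        count += 1;              // the final failed comparison i < n
        return count;
    }
    public static void main(String[] args) {
        System.out.println(countSteps(10));      // 53
        System.out.println(countSteps(1000000)); // 5,000,003
    }
}
```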
Asymptotic Complexity: As n gets large, ignore all lower order terms and
concentrate on the highest order term only:
i.e. :
- Drop lower order terms such as +3
- Drop the constant coefficient of the highest order term.
Programming and Data Structures 9
Asymptotic Complexity (2)
The 5n+3 time bound is said to "grow asymptotically" like n.
[Figure: time (steps) vs. input size n, plotting 0.05n² = O(n²) against 3n = O(n); the quadratic curve overtakes the linear one at n = 60.]
If f(n) and g(n) are two complexity functions, then we say f(n) = O(g(n)) if
there are constants c > 0 and n0 such that f(n) <= c·g(n) for all n > n0.
[Figure: the function c·g(n) always dominates f(n) to the right of n0.]
Big O Notation
Think of f(n) = O(g(n)) as
- " f(n) grows at most like g(n)" or
- " f grows no faster than g"
(ignoring constant factors, for large n)
Important:
- Big-O is not a function!
- Never read = as "equals"
- Examples:
5n + 3 = O(n)
7n² − 2n + 1 = O(n²)
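To see how the definition applies to the first example (the constants chosen here are ours, not from the notes): take c = 6 and n0 = 3. Then

5n + 3 <= 6n  whenever  n >= 3

so 5n + 3 = O(n).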
Similarly, if f(n) and g(n) are two complexity functions, then we say f(n) = Ω(g(n)) if
there are constants c > 0 and n0 such that f(n) >= c·g(n) for all n > n0.
[Figure: in this instance, the function c·g(n) is dominated by f(n) to the right of n0.]
Example : 3n + 2 = Θ(n)
[Figure: f(n) sandwiched between c1·g(n) below and c·g(n) above, to the right of some n0.]
1. If lim(n→∞) f(n)/g(n) = 0, then f(n) = O(g(n))
2. If lim(n→∞) f(n)/g(n) = ∞, then f(n) = Ω(g(n))
3. If lim(n→∞) f(n)/g(n) = c (a non-zero constant), then f(n) = Θ(g(n))
Why?
Increasing Complexity
O(log_b n) = O(log n)  Logarithmic Time
O(n)                   Linear Time
O(n log n)
O(n²)                  Quadratic Time
O(n³)                  Cubic Time
...
O(kⁿ)                  Exponential Time
O(n!)                  Factorial Time
Suppose a program has run time O(n!) and the run time for
n = 10 is 1 second. Then for n = 12 it would take roughly 12 × 11 = 132 seconds,
and for n = 15 roughly 360,360 seconds (more than four days).
n     log₂n   5n      n log₂n   n²       2ⁿ
8     3       40      24        64       256
16    4       80      64        256      65536
32    5       160     160       1024     ~10⁹
64    6       320     384       4096     ~10¹⁹
128   7       640     896       16384    ~10³⁸
256   8       1280    2048      65536    ~10⁷⁷
In the code segment above, we cannot say that we iterate 1000 times, as the number of
times we iterate is governed by the variable i.
On inspection we can see that i doubles on each pass, so after x iterations its value
is 2ˣ; the loop stops once 2ˣ reaches n (i.e. n = 2ˣ).
Expressing this in terms of logarithms, we can say that there are log₂n iterations.
Analysing Loops: Logarithmic Loops (3)
This means that all steps within our loop get done log2n times
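The original code segment is not reproduced in these notes; a loop of the shape being described would be a sketch like this:

```java
public class LogLoop {
    // The control variable doubles each pass, so the body runs
    // log2(n) times when n is a power of two.
    static int iterations(int n) {
        int count = 0;
        for (int i = 1; i < n; i *= 2) {
            count++;            // loop body: executed log2(n) times
        }
        return count;
    }
    public static void main(String[] args) {
        System.out.println(iterations(1024)); // log2(1024) = 10
    }
}
```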
Treat just like a single loop and evaluate each level of nesting as needed.
The total number of iterations is the product of the number of inner loop iterations and
the number of outer loop iterations.
Analysing Loops: Nested Loops (2)
Our time complexity function for our bit of code is then :
f(n) = 1 + (n + n + n) · (n + n)
     = 1 + 3n · 2n
     = 1 + 6n²
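The bit of code being analysed is not shown in these notes; whatever its exact step counts, the dominant cost comes from a doubly nested loop whose inner body executes n · n times, as this sketch illustrates:

```java
public class NestedLoop {
    // Doubly nested loop over a full range: the inner body executes
    // n * n times, so the step count is a constant multiple of n²,
    // i.e. f(n) = O(n²).
    static long bodyExecutions(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {       // outer loop: n iterations
            for (int j = 0; j < n; j++) {   // inner loop: n iterations each
                count++;                    // inner body
            }
        }
        return count;
    }
    public static void main(String[] args) {
        System.out.println(bodyExecutions(100)); // 10000
    }
}
```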
Solution :
Analyse inner and outer loop together:
- Number of iterations of the outer and inner loop together:
0 + 1 + 2 + ... + (n−1) = O(n²)
Gauss figured out that the sum of the first n numbers is always:
Σᵢ₌₁ⁿ i = n(n+1)/2 = (n² + n)/2 = O(n²)
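The dependent nested loop being analysed (inner bound governed by the outer index) can be sketched as follows; the total count of inner iterations is exactly the Gauss-style sum above:

```java
public class TriangularLoop {
    // Dependent nested loop: the inner loop runs i times on the i-th
    // outer pass, so total inner iterations are
    // 0 + 1 + ... + (n-1) = n(n-1)/2 = O(n²).
    static long innerIterations(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < i; j++) {   // runs i times
                count++;
            }
        }
        return count;
    }
    public static void main(String[] args) {
        System.out.println(innerIterations(10)); // 10*9/2 = 45
    }
}
```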
What about code where statement1 runs in O(n) time and statement2 runs in O(n²) time?
For statements executed in sequence, the costs add, and the larger term dominates:
O(n) + O(n²) = O(n²).
We use "worst case" complexity: among all inputs of size n, what is the
maximum running time?
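The statement1/statement2 situation can be sketched concretely; the loop bodies below are hypothetical stand-ins for the two statements:

```java
public class Sequential {
    // statement1 (an O(n) loop) followed by statement2 (an O(n²) nested
    // loop). The combined cost is n + n² steps, and the larger term
    // dominates: O(n) + O(n²) = O(n²).
    static long run(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++) {
            steps++;                        // statement1: O(n)
        }
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                steps++;                    // statement2: O(n²)
            }
        }
        return steps;                       // n + n²
    }
    public static void main(String[] args) {
        System.out.println(run(10)); // 10 + 100 = 110
    }
}
```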
There are many ways to solve a recurrence equation once it is derived. We will look at three here:
- Iteration method
- Recursion Tree
- Master method
However, if N ≥ 2, then the running time T(N) is the cost of each step taken plus the time required
to compute power(x, n−1) (i.e. T(N) = 2 + T(N−1) for N ≥ 2).
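The power(x, n) routine giving rise to this recurrence can be sketched as follows (the exact code is not shown in the notes; the base case at n = 0 is our assumption):

```java
public class Power {
    // Recursive power: each call does a constant amount of work plus one
    // recursive call on n-1, giving T(N) = c + T(N-1), i.e. O(N) calls.
    static double power(double x, int n) {
        if (n == 0) {
            return 1.0;                // base case: constant time
        }
        return x * power(x, n - 1);    // one multiply + one recursive call
    }
    public static void main(String[] args) {
        System.out.println(power(2.0, 10)); // 1024.0
    }
}
```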
There are some common recurrence relations that appear time and time again which we will
introduce on the next slide along with the actual solution.
T(1) = 1             for N = 1
T(N) = T(N−1) + N    for N ≥ 2
This recurrence arises for a recursive algorithm that loops
through the input to eliminate one item.
Sol : T(N) = N(N+1)/2 = O(N²)
T(1) = 1             for N = 1
T(N) = T(N/2) + 1    for N ≥ 2
This recurrence arises for a recursive algorithm that halves the
input in one step.
Hint for solving this : assume that N = 2ⁿ, so that the recurrence is
always defined (note that this means that n = log₂N).
Sol : T(N) = lg N + 1 = O(lg N)
T(1) = 0             for N = 1
T(N) = T(N/2) + N    for N ≥ 2
This recurrence arises for a recursive program that halves the input, but
perhaps must examine every item in the input.
Hint for solving this : expand out to a geometric series and reason it out.
Sol : T(N) = 2N = O(N)
T(1) = 0              for N = 1
T(N) = 2T(N/2) + N    for N ≥ 2
This recurrence applies to a family of standard divide-and-conquer
algorithms.
Sol : T(N) = N lg N = O(N lg N)
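The divide-and-conquer recurrence can be checked numerically for N a power of two; unrolling it by hand confirms the N lg N solution:

```java
public class Recurrences {
    // Evaluate the divide-and-conquer recurrence directly:
    // T(1) = 0, T(N) = 2*T(N/2) + N, whose solution is N lg N.
    static long t(long n) {
        if (n == 1) {
            return 0;
        }
        return 2 * t(n / 2) + n;
    }
    public static void main(String[] args) {
        System.out.println(t(8));    // 8 * lg 8 = 24
        System.out.println(t(1024)); // 1024 * 10 = 10240
    }
}
```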
gives a = 4, b = 2;
therefore, since a > b,
T(n) = O(n^(lg 4))
     = O(n²)
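Assuming the recurrence being solved here was T(n) = 4T(n/2) + n (consistent with a = 4 and b = 2 above, though the original statement is not shown in these notes), the O(n²) growth can be checked numerically: doubling n should roughly quadruple T(n).

```java
public class MasterExample {
    // Assumed recurrence: T(1) = 1, T(n) = 4*T(n/2) + n.
    // Its solution grows like n², so T(2n)/T(n) approaches 4.
    static long t(long n) {
        if (n == 1) {
            return 1;
        }
        return 4 * t(n / 2) + n;
    }
    public static void main(String[] args) {
        System.out.println((double) t(2048) / t(1024)); // close to 4
    }
}
```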