
## Analysis of Algorithms

An algorithm is a step-by-step procedure for solving a problem in a finite amount of time, transforming input objects into output objects (Input → Algorithm → Output).

## Running Time (§3.1)

Most algorithms transform input objects into output objects. The running time of an algorithm typically grows with the input size.

- Average case time is often difficult to determine.
- We focus on the worst case running time:
  - Easier to analyze
  - Crucial to applications such as games, finance and robotics

[Chart: best case, average case and worst case running time (ms) versus input size, 1000–4000]

## Experimental Studies

- Write a program implementing the algorithm
- Run the program with inputs of varying size and composition
- Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time
- Plot the results

[Chart: measured running time (ms) versus input size, 0–100]

## Limitations of Experiments

- It is necessary to implement the algorithm, which may be difficult
- Results may not be indicative of the running time on other inputs not included in the experiment
- In order to compare two algorithms, the same hardware and software environments must be used

© 2004 Goodrich, Tamassia, Analysis of Algorithms
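The experimental procedure above can be sketched in Java. This is a minimal sketch: `sumArray` is a hypothetical stand-in for whatever algorithm is under test, and the input sizes are arbitrary.

```java
import java.util.Random;

public class TimingExperiment {
    // Hypothetical algorithm under test (stand-in for any algorithm).
    static long sumArray(int[] a) {
        long s = 0;
        for (int x : a) s += x;
        return s;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int n = 100_000; n <= 1_600_000; n *= 2) {
            int[] input = rnd.ints(n).toArray();
            long start = System.currentTimeMillis();   // start the clock
            sumArray(input);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(n + "\t" + elapsed);    // one (size, time) point to plot
        }
    }
}
```

In practice the measured times depend on the hardware, the JVM's warm-up behavior, and the other inputs chosen, which is exactly the limitation the next slide describes.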
## Theoretical Analysis

- Uses a high-level description of the algorithm instead of an implementation
- Characterizes running time as a function of the input size, n
- Takes into account all possible inputs
- Allows us to evaluate the speed of an algorithm independent of the hardware/software environment

## Pseudocode (§3.2)

- High-level description of an algorithm
- More structured than English prose
- Less detailed than a program
- Preferred notation for describing algorithms
- Hides program design issues

Example: find the max element of an array.

    Algorithm arrayMax(A, n)
        Input array A of n integers
        Output maximum element of A
        currentMax ← A[0]
        for i ← 1 to n − 1 do
            if A[i] > currentMax then
                currentMax ← A[i]
        return currentMax
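A direct Java rendering of the arrayMax pseudocode might look like this (a sketch; the class name is our own, and the method mirrors the pseudocode line by line):

```java
public class ArrayMax {
    // Returns the maximum element of A, scanning left to right.
    public static int arrayMax(int[] A) {
        int currentMax = A[0];            // currentMax ← A[0]
        for (int i = 1; i < A.length; i++)
            if (A[i] > currentMax)        // if A[i] > currentMax then
                currentMax = A[i];        //     currentMax ← A[i]
        return currentMax;
    }

    public static void main(String[] args) {
        System.out.println(arrayMax(new int[]{3, 1, 4, 1, 5, 9, 2, 6}));  // 9
    }
}
```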

## Pseudocode Details

Control flow:
- if … then … [else …]
- while … do …
- repeat … until …
- for … do …
- Indentation replaces braces

Method declaration:

    Algorithm method(arg [, arg…])
        Input …
        Output …

Method call: var.method(arg [, arg…])

Return value: return expression

Expressions:
- ← assignment (like = in Java)
- = equality testing (like == in Java)
- n² superscripts and other mathematical formatting allowed

## The Random Access Machine (RAM) Model

- A CPU
- A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
- Memory cells are numbered, and accessing any cell in memory takes unit time
## Seven Important Functions (§3.3)

Seven functions that often appear in algorithm analysis:
- Constant ≈ 1
- Logarithmic ≈ log n
- Linear ≈ n
- N-Log-N ≈ n log n
- Quadratic ≈ n²
- Cubic ≈ n³
- Exponential ≈ 2ⁿ

In a log-log chart, the slope of the line corresponds to the growth rate of the function.

[Chart: T(n) versus n on log-log axes for the cubic, quadratic and linear functions]

## Primitive Operations

Basic computations performed by an algorithm:
- Identifiable in pseudocode
- Largely independent from the programming language
- Exact definition not important (we will see why later)
- Assumed to take a constant amount of time in the RAM model

Examples:
- Evaluating an expression
- Assigning a value to a variable
- Indexing into an array
- Calling a method
- Returning from a method
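The log-log slope claim can be checked numerically. This sketch (names and sample points are our own) estimates the slope of the line for the quadratic function T(n) = n² and expects a value of 2, its growth-rate exponent:

```java
public class LogLogSlope {
    public static void main(String[] args) {
        double n1 = 1e4, n2 = 1e6;
        double t1 = n1 * n1, t2 = n2 * n2;   // T(n) = n^2 (quadratic)
        // Slope of the line through (log n1, log t1) and (log n2, log t2)
        double slope = (Math.log(t2) - Math.log(t1)) / (Math.log(n2) - Math.log(n1));
        System.out.println(slope);  // ≈ 2
    }
}
```

The same computation on T(n) = n³ would give a slope near 3, which is why the seven functions appear as straight lines of different slopes on a log-log chart.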

## Counting Primitive Operations (§3.4)

By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size.

    Algorithm arrayMax(A, n)               # operations
        currentMax ← A[0]                  2
        for i ← 1 to n − 1 do              2n
            if A[i] > currentMax then      2(n − 1)
                currentMax ← A[i]          2(n − 1)
            { increment counter i }        2(n − 1)
        return currentMax                  1
                                   Total   8n − 2

## Estimating Running Time

Algorithm arrayMax executes 8n − 2 primitive operations in the worst case. Define:
- a = time taken by the fastest primitive operation
- b = time taken by the slowest primitive operation

Let T(n) be the worst-case time of arrayMax. Then

    a(8n − 2) ≤ T(n) ≤ b(8n − 2)

Hence, the running time T(n) is bounded by two linear functions.
## Growth Rate of Running Time

Changing the hardware/software environment:
- affects T(n) by a constant factor, but
- does not alter the growth rate of T(n)

The linear growth rate of the running time T(n) is an intrinsic property of algorithm arrayMax.

## Constant Factors

The growth rate is not affected by
- constant factors or
- lower-order terms

Examples:
- 10²n + 10⁵ is a linear function
- 10⁵n² + 10⁸n is a quadratic function

[Chart: T(n) versus n on log-log axes, showing quadratic and linear functions with different constant factors]

## Big-Oh Notation (§3.4)

Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that

    f(n) ≤ cg(n) for n ≥ n0

Example: 2n + 10 is O(n)
- 2n + 10 ≤ cn
- (c − 2)n ≥ 10
- n ≥ 10/(c − 2)
- Pick c = 3 and n0 = 10

[Chart: 3n, 2n + 10 and n versus n on log-log axes]

## Big-Oh Example

Example: the function n² is not O(n)
- n² ≤ cn
- n ≤ c
- The above inequality cannot be satisfied since c must be a constant

[Chart: n², 100n, 10n and n versus n on log-log axes]
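The witness pair c = 3, n0 = 10 can be checked mechanically. This sketch (our own illustration, not part of the slides) verifies that 2n + 10 ≤ 3n holds for every n from n0 up to a large bound:

```java
public class BigOhWitness {
    public static void main(String[] args) {
        final int c = 3, n0 = 10;
        boolean holds = true;
        for (int n = n0; n <= 1_000_000; n++)
            if (2L * n + 10 > (long) c * n) {  // f(n) > c·g(n) would break the bound
                holds = false;
                break;
            }
        System.out.println(holds);  // true: 2n + 10 ≤ 3n for all n ≥ 10
    }
}
```

No finite check can prove the bound for all n, of course; that is what the algebra on the slide (n ≥ 10/(c − 2)) establishes.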
## More Big-Oh Examples

- 7n − 2 is O(n): need c > 0 and n0 ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n0; this is true for c = 7 and n0 = 1
- 3n³ + 20n² + 5 is O(n³): need c > 0 and n0 ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n0; this is true for c = 4 and n0 = 21
- 3 log n + 5 is O(log n): need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n0; this is true for c = 8 and n0 = 2

## Big-Oh and Growth Rate

The big-Oh notation gives an upper bound on the growth rate of a function. The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n). We can use the big-Oh notation to rank functions according to their growth rate.

|                 | f(n) is O(g(n)) | g(n) is O(f(n)) |
|-----------------|-----------------|-----------------|
| g(n) grows more | Yes             | No              |
| f(n) grows more | No              | Yes             |
| Same growth     | Yes             | Yes             |

## Big-Oh Rules

If f(n) is a polynomial of degree d, then f(n) is O(nᵈ), i.e.,
1. Drop lower-order terms
2. Drop constant factors

Use the smallest possible class of functions:
- Say "2n is O(n)" instead of "2n is O(n²)"

Use the simplest expression of the class:
- Say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"

## Asymptotic Algorithm Analysis

The asymptotic analysis of an algorithm determines the running time in big-Oh notation. To perform the asymptotic analysis:
- We find the worst-case number of primitive operations executed as a function of the input size
- We express this function with big-Oh notation

Example:
- We determine that algorithm arrayMax executes at most 8n − 2 primitive operations
- We say that algorithm arrayMax "runs in O(n) time"

Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations.
## Computing Prefix Averages

We further illustrate asymptotic analysis with two algorithms for prefix averages. The i-th prefix average of an array X is the average of the first (i + 1) elements of X:

    A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)

Computing the array A of prefix averages of another array X has applications to financial analysis.

[Chart: arrays X and A for indices 1–7]

## Prefix Averages (Quadratic)

The following algorithm computes prefix averages in quadratic time by applying the definition.

    Algorithm prefixAverages1(X, n)                # operations
        Input array X of n integers
        Output array A of prefix averages of X
        A ← new array of n integers                n
        for i ← 0 to n − 1 do                      n
            s ← X[0]                               n
            for j ← 1 to i do                      1 + 2 + … + (n − 1)
                s ← s + X[j]                       1 + 2 + … + (n − 1)
            A[i] ← s / (i + 1)                     n
        return A                                   1
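A Java sketch of prefixAverages1 (class name is our own); the nested loops make the quadratic cost visible, since the inner loop body runs 0 + 1 + … + (n − 1) times in total:

```java
public class PrefixAverages1 {
    // Quadratic-time prefix averages: recompute each prefix sum from scratch.
    public static double[] prefixAverages1(int[] X) {
        int n = X.length;
        double[] A = new double[n];
        for (int i = 0; i < n; i++) {
            double s = X[0];                 // s ← X[0]
            for (int j = 1; j <= i; j++)     // inner loop runs i times
                s += X[j];
            A[i] = s / (i + 1);              // A[i] ← s / (i + 1)
        }
        return A;
    }

    public static void main(String[] args) {
        double[] A = prefixAverages1(new int[]{1, 2, 3, 4});
        System.out.println(java.util.Arrays.toString(A));  // [1.0, 1.5, 2.0, 2.5]
    }
}
```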

## Arithmetic Progression

The running time of prefixAverages1 is O(1 + 2 + … + n). The sum of the first n integers is n(n + 1) / 2; there is a simple visual proof of this fact. Thus, algorithm prefixAverages1 runs in O(n²) time.

[Chart: visual proof that 1 + 2 + … + n = n(n + 1) / 2]

## Prefix Averages (Linear)

The following algorithm computes prefix averages in linear time by keeping a running sum.

    Algorithm prefixAverages2(X, n)                # operations
        Input array X of n integers
        Output array A of prefix averages of X
        A ← new array of n integers                n
        s ← 0                                      1
        for i ← 0 to n − 1 do                      n
            s ← s + X[i]                           n
            A[i] ← s / (i + 1)                     n
        return A                                   1

Algorithm prefixAverages2 runs in O(n) time.
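The linear version in Java (again a sketch with our own class name); maintaining the running sum s removes the inner loop, so each element is visited exactly once:

```java
public class PrefixAverages2 {
    // Linear-time prefix averages: maintain a running sum s.
    public static double[] prefixAverages2(int[] X) {
        int n = X.length;
        double[] A = new double[n];
        double s = 0;                 // s ← 0
        for (int i = 0; i < n; i++) {
            s += X[i];                // extend the running sum by one element
            A[i] = s / (i + 1);       // A[i] ← s / (i + 1)
        }
        return A;
    }

    public static void main(String[] args) {
        double[] A = prefixAverages2(new int[]{1, 2, 3, 4});
        System.out.println(java.util.Arrays.toString(A));  // [1.0, 1.5, 2.0, 2.5]
    }
}
```

Both versions produce the same output; only the growth rate of the running time differs, which is the point of the comparison.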
## Math you need to Review

- Summations
- Logarithms and exponents
- Proof techniques
- Basic probability

Properties of logarithms:
- log_b(xy) = log_b x + log_b y
- log_b(x/y) = log_b x − log_b y
- log_b x^a = a log_b x
- log_b a = log_x a / log_x b

Properties of exponentials:
- a^(b+c) = a^b · a^c
- a^(bc) = (a^b)^c
- a^b / a^c = a^(b−c)
- b = a^(log_a b)
- b^c = a^(c·log_a b)

## Relatives of Big-Oh

big-Omega
- f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0

big-Theta
- f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0 and an integer constant n0 ≥ 1 such that c′·g(n) ≤ f(n) ≤ c″·g(n) for n ≥ n0
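The change-of-base identity log_b a = log_x a / log_x b is the one most often needed in practice, since Java's Math.log computes only the natural logarithm. A quick numerical check (our own sketch):

```java
public class LogIdentities {
    // log_b a via change of base: log_b a = ln a / ln b.
    static double logBase(double b, double a) {
        return Math.log(a) / Math.log(b);
    }

    public static void main(String[] args) {
        System.out.println(logBase(2, 1024));  // ≈ 10, since 2^10 = 1024
        // log_b(xy) = log_b x + log_b y, verified numerically:
        double lhs = logBase(2, 8 * 4);
        double rhs = logBase(2, 8) + logBase(2, 4);
        System.out.println(Math.abs(lhs - rhs) < 1e-9);  // true
    }
}
```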

## Intuition for Asymptotic Notation

- Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
- big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
- big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)

## Example Uses of the Relatives of Big-Oh

- 5n² is Ω(n²): f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0; let c = 5 and n0 = 1
- 5n² is Ω(n): f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0; let c = 1 and n0 = 1
- 5n² is Θ(n²): f(n) is Θ(g(n)) if it is Ω(n²) and O(n²). We have already seen the former; for the latter, recall that f(n) is O(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≤ c·g(n) for n ≥ n0; let c = 5 and n0 = 1