
Runtime complexity

Christian Schulte
Johan Montelius
Software and Computer Systems
School of Information and Communication
Technology
KTH Royal Institute of Technology
Stockholm, Sweden

ID1218 2011

What Are the Questions?

Runtime efficiency
- what is it we are interested in?
- why are we interested?
- how do we make statements on runtime and memory?

Clearly
- faster is better
- we have to know how fast our programs compute
- we have to know how much memory our programs require
- we have to know how good a program is

Statements on Runtime

Run your program for some example data sets on some computer, make a statement
- statement valid if the data set gets larger?
- statement valid if executed on a different computer?
- statement valid if executed on the same computer but with a slightly different configuration?

Testing is insufficient!
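As a concrete illustration (a minimal sketch of our own, not from the slides), one can time a single run with the standard library function timer:tc/1; the input size, the use of lists:reverse/1, and the name measure/0 are assumptions made for the example. The number it prints is valid only for this data set, this machine, and this configuration:

%% Time one run of a function on one data set (result in microseconds).
%% One such measurement says nothing about larger inputs or other machines.
measure() ->
    Xs = lists:seq(1, 100000),
    {MicroSecs, _Result} = timer:tc(fun() -> lists:reverse(Xs) end),
    io:format("one run took ~p microseconds~n", [MicroSecs]).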

Runtime Guarantees

Testing can never yield a guarantee on runtime!

We need a form to express runtime guarantees
- capture size of input data
- capture different computers/configurations

Runtime Guarantees

Express runtime in relation to input size
- T(n): runtime function, where n is the size of the input
- for example
  - integers: the argument itself
  - lists: the length of the list

Runtime Guarantees

We need to ignore marginal differences between runtime functions
- differences for small input sizes
  - only consider n ≥ n0
- differences by a constant factor
  - c1·T(n) same as c2·T(n)

Asymptotic Complexity

Asymptotic complexity is such a runtime guarantee
- best upper bound on runtime
- up to a constant factor
- for sufficiently large inputs

Discuss runtime by using big-oh notation

Big-Oh Notation

Assume
- T(n)  runtime, n size of input
- f(n)  function on non-negative integers

T(n) is of O(f(n))   "T(n) is of order f(n)"
- iff T(n) ≤ c·f(n) for all n ≥ n0
  - for some c    (whatever computer)
  - for some n0   (sufficiently large)

Big-Oh Notation

[Figure: plot of T(n) against the bound c·n; the curves cross at n0, and for all n ≥ n0, T(n) stays below c·n. Here f(n) = n and c = 2.]

T(n) is of O(f(n))   "T(n) is of order f(n)"
- iff T(n) ≤ c·f(n) for all n ≥ n0
  - for some c    (whatever computer)
  - for some n0   (sufficiently large)

Big-Oh Notation

"T(n) is of O(f(n))" is sometimes written as
- T(n) = O(f(n))
- T(n) ∈ O(f(n))

Examples
  T(n) = 4n + 42           is O(n)
  T(n) = 5n + 42           is O(n)
  T(n) = 7n + 11           is O(n)
  T(n) = 4n + n^3          is O(n^3)
  T(n) = n^100 + 2^n + 8   is O(2^n)
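As a hedged illustration (our own helper, not part of the slides), the definition can be sampled mechanically; the name is_bounded and the sampling bound NMax are assumptions, and sampling a finite range can support but never prove a bound:

%% Check T(n) =< C * F(n) for every n in N0..NMax.
is_bounded(T, F, C, N0, NMax) ->
    lists:all(fun(N) -> T(N) =< C * F(N) end, lists:seq(N0, NMax)).

For the first example above, is_bounded(fun(N) -> 4*N + 42 end, fun(N) -> N end, 5, 42, 10000) returns true, matching the choice c = 5, n0 = 42: 4n + 42 ≤ 5n holds for all n ≥ 42.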

Big-Oh Notation

Often one just says for
- O(n)       linear complexity
- O(n^2)     quadratic complexity
- O(n^3)     cubic complexity
- O(log n)   logarithmic complexity
- O(2^n)     exponential complexity

Summary

Formulate statements on runtime as
- runtime guarantees
- asymptotic complexity
  - for large input
  - independent of computer

Expressed in big-oh notation
- abstracts away behavior of function for small numbers
- abstracts away constants
- captures asymptotic behavior of functions

Determining Complexity

Approach
- take a MiniErlang program
- take the execution time of each expression
- give equations for the runtime of functions
- solve the equations
- determine the asymptotic complexity

Simplification Here

We will only consider programs with one function
- generalization to many functions is straightforward
- complication: solving the equations

Execution Times for Expressions

Give inductive definition based on function definition and structure of expressions
- function definition
  - pattern matching and guards
- simple expressions
  - values and list construction
- more involved expressions
  - function call
    - recursive function call leads to recursive equation
    - often called: recurrence equation

Execution Time: T(E)

Value
  T(V) = cvalue

List construction
  T([E1|E2]) = ccons + T(E1) + T(E2)

Time T(E) needed for executing expression E
- how the MiniErlang machine executes E

Execution Time: T(E)

To ease notation, we just drop the subscripts:

Value
  T(V) = c

List construction
  T([E1|E2]) = c + T(E1) + T(E2)

Time T(E) needed for executing expression E
- how the MiniErlang machine executes E
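For instance (a worked example of our own, assuming the values 1, 2, and [] each cost c), the equations give for the two-element list [1|[2|[]]]:

  T([1|[2|[]]]) = c + T(1) + T([2|[]])
                = c + c + c + T(2) + T([])
                = 5c

three values and two list constructions, each costing c.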

Function Call

For a function F define a function TF(n) for its runtime
and determine
- input for a function
- size of input for a call to F

Function Call
T(F(E1, ..., Ek)) = ccall + T(E1) + ... + T(Ek) + TF(size(IF({1, ..., k})))

- input arguments: IF({1, ..., k})
  - the input arguments for F
- size: size(IF({1, ..., k}))
  - the size of the input for F

Function Definition

Assume function F is defined by clauses
  H1 -> B1; ...; Hk -> Bk.

Then
  TF(n) = cselect + max { T(B1), ..., T(Bk) }

Example: app/2
app([],Ys)     -> Ys;
app([X|Xr],Ys) -> [X|app(Xr,Ys)].

What do we want to compute?
- Tapp(n)

Knowledge needed
- input argument: the first argument
- size function: length of list

Computing Tapp (written Ta for short)

Let's use the whiteboard.
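A sketch of how the whiteboard computation might go (our own reconstruction, applying the execution-time equations from the preceding slides and folding constants into c1 and c2 at the end):

Empty list (first clause; the body Ys is a value):
  Ta(0) = cselect + T(Ys)
        = cselect + c  =  c1

List of length n > 0 (second clause; the body is [X|app(Xr,Ys)]):
  Ta(n) = cselect + T([X|app(Xr,Ys)])
        = cselect + c + T(X) + T(app(Xr,Ys))
        = cselect + c + c + ccall + T(Xr) + T(Ys) + Ta(n-1)
        = c2 + Ta(n-1)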

Append: Recurrence Equation

Analysis yields
  Tapp(0) = c1
  Tapp(n) = c2 + Tapp(n-1)

Solution to the recurrence is
  Tapp(n) = c1 + c2·n

Asymptotic complexity
  Tapp(n) is of O(n): linear complexity
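The solution can be checked by unrolling the recurrence (a step left implicit on the slide):

  Tapp(n) = c2 + Tapp(n-1) = 2·c2 + Tapp(n-2) = ... = n·c2 + Tapp(0) = c1 + c2·n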

Recurrence Equations

Analysis in general yields a system
- T(n) defined in terms of T(m1), ..., T(mk) for m1, ..., mk < n
- values for certain n: T(0), T(1), ...

Possibilities
- solve the recurrence equation (difficult in general)
- look up the asymptotic complexity for a common case

Common Recurrence Equations

T(n)                    Asymptotic complexity
c + T(n-1)              O(n)
c1 + c2·n + T(n-1)      O(n^2)
c + T(n/2)              O(log n)
c1 + c2·n + T(n/2)      O(n)
c + 2·T(n/2)            O(n)
c + 2·T(n-1)            O(2^n)
c1 + c2·n + 2·T(n/2)    O(n log n)
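Two hedged illustrations (our own functions, not from the slides) of how MiniErlang-style definitions land in rows of this table:

%% T(n) = c + T(n-1), first row: O(n)
len([])     -> 0;
len([_|Xr]) -> 1 + len(Xr).

%% T(n) = c + T(n div 2), third row: O(log n),
%% since the argument halves on every recursive call
halves(0) -> 0;
halves(N) -> 1 + halves(N div 2).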

Example: Naïve Reverse

rev([])     -> [];
rev([X|Xr]) -> app(rev(Xr),[X]).

Analysis yields
  Trev(0) = c1
  Trev(n) = c2 + Tapp(n-1) + Trev(n-1)
          = c2 + c3·(n-1) + Trev(n-1)

Asymptotic complexity
  Trev(n) is of O(n^2): quadratic complexity
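Unrolling (left implicit on the slide) shows where the quadratic growth comes from:

  Trev(n) = c1 + c2·n + c3·((n-1) + (n-2) + ... + 1 + 0)
          = c1 + c2·n + c3·n·(n-1)/2

and the n·(n-1)/2 term is of order n^2.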

What Is to be Done?

- Size functions and input functions
- Recurrence equations
- Finding the asymptotic complexity

Understanding of the programs might be necessary.

Obs!

Making computations iterative can change runtime!

Can improve
- naïve reverse O(n^2)  →  iterative reverse O(n)  (see the sketch below)

Can be worse
- appending lists O(n)  →  iterative appending of lists O(n^2)
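A minimal sketch of the improved, iterative reverse (the name rev2 and the accumulator argument Acc are our own; the slides only name the technique):

%% Accumulator-based reverse: each element is consed onto Acc exactly once,
%% so Trev2(n) = c + Trev2(n-1), which is O(n).
rev2(Xs) -> rev2(Xs, []).

rev2([], Acc)     -> Acc;
rev2([X|Xr], Acc) -> rev2(Xr, [X|Acc]).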

Judging Asymptotic Complexity

What does it mean?
- good/bad?
- even optimal?

Consider size of computed output
- if size is of same order: perfect?
- otherwise: check a book on algorithms
  - O(n log n) is very good (optimal for sorting)

Very difficult problem!

Hard Problems

Computer science is a tough business
- most optimization and combinatorial problems are hard (non-tractable)

Graph coloring
- minimal number of colors
- no two connected nodes have the same color
- used in: compilation, frequency allocation, ...

Travelling salesman

Hard Problems

NP problems
- easy (polynomial complexity) to check whether a certain candidate is a solution to the problem
- much harder: finding a solution

NP-complete problems
- as hard as any other NP-complete problem
- strong suspicion: no polynomial algorithm for finding a solution exists
- examples: graph coloring, satisfiability testing, cryptography, ...

Summary: Runtime Efficiency

Asymptotic complexity gives a runtime guarantee
- abstracts from constants
- holds for all possible input
- given in big-oh notation

Computing asymptotic complexity

Think about the meaning of asymptotic complexity
