Complexity of Algorithms


Why is it important?
Suppose you were assigned to write a program to process some records your company receives from time to time.

You implemented two different algorithms and tested them on several sets of test data. The processing times you obtained are in Table 1.

The idea behind this algorithm: check the number of records. If it is small enough, run algorithm 1; otherwise run algorithm 2.
For sorting:
if the number of elements is small, run InsertSort instead (InsertSort is faster on small inputs)
if the pivot choices lead to poor results, fall back to MergeSort
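A minimal C sketch of this dispatch idea (our illustration, not from the slides: the cutoff value is arbitrary and the library's qsort stands in for MergeSort):

#include <stdlib.h>

#define SMALL_CUTOFF 16   /* illustrative threshold, not from the slides */

/* qsort comparator for ints */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* InsertSort: O(n^2) worst case, but low constants make it fast on small n */
static void insertion_sort(int a[], size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
        a[j] = key;
    }
}

/* Check the input size: small inputs go to algorithm 1 (InsertSort),
   large inputs to algorithm 2 (the library's O(n log n) sort). */
void hybrid_sort(int a[], size_t n) {
    if (n < SMALL_CUTOFF)
        insertion_sort(a, n);
    else
        qsort(a, n, sizeof a[0], cmp_int);
}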

Time Complexity: determine the approximate number of operations required to solve a problem of size n.
Use the Big-O notation
Ignore housekeeping
Count the expensive operations only

Basic operations:
searching algorithms - key comparisons
sorting algorithms - list component comparisons
numerical algorithms - floating point ops. (flops): multiplications/divisions and/or additions/subtractions

Worst case (Big O): maximum number of operations
Average case (Theta): mean number of operations, assuming an input probability distribution
Best case (Omega): minimum number of operations
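A linear-search sketch (ours, not the slides') makes the three cases concrete by counting key comparisons:

/* Returns the index of key in a[0..n-1], or -1 if absent. */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)   /* each iteration is one key comparison */
        if (a[i] == key)
            return i;             /* best case: key at a[0], Omega(1) */
    return -1;                    /* worst case: n comparisons, O(n) */
}
/* Average case: about n/2 comparisons if the key is equally likely to be
   at any position, i.e. Theta(n). */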

Multiply an n x n matrix A by a scalar c to produce the matrix B:

procedure scalar_multiply (n, c, A, B)
for i from 1 to n do
    for j from 1 to n do
        B(i, j) = c * A(i, j)
    end do
end do

Analysis (worst case):

Count the number of floating point multiplications.

n² elements require n² multiplications.

The time complexity is O(n²), or quadratic complexity.
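A C rendering of the procedure above (a sketch assuming 0-based arrays and a compile-time dimension):

#define N 4   /* dimension fixed at compile time for simplicity */

/* B(i,j) = c * A(i,j): one multiplication per element, n^2 in total */
void scalar_multiply(double c, const double A[N][N], double B[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            B[i][j] = c * A[i][j];
}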

Bubble Sort Analysis

Bubble sort: L is a list of elements to be sorted.
- We assume nothing about the initial order.
- The list is in ascending order upon completion.
Analysis (worst case):
Count the number of list comparisons required.

Bubble Sort Algorithm

procedure bubble (n, L)
/*
- L is a list of n elements
- swap is an intermediate swap location
*/
for i from n - 1 to 1 by -1 do
    for j from 1 to i do
        if L(j) > L(j + 1) then
            swap = L(j + 1)
            L(j + 1) = L(j)
            L(j) = swap
        end if
    end do
end do

Bubble the largest element to the 'top' by starting at the bottom: swap adjacent elements until the largest is in the top position.
Bubble the second largest to the position below the top.
Continue until the list is sorted.
n - 1 comparisons on the first pass
n - 2 comparisons on the second pass
...
1 comparison on the last pass
Total:

(n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 = O(n²)
or
quadratic complexity
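A direct C translation of the procedure above (a sketch assuming 0-based arrays):

void bubble(int n, int L[]) {
    for (int i = n - 1; i >= 1; i--)   /* each pass bubbles the largest unsorted element up */
        for (int j = 0; j < i; j++)    /* i comparisons on this pass */
            if (L[j] > L[j + 1]) {
                int swap = L[j + 1];   /* intermediate swap location */
                L[j + 1] = L[j];
                L[j] = swap;
            }
}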

The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity.

statement;

is constant. The running time of the statement will not change in relation to N.

for ( i = 0; i < N; i++ )
    statement;

is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.

for ( i = 0; i < N; i++ ) {
    for ( j = 0; j < N; j++ )
        statement;
}

is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases by a factor of four.
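The code example the next slide refers to appears to be missing from the extracted text; a typical halving loop of the shape it describes is a binary search (names here are illustrative):

/* Returns the index of target in the sorted list[0..n-1], or -1. */
int binary_search ( int list[], int n, int target ) {
    int low = 0, high = n - 1;
    while ( low <= high ) {
        int mid = ( low + high ) / 2;   /* halve the working area each pass */
        if ( target < list[mid] )
            high = mid - 1;
        else if ( target > list[mid] )
            low = mid + 1;
        else
            return mid;                 /* found */
    }
    return -1;
}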

is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.

void quicksort ( int list[], int left, int right ) {
    if ( left >= right ) return;   /* base case: nothing to sort */
    int pivot = partition ( list, left, right );   /* partition assumed defined elsewhere */
    quicksort ( list, left, pivot - 1 );
    quicksort ( list, pivot + 1, right );
}

is N * log ( N ). The running time consists of N loops (iterative or recursive) that are logarithmic, thus the algorithm is a combination of linear and logarithmic complexity.

When speaking about the time complexity of an algorithm, instead of using the formal Θ(f(N))-notation we may simply state the class of functions f belongs to:

if f(N) = Θ(N), we call the algorithm linear

f(N) = Θ(log N): logarithmic

f(N) = Θ(N²): quadratic

f(N) = Θ(N³): cubic

f(N) = O(N^k) for some k: polynomial

f(N) = Θ(2^N): exponential

For graph problems, the complexity Θ(N + M), where N is the number of vertices and M the number of edges, is known as "linear in the graph size".
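To make the Θ(N + M) bound concrete, here is a breadth-first search sketch over an adjacency-list graph (the CSR-style arrays adj_start and adj_list are our assumed representation, not from the slides): every vertex is enqueued at most once and every edge is scanned a constant number of times.

#include <stdlib.h>

/* n vertices; adj_start has n + 1 entries; the neighbours of v are
   adj_list[adj_start[v] .. adj_start[v + 1] - 1]. */
void bfs(int n, const int adj_start[], const int adj_list[], int source) {
    int *queue = malloc(n * sizeof *queue);
    char *visited = calloc(n, 1);
    int head = 0, tail = 0;

    queue[tail++] = source;
    visited[source] = 1;
    while (head < tail) {                    /* each vertex dequeued once: Theta(N) */
        int v = queue[head++];
        for (int e = adj_start[v]; e < adj_start[v + 1]; e++) {   /* Theta(M) in total */
            int w = adj_list[e];
            if (!visited[w]) {
                visited[w] = 1;
                queue[tail++] = w;
            }
        }
    }
    free(queue);
    free(visited);
}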
