
UNIT 1

Introduction to Algorithms

Syllabus :
Introduction : O, Ω and Θ asymptotic notations; Average, Best and Worst
case analysis of algorithms for Time and Space complexity; Amortized
Analysis; Solving Recurrence Equations; Proof Techniques : by Contradiction,
by Mathematical Induction.
Priority Queues : Heaps & Heap sort.

1.1 Introduction to Algorithms

1.1.1 Origin of the Word

The word Algorithm comes from the name of the 9th century Persian
mathematician Abu Abdullah Muhammad ibn Musa al-Khwarizmi whose
works introduced Indian numerals and algebraic concepts. The word algorism
originally referred only to the rules of performing arithmetic using Arabic
numerals but evolved via European Latin translation of al-Khwarizmi’s name
into algorithm by the 18th century. The word evolved to include all definite
procedures for solving problems or performing tasks.
In mathematics, computing, linguistics and related subjects, an
algorithm is a sequence of finite instructions, often used for calculation and
data processing. It is formally a type of effective method in which a list of well-
defined instructions for completing a task will, when given an initial state,
proceed through a well-defined series of successive states, eventually
terminating in an end-state.

1.1.2 Why Algorithms are Necessary ? An Informal Definition

While there is no generally accepted formal definition of “algorithm”, an informal definition could be “an algorithm is a process that performs some sequence of operations.” For some people, a program is only an algorithm if it
sequence of operations.” For some people, a program is only an algorithm if it
stops eventually. For others, a program is only an algorithm if it stops before a
given number of calculation steps.

A prototypical example of an “algorithm” is Euclid's algorithm to determine the greatest common divisor of two integers greater than one : “subtract the smaller number from the larger one; repeat until you get a zero or a one.” This procedure is known always to stop, and the number of subtractions needed is always smaller than the larger of the two numbers.

1.1.3 Definition of Algorithm

• A well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.
Thus all algorithms must satisfy the following criteria:
1. Input : Zero or more quantities are externally supplied.

2. Output : At least one quantity is produced.

3. Definiteness : Each instruction is clear and unambiguous.

4. Finiteness : If we trace out the instructions of an algorithm, then for all cases, the algorithm terminates after a finite number of steps.

5. Effectiveness : Every instruction must be very basic so that it can be carried out, in principle, by a person using only pencil and paper.
• An algorithm is composed of a finite set of steps, each of which may
require one or more operations.
• Algorithms that are definite and effective are also called computational procedures.
• The study of algorithms includes many important and active areas of
research. There are four distinct areas of study one can identify :
1. How to devise algorithms : Creating an algorithm is an art which
may never be fully automated.

2. How to validate algorithms : Once an algorithm is devised, it is necessary to show that it computes the correct answer for all possible legal inputs. We refer to this process as algorithm validation.

3. How to analyze algorithms : This field of study is called analysis of algorithms. As an algorithm is executed, it uses the computer's central processing unit (CPU) to perform operations and its memory (both immediate and auxiliary) to hold the program and data. This is a challenging area which sometimes requires great mathematical skill.

4. How to test a program : Testing a program consists of two phases, debugging and profiling (or performance measurement).
a. Debugging is the process of executing programs on sample data sets to determine whether faulty results occur and, if so, to correct them.
b. Profiling (performance measurement) is the process of executing a correct program on data sets and measuring the time and space it takes to compute the results.

1.1.4 Designing Algorithms

• Algorithm design is a specific method to create a mathematical process for solving problems.
• Algorithm design is identified and incorporated into many solution
theories of operation research, such as dynamic programming and divide-
and-conquer.
• Techniques for designing and implementing algorithms include algorithm design patterns, such as the template method pattern and the decorator pattern, and the use of data structures such as lists.
• Some current day uses of algorithm design can be found in Internet retrieval processes such as web crawling, packet routing and caching.
• It is frequently important to know how much of a particular resource
(such as time or storage) is required for a given algorithm.
• Methods have been developed for the analysis of algorithms to obtain such quantitative answers; for example, an algorithm that scans a list for its largest element has a time requirement of O(n), using the big O notation with n as the length of the list.
• At all times the algorithm only needs to remember two values: the largest
number found so far, and its current position in the input list. Therefore it
is said to have a space requirement of O(1), if the space required to store
the input numbers is not counted, or O (log n) if it is counted.
• Different algorithms may complete the same task with a different set of
instructions in less or more time, space, or 'effort' than others. For
example, a binary search algorithm will usually outperform a brute force
sequential search when used for table lookups on sorted lists.
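As an illustrative sketch (Python code assumed here for illustration, not taken from the text), the two approaches can be compared directly on a sorted table:

```python
def sequential_search(a, key):
    # Brute force: inspect elements one by one -> O(n) comparisons.
    for i, x in enumerate(a):
        if x == key:
            return i
    return -1

def binary_search(a, key):
    # Requires a sorted list; halves the search range each step -> O(log n).
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        elif a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

table = list(range(0, 2000, 2))      # sorted table of 1000 even numbers
binary_search(table, 998)            # index 499, after about 10 probes
sequential_search(table, 998)        # same index, after 500 probes
```

Both return the same index; the difference is the number of elements inspected.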

1.1.5 Analyzing Algorithms

We want to predict the resources that the algorithm requires, usually its running time. In order to predict resource requirements, we need a computational model.

Random-access machine (RAM) model

• Instructions are executed one after another; there are no concurrent operations.
• It is too tedious to define each of the instructions and their associated time costs. Instead, we recognize that we will use instructions commonly found in real computers : arithmetic (add, subtract, multiply, divide, remainder, floor, ceiling), shift left/shift right (good for multiplying/dividing by 2), data movement (load, store, copy) and control (conditional/unconditional branch, subroutine call and return).
• Each of these instructions takes a constant amount of time.
• The RAM model uses integer and floating point types.
• We do not worry about precision, although it is crucial in certain
numerical applications.

1.2 Asymptotic Notations

Big O-notation

Two important ways to characterize the effectiveness of an algorithm are its space complexity and time complexity. The time complexity of an algorithm concerns determining an expression for the number of steps needed as a function of the problem size. Since the step count measure is somewhat coarse, one does not aim at obtaining an exact step count. Instead, one attempts only to get asymptotic bounds on the step count. Asymptotic analysis makes use of the O (Big Oh) notation. Two other notational constructs used by computer scientists in the analysis of algorithms are θ (Big Theta) notation and Ω (Big Omega) notation. The performance evaluation of an algorithm is obtained by totaling the number of occurrences of each operation when running the algorithm. The performance of an algorithm is evaluated as a function of the input size n and is to be considered modulo a multiplicative constant.

The following notations are commonly used in performance analysis to characterize the complexity of an algorithm.
1.2.1 θ -Notation (Same order)

This notation bounds a function to within constant factors. We say f(n) = θ(g(n)) if there exist positive constants n0, c1 and c2 such that to the right of n0 the value of f(n) always lies between c1 g(n) and c2 g(n) inclusive.

Example 1 : Use the definition of big-theta to prove that 7x² + 1 = θ(x²).

Soln. :

Clearly, we have 7x² + 1 = Ω(x²) (take c = 7). We now need to show 7x² + 1 = O(x²).

We have 7x² ≤ 7x² + 1 ≤ 7x² + x² = 8x² (where we need x ≥ 1 to obtain the second inequality).

Therefore, if x ≥ 1, then 7x² ≤ 7x² + 1 ≤ 8x². This says that 7x² + 1 = θ(x²).

Example 2 : n²/2 − 2n = θ(n²), with c1 = 1/4, c2 = 1/2, and n0 = 8.

Example 3 : The function 3n + 2 = θ(n), as 3n + 2 ≥ 3n for all n ≥ 2 and 3n + 2 ≤ 4n for all n ≥ 2, so c1 = 3, c2 = 4 and n0 = 2.

The theta notation is more precise than both the big oh and omega notations.
The function f(n) = θ (g(n)) if g(n) is both an upper and lower bound on f(n).

Fig. 1.1 : f(n) = θ(g(n))

1.2.2 O-Notation (Upper Bound)


This notation gives an upper bound for a function to within a constant
factor. We write f(n) = O(g(n)) if there are positive constants n0 and c such that
to the right of n0, the value of f(n) always lies on or below cg(n).

The functions used in this estimation often include the following :

1, log(n), n, n log(n), n², 2ⁿ, n!

It has two main areas of application :

• In computer science, it is useful in the analysis of the complexity of algorithms (the number of operations an algorithm uses as its input grows).

• In mathematics, it is usually used to characterize the residual term of a truncated infinite series, especially an asymptotic series.

Big-O gives us a formal way of expressing asymptotic upper bounds, a way of bounding from above the growth of a function.

Fig. 1.2 : f(n) = O(g(n))

Example 1 : Use the definition of big-O to prove that

f(x) = 5x⁴ − 37x³ + 13x − 4 = O(x⁴)

Soln. : We must find integers C and k such that | 5x⁴ − 37x³ + 13x − 4 | ≤ C | x⁴ | for all x ≥ k.
We can proceed as follows :
| 5x⁴ − 37x³ + 13x − 4 | ≤ 5x⁴ + 37x³ + 13x + 4 ≤ 5x⁴ + 37x⁴ + 13x⁴ + 4x⁴ = 59 | x⁴ |
where the first inequality is satisfied if x ≥ 0 and the second inequality is satisfied if x ≥ 1.
Therefore, | 5x⁴ − 37x³ + 13x − 4 | ≤ 59 | x⁴ | if x ≥ 1, so we have C = 59 and k = 1.

Example 2 : 3n + 2 = O(n) because 3n + 2 ≤ 4n for all n ≥ 2; c = 4, n0 = 2.

Example 3 : 10n² + 4n + 2 = O(n²) because 10n² + 4n + 2 ≤ 11n² for all n ≥ 5.

Example 4 : 6·2ⁿ + n² = O(2ⁿ) because 6·2ⁿ + n² ≤ 7·2ⁿ for all n ≥ 4.

Algorithms can be : O(1) → constant; O(log n) → logarithmic; O(n log n); O(n) → linear; O(n²) → quadratic; O(n³) → cubic; O(2ⁿ) → exponential.

1.2.3 Ω-Notation (Lower Bound)

This notation gives a lower bound for a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n0 and c such that to the right of n0, the value of f(n) always lies on or above c g(n).

Fig. 1.3 : f(n) = Ω(g(n))

g(n) is an asymptotic lower bound for f (n).

Example 1 : √n = Ω(lg n), with c = 1 and n0 = 16.


Examples of functions in Ω(n²) :
n²
n² + n
n² − n
1000n² + 1000n
1000n² − 1000n
Also, n² − 2n = Ω(n²) (c = 1/2, n0 = 4) or
n² − 2n = Ω(n) (c = 1, n0 = 3),
but it is false to claim that
n² − 2n = Ω(n³).
The function 3n + 2 = Ω(n), as 3n + 2 ≥ 3n for n ≥ 1.
10n² + 4n + 2 = Ω(n²), as 10n² + 4n + 2 ≥ n².

1.3 Performance Analysis

1.3.1 Algorithm Analysis

We are interested in algorithms which have been precisely specified using an appropriate mathematical formalism, such as a programming language.

Given such an expression of an algorithm, what can we do with it? Well, obviously we can run the program and observe its behavior. This is not likely to be very useful or informative in the general case. If we run a particular program on a particular computer with a particular set of inputs, then all we know is the behavior of the program in a single instance. Such knowledge is anecdotal and we must be careful when drawing conclusions based upon anecdotal evidence.

In order to learn more about an algorithm, we can “analyze” it. By this we mean to study the specification of the algorithm and to draw conclusions about how the implementation of that algorithm (the program) will perform in general. But what can we analyze? We can

• determine the running time of a program as a function of its inputs;

• determine the total or maximum memory space needed for program data;

• determine the total size of the program code;

• determine whether the program correctly computes the desired result;

• determine the complexity of the program, e.g., how easy is it to read, understand, and modify; and

• determine the robustness of the program, e.g., how well does it deal with unexpected or erroneous inputs?

In this text, we are concerned primarily with the running time. We also
consider the memory space needed to execute the program. There are many
factors that affect the running time of a program. Among these are the
algorithm itself, the input data, and the computer system used to run the
program. The performance of a computer is determined by

• the hardware :

• processor used (type and speed),

• memory available (cache and RAM), and

• disk available;
• the programming language in which the algorithm is specified;

• the language compiler/interpreter used; and

• the computer operating system software.

A detailed analysis of the performance of a program which takes all of these factors into account is a very difficult and time-consuming undertaking. Furthermore, such an analysis is not likely to have lasting significance.

1.3.2 Time Complexity

The time complexity of a problem is the number of steps that it takes to solve an instance of the problem as a function of the size of the input (usually measured in bits), using the most efficient algorithm.
Time required T(P) to run a program P also consists of two components :
A fixed part : compile time, which is independent of the problem instance → c.
A variable part : run time, which depends on the problem instance → tp(instance).
T(P) = c + tp(instance)
How to measure T(P) ?
Measure experimentally, using a “stop watch” → T(P) obtained in secs, msecs.
Count program steps → T(P) obtained as a step count.
The fixed part is usually ignored; only the variable part tp() is measured.
What is a program step ?
a + b + b*c + (a + b)/(a − b) → one step;
comments → zero steps;
while (<expr>) do → step count equal to the number of times <expr> is executed;
for i = <expr> to <expr1> do → step count equal to the number of times the loop condition is checked.
Example 1

Sr. No.   Statement                 S/E   Freq.   Total
1         Algorithm Sum(a[ ], n)   0     –       0
2         {                        0     –       0
3         s := 0.0;                1     1       1
4         for i := 1 to n do       1     n + 1   n + 1
5         s := s + a[i];           1     n       n
6         return s;                1     1       1
7         }                        0     –       0
                                         Total : 2n + 3
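The table's count can be checked by instrumenting the algorithm in Python (a sketch; the step accounting is an assumption that mirrors the S/E and frequency columns above):

```python
def sum_with_steps(a, n):
    # Mirrors Algorithm Sum(a[], n), counting one step per executed statement.
    steps = 0
    s = 0.0; steps += 1              # s := 0.0               (1 step)
    for i in range(1, n + 1):
        steps += 1                   # loop test, true        (n times)
        s += a[i - 1]; steps += 1    # s := s + a[i]          (n times)
    steps += 1                       # final loop test, false (1 step)
    steps += 1                       # return s               (1 step)
    return s, steps

s, steps = sum_with_steps([1, 2, 3, 4, 5], 5)
# steps == 2*5 + 3 == 13
```

For any n the counter comes out to 2n + 3, matching the table.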
Example 2

Sr. No.   Statement                    S/E   Freq.      Total
1         Algorithm Sum(a[ ], n, m)   0     –          0
2         {                           0     –          0
3         for i := 1 to n do          1     n + 1      n + 1
4         for j := 1 to m do          1     n(m + 1)   n(m + 1)
5         s := s + a[i][j];           1     nm         nm
6         return s;                   1     1          1
7         }                           0     –          0
                                            Total : 2nm + 2n + 2

Examples : Searching an unsorted list of words for a particular word is a linear time operation, because in the worst case you must inspect every element in the list. Searching in a dictionary, by contrast, is a logarithmic time problem, because you can employ an algorithm that does not visit every element to find your word.

1.3.3 Space Complexity

The space complexity of a problem is a related concept that measures the amount of space or memory required by the algorithm. An informal analogy would be the amount of scratch paper needed while working out a problem with pen and paper. Space complexity is also measured with Big O notation.

There are other measures of computational complexity. For instance, communication complexity is a measure of complexity for distributed computations.

A different measure of problem complexity, which is useful in some cases, is circuit complexity. This is a measure of the size of a boolean circuit needed to compute the answer to a problem, in terms of the number of logic gates required to build the circuit. Such a measure is useful, for example, when designing hardware microchips to compute the function instead of software.

Thus the space complexity of an algorithm is the amount of memory it needs to run to completion.
Memory space S(P) needed by a program P, consists of two components:
A fixed part: needed for instruction space, simple variable
space, constant space etc. → c
A variable part: dependent on a particular instance of input and
output data. → Sp(instance)
S(P) = c + Sp(instance)
Example 1
Algorithm abc (a, b, c)
{
return a+b+b*c+(a+b-c)/(a+b)+4.0;
}
For every instance, 3 computer words are required to store the variables a, b and c. Therefore Sp() = 3 and S(P) = 3.
Example 2
Algorithm Sum(a[ ], n)
{
s:= 0.0;
for i = 1 to n do
s := s + a[i];
return s;
}
Every instance needs to store array a[] & n.
Space needed to store n = 1 word.
Space needed to store a[ ] = n floating point words (or at least
n words)
Space needed to store i and s = 2 words
Sp(n) = (n + 3). Hence S(P) = (n + 3).

1.3 Amortized Analysis

An amortized analysis is any strategy for analyzing a sequence of operations to show that the average cost per operation is small, even though a single operation within the sequence might be expensive. Even though we are taking averages, however, probability is not involved! An amortized analysis guarantees the average performance of each operation in the worst case.

Types of amortized analyses :
• the aggregate method,
• the accounting method,
• the potential method.

The aggregate method, though simple, lacks the precision of the other two methods. In particular, the accounting and potential methods allow a specific amortized cost to be allocated to each operation. Amortized costs can provide a clean abstraction of data-structure performance. Any of the analysis methods can be used when an amortized analysis is called for, but each method has some situations where it is arguably the simplest or most precise.
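As an illustration of the aggregate method (an example assumed here, not taken from the text) : n append operations on a doubling dynamic array cost O(n) in total, so each append is O(1) amortized even though an individual append that triggers a resize costs time proportional to the current size.

```python
def total_copy_cost(n):
    # Cost model (an assumption for this sketch): an append costs 1 write,
    # plus k element copies when the array grows from capacity k to 2k.
    cost, size, cap = 0, 0, 1
    for _ in range(n):
        if size == cap:
            cost += size     # copy all existing elements into the new array
            cap *= 2
        cost += 1            # write the new element
        size += 1
    return cost

# The total is always below 3n, so the amortized cost per append is below 3.
```

The copies form the geometric series 1 + 2 + 4 + ... < 2n, and the n writes add n more, giving the 3n bound.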

1.4 Solving Recurrence Equations
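As a small illustration of this topic, consider the recurrence T(n) = 2T(n/2) + n with T(1) = 1 (an example assumed here; it arises in divide-and-conquer algorithms such as merge sort). Expanding the recurrence log₂ n times gives T(n) = n log₂ n + n for n a power of 2, i.e. T(n) = θ(n log n). A quick numerical check:

```python
import math

def T(n):
    # The recurrence T(n) = 2T(n/2) + n with T(1) = 1, n a power of 2.
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# Closed form obtained by repeated substitution: T(n) = n*log2(n) + n.
for n in (2, 8, 1024):
    assert T(n) == n * int(math.log2(n)) + n
```

The same expansion technique (repeated substitution until the base case is reached) works for many divide-and-conquer recurrences.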

1.5 Proof Techniques


1.5.1 By Contradiction
A contradiction is a statement which is false no matter what the truth value of its constituent parts.
Proof by contradiction, also known as indirect proof, consists of demonstrating
the truth of a statement by proving that its negation yields a contradiction. In
other words, assume you wish to prove statement S. For example, S could be
"there are infinitely many prime numbers". To give an indirect proof of S, you
start by assuming that S is false (or, equivalently, by assuming that "not S " is
true). What can you conclude if, from that assumption, mathematical reasoning
establishes the truth of an obviously false statement? Naturally, it could be that
the reasoning in question was flawed. However, if the reasoning is correct, the
only remaining explanation is that the original assumption was false. Indeed,
only from a false hypothesis is it possible to mathematically "prove" the truth of
another falsehood.
Theorem 1. There are infinitely many prime numbers.

Proof : Let P denote the set of all prime numbers. Assume for a contradiction that P is a finite set. The set P is not empty since it contains at least the integer 2. Since P is finite and nonempty, it makes sense to multiply all its elements. Let x denote that product, and let y denote x + 1. Consider the smallest integer d that is larger than 1 and that is a divisor of y. Such an integer certainly exists since y is larger than 1 and we do not require that d be different from y. First note that d itself is prime, for otherwise any proper divisor of d would also divide y and be smaller than d, which would contradict the definition of d. Therefore, according to our assumption that P contains each and every prime, d belongs to P. This shows that d is also a divisor of x, since x is the product of a collection of integers including d. We have reached the conclusion that d exactly divides both x and y. But recall that y = x + 1. Therefore, we have obtained an integer d larger than 1 that divides two consecutive integers x and y. This is clearly impossible : if indeed d divides x, then the division of y by d will necessarily leave 1 as remainder. The inescapable conclusion is that the original assumption was equally impossible. But the original assumption was that the set P of all primes is finite, and therefore its impossibility establishes that the set P is in fact infinite.

function Newprime(P : set of integers)
    {The argument P should be a nonempty finite set of primes}
    x ← product of the elements in P
    y ← x + 1
    d ← 1
    repeat d ← d + 1 until d divides y
    return d

Euclid's proof establishes that the value returned by Newprime(P) is a prime number that does not belong to P.
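The construction runs directly in Python (a sketch; the linear search for d is kept deliberately naive, as in the pseudocode):

```python
def newprime(P):
    # Given a nonempty finite set of primes P, return a prime not in P,
    # following Euclid's construction from the proof of Theorem 1.
    x = 1
    for p in P:
        x *= p          # x = product of the elements in P
    y = x + 1
    d = 1
    while True:         # repeat d := d + 1 until d divides y
        d += 1
        if y % d == 0:
            break
    return d

newprime({2, 3, 5})     # y = 31, so the new prime returned is 31
```

Note that the returned prime need not be y itself : for P = {2, 7}, y = 15 and the smallest divisor found is 3.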

Theorem 2 : There exist two irrational numbers x and y such that x^y is rational.

Proof : Assume for a contradiction that x^y is necessarily irrational whenever both x and y are irrational. It is well known that √2 is irrational (this was known in the days of Pythagoras, who lived even earlier than Euclid). Let z stand for √2^√2. By our assumption, z is irrational since it is the result of raising an irrational (√2) to an irrational power. Now let w stand for z^√2. Again, we have that w is irrational by our assumption, since this is so for both z and √2. But

w = z^√2 = (√2^√2)^√2 = (√2)^(√2·√2) = (√2)² = 2.

We have arrived at the conclusion that 2 is irrational, which is clearly false. We must therefore conclude that our assumption was false : it must be possible to obtain a rational number when raising an irrational to an irrational power.
1.5.2 By Mathematical Induction
Consider the polynomial p(n) = n² + n + 41. If you compute p(0), p(1), p(2), ..., p(10), you find 41, 43, 47, 53, 61, 71, 83, 97, 113, 131 and 151. It is straightforward to verify that all these integers are prime numbers. Therefore, it is natural to infer by induction that p(n) is prime for all integer values of n. But in fact p(40) = 1681 = 41² is composite.

Consider the polynomial p(n) = 991n² + 1. The question is whether there is a positive integer n such that p(n) is a perfect square. If you try various values for n, you will find it increasingly tempting to assume inductively that the answer is negative. But in fact a perfect square can be obtained with this polynomial : the smallest solution is obtained when
n = 12055735790331359447442538767
Mathematically, if it is true that some statement P(x) holds for each x in some
set X, and if indeed y belongs to X, then the fact that P(y) holds can be roundly
asserted.

For instance, if it is correct that P (x) is true for all x in X, but we are careless
in applying this rule to some y that does not belong to X, we may erroneously
believe that P(y) holds. Similarly, if our belief that P(x) is true for all x in X is
based on careless inductive reasoning, then P(y) may be false even if indeed y
belongs to X. In conclusion, deductive reasoning can yield a wrong result, but
only if the rules that are followed are incorrect or if they are not followed
properly.
For example,
1³ = 1 = 1²
1³ + 2³ = 9 = 3²
1³ + 2³ + 3³ = 36 = 6²
1³ + 2³ + 3³ + 4³ = 100 = 10²
1³ + 2³ + 3³ + 4³ + 5³ = 225 = 15²
From the above examples we can say that the sum of the cubes of the first n positive integers is always a perfect square. It turns out in this case that inductive reasoning yields a correct law.
1.5.2.1 The principle of Mathematical Induction

Consider the following algorithm.

function sq(n)
    if n = 0 then return 0
    else return 2n + sq(n − 1) − 1

If you try it on a few small inputs, you find that

sq(0) = 0, sq(1) = 1, sq(2) = 4, sq(3) = 9, sq(4) = 16.

By induction, it seems obvious that sq(n) = n² for all n ≥ 0, but how could this be proved rigorously? Is it even true? Let us say that the algorithm succeeds on integer n whenever sq(n) = n², and that it fails otherwise. Consider any integer n ≥ 1 and assume for the moment that the algorithm succeeds on n − 1. By definition of the algorithm, sq(n) = 2n + sq(n − 1) − 1. By our assumption sq(n − 1) = (n − 1)². Therefore

sq(n) = 2n + (n − 1)² − 1 = 2n + (n² − 2n + 1) − 1 = n².
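The algorithm transcribes directly into Python, and the claim sq(n) = n² can be spot-checked (a sketch; the test range is an arbitrary choice):

```python
def sq(n):
    # function sq(n) from the text: sq(0) = 0, sq(n) = 2n + sq(n - 1) - 1.
    if n == 0:
        return 0
    return 2 * n + sq(n - 1) - 1

# The induction argument predicts sq(n) == n*n for every n >= 0.
assert all(sq(n) == n * n for n in range(50))
```

Of course the spot check alone proves nothing; it is the induction step above that establishes the result for all n.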


Now consider principle of mathematical induction

Consider any property P of the integers. For instance, P(n) could be “sq(n) = n²”, or “the sum of the cubes of the first n integers is equal to the square of the sum of those integers”, or “n³ < 2ⁿ”. The first two properties hold for every n ≥ 0, whereas the third holds provided n ≥ 10. Consider also an integer a, known as the basis. If

1. P(a) holds and


2. P(n) must hold whenever P(n − 1) holds, for each integer n > a,
then property P(n) holds for all integers n ≥ a. Using this principle, we could assert that sq(n) = n² for all n ≥ 0, immediately after showing that sq(0) = 0 = 0² and that sq(n) = n² whenever sq(n − 1) = (n − 1)² and n ≥ 1.
Consider the following tiling problem. You are given a board divided into
equal squares. There are m squares in each row and m squares in each column,
where m is a power of 2. One arbitrary square of the board is distinguished as
special; see Figure 1.4(a).

Design & Analysis of Algorithms (PU) 1-1 Introduction to Algorithms

Fig. 1.4 : The tiling problem. (a) Board with special square; (b) One tile; (c) Placing the first tile; (d) Solution

You are also given a supply of tiles, each of which looks like a 2 × 2 board with one square removed, as illustrated in Figure 1.4(b). Your puzzle is to cover the board with these tiles so that each square is covered exactly once, with the exception of the special square, which is not covered at all. Such a covering is called a tiling. Figure 1.4(d) gives a solution to the instance given in Figure 1.4(a).

Theorem 3 : The tiling problem can always be solved.

Proof : The proof is by mathematical induction on the integer n such that m = 2ⁿ.
o Basis : The case n = 0 is trivially satisfied. Here m = 1, and the 1 × 1 “board” is a single square, which is necessarily special. Such a board is tiled by doing nothing! (If you do not like this argument, check the next simplest case : if n = 1, then m = 2, and any 2 × 2 board from which you remove one square looks exactly like a tile by definition.)
o Induction step : Consider any n ≥ 1. Let m = 2ⁿ. Assume the induction hypothesis that the theorem is true for 2^(n−1) × 2^(n−1) boards. Consider an m × m board, containing one arbitrarily placed special square. Divide the board into 4 equal sub-boards by halving it horizontally and vertically. The original special square now belongs to exactly one of the sub-boards. Place one tile in the middle of the original board so as to cover exactly one square of each of the other three sub-boards; see Figure 1.4(c). Call each of the three squares thus covered “special” for the corresponding sub-board. We are left with four 2^(n−1) × 2^(n−1) sub-boards, each containing one special square. By our induction hypothesis, each of these sub-boards can be tiled. The final solution is obtained by combining the tilings of the sub-boards together with the tile placed in the middle of the original board.

Since the theorem is true when m = 2⁰, and since its truth for m = 2ⁿ follows from its assumed truth for m = 2^(n−1) for all n ≥ 1, it follows from the principle of mathematical induction that the theorem is true for all m provided m is a power of 2.
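The induction step is in fact an algorithm, and it can be sketched in Python (the board representation and tile numbering are assumptions made for illustration):

```python
from itertools import count

def tile_board(m, sr, sc):
    # Tile an m x m board (m a power of 2) whose special square is (sr, sc),
    # following the induction step of Theorem 3. Each tromino gets a distinct
    # positive id in the returned board; the special square stays 0.
    board = [[0] * m for _ in range(m)]
    tile_id = count(1)

    def solve(top, left, size, r, c):
        # (r, c) is the special (uncovered) square of this sub-board.
        if size == 1:
            return
        t = next(tile_id)
        half = size // 2
        quadrants = [(top, left), (top, left + half),
                     (top + half, left), (top + half, left + half)]
        for qi, (qr, qc) in enumerate(quadrants):
            # Centre-most square of quadrant qi.
            cr = top + half - 1 + (qi // 2)
            cc = left + half - 1 + (qi % 2)
            if qr <= r < qr + half and qc <= c < qc + half:
                solve(qr, qc, half, r, c)      # this quadrant already has a hole
            else:
                board[cr][cc] = t              # the middle tile covers this square
                solve(qr, qc, half, cr, cc)    # treat it as the new special square

    solve(0, 0, m, sr, sc)
    return board

board = tile_board(4, 0, 0)   # 5 trominoes cover the 15 non-special squares
```

The one tile placed per recursive call covers exactly one centre square of each quadrant that does not contain the hole, just as in the proof.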

Theorem 4 All horses are the same colour.


Proof : We shall prove that any set of horses contains only horses of a single colour. In particular, this will be true of the set of all horses. Let H be an arbitrary set of horses. Let us prove by mathematical induction on the number n of horses in H that they are all the same colour.
o Basis : The case n = 0 is trivially true : if there are no horses in H, then surely they are all the same colour!

o Induction step : Consider any number n of horses in H. Call these horses h1, h2, ..., hn. Assume the induction hypothesis that any set of n − 1 horses contains only horses of a single colour (but of course the horses in one set could a priori be a different colour from the horses in another). Let H1 be the set obtained by removing horse h1 from H, and let H2 be defined similarly; see Figure 1.6.

H1 : h2 h3 h4 h5
H2 : h1 h3 h4 h5
Figure 1.6 : Horses of the same colour (n = 5)

There are n − 1 horses in each of these two new sets. Therefore, the induction hypothesis applies to them. In particular, all the horses in H1 are of a single colour, say c1, and all the horses in H2 are also of a single colour, say c2. But is it really possible for colour c1 to be different from colour c2? Surely not, since horse hn belongs to both sets, and therefore both c1 and c2 must be the colour of that horse! Since all the horses in H belong to either H1 or H2 (or both), we conclude that they are all the same colour c = c1 = c2. This completes the induction step and the proof by mathematical induction.

1.5.2.2 Generalized Mathematical Induction


It is sometimes necessary to prove an extended basis, that is, to prove the basis on more than one point.

We are now ready to formulate a more general principle of mathematical induction. Consider any property P of the integers, and two integers a and b such that a ≤ b. If

1. P(n) holds for all a ≤ n < b, and

2. for any integer n ≥ b, the fact that P(n) holds follows from the assumption that P(m) holds for all m such that a ≤ m < n,

then property P(n) holds for all integers n ≥ a.

Theorem 5 : Every positive composite integer can be expressed as a product of prime numbers.

Proof : The proof is by generalized mathematical induction. In this case, there is no need for a basis.

o Induction step : Consider any composite integer n ≥ 4. (Note that 4 is the smallest positive composite integer, hence it would make no sense to consider smaller values of n.) Assume the induction hypothesis that any positive composite integer smaller than n can be expressed as a product of prime numbers. (In the smallest case n = 4, this induction hypothesis is vacuous.) Consider the smallest integer d that is larger than 1 and that is a divisor of n. As argued in the proof of Theorem 1, d is necessarily prime. Let m = n/d. Note that 1 < m < n because n is composite and d > 1. There are two cases.

− If m is prime, we have decomposed n as the product of two primes : n = d × m.

− If m is composite, it is positive and smaller than n, and therefore the induction hypothesis applies : m can be expressed as a product of prime numbers, say m = p1 p2 ··· pk. Therefore n = d × m can be expressed as n = d p1 p2 ··· pk, also a product of prime numbers.

In either case, this completes the proof of the induction step and thus of the theorem.
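The induction step is constructive and doubles as a (very naive) factoring procedure; a Python sketch using trial division for the smallest divisor d:

```python
def factorize(n):
    # Express n >= 2 as a product of primes, mirroring the proof of Theorem 5:
    # take the smallest divisor d > 1 (necessarily prime), recurse on m = n // d.
    d = 2
    while n % d != 0:
        d += 1
    m = n // d
    return [d] if m == 1 else [d] + factorize(m)

factorize(360)    # [2, 2, 2, 3, 3, 5]
```

Because the smallest divisor is always taken first, the factors come out in non-decreasing order.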

1.6 Priority Queues


1.6.1 Heap

A heap is a special kind of rooted tree that can be implemented efficiently in an array.
Heapsort is a sorting algorithm that uses the heap data structure.
Heap data structure
• Heap is a nearly complete binary tree.
• Height of a node = number of edges on a longest simple path from the node down to a leaf.
• Height of heap = height of root = Θ(log n).
• A heap can be stored as an array A.
 Root of tree is A[1].
 Parent of A[i] = A[⌊i/2⌋].
 Left child of A[i] = A[2i].
 Right child of A[i] = A[2i + 1].
 Computing these indices is fast with a binary representation (a shift, and possibly an increment).
Heap property

• For max-heaps (largest element at root), the max-heap property is : for all nodes i, excluding the root, A[PARENT(i)] ≥ A[i].
• For min-heaps (smallest element at root), the min-heap property is : for all nodes i, excluding the root, A[PARENT(i)] ≤ A[i].
By induction and transitivity of ≥, the max-heap property guarantees that the maximum element of a max-heap is at the root. A similar argument holds for min-heaps. The heapsort algorithm uses max-heaps. In general, heaps can be k-ary trees instead of binary.
The way MAX-HEAPIFY works

• Compare A[i ], A[LEFT(i )], and A[RIGHT(i )].


• If necessary, swap A[i] with the larger of the two children to preserve the heap property.
• Continue this process of comparing and swapping down the heap, until the subtree rooted at i is a max-heap. If we hit a leaf, then the subtree rooted at the leaf is trivially a max-heap.
Run MAX-HEAPIFY on the following heap example :

Fig 1.5 : An example heap

• Node 2 violates the max-heap property.


• Compare node 2 with its children, and then swap it with the larger of the two children.
• Continue down the tree, swapping until the value is properly placed at the root of a
subtree that is a max-heap. In this example, the value ends up at a leaf, which is
trivially a max-heap.
• Time: O(log n).
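The procedure just traced can be written as a short Python sketch (1-based array with a dummy slot at index 0; the function name and example values are illustrative, not from the text):

```python
def max_heapify(a, i, n):
    """Float a[i] down until the subtree rooted at i is a max-heap.
    a[1..n] is the heap; the children of node i are 2i and 2i + 1."""
    largest = i
    l, r = 2 * i, 2 * i + 1
    if l <= n and a[l] > a[largest]:
        largest = l
    if r <= n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]   # swap with the larger child
        max_heapify(a, largest, n)            # continue down the heap

a = [None, 1, 90, 80, 70, 60, 50, 40]  # the root violates the max-heap property
max_heapify(a, 1, 7)
print(a[1:])  # [90, 70, 80, 1, 60, 50, 40]
```

Each call moves one level down the tree, so the running time is O(height) = O(log n), matching the analysis above.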
Insertion into heap
To insert an element into the heap, we add it "at the bottom" of the heap and then
compare it with its parent, grandparent, great-grandparent, and so on, until it is less than or
equal to one of these values. The following algorithm, Insert, describes this process in detail.
 Algorithm 1.1
Insertion into heap
Algorithm Insert(a, n)
{
    // Inserts a[n] into the heap which is stored in a[1 : n − 1].
    i := n; item := a[n];
    while ((i > 1) and (a[⌊i/2⌋] < item)) do
    {
        a[i] := a[⌊i/2⌋]; i := ⌊i/2⌋;
    }
    a[i] := item; return true;
}
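Algorithm Insert can be transcribed into Python as follows (a sketch, not the book's code; the array is 1-based via a dummy slot at index 0):

```python
def heap_insert(a, n):
    """Insert a[n] into the max-heap stored in a[1 : n-1] by sifting it up."""
    i, item = n, a[n]
    while i > 1 and a[i // 2] < item:
        a[i] = a[i // 2]      # move the smaller parent down one level
        i = i // 2
    a[i] = item               # item has found its position
    return True

a = [None, 80, 70, 50, 40, 60, 20]
a.append(90)          # place the new element "at the bottom" (n = 7)
heap_insert(a, 7)
print(a[1:])  # [90, 70, 80, 40, 60, 20, 50]
```

Note that, like the pseudocode, this shifts parents down rather than repeatedly swapping, so item is written exactly once.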

Fig 1.6 : Inserting the element 90 into an existing heap

Deletion from heap

To delete the maximum key from the max heap, we use an algorithm called Adjust.
Adjust takes as input the array a[ ] and the integers i and n.
 Algorithm 1.2

Deletion from heap

Algorithm Adjust(a, i, n)
// The complete binary trees with roots 2i and 2i + 1 are combined with node i
// to form a heap rooted at i. No node has an address greater than n or less than 1.
{
    j := 2i; item := a[i];
    while (j ≤ n) do
    {
        // Compare the left and right children and let j index the larger child.
        if ((j < n) and (a[j] < a[j + 1])) then j := j + 1;
        if (item ≥ a[j]) then break; // A position for item is found.
        a[⌊j/2⌋] := a[j]; j := 2j;
    }
    a[⌊j/2⌋] := item;
}

Algorithm DelMax(a, n, x)
// Delete the maximum from the heap a[1 : n] and store it in x.
{
    if (n = 0) then
    {
        write ("heap is empty"); return false;
    }
    x := a[1]; a[1] := a[n];
    Adjust(a, 1, n − 1); return true;
}
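The Adjust/DelMax pair can be sketched in Python like this (illustrative names; 1-based array with a dummy slot at index 0):

```python
def adjust(a, i, n):
    """Combine the heaps rooted at 2i and 2i+1 with node i into one heap rooted at i."""
    j, item = 2 * i, a[i]
    while j <= n:
        if j < n and a[j] < a[j + 1]:
            j += 1                 # let j index the larger child
        if item >= a[j]:
            break                  # a position for item is found
        a[j // 2] = a[j]           # move the larger child up
        j = 2 * j
    a[j // 2] = item

def del_max(a, n):
    """Delete and return the maximum of the heap a[1 : n]."""
    if n == 0:
        raise IndexError("heap is empty")
    x, a[1] = a[1], a[n]           # save the max, move the last element to the root
    adjust(a, 1, n - 1)            # restore the heap property on a[1 : n-1]
    return x

a = [None, 90, 70, 80, 40, 60, 20, 50]
print(del_max(a, 7))   # 90
print(a[1:7])          # [80, 70, 50, 40, 60, 20]
```

Since adjust walks one root-to-leaf path, each deletion costs O(log n).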

 Algorithm 1.3

A sorting Algorithm

Algorithm Sort(a, n)
// Sort the elements a[1 : n].
{
    for i := 1 to n do Insert(a, i);
    for i := n to 1 step −1 do
    {
        DelMax(a, i, x); a[i] := x;
    }
}

To sort n elements, it is sufficient to make n insertions followed by n deletions from a
heap, as the above algorithm illustrates. Since insertion and deletion each take O(log n) time
in the worst case, this sorting algorithm has a time complexity of O(n log n).

1.6.2 Heapsort

The best-known example of the use of a heap arises in its application to sorting. A
conceptually simple sorting strategy was given above, in which the maximum value is
repeatedly removed from the remaining unsorted elements. A sorting algorithm that exploits
the fact that a heap of n elements can be built in O(n) time is given in Algorithm 1.4 as
follows :

 Algorithm 1.4

Heapsort Algorithm

Algorithm HeapSort(a, n)
// a[1 : n] contains n elements to be sorted. HeapSort rearranges them in place into
// nondecreasing order.
{
    for i := ⌊n/2⌋ to 1 step −1 do Adjust(a, i, n);
    for i := n to 2 step −1 do
    {
        t := a[i]; a[i] := a[1]; a[1] := t;
        Adjust(a, 1, i − 1);
    }
}
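Algorithm 1.4 can be transcribed into runnable Python as follows (a sketch under the same 1-based, dummy-slot-at-0 convention used above; it reuses the sift-down logic of Algorithm 1.2):

```python
def adjust(a, i, n):
    # Sift a[i] down so the subtree rooted at i is a max-heap (Algorithm 1.2).
    j, item = 2 * i, a[i]
    while j <= n:
        if j < n and a[j] < a[j + 1]:
            j += 1
        if item >= a[j]:
            break
        a[j // 2] = a[j]
        j = 2 * j
    a[j // 2] = item

def heap_sort(a, n):
    """Sort a[1 : n] in place into nondecreasing order."""
    for i in range(n // 2, 0, -1):   # build a max-heap bottom-up, O(n)
        adjust(a, i, n)
    for i in range(n, 1, -1):        # n - 1 swap-and-adjust steps, O(n log n)
        a[i], a[1] = a[1], a[i]      # move the current maximum to the end
        adjust(a, 1, i - 1)

a = [None, 5, 1, 9, 3, 7, 2]
heap_sort(a, 6)
print(a[1:])  # [1, 2, 3, 5, 7, 9]
```

The bottom-up build loop starts at ⌊n/2⌋ because nodes ⌊n/2⌋+1 … n are leaves and already trivial heaps.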

1.7 Solved Questions/Examples

Q. 1 Flowcharting and pseudocode are 2 different design tools for an algorithm. How do
they differ and how are they similar ?
Ans. : Both flowcharting and pseudocode are used to design individual parts of a program,
but flowcharting gives a pictorial representation of the logical flow of an algorithm.
This is in contrast to the other design tool, pseudocode, which provides a textual (part
English, part structured code) design solution.

Q. 2 What are the factors which contribute for running time of a program ?
Ans. : There are basically four factors on which the running time of program depend. They
are :
(i) The input to the program.
(ii) The quality of code generated by the compiler used to create the object code.
(iii) The nature and speed of the instructions on the machine used to execute the program.
(iv) The time complexity of the algorithm underlying the program.

Q. 3 The input to the program contributes for running time of a program – Explain.
Ans. : This indicates that the running time of a program should be defined as a function of the
input. In most cases, the running time depends not on the exact input (i.e. what kind
of input it is) but only on the size of the input. For example, sorting five numbers
(using some algorithm) takes less time than sorting ten numbers (using the same
algorithm).

Q. 4 What are the desirable characteristics of a program ?


Ans. : The desirable characteristics of a program are :

(i) Integrity :

Refers to the accuracy of program.

(ii) Clarity :

Refers to the overall readability of a program, with emphasis on its underlying logic.

(iii) Simplicity :

The clarity and accuracy of a program are usually enhanced by keeping the things as
simple as possible, consistent with the overall program objectives.

(iv) Efficiency :

It is concerned with execution speed and efficient memory utilization.

(v) Modularity :

Many programs can be decomposed into a series of identifiable subtasks.

(vi) Generality :

Program must be as general as possible. (viz., rather than keeping fixed values for
variables, it is better to read them).

Q. 5 In what way the asymmetry between Big-Oh notation and Big-Omega Notation
helpful ?
Ans. : There are many situations where an algorithm runs faster on some inputs than on
others. For example, suppose we know an algorithm that determines whether its input
is of prime length. This algorithm runs very fast whenever the input length is even.
Hence we cannot find a good lower bound on the running time that holds for all
n ≥ n0.

Q. 6 What are the basic components, which contribute to the space complexity ?
Ans. : Space complexity is the amount of memory, the program needs for its execution. There
are basically two components, which need to be considered while determining the
space complexity. They are :

(i) A fixed part, which is independent of the characteristics (viz., number, size) of the inputs
and outputs. This part typically includes the instruction space
(i.e. space for the code), space for simple variables and fixed-size component variables
(also called aggregate), space for constants, and so on.

(ii) A variable part which consists of the space needed by component variables whose size is
dependent on the particular problem instance being solved, space needed by reference
variables (this depends on instance characteristics) and the recursion stack space.
∴ S (P) ⇒ space requirement for program P
= c + SP, where c is a constant and SP depends on the instance characteristics.
Hence, we usually concentrate on determining SP, the instance-characteristic component, as
a measure of space complexity.

Q. 7 How do we analyse and measure time complexity of algorithm?


Ans. : The complexity of an algorithm depends on the number of statements executed.
Though the execution time of each statement differs, we can roughly count how many
times each statement is executed. Whenever we execute conditional statements we
may skip some statements or repeat others, so the total number of statement
executions depends on the conditional statements.
At this stage we can roughly estimate the complexity as the number of times the
conditional statement is executed.
Example : for (i = 0; i < n; i++)
printf(“%d\n”, i);
The output is the numbers from 0 to n − 1. We can easily say that the printf has been
executed n times. The condition checked in this case is i < n. The condition is checked n + 1
times : n times when it is true and once when it becomes false. Hence the condition is
executed n + 1 times and the printf n times; so far that is 2n + 1 statement executions.

Actually, i = 0 has been executed once and i++, n times.

The total number of statements executed is

1 + (n + 1) + (n) + (n) = 3n + 2.

If we ignore the constants, we can say that the complexity is of the order of n. The
notation used is Big O, i.e. O(n).
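The count above can be checked mechanically by instrumenting the loop with a counter. The following Python sketch (our own illustration, mirroring the C fragment) tallies the initialisation, every condition check, every body execution and every increment:

```python
def count_statements(n):
    """Count statement executions for: for (i = 0; i < n; i++) printf(i)."""
    count = 0
    i = 0
    count += 1            # the initialisation i = 0 runs once
    while True:
        count += 1        # the condition i < n is checked
        if not (i < n):
            break
        count += 1        # the loop body (the printf) runs
        i += 1
        count += 1        # the increment i++ runs
    return count

print(count_statements(10))   # 32, i.e. 3n + 2 for n = 10
```

Running it for several n confirms the closed form 3n + 2 derived above.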

Q. 8 Reorder the following complexity from smallest to largest


(a) n log₂(n) (b) n + n² + n³
(c) 24 (d)
Ans. : Complexities in ascending order :

(a) 24

(b)

(c) n log₂(n)

(d) n + n² + n³

Q. 9 Show the step table (indicating number of steps per execution, frequency of each of
the statement) for the following segment :

line no. int sum ( int a[ ], int n )


1 {
2 int s, i ;
3 s=0;
4 for (i = 1; i <= n; i++)
5 s = s + a[i];
6 return (s) ;
7 }
Ans. : s/e ⇒ steps per execution.
Line s/e Frequency Total steps
1 0 0 0
2 0 0 0
3 1 1 1
4 1 n+1 n+1
5 1 n n
6 1 1 1
7 0 0 0
Total number of steps = 2 n + 3

Q. 10 Compute the frequency count for :


for i := 1 to n
for j := i + 1 to n
for k := j + 1 to n
for l := k + 1 to n
x = x + 1;
Ans. : Here the i loop gets executed n times, the j loop at most (n − 1) times, the k loop at
most (n − 2) times and the l loop at most (n − 3) times. The body executes once for
every choice of indices 1 ≤ i < j < k < l ≤ n, so :
∴ Frequency count of x = x + 1
= n · (n − 1) · (n − 2) · (n − 3) / 4!
= n (n − 1) (n − 2) (n − 3) / 24
In general, the total complexity of the code is O(n⁴).

Q. 11 Find the time complexity for :


for i := 1 to n do
for j := i + 1 to n do
for k := j + 1 to n do
z = z + 1;
Ans. : In this example, the ‘i’ loop executes n times, the ‘j’ loop at most n − 1 times (since
the j loop starts at i + 1) and the k loop at most n − 2 times.
∴ The total count of loops = n · (n − 1) · (n − 2)
= n³ − 3n² + 2n
The frequency count of z = z + 1 can be determined by dividing this total loop count by
3!, the number of orderings of the three loop indices.
∴ Frequency count of z = z + 1 is :
= n (n − 1) (n − 2) / 3! = n (n − 1) (n − 2) / 6
In general, the total complexity of the code is O(n³).
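Both counts can be verified by brute force: run the nested loops with a counter and compare against the formulas n(n−1)(n−2)/3! and n(n−1)(n−2)(n−3)/4!. A quick Python check (our own, not from the text):

```python
def count_triple(n):
    """Innermost executions for: for i in 1..n, j in i+1..n, k in j+1..n (Q. 11)."""
    count = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for k in range(j + 1, n + 1):
                count += 1          # corresponds to z = z + 1
    return count

def count_quad(n):
    """Same with a fourth loop l in k+1..n (Q. 10)."""
    count = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for k in range(j + 1, n + 1):
                for l in range(k + 1, n + 1):
                    count += 1      # corresponds to x = x + 1
    return count

n = 10
print(count_triple(n), n * (n - 1) * (n - 2) // 6)           # 120 120
print(count_quad(n), n * (n - 1) * (n - 2) * (n - 3) // 24)  # 210 210
```

The agreement reflects the fact that the bodies execute once per combination of 3 (resp. 4) distinct indices, i.e. C(n, 3) and C(n, 4) times.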

Q. 12 Find out the frequency count for the following piece of code :
x = 5; y = 5;
for (i = 2; i <= x; i++)
for (j = y; j >= 0; j--)
{
if (i == j)
printf(“xxx”);
else
break;
}
Ans. :

Let us number the statements :

(i) x = 5; y = 5;

(ii) for (i = 2; i < = x; i++)

(iii) for (j = y; j >= 0; j −−)

(iv) { if (i == j)

(v) printf (“xxx”)

(vi) else

(vii) break;
(viii) }

Note here that the break statement will cause the termination of the inner loop
(i.e. the j loop).
‘i’ value    ‘j’ value    Remarks
2            5            Since i ≠ j, control goes to statement (vii)
3            5            Since i ≠ j, control goes to statement (vii)
4            5            Since i ≠ j, control goes to statement (vii)
5            5            Since i = j, control goes to statement (v)

Therefore the count can be tabulated as :


Line number Frequency
1 1
2 4
3 4
4 4
5 1
6 3
7 3

Note : The reader can write a simple ‘C’ code and check the answer by step-execution.

Q. 13 Find frequency count for the following :


int a = 10, b = 10;
for (i = 7; i < 9; i++)
for (j = 10; j <= b; j++)
{ if (i < j)
printf(“DSF\n”);
else
break;
}
Ans. : Note here that the i value ranges from 7 to 8 (i = 7 and i < 9) and the corresponding
j value can only be 10 (j <= b, and b = 10). So we can tabulate the count as :
Line number Frequency
1 1
2 2
3 2
4 2
5 2
6 0
7 0

Q. 14 Find the frequency count of :


for (i = 1; i <= n; i++)
for (j = 1; j <= n; j++)
a = a + 2;
Ans. : Here the i loop gets executed n times, and for each i the j loop runs n times.

∴ The total loop count is n².

i.e. ∑i=1..n ∑j=1..n 1 (1 for a = a + 2)
= ∑i=1..n n
= n²

Q. 15 Find the frequency count of :


(I) for (i = 1; i <= n; i++)
for (j = 1; j <= i; j++)
x = x + 1;
(II) i = 1;
while (i <= n)
{
x = x + 1;
i = i + 1;
}
Ans. :
(I) Let us number the statements :
1. for (i = 1; i <= n; i++)
2. for (j = 1; j <= i; j++)
3. x = x + 1;
The j loop runs i times, as i goes from 1 to n;
∴ Total loop count = 1 + 2 + ⋯ + n = n (n + 1)/2
∴ The frequency count of x = x + 1 is n (n + 1)/2, i.e. O(n²).

(II) Let us number the statements :


1. i=1
2. while (i< = n)
3. {
4. x = x + 1;
5. i=i+1
6. }
Now the frequency count is :
Statement Count
1 0
2 n+1
3 0
4 n
5 n
6 0
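These frequency counts, too, can be confirmed by running the fragments with counters (a hypothetical Python check mirroring parts (I) and (II)):

```python
def count_part1(n):
    # for (i = 1; i <= n; i++) for (j = 1; j <= i; j++) x = x + 1;
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            count += 1              # x = x + 1
    return count

def count_part2(n):
    # i = 1; while (i <= n) { x = x + 1; i = i + 1; }
    # The body runs n times; the condition is checked n + 1 times.
    body, checks, i = 0, 0, 1
    while True:
        checks += 1
        if not (i <= n):
            break
        body += 1
        i += 1
    return body, checks

print(count_part1(6))    # 21, i.e. n(n + 1)/2 for n = 6
print(count_part2(6))    # (6, 7)
```

Part (I) matches the triangular-number formula n(n + 1)/2, and part (II) matches the step table (statements 4 and 5 run n times, statement 2 runs n + 1 times).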

Q. 16 If the algorithm doIt has the complexity 5n, calculate the run-time complexity of the
following program segment :

i = 1

loop i < = n

doIt (…)

i=i+1
Ans. : doIt has the complexity 5n and the loop executes it n times.

Hence the total is n (5n) = 5n² = O(n²)

Q. 17 Suppose the complexity of an algorithm is n³. If a step in this algorithm takes
1 nanosec (10⁻⁹ sec), how long does the algorithm take to process an input of size 1000 ?
Ans. : Complexity of the algorithm is n³; when input size = 1000,

Time = (1000³) × 10⁻⁹ sec = 10⁹ × 10⁻⁹ sec = 1 sec

∴ The algorithm will take 1 sec to process an input of size 1000.

Q. 18 An algorithm runs on a given input of size n. If n is 4096, the run time is
512 milliseconds. If n is 16384, the run time is 1024 milliseconds. What is the
complexity ? What is the big-‘O’ notation ?
Ans. :
n1 = 4096, f(n1) = 512
n2 = 16384, f(n2) = 1024
n2 = 4 × n1 while f(n2) = 2 × f(n1)
Since n increases by a factor of four while f(n) increases by only a factor of two, the
complexity is n^(1/2). The big-‘O’ notation is O(n^(1/2)).

Review Questions

Q. 1 What is an algorithm ?

Q. 2 Explain the various characteristics of an algorithm.

Q. 3 Explain the need for algorithm analysis.

Q. 4 Compare the two functions n² and 2ⁿ/4 for various values of n. Determine when the
second becomes larger than the first.

Q. 5 Determine the frequency counts for all statements in the following two algorithm
segments:
1. for i:= 1 to n do 1. i:= 1;
2. for j := 1 to i do 2. while (i<=n) do
3. for k := 1 to j do 3. {
4. x := x +1; 4. x := x +1;
5. i := i + 1;
6. }
(a) (b)

Q. 6 Find two function f(N) and g (N) such that neither f (N) = O(g(N)) nor g (N) = O (f(N)).

Q. 7 For each of the following six program fragments:

a. Give an analysis of the running time (Big-Oh will do).

b. Implement the code in the language of your choice, and give the running time for
several values of N.

c. Compare your analysis with the actual running times.


(1) Sum = 0;
for (i=0; i<N; i++)
Sum++;
(2) Sum = 0;
for ( i =0; i<N; i++)
for (j =0; j<N; j++)
Sum++;
(3) Sum = 0;
for ( i =0; i < N; i++)
for (j =0; j < N*N; j++)
Sum++;
(4) Sum = 0;
for ( i =0; i < N; i++)
for (j =0; j < i ; j++)
Sum++;
(5) Sum = 0;
for (i = 0; i < N; i++)
for (j = 0; j < i * i; j++)
for(k = 0; k< j; k++)
Sum++;
(6) Sum = 0;
for (i = 0; i < N; i++)
for (j = 1; j < i * i; j++)
if( j % i == 0)
for(k = 0; k < j; k++)
Sum++;

Q. 8 Given the following recursive function to insert an element x into the highest zero
element of an array a :

Insert (x, a[ ], i)

if (i==0) return FALSE;

else if (a [i – 1]==0) {a [i – 1] = x; return TRUE;}

else return Insert (x, a, i – 1);

What are the best, worst and average time and space complexities of Insert (x, a, n)
using the recursion tree method. Verify your solutions using the substitution method.

Q. 9 What is big O notation ? Arrange the following functions by growth rate :

N, √N, N², N log₂N, N√N, 2/N, 2ᴺ, N³.

Q. 10 Show that the following statements are true :

1. 20 is O(1)

2. n (n − 1)/2 is O(n²)

3. max(n³, 10n²) is O(n³)

4. ∑i=1..n iᵏ is O(nᵏ⁺¹)

1.8 University Questions and Answers



Dec 2006

Q. 1 (a) Prove by contradiction that “there are infinitely many prime numbers.” (8 Marks)

(b) Consider the definition of the Fibonacci sequence :

F0 = 0; F1 = 1; and
Fn = Fn−1 + Fn−2 for n ≥ 2
Prove by mathematical induction that :
Fn = (φⁿ − (−φ)⁻ⁿ) / √5
where φ = (1 + √5) / 2 (8 Marks)

OR

Q. 2 (a) Consider the recurrence.


T(n) = O(n)
T(1) = Θ(1)
Show that the above recurrence is asymptotically bounded by Θ(n). (4 Marks)

(b) State whether the following functions are CORRECT or INCORRECT and justify your
answer.
(i) 3n + 2 = O(n)
(ii) 100n + 6 = O(n)
(iii) 10n² + 4n + 2 = O(n²) (6 Marks)

(c) Prove that,


f(n) = aₘnᵐ + … + a₁n + a₀, then
f(n) = O(nᵐ) (6 Marks)

May 2007

Q. 1 (a) Prove by generalized mathematical induction that “every positive integer can be expressed as
product of prime numbers.” (8 Marks)

(b) Prove by contradiction. There exist two irrational numbers x and y such that xy is rational.
(8 Marks)

OR

Q. 2 (a) Prove by mathematical induction that the sum of the cubes of the first n positive integers is
equal to the square of the sum of these integers. (8 Marks)

(b) Consider the recurrence :



T(n) = O(n)
T(1) = Θ(1)
Show that the above recurrence is asymptotically bounded by Θ(n). (8 Marks)

Dec 2007

Q. 1 (a) Consider the following code:


int sum (int a[ ], int n)
{
count = count + 1 ;
if (n <= 0)
{
count = count + 1 ;
return 0;
}
else
{
count = count + 1 ;
return sum (a, n – 1) + a[n];
}
}
Write the recursive formula for the above code and solve this recurrence relation.
(8 Marks)

(b) Explain in brief Amortized analysis. Find the amortized cost with respect to stack operations.
(10 Marks)

OR

Q. 2 (a) Solve the recurrence equation : T(n) = 2T(n/2) + n². (6 Marks)


(b) Give a recurrence equation for binary search and solve the same (6 Marks)
(c) Prove using mathematical induction that every positive composite integer can be
expressed as a product of prime numbers. (6 Marks)

May 2008

Q. 1 (a) Justify the following statement :



“ The space complexity under logarithmic cost is O (n log n)”. (4 Marks)

(b) Consider the function f(n) defined as

f(n) = nⁿ for all integers n ≥ 1
     = 0 otherwise
The fragment of pseudo code to compute nⁿ is given below :
compute ( )
{ read n ;
  if (n <= 0) return (0) ;
  else { t = n ;
         p = n – 1 ;
         while (p > 0)
         { t = t * n ;
           p = p – 1 ; }
         return (t) ;
       }
}
Determine the time and space complexity of this code fragment. Clearly mention any
assumptions made. (6 Marks)

(c) Prove by contradiction that “there are infinitely many prime numbers” (8 Marks)
OR

Q. 2 (a) Name and explain in two or three sentences three popular methods to arrive at amortized
costs for the operations. (6 Marks)

(b) If f(n) = aₘnᵐ + … + a₁n + a₀ then prove that f(n) = O(nᵐ). (8 Marks)

(c) State whether the following equalities are correct or incorrect. (4 Marks)
(i) 5n² − 6n = Θ(n²)
(ii) n! = O(nⁿ)
(iii) n³ + 10⁶ n² = Θ(n²)
(iv) 6n³/(log n + 1) = O(n³).

Dec 2008

Q. 1 (a) What are the basic components which contribute to the space
complexity ? Compute the space needed by the following algorithm and
justify your answer.

Algorithm sum (a, n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
(b) Prove that if f(n) = aₘnᵐ + … + a₁n + a₀, then f(n) = O(nᵐ).
OR
Q. 2 (a) Prove that the following algorithm produces a uniform random permutation
of the input, assuming that all the priorities are distinct.
PERMUTE_BY_SORTING (A)
n ← length[A]
for i ← 1 to n
    do P[i] ← RANDOM(1, n³)
Sort A, using P as sort keys
Return A.
(b) Suppose you flip a fair coin ‘n’ times. What is the longest streak of
consecutive heads that you expect to see ? Prove your answer.
