
Introduction to Algorithms

Outline
• What is an algorithm
• Characteristics of algorithm
• Performance/ Analysis of Algorithm
• Time complexity
• Space complexity
Algorithm
• An algorithm is a set of well-defined instructions
in sequence to solve a problem.
• A step-by-step procedure that will accomplish
a desired task.
• A problem-solving procedure that carries out a
sequence of actions in a precise order to
complete a desired task.
Characteristics of algorithm
• Every Algorithm must satisfy the following properties:
• Input- There should be 0 or more inputs supplied
externally to the algorithm.
• Output- There should be at least 1 output obtained.
• Definiteness- Every step of the algorithm should be
clear and well defined.
• Finiteness- The algorithm should have a finite
number of steps.
• Correctness- Every step of the algorithm must
generate a correct output.
Algorithm must have
• Specifications- Description of the procedure
• Pre-condition- Condition on input
• Body of the algorithm- Sequence of clear and
unambiguous instructions.
• Post-condition- Condition on output
Performance/ Analysis of Algorithm
• An algorithm is said to be efficient and fast if it
takes less time to execute and consumes less
memory space.
• Analysis is the process of estimating the running
time of an algorithm; it is used to find the best
algorithm to solve a problem.
• The performance of an algorithm is measured on
the basis of following properties :
– Time Complexity
– Space Complexity
Efficiency and complexity
• Efficiency
– How much time or space is required
– Measured in terms of common basic operations
• Complexity
– How efficiency varies with the size of the task
– Expressed in terms of standard functions of n
– E.g. O(n), O(n^2), O(log n), O(n log n)
Time complexity
• It is the amount of time an algorithm takes to
execute.
• Memory accesses, comparisons, shift
operations, etc. each require some time.
• So the time depends on the number of
instructions executed.
• Time complexity is represented by a function f(n).
• f(n) represents the number of times all the
statements of the algorithm execute, where n is
the input size.
Common Time Complexities
• The function can be any of the following:
1. Constant, i.e. f(n) = 1
2. Linear, f(n) = n + 1
3. Quadratic, f(n) = n^2 + 1
4. Cubic, f(n) = n^3
5. Double exponential, f(n) = 2^(2^n)
6. Logarithmic, f(n) = log n
7. Exponential, f(n) = 2^n, 2^(n^2)
Example 1
• Consider the following algorithm fragment:
for i = 1 to n do
sum = sum + i ;
• The for loop test executes n+1 times, for i values
1, 2, ..., n, n+1 (the last test fails and exits the loop).
• Each instruction in the body of the loop executes
once for each value of i = 1, 2, ..., n.
• So the number of steps executed is 2n+1.
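The count above can be verified with a small sketch (Python used here for illustration; the explicit step counter is an added instrument, not part of the original fragment):

```python
def sum_1_to_n(n):
    """Sum 1..n while counting the steps described in the analysis above."""
    steps = 0
    total = 0
    i = 1
    while True:
        steps += 1          # one loop test: runs n+1 times (the last test fails)
        if i > n:
            break
        total += i          # loop body: runs n times
        steps += 1
        i += 1
    return total, steps

# sum_1_to_n(10) counts 2*10 + 1 = 21 steps.
```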
Example 2
• Consider another algorithm fragment:
for i = 1 to n do
for j = 1 to n do
k = k +1
• From the previous example, the number of
instructions executed by the inner loop, which
forms the body of the outer loop, is 2n+1.
• Total number of instructions executed
= (n+1) + n(2n+1) = 2n^2 + 2n + 1
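A quick sketch (Python, with a step counter added purely for illustration) confirms the formula:

```python
def nested_loop_steps(n):
    """Count statement executions for the doubly nested loop above."""
    steps = 0
    k = 0
    i = 1
    while True:
        steps += 1              # outer loop test: n+1 times
        if i > n:
            break
        j = 1                   # inner loop contributes 2n+1 steps per pass
        while True:
            steps += 1          # inner loop test
            if j > n:
                break
            k += 1              # inner body
            steps += 1
            j += 1
        i += 1
    return steps

# For any n this matches (n+1) + n*(2n+1) = 2n^2 + 2n + 1.
```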
Problem in absolute time
• Measuring time complexity in absolute time
units has the following problems:
– The time required by an algorithm depends on the
number of instructions executed, which is a complex
polynomial.
– The execution time of an instruction depends on the
computer's power: different computers take
different amounts of time for the same instruction.
– Different types of instructions take different amounts
of time on the same computer.
• Complexity analysis abstracts away these
machine-dependent factors.
Types of cases
• Usually, the time required by an algorithm
falls under three types −
• Best Case − Minimum time required for
program execution.
• Average Case − Average time required for
program execution.
• Worst Case − Maximum time required for
program execution.
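Linear search is a standard illustration of these cases (a sketch; the comparison counter is added here for the purpose):

```python
def linear_search(arr, target):
    """Return (index, comparisons); -1 as index if target is absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
# Best case: target at the front -> 1 comparison.
# Worst case: target absent -> len(data) comparisons.
```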
Notation of Time Complexity
• It defines the order of growth of an algorithm,
which is useful for measuring the performance
of any algorithm.
• The different notations are:
1. Big oh O (worst case): upper bound
2. Big omega Ω (best case): lower bound
3. Theta Θ (average case): tight bound
4. Little oh o: strict upper bound
5. Little omega ω: strict lower bound
Notations
• Three main types of asymptotic order
notations are used in practice:
1. Big oh O (worst case): upper bound
2. Big omega Ω (best case): lower bound
3. Theta Θ (average case): tight bound
Big oh (O)
• The Big O notation defines an upper bound of an
algorithm; it bounds a function only from above.
• Big O notation is useful when we only have an
upper bound on the time complexity of an algorithm.
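The definition can be checked numerically: f(n) = O(g(n)) if constants c and n0 exist with f(n) <= c*g(n) for all n >= n0. A rough sketch (the witness constants below are choices made for this example, and the finite range is only evidence, not a proof):

```python
def witnesses_big_o(f, g, c, n0, upto=10_000):
    """Check f(n) <= c*g(n) for n0 <= n <= upto (numeric evidence only)."""
    return all(f(n) <= c * g(n) for n in range(n0, upto + 1))

# 3n + 10 is O(n): c = 4, n0 = 10 work, since 3n + 10 <= 4n once n >= 10.
ok = witnesses_big_o(lambda n: 3*n + 10, lambda n: n, c=4, n0=10)
```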
Big omega (Ω)
• Just as Big O notation provides an asymptotic
upper bound on a function, Ω notation
provides an asymptotic lower bound.
• Ω notation can be useful when we have a lower
bound on the time complexity of an algorithm.
Theta (Θ)
• The Theta notation bounds a function from
above and below, so it defines exact
asymptotic behavior.
• A simple way to get the Theta notation of an
expression is to drop low-order terms and
ignore leading constants.
• For example, consider the
following expression:
3n^3 + 6n^2 + 6000 = Θ(n^3)
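Numerically, dropping the low-order terms is justified because the ratio of the expression above to n^3 approaches the leading constant 3 as n grows (a small illustrative check):

```python
def ratio(n):
    """Ratio of 3n^3 + 6n^2 + 6000 to n^3; tends to the leading constant 3."""
    return (3*n**3 + 6*n**2 + 6000) / n**3

# For large n the terms 6/n and 6000/n^3 become negligible.
```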
Time complexity for iterative and
recursive function
Iterative function and
Recursive function
• Iterative function:

A()
{
  for (i = 1 to n)
    Max(a, b)
}

• Recursive function:

A(n)
{
  if (n > 1)
    A(n/2)
}

• Both are equal in terms of power: each form can be converted into the other.
• The analysis of the algorithm creates the difference:
– In an iterative function, count the number of iterations.
– In a recursive function, express the running time as a recurrence on the recursive calls.
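The equivalence of the two forms can be seen with a simple sum (Python, chosen here as an illustrative example):

```python
def sum_iterative(n):
    """Iterative form: count the iterations -> n of them, so O(n) time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_recursive(n):
    """Recursive form: T(n) = T(n-1) + O(1), which also solves to O(n)."""
    if n <= 0:
        return 0
    return n + sum_recursive(n - 1)

# Same result either way; only the style of analysis differs
# (loop counting vs. solving a recurrence).
```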
Constant complexity
• If a program does not have any iterative or
recursive statements, then the running time
does not depend on the input size.
• It stays constant: O(1).
• In these cases, we do not need to worry about the time.
Method to analyze iterative function
1. O(1): The time complexity of a function (or set of statements) is
considered O(1) if it contains no loop, no recursion, and no call to
any other non-constant-time function.
2. O(n): The time complexity of a loop is considered O(n) if the
loop variable is incremented/decremented by a constant
amount.
3. O(n^c): The time complexity of nested loops is equal to the number of
times the innermost statement is executed.
4. O(log n): The time complexity of a loop is considered O(log n) if
the loop variable is divided/multiplied by a constant amount.
5. O(log log n): The time complexity of a loop is considered
O(log log n) if the loop variable is reduced/increased
exponentially by a constant amount.
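These patterns can be made concrete by counting iterations for each loop shape (a sketch; the counters exist only to expose the growth rates):

```python
def count_linear(n):
    """Pattern 2: loop variable incremented by a constant -> O(n)."""
    count, i = 0, 0
    while i < n:
        count += 1
        i += 1
    return count

def count_log(n):
    """Pattern 4: loop variable multiplied by a constant -> O(log n)."""
    count, i = 0, 1
    while i < n:
        count += 1
        i *= 2
    return count

def count_loglog(n):
    """Pattern 5: loop variable grows exponentially (squared) -> O(log log n)."""
    count, i = 0, 2
    while i < n:
        count += 1
        i *= i
    return count
```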
Methods to analyze recursive function
• Back substitution method/iteration method
• Recursion tree method:
– Draw a recurrence tree and calculate the time taken by
every level of the tree; finally, sum the work done at all levels.
To draw the recurrence tree, start from the given
recurrence and keep expanding until you find a pattern among
levels. The pattern is typically an arithmetic or geometric
series.
• Master theorem:
– for subtract-and-conquer recurrences
– for divide-and-conquer recurrences
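As a concrete check of back substitution: for the recurrence T(n) = 2T(n/2) + n with T(1) = 1 (chosen here as an example), expanding level by level gives T(n) = n*log2(n) + n for n a power of two, which direct evaluation confirms:

```python
import math

def T(n):
    """Evaluate T(n) = 2*T(n/2) + n, T(1) = 1, for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# Back substitution predicts T(n) = n*log2(n) + n.
n = 1024
predicted = n * int(math.log2(n)) + n
```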
Master theorem (Divide and Conquer
Recurrences)
• T(n) = aT(n/b) + f(n)
• where n = size of the problem
a = number of subproblems in the recursion,
with a >= 1
n/b = size of each subproblem
f(n) = cost of the work done outside the recursive
calls, i.e. dividing into subproblems and
combining their solutions.
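The theorem's standard cases compare f(n) against n^(log_b a). A simplified sketch for the common case f(n) = n^d (restricting f to a polynomial is an assumption of this sketch; the full theorem is more general):

```python
import math

def master_theorem(a, b, d):
    """Simplified master theorem for T(n) = a*T(n/b) + n^d (polynomial f only).
    Returns the asymptotic class as a string."""
    crit = math.log(a, b)                   # critical exponent log_b(a)
    if d < crit:
        return f"Theta(n^{crit:.2f})"       # leaves dominate the work
    if d == crit:
        return f"Theta(n^{d} log n)"        # work balanced across levels
    return f"Theta(n^{d})"                  # root dominates the work

# Merge sort, T(n) = 2T(n/2) + n, lands in the balanced case: Theta(n log n).
```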
Master theorem (Subtract and
Conquer Recurrences)
• Let T(n) be a function defined on positive n as
shown below:

T(n) = c                 if n <= 1
T(n) = a*T(n-b) + f(n)   if n > 1

for some constants c, a > 0, b > 0, k >= 0 and a
function f(n). If f(n) is O(n^k), then:
1. If a < 1 then T(n) = O(n^k)
2. If a = 1 then T(n) = O(n^(k+1))
3. If a > 1 then T(n) = O(n^k * a^(n/b))
Space complexity
• Space complexity is the amount of memory used
by the algorithm (including the input values) to
execute and produce the result.
• Auxiliary space is sometimes confused with
space complexity.
• Auxiliary space is the extra or temporary space
used by the algorithm during its execution.
• Space complexity = auxiliary space + input space
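The distinction can be illustrated by two ways of reversing a list (Python, illustrative):

```python
def reverse_in_place(arr):
    """O(1) auxiliary space: only a few index variables besides the input."""
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        arr[lo], arr[hi] = arr[hi], arr[lo]
        lo, hi = lo + 1, hi - 1
    return arr

def reverse_copy(arr):
    """O(n) auxiliary space: builds a second list of size n."""
    return arr[::-1]

# Both have O(n) total space complexity (the input itself occupies n cells),
# but their auxiliary space differs: O(1) versus O(n).
```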
Memory Usage while Execution
• While executing, an algorithm uses memory space
for three reasons:
• Instruction space: the amount of memory used
to store the compiled version of the instructions.
• Data space: the amount of space used by the
variables and constants.
• Environmental or execution space: the space
required to store the environment information
needed to resume suspended functions.
Calculating the Space Complexity
• For calculating the space complexity:
– we need to know the amount of memory used by different
types of variables, which generally varies across operating
systems, but the method for calculating the space
complexity remains the same.
• It can be calculated in two ways:
– Constant space complexity
• The algorithm requires a fixed amount of space for all input values.
• Fixed steps.
– Linear space complexity
• The required space varies with the input, e.g. due to recursive or iterative functions.
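The two categories can be contrasted directly (a sketch; the explicit depth parameter is added only to make the stack growth visible):

```python
def fixed_space_max(arr):
    """Constant space: a single accumulator regardless of input size."""
    best = arr[0]
    for x in arr[1:]:
        if x > best:
            best = x
    return best

def depth_sum(n, depth=0):
    """Linear space: recursion depth grows with n, so O(n) stack frames.
    Returns (sum of 1..n, maximum recursion depth reached)."""
    if n == 0:
        return 0, depth
    total, d = depth_sum(n - 1, depth + 1)
    return n + total, d
```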
