Complexity Analysis of Algorithms


Introduction

Data Structure

Representing information is fundamental to computer science. The primary purpose of most computer programs is not to perform calculations, but to store and retrieve information, usually as fast as possible. This is where data structures come in. A data structure is simply a way of organizing data to be processed by programs. An array is the simplest data structure, and one you are already familiar with; even an integer or floating-point number stored in the memory of a computer can be viewed as a simple data structure. We shall encounter many more examples (not as simple as an array) as we progress through this course.

For many applications, the choice of a proper data structure is really the only major decision involved in the implementation: once we settle on a rational choice, the only task that remains is a simple algorithm for data processing. For the same data, some data structures are more space-efficient than others; for the same operations on given data, some data structures lead to more efficient algorithms than others. Use of an appropriate data structure and algorithm can make the difference between a program running in a few seconds and one requiring many days!

Algorithm

An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output in finite time. That is, an algorithm is required to terminate after a finite amount of time. An algorithm is thus a sequence of computational steps that transform the input into the output. An algorithm may be specified in a natural language (e.g. English), in pseudocode, or in a programming language.

    Algorithms + Data Structures = Programs
    Niklaus Wirth (Turing Award winner, 1984)

It will gradually become obvious that the choice of data structures and algorithms is closely intertwined. Obviously, no single data structure works well for all purposes, so it is important to know the strengths and limitations of several of them.

Pseudocode

A pseudocode is a high-level description of an algorithm. It is convenient, from the viewpoint of analysis and design, to present algorithms in pseudocode rather than in a specific programming language. Pseudocode is more structured than English prose and less detailed than a program, and thus hides program design issues. As an example, consider the following algorithm that finds the maximum element in an array.

    Algorithm arrayMax(A, n)
        Input: array A of n integers
        Output: maximum element of A
        currentMax ← A[0]
        for i ← 1 to n − 1 do
            if A[i] > currentMax then
                currentMax ← A[i]
        return currentMax

Motivation (Why Study Data Structures & Algorithms)

One might think that with ever more powerful computers, program efficiency (measured in terms of execution time and memory requirements) is becoming less important. Won't any efficiency problem we might have today be solved by tomorrow's hardware? However, as we develop more powerful computers, our history so far has always been to use the additional computing power to tackle more complex problems, be it in the form of more sophisticated user interfaces,


CS-210 Data Structures & Algorithms Department of Computer & Information Systems Engineering

bigger problem sizes, or new problems previously deemed computationally infeasible. More complex problems demand more computation, making the need for efficient programs even greater. Worse yet, as tasks become more complex, they become less like our everyday experience. Today's computer scientists must be trained to have a thorough understanding of the principles behind efficient program design, because their ordinary life experiences often do not apply when designing computer programs.

Algorithms are at the heart of computer science: hardware design, GUIs, routing in networks, and system software such as compilers, assemblers, and operating system routines all make extensive use of algorithms. As stated by Donald E. Knuth, an authority in the field, "Computer Science is the study of algorithms." One cannot cite even a single application that does not depend on algorithms in one way or another. The following is a representative list (far from exhaustive) of applications that rely heavily on algorithms:

- The Human Genome Project has the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms.
- The Internet enables people all around the world to quickly access and retrieve large amounts of information. To do so, clever algorithms are employed to manage and manipulate this large volume of data. Examples of problems that must be solved include finding good routes on which the data will travel and using a search engine to quickly find pages on which particular information resides.
- Electronic commerce enables goods and services to be negotiated and exchanged electronically. The ability to keep information such as credit card numbers, passwords, and bank statements private is essential if electronic commerce is to be used widely. Public-key cryptography and digital signatures are among the core technologies used, and are based on numerical algorithms and number theory.
- In manufacturing and other commercial settings, it is often important to allocate scarce resources in the most beneficial way. An oil company may wish to know where to place its wells in order to maximize its expected profit. An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met. An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming.

Having a solid base of algorithmic knowledge and technique is one characteristic that separates the truly skilled programmers from the novices. With modern computing technology, you can accomplish some tasks without knowing much about algorithms, but with a good background in algorithms, you can do much, much more.


Analysis of Algorithms

Historical Background

The question of whether a problem can be solved by an algorithm received a great deal of attention in the first half of the twentieth century, especially in the 1930s. The field concerned with the decidability and solvability of problems is referred to as the theory of computation, although some computer scientists advocate including the field of algorithms in this discipline. With the advent of digital computers, the need arose to investigate those solvable problems. In the beginning, one was content with a simple program that could solve a particular problem, without worrying about the resources, in particular the time, that the program needed. The need for efficient programs that use the least amount of resources then evolved, as a result of the limited resources available and the need to develop complex algorithms. This led to the evolution of a new area in computing, namely computational complexity. In this area, a problem that is classified as solvable is studied in terms of its efficiency, that is, the time and space needed to solve it. Later on, other resources were introduced, e.g. communication time, and the number of processors if the program is run on a parallel machine.

Definition of Analysis

Analyzing an algorithm has come to mean predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time that we want to measure. Algorithm analysis is typically performed to compare algorithms.

Empirical Analysis

One way to compare algorithms is to implement them as computer programs and then run them on a suitable range of inputs, measuring how much of the resources in question each program uses. This approach is often unsatisfactory, for several reasons. There is the effort involved in programming and testing more than one algorithm when at best you want to keep only one.
There is always the chance that one of the programs (i.e. the implementation of one of the algorithms) was better written than the other, so that the relative qualities of the underlying algorithms are not truly represented by their implementations. This is especially likely to occur when the programmer has a bias regarding the algorithms. Moreover, the results may not be indicative of the running time on inputs not included in the experiment; the choice of empirical test cases might unfairly favor one algorithm. In order to compare two algorithms, the same hardware and software environments must be used. Finally, you could find that even the better of the two algorithms does not fall within your resource budget. In that case you must begin the entire process again with yet another program implementing a new algorithm. But how would you know whether any algorithm can meet the resource budget? The problem might be too difficult for any implementation to be within budget.
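For all its pitfalls, a small empirical comparison is easy to set up. The following Python sketch (our own illustration, not part of the handout; the function names are invented for this example) times two programs that solve the same search task on the same input, which is exactly the kind of measurement an empirical analysis relies on:

```python
import time
from bisect import bisect_left

def linear_search(a, x):
    """Scan elements one by one until x is found; work grows linearly with len(a)."""
    for i, v in enumerate(a):
        if v == x:
            return i
    return -1

def binary_search(a, x):
    """Halve the search range each step (a must be sorted); logarithmic work."""
    i = bisect_left(a, x)
    return i if i < len(a) and a[i] == x else -1

a = list(range(100_000))   # one sorted input array
x = 99_999                 # worst case for the linear scan

t0 = time.perf_counter()
r1 = linear_search(a, x)
t1 = time.perf_counter()
r2 = binary_search(a, x)
t2 = time.perf_counter()

# The two programs must agree before their timings can be compared fairly.
assert r1 == r2 == 99_999
print(f"linear: {t1 - t0:.6f}s, binary: {t2 - t1:.6f}s")
```

Note how the measured numbers depend on this machine, this input, and how well each function happens to be written, which is precisely the objection raised above.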

Theoretical Analysis

To overcome these limitations of empirical analysis, which relies on implementations of algorithms, a theoretical approach to algorithm analysis is employed, marked by the following characteristics:

- Uses a high-level description (pseudocode) of the algorithm instead of an implementation
- Characterizes running time as a function of the input size, n
- Takes into account all possible inputs
- Allows us to evaluate the speed of an algorithm independent of the hardware/software environment


Theoretical analysis has proved useful to computer scientists who must determine whether a particular algorithm is worth considering for implementation.

Primitive (Elementary) Operations

Of primary consideration when estimating an algorithm's performance is the count of primitive operations required by the algorithm to process an input of a certain size. The terms "primitive operation" and "size" are both rather vague, and a precise definition is difficult. Most often, however, a primitive or elementary operation is defined as a high-level operation that is largely independent of the programming platform, input data, and the algorithm used. Examples of primitive operations are:

- Performing an arithmetic operation (e.g. adding two numbers)
- Comparing two numbers
- Assigning a value to a variable
- Indexing into an array or following a pointer reference
- Calling a method or function
- Returning from a method or function

Counting Primitive Operations

By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size. For example, consider again the problem of finding the largest element in an array. We determine the count of primitive operations in two cases: (I) when A[0] happens to be the largest element, so that no assignment is needed within the loop body (regarded as the best case), and (II) when the array elements are in ascending order, so that n − 1 assignments are performed within the loop body (regarded as the worst case).

    Algorithm arrayMax(A, n)
        Input: array A of n integers
        Output: maximum element of A
        currentMax ← A[0]                 { 2 operations }
        for i ← 1 to n − 1 do             { 1 + n + (n − 1) = 2n operations }
            if A[i] > currentMax then     { 2(n − 1) operations }
                currentMax ← A[i]         { best case: 0, worst case: 2(n − 1) }
        return currentMax                 { 1 operation }

    TOTAL (best case, I):   2 + 2n + 2(n − 1) + 0 + 1 = 4n + 1
    TOTAL (worst case, II): 2 + 2n + 2(n − 1) + 2(n − 1) + 1 = 6n − 1
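The two cases can be checked mechanically. The Python sketch below (our own instrumentation; the handout itself stays in pseudocode) counts how many assignments the loop body performs: zero when the first element is already the maximum, and n − 1 when every element is larger than the one before it:

```python
def array_max_counting(a):
    """Return (maximum of a, number of assignments made inside the loop)."""
    current_max = a[0]
    loop_assignments = 0
    for i in range(1, len(a)):
        if a[i] > current_max:
            current_max = a[i]
            loop_assignments += 1
    return current_max, loop_assignments

n = 10
best = list(range(n, 0, -1))    # Case I: A[0] is the largest, 0 loop assignments
worst = list(range(1, n + 1))   # Case II: ascending order, n - 1 loop assignments

assert array_max_counting(best) == (n, 0)
assert array_max_counting(worst) == (n, n - 1)
```

Note that a descending array is actually the best case here: its first element is already the maximum, so the loop never assigns.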


Best/Average/Worst Case

We have just noted that the nature of the input greatly affects the execution time of an algorithm. Hence, based on the input data, the performance of an algorithm can be categorized into three classes:

Best Case

This happens when the input is such that the algorithm runs in the shortest possible span of time; that is, it provides a lower bound on the running time. Normally we are not interested in the best case, because it might happen only rarely and is generally too optimistic for a fair characterization of the algorithm's running time. In other words, analysis based on the best case is not likely to be representative of the behavior of the algorithm. However, there are rare instances where a best-case analysis is useful, in particular when the best case has a high probability of occurring.

Worst Case

This occurs when the input is such that the algorithm runs for the longest time; that is, it gives an upper bound on the running time. The advantage of analyzing the worst case is that you know for certain that the algorithm can do no worse. In other words, it is an absolute guarantee that the algorithm will not run longer, no matter what the inputs are. This is especially important for real-time applications, such as the computers that monitor an air traffic control system.

Average Case

This provides an estimate of the average running time, which is useful when we wish to aggregate the cost of running the program many times on many different inputs; that is, when we would like to know the typical behavior of the algorithm. Average-case analysis is typically quite challenging: it requires us to define a probability distribution on the set of inputs, which is usually a difficult task. For example, the sequential search algorithm on average examines half of the array values. This is only true if the element with the given value is equally likely to appear in any position in the array. If this assumption is not correct, then the algorithm does not necessarily examine half of the array values in the average case.
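To make the three cases concrete, the sketch below (Python, our own illustration) counts the comparisons sequential search performs in its best case (key in the first slot), its worst case (key absent), and a simulated average case under the uniform-position assumption discussed above:

```python
import random

def sequential_search(a, key):
    """Return (index of key or -1, number of comparisons performed)."""
    comparisons = 0
    for i, v in enumerate(a):
        comparisons += 1
        if v == key:
            return i, comparisons
    return -1, comparisons

n = 1000
a = list(range(n))

_, best = sequential_search(a, a[0])    # best case: found immediately
_, worst = sequential_search(a, -1)     # worst case: key not present at all
assert best == 1 and worst == n

# Average case under the uniform assumption: key equally likely in any slot.
random.seed(0)
trials = [sequential_search(a, random.randrange(n))[1] for _ in range(2000)]
avg = sum(trials) / len(trials)
print(f"average comparisons ~ {avg:.1f} (theory predicts about {(n + 1) / 2})")
```

If the keys were drawn from a skewed distribution instead of a uniform one, the measured average would drift away from n/2, which is exactly the caveat in the paragraph above.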

[Figure: running times (in ms, on a 1 ms to 5 ms scale) for seven categories of inputs, labeled A through G, with the worst-case, average-case, and best-case levels marked.]

The figure shows running times on seven different categories of inputs, of which A and D give the worst case, E gives the best case, and the remaining four correspond to the average case.


Asymptotic Analysis

Order of Growth

It is meaningless to say that an algorithm A, when presented with input x, runs in y seconds. This is because the actual time is not only a function of the algorithm used; it is also a function of numerous factors, e.g. the target machine, the programming environment, or even the programmer's skills. It turns out that we do not need even approximate times. This is supported by several factors:

- Relative estimates: while analyzing the running time of an algorithm, we usually compare its behavior with that of another algorithm that solves the same problem. Thus, our estimates of running times are relative as opposed to absolute.
- Independence: it is desirable for an algorithm to be not only machine independent, but also capable of being expressed in any language, including human languages. Moreover, it should be technology independent; that is, we want our measure of the running time of an algorithm to survive technological advances.
- Large input sizes: our main concern is not small input sizes; rather, we are mostly concerned with the behavior of algorithms on large input instances.

In fact, counting the number of primitive operations in some reasonable implementation of an algorithm is very cumbersome, if not impossible, and since we are interested in the running time for large inputs, we may talk about the rate of growth, or the order of growth, of the running time. The growth rate of an algorithm is the rate at which the cost (i.e. the running time) of the algorithm grows as the size of its input grows. Looking at growth rates in this way is called asymptotic analysis, where the term "asymptotic" carries the connotation of "for large values of n". Thus, we will be focusing on the growth rates of (the running times of) algorithms as a function of the input size n, taking a "big picture" approach rather than being bogged down in small details.
It conveys all the information we need if we are content to say that the growth rate of an algorithm A is proportional to n, implying that its true or exact running time is n times a small constant factor that depends on the hardware and software environment and varies within a certain range.

In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. The best notion of input size depends on the problem being studied. For many problems, such as sorting or computing discrete Fourier transforms, the most natural measure is the number of items in the input; for example, the array size n for sorting. For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation.

Example 1

Consider the following algorithm again:

    Algorithm arrayMax(A, n)
        Input: array A of n integers
        Output: maximum element of A
        currentMax ← A[0]
        for i ← 1 to n − 1 do
            if A[i] > currentMax then
                currentMax ← A[i]
        return currentMax

Here, the size of the problem is n, the number of integers stored in A. The basic operation is to compare an integer's value to that of the largest value seen so far. It is reasonable to assume that it takes a fixed amount of time to do one such comparison, regardless of the values of the two integers or their positions in the array.
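A direct Python rendering of the arrayMax pseudocode might look like this (a sketch for illustration; the handout itself works only in pseudocode):

```python
def array_max(a):
    """Return the maximum element of a non-empty list, mirroring arrayMax(A, n)."""
    current_max = a[0]              # currentMax <- A[0]
    for i in range(1, len(a)):      # for i <- 1 to n - 1 do
        if a[i] > current_max:      #     if A[i] > currentMax then
            current_max = a[i]      #         currentMax <- A[i]
    return current_max              # return currentMax

print(array_max([31, 41, 59, 26, 53]))  # → 59
```

The loop performs one comparison per remaining element, which is the basic operation whose cost we are about to analyze.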


Let c be the amount of time (i.e. the cost) required to compare two integers. We do not care right now what the precise value of c might be. Nor are we concerned with the time required to increment the variable i (this must be done once for each value in the array and so also depends on the input size n), the time for the actual assignment when a larger value is found, or the little bit of extra time taken to initialize currentMax. We just want a reasonable approximation of the time taken to execute the algorithm, and thus assume that the costs of all such operations are lumped into c. The total time to run this algorithm is therefore approximately cn, because we must make n comparisons, with each comparison costing c units of time. We say that our algorithm has a running time expressed by the equation

    T(n) = cn

This equation describes the growth rate of the running time of the above algorithm.

Example 2

    sum = 0;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            sum++;

What is the running time of this code fragment? Clearly it takes longer to run when n is larger. The basic operation in this example is the increment operation for the variable sum. We can assume that incrementing takes constant time; call this time c. (We can ignore the time required to initialize sum and to increment the loop counters i and j. In practice, these costs can safely be bundled into c.) The total number of increment operations is n², so we say that the running time is

    T(n) = cn²

Investigating Growth Rates

The figure on the following page shows a graph of six equations, each meant to describe the running time of a particular program or algorithm. A variety of growth rates representative of typical algorithms are shown. The two equations labeled 10n and 20n are graphed as straight lines. A growth rate of cn (for any positive constant c) is referred to as a linear growth rate or running time: as the value of n grows, the running time of the algorithm grows in the same proportion, so doubling the value of n roughly doubles the running time. An algorithm whose running-time equation has a highest-order term containing a factor of n² is said to have a quadratic growth rate; the curve labeled 2n² represents a quadratic growth rate. The curve labeled 2^n represents an exponential growth rate; this name comes from the fact that n appears in the exponent. The curve labeled n! also grows exponentially.

As can be seen from this plot, the difference between an algorithm whose running time is T(n) = 10n and another with cost T(n) = 2n² becomes tremendous as n grows. For n > 5, the algorithm with running time T(n) = 2n² is already much slower, despite the fact that 10n has a greater constant factor than 2n². Comparing the two curves marked 20n and 2n² shows that changing the constant factor for one of the equations only shifts the point at which the two curves cross: for n > 10, the algorithm with cost T(n) = 2n² is slower than the algorithm with cost T(n) = 20n. The graph also shows that the equation T(n) = 5n log n grows somewhat more quickly than both T(n) = 10n and T(n) = 20n, but not nearly so quickly as T(n) = 2n². For constants a, b > 1, n^a grows faster than either log_b n or log n^b. Finally, algorithms with cost T(n) = 2^n or T(n) = n! are prohibitively expensive for even modest values of n; note that for constants a, b > 1, a^n grows faster than n^b. It can be seen that the growth rate has a tremendous effect on the resources consumed by an algorithm.
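The crossover points just described (n = 5 for 10n versus 2n², and n = 10 for 20n versus 2n²) are easy to verify numerically. The following Python sketch tabulates the growth functions from the plot:

```python
import math

growth = {
    "10n":      lambda n: 10 * n,
    "20n":      lambda n: 20 * n,
    "5n log n": lambda n: 5 * n * math.log2(n),
    "2n^2":     lambda n: 2 * n * n,
    "2^n":      lambda n: 2 ** n,
}

for n in (4, 5, 6, 10, 11, 16):
    row = ", ".join(f"{name}={f(n):.0f}" for name, f in growth.items())
    print(f"n={n}: {row}")

# 2n^2 overtakes 10n once n > 5, and overtakes 20n once n > 10.
assert growth["2n^2"](5) == growth["10n"](5)      # 50 == 50: curves cross at n = 5
assert growth["2n^2"](10) == growth["20n"](10)    # 200 == 200: curves cross at n = 10
assert growth["2n^2"](6) > growth["10n"](6)
assert growth["2n^2"](11) > growth["20n"](11)
```

Doubling the linear constant from 10 to 20 merely moves the crossing point from n = 5 to n = 10; it does not prevent the quadratic curve from winning eventually.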


A Faster Computer, or a Faster Algorithm?

Imagine that you have a problem to solve, and you know an algorithm whose running time is proportional to n². Unfortunately, the resulting program takes ten times too long to run. If you replace your current computer with a new one that is ten times faster, will the n² algorithm become acceptable? If the problem size remains the same, then perhaps the faster computer will allow you to get your work done quickly enough even with an algorithm having a high growth rate. But a funny thing happens to most people who get a faster computer: they don't run the same problem faster, they run a bigger problem! Say that on your old computer you were content to sort 10,000 records because that could be done during your lunch break. On your new computer you might hope to sort 100,000 records in the same time. You won't be back from lunch any sooner, so you are better off solving a larger problem. And because the new machine is ten times faster, you would like to sort ten times as many records.

If your algorithm's growth rate is linear (i.e., if the equation that describes the running time on input size n is T(n) = cn for some constant c), then 100,000 records on the new machine will be sorted in the same time as 10,000 records on the old machine. If the algorithm's growth rate is greater than cn, such as cn², then you will not be able to do a problem ten times the size in the same amount of time on a machine that is ten times faster.

How much larger a problem can be solved in a given amount of time by a faster computer? Assume that the new machine is ten times faster than the old one, and that the old machine could solve a problem of size n in an hour. What is the largest problem that the new machine can solve in one hour? The following table shows how large a problem can be solved on the two machines for five running-time functions from the plot above.

    f(n)        n        n'       change              n'/n
    10n         1,000    10,000   n' = 10n            10
    20n         500      5,000    n' = 10n            10
    5n log n    250      1,842    √10 n < n' < 10n    7.37
    2n²         70       223      n' = √10 n          3.16
    2^n         13       16       n' = n + 3          --

The table shows the increase in problem size that can be run in a fixed period of time on a computer that is ten times faster. For the purpose of this example, arbitrarily assume that the old machine can run 10,000 basic operations in one hour. The second column shows the maximum value of n that can be run in 10,000 basic operations on the old machine. The third column shows the value of n', the new maximum problem size that can be run in the same time on the new machine, which is ten times faster.

This table illustrates many important points. The first two running times are both linear; only the value of the constant factor has changed. In both cases, the machine that is ten times faster gives an increase in problem size by a factor of ten. In other words, while the value of the constant does affect the absolute size of the problem that can be solved in a


fixed amount of time, it does not affect the improvement in problem size (as a proportion of the original size) gained by a faster computer. Constant factors never affect the relative improvement gained by a faster computer.

An algorithm with time equation T(n) = 2n² does not receive nearly as great an improvement from the faster machine as an algorithm with a linear growth rate. Instead of an improvement by a factor of ten, the improvement is only the square root of that: √10 ≈ 3.16. Thus, the algorithm with the higher growth rate not only solves a smaller problem in a given time in the first place, it also receives less of a speedup from a faster computer. As computers get ever faster, the disparity in problem sizes becomes ever greater. The algorithm with growth rate T(n) = 5n log n improves by a greater amount than the one with quadratic growth rate, but not by as great an amount as the algorithms with linear growth rates.

Note that something special happens in the case of the algorithm whose running time grows exponentially. The curve for the algorithm whose time is proportional to 2^n goes up very quickly. As can be seen from the table above, the increase in problem size on the machine ten times as fast is about n + 3 (to be precise, it is n + log₂ 10). The increase in problem size for an algorithm with an exponential growth rate is by a constant addition, not by a multiplicative factor. Because the old value of n was 13, the new problem size is 16. If next year you buy another computer ten times faster yet, then the newest computer (100 times faster than the original) will only run a problem of size 19. Thus, an exponential growth rate is radically different from the other growth rates.

Instead of buying a faster computer, consider what happens if you replace an algorithm whose running time is proportional to n² with a new algorithm whose running time is proportional to n log n. An algorithm with running time T(n) = n² requires 1024 × 1024 = 1,048,576 time steps for an input of size n = 1024. An algorithm with running time T(n) = n log n requires 1024 × 10 = 10,240 time steps for an input of size n = 1024, an improvement of much more than a factor of ten over the algorithm with running time T(n) = n².

Asymptotic Analysis

Despite the larger constant for the curve labeled 10n in the graph, the curve labeled 2n² crosses it at the relatively small value of n = 5. What if we double the value of the constant in front of the linear equation? As shown in the graph, the curve labeled 20n is surpassed by the curve labeled 2n² once n = 10. In general, changes to a constant factor in either equation only shift where the two curves cross, not whether they cross. When you buy a faster computer or a faster compiler, the new problem size that can be run in a given amount of time for a given growth rate is larger by the same factor, regardless of the constant on the running-time equation. The time curves for two algorithms with different growth rates still cross, regardless of their running-time equation constants.

For these reasons, we usually ignore the constants when we want an estimate of the running time or other resource requirements of an algorithm. This simplifies the analysis and keeps us thinking about the most important aspect: the growth rate. This is called asymptotic algorithm analysis. To be precise, asymptotic analysis refers to the study of an algorithm as the input size gets big or reaches a limit (in the sense of calculus). It has proved so useful to ignore constant factors that asymptotic analysis is used for most algorithm comparisons.

It is not always reasonable to ignore the constants, however. When comparing algorithms meant to run on small values of n, the constants can have a large effect. For example, if the problem is to sort a collection of exactly five records, then an algorithm designed for sorting thousands of records is probably not appropriate, even if its asymptotic analysis indicates good performance. There are rare cases where the constants for two algorithms under comparison differ by a factor of 1000 or more, making the one with the lower growth rate impractical for most purposes due to its large constant. Asymptotic analysis provides a simplified model of the running time or other resource needs of an algorithm. This simplification usually helps you understand the behavior of your algorithms. Just be aware of the limitations of asymptotic analysis in the rare situation where the constant is important.
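Returning to the faster-computer table above: its entries can be reproduced by brute force. The sketch below (Python, our own illustration) finds the largest n each cost function can afford within each budget, 10,000 basic operations for the old machine and 100,000 for the new one; the 5n log n row lands within rounding distance of the table's 1,842:

```python
import math

def largest_n(cost, budget):
    """Largest n with cost(n) <= budget, found by a simple upward scan."""
    n = 1
    while cost(n + 1) <= budget:
        n += 1
    return n

costs = {
    "10n":      lambda n: 10 * n,
    "20n":      lambda n: 20 * n,
    "5n log n": lambda n: 5 * n * math.log2(n) if n > 1 else 0,
    "2n^2":     lambda n: 2 * n * n,
    "2^n":      lambda n: 2 ** n,
}

for name, f in costs.items():
    old = largest_n(f, 10_000)     # old machine: 10,000 operations per hour
    new = largest_n(f, 100_000)    # new machine: ten times faster
    print(f"{name:8s}  n = {old:5d}  n' = {new:6d}  n'/n = {new / old:.2f}")
```

The linear rows improve by a factor of ten, the quadratic row by about √10, and the exponential row only by the additive constant log₂ 10 ≈ 3.
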


Asymptotic Notations

For the asymptotic analysis of algorithms, the following definitions and asymptotic notations are in order.

1. Big-O Notation

Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.

Thus, O notation provides an upper bound on the running time of an algorithm. The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n). The constant n0 is the smallest value of n for which the claim of an upper bound holds true.

Examples

a) Consider the sequential search algorithm for finding a specified value in an array of n integers. If visiting and examining one value in the array requires cs steps, where cs is a positive constant, and if the value we search for has equal probability of appearing in any position in the array, then in the average case T(n) ≈ cs·n/2. Now, cs·n/2 ≤ cs·n for all n ≥ 1. Therefore, by definition, T(n) is O(n) for n0 = 1 and c = cs.

b) For a particular algorithm, T(n) = c1·n^2 + c2·n in the average case, where c1 and c2 are positive constants. Then, c1·n^2 + c2·n ≤ c1·n^2 + c2·n^2 = (c1 + c2)·n^2 for all n ≥ 1. So, T(n) ≤ c·n^2 for c = c1 + c2 and n0 = 1. Therefore, T(n) is O(n^2) by definition.

c) Assigning the value from any position of an array to a variable takes constant time regardless of the size of the array. Thus, T(n) = c (for the best, worst, and average cases). We could say in this case that T(n) is O(c); however, it is traditional to say that an algorithm whose running time has a constant upper bound is O(1).

We always seek to define the running time of an algorithm with the lowest possible upper bound. That is, g(n) must be as close to f(n) as possible.

Properties of Big O

1. If f(n) is O(g(n)), then a·f(n) is O(g(n)) for any constant a. This means the leading constant can safely be ignored.
2. If f(n) is O(g(n)) and h(n) is O(g'(n)), then f(n) + h(n) is O(max(g(n), g'(n))). This can be extended to any number of terms.
3. If f(n) is O(g(n)) and h(n) is O(g'(n)), then f(n)·h(n) is O(g(n)·g'(n)).
4. If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)). This is known as transitivity.
5. If f(n) is a polynomial of degree d, then f(n) is O(n^d). The higher-order term soon swamps the lower-order terms in its contribution to the total cost as n becomes larger.
6. n^k = O(a^n), for any fixed k > 0 and a > 1. An algorithm of order n to a fixed power is better than an algorithm of order a (> 1) to the power of n.
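The average-case claim in example (a) can be checked empirically. The sketch below (array contents, trial count, and function names are illustrative choices, not part of the original notes) counts element visits during a sequential search; the mean count comes out near n/2 and never exceeds n, matching T(n) ≈ cs·n/2 and the O(n) bound:

```python
import random

def sequential_search(arr, target):
    """Scan the array left to right, counting element visits."""
    steps = 0
    for i, value in enumerate(arr):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

n = 1000
arr = list(range(n))  # every value 0..n-1 appears exactly once

# Search for a uniformly random target many times and record the step counts.
trials = [sequential_search(arr, random.randrange(n))[1] for _ in range(5000)]
avg = sum(trials) / len(trials)

print(avg)                # close to n/2 = 500 on average
print(max(trials) <= n)   # True: no search ever visits more than n elements
```

The second print confirms the witnesses from the text (c = cs, n0 = 1): every single run satisfies T(n) ≤ cs·n, while the average hovers around cs·n/2.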

Page - 10 - of 12

CS-210 Data Structures & Algorithms Department of Computer & Information Systems Engineering

7. log(n^k) = O(log n), for any fixed k > 0, since log(n^k) = k·log n.
8. log^k n = O(n^m), for any fixed k > 0 and m > 0. An algorithm of order log n to any fixed power is better than an algorithm of order n raised to any positive power.

2. BIG-Ω Notation

Given functions f(n) and g(n), we say that f(n) is Ω(g(n)) if there are positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.

Thus, Ω notation provides a lower bound on the running time of an algorithm. Like big-Oh notation, this is a measure of the algorithm's growth rate; however, it says that g(n) grows no faster than f(n). Like big-Oh notation, it works for any resource, but we usually measure the least amount of time required.

Example

Assume T(n) = c1·n^2 + c2·n for c1, c2 > 0. Then, c1·n^2 + c2·n ≥ c1·n^2 for all n ≥ 1. So, T(n) ≥ c·n^2 for c = c1 and n0 = 1. Therefore, T(n) is Ω(n^2) by definition.

It is also true for the above example that T(n) is Ω(n). However, as with big-Oh notation, we wish to get the largest lower bound possible. Thus, we prefer to say that this running time is Ω(n^2).

Properties 1, 2, 3, 4 and 5 stated for O notation also hold for Ω notation. In addition, we can state a transpose symmetry property: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).

3. BIG-Θ Notation

When the upper and lower bounds are the same within a constant factor, we indicate this by using Θ notation. An algorithm is said to be Θ(h(n)) if it is O(h(n)) and it is Ω(h(n)) as well. A formal definition follows:

Given functions f(n) and g(n), we say that f(n) is Θ(g(n)) if there are positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
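The running time T(n) = c1·n^2 + c2·n from the example above satisfies the Θ(n^2) definition with witness constants c1 (lower) and c1 + c2 (upper) and n0 = 1. A minimal numeric check, assuming illustrative values c1 = 3 and c2 = 5 (our choice, not from the notes):

```python
# Sketch: verify that T(n) = c1*n^2 + c2*n is sandwiched between
# c1*n^2 and (c1 + c2)*n^2 for all n >= n0 = 1, i.e. T(n) is Theta(n^2).
c1, c2 = 3.0, 5.0

def T(n):
    return c1 * n * n + c2 * n

lower_c, upper_c = c1, c1 + c2  # Theta witnesses from the text's algebra

ok = all(lower_c * n * n <= T(n) <= upper_c * n * n for n in range(1, 10001))
print(ok)  # True: both bounds hold for every n tested
```

The check only samples finitely many n, of course; the algebra in the text (5n ≤ 5n^2 for n ≥ 1) is what proves the bound for all n.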



Θ notation provides a tight bound on running time. It is generally better to use Θ notation rather than O notation whenever we have sufficient knowledge about an algorithm to be sure that the upper and lower bounds indeed match. Limitations on our ability to analyze certain algorithms may require use of O or Ω notations.

Properties 1, 2, 3, 4 and 5 stated for O notation also hold for Θ notation. A symmetry property can also be stated: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).

Classifying Functions

Given functions f(n) and g(n) whose growth rates are expressed as algebraic equations, we might like to determine if one grows faster than the other. The best way to do this is to take the limit of the ratio of the two functions as n grows towards infinity:

    lim (n → ∞) f(n)/g(n)

If the limit goes to ∞, then f(n) is Ω(g(n)) because f(n) grows faster. If the limit goes to zero, then f(n) is O(g(n)) because g(n) grows faster. If the limit goes to some constant other than zero, then f(n) = Θ(g(n)) because both grow at the same rate.

Asymptotic Notation Summary

O(1)          : Great. Constant time. Can't beat this!
O(log log n)  : Very fast, almost constant time.
O(log n)      : Logarithmic time. Very good.
O((log n)^k)  : Polylogarithmic time (where k is a constant). Not bad.
O(n)          : Linear time. The best you can do if your algorithm has to look at all the data.
O(n log n)    : Log-linear time. Shows up in many places.
O(n^2)        : Quadratic time.
O(n^k)        : Polynomial time (where k is a constant). Only acceptable if k is not too large.
O(2^n), O(n!) : Exponential time. Unusable for any problem of reasonable size.
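The limit ratio test can be approximated numerically by evaluating f(n)/g(n) at increasing values of n and watching the trend. A sketch with illustratively chosen functions (the helper name and sample points are our own):

```python
import math

def ratio(f, g, ns):
    """Evaluate f(n)/g(n) at each sample point to approximate the limit."""
    return [f(n) / g(n) for n in ns]

ns = [10**k for k in range(1, 7)]  # n = 10, 100, ..., 1_000_000

# f(n) = n log n vs g(n) = n^2: the ratio shrinks toward 0,
# suggesting n log n is O(n^2).
print(ratio(lambda n: n * math.log(n), lambda n: n * n, ns))

# f(n) = 5n^2 + 3n vs g(n) = n^2: the ratio settles near the constant 5,
# suggesting f(n) = Theta(n^2).
print(ratio(lambda n: 5 * n * n + 3 * n, lambda n: n * n, ns))
```

Numeric sampling only suggests the limit; for a proof one still evaluates the limit algebraically, as in the examples earlier in the notes.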

