In computer science, the time complexity is the computational complexity that describes the amount of time
it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary
operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of
time to perform. Thus, the amount of time taken and the number of elementary operations performed by the
algorithm are taken to differ by at most a constant factor.
Since an algorithm's running time may vary among different inputs of the same size, one commonly
considers the worst-case time complexity, which is the maximum amount of time required for inputs of a
given size. Less common, and usually specified explicitly, is the average-case complexity, which is the
average of the time taken on inputs of a given size (this makes sense because there are only a finite
number of possible inputs of a given size). In both cases, the time complexity is generally expressed as
a function of the size of the input. Since this function is generally difficult to compute exactly, and the
running time for small inputs is usually not consequential, one commonly focuses on the behavior of the
complexity when the input size increases—that is, the asymptotic behavior of the complexity.
Data Structure
Trees, heaps, graphs.
Design Strategies
Divide-and-conquer, greedy, dynamic programming.
Divide and Conquer
Divide: split the problem into a small number of smaller pieces.
Conquer: solve each piece by applying divide and conquer to it recursively.
Combine: merge the pieces together into a global solution.
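The three steps above can be sketched with merge sort, a standard divide-and-conquer example (a minimal Python sketch added for illustration, not from the notes):

```python
def merge_sort(a):
    """Divide-and-conquer sort: divide, conquer recursively, combine."""
    if len(a) <= 1:                 # base case: already sorted
        return a
    mid = len(a) // 2               # divide into two pieces
    left = merge_sort(a[:mid])      # conquer each piece recursively
    right = merge_sort(a[mid:])
    # combine: merge the two sorted halves into a global solution
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```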
Slow Algorithm
Name         Best   Average  Worst   Stable  Method
Bubble Sort  Θ(n)   O(n²)    O(n²)   Yes     Exchanging
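A minimal sketch of bubble sort (not from the notes) showing the exchanging method; the early-exit flag is what gives the Θ(n) best case on already-sorted input:

```python
def bubble_sort(a):
    """Sort by repeatedly exchanging adjacent out-of-order pairs."""
    a = list(a)
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # last i elements already in place
            if a[j] > a[j + 1]:             # strict >, so equal keys keep order
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                     # no exchanges: array is sorted
            break                           # Θ(n) best case
    return a
```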
BuildHeap
We can use Heapify to build a heap as follows. First we start with an array A whose elements are not
yet in heap order; they are simply in the order they were given to us. We build the heap by starting at
the leaf level and invoking Heapify on each node. (Note: we cannot start at the top of the tree. Why
not? Because the precondition Heapify assumes is that the entire tree rooted at node i is already in
heap order, except for i itself.) Actually, we can be a bit more efficient. Since we know that each
leaf is already in heap order, we may as well skip the leaves and start with the first non-leaf node,
which is at position ⌊n/2⌋.
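The procedure above can be sketched in Python (a hypothetical 1-indexed max-heap version, not the course's exact code; Heapify sifts a node down assuming both of its subtrees are already heaps):

```python
def heapify(a, i, n):
    """Sift a[i] down so the subtree rooted at node i (1-indexed) is a
    max-heap, assuming both child subtrees are already in heap order."""
    largest = i
    left, right = 2 * i, 2 * i + 1
    if left <= n and a[left - 1] > a[largest - 1]:
        largest = left
    if right <= n and a[right - 1] > a[largest - 1]:
        largest = right
    if largest != i:
        a[i - 1], a[largest - 1] = a[largest - 1], a[i - 1]
        heapify(a, largest, n)              # continue sifting down

def build_heap(a):
    """Leaves are already heaps, so start at the last non-leaf node,
    position n//2, and work back up to the root."""
    n = len(a)
    for i in range(n // 2, 0, -1):
        heapify(a, i, n)
```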
Quicksort
Our next sorting algorithm is Quicksort. It is one of the fastest sorting algorithms known and is the
method of choice in most sorting libraries. Quicksort is based on the divide and conquer strategy. Here is
the algorithm:
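The pseudocode itself does not appear in these notes; the following is a sketch of randomized quicksort with a Lomuto-style partition (an assumed variant, not necessarily the course's exact version):

```python
import random

def quicksort(a, lo=0, hi=None):
    """In-place quicksort with a randomly chosen pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:                        # zero or one element: done
        return
    p = random.randint(lo, hi)          # random pivot for O(n log n) expected time
    a[p], a[hi] = a[hi], a[p]
    pivot = a[hi]
    # partition: elements <= pivot go left of position i
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]           # pivot moves to its final position
    quicksort(a, lo, i - 1)             # conquer each side recursively
    quicksort(a, i + 1, hi)
```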
Analysis of Quicksort
The running time of Quicksort depends heavily on the selection of the pivot. If the rank of the pivot is
very large or very small, the partition will be unbalanced (much like an unbalanced binary search tree).
Since the pivot is chosen randomly in our algorithm, the expected running time is O(n log n). The
worst-case time, however, is O(n²). Luckily, this happens rarely.
Efficient Algorithms
Non-Comparison Sorts
Other Algorithms
Graphs
Hamiltonian cycle: a cycle that visits every vertex of the graph exactly once.
Strongly Connected Components of a Graph
Minimum Spanning Tree: greedy algorithms (Kruskal and Prim).
Shortest Path: the single-source shortest path problem, solved by Dijkstra's algorithm.
The running time depends on the implementation: a simple array gives O(V²); an adjacency list with a
binary heap gives O(E log V).
Dijkstra's algorithm does not allow negative weights (and assumes no negative-cost cycle).
Correctness of Dijkstra's algorithm is proved through induction.
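Dijkstra's algorithm with a binary heap (Python's heapq) achieves the O(E log V) bound mentioned above. The adjacency-list representation adj = {u: [(v, w), ...]} is an assumption for illustration:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths for non-negative edge weights.
    adj maps each vertex to a list of (neighbor, weight) pairs."""
    dist = {source: 0}
    pq = [(0, source)]                      # binary heap of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                        # stale entry; a shorter path was found
        for v, w in adj.get(u, []):
            nd = d + w                      # relax edge (u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```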
Time complexity: https://en.wikipedia.org/wiki/Time_complexity#Table_of_common_time_complexities
Big-O cheat sheet: http://bigocheatsheet.com/
NPC
3-color
Clique cover
Boolean satisfiability
Brute-force search: even on the fastest parallel computers, this approach is viable only for the
smallest instances of these problems.
Heuristics: a heuristic is a strategy for producing a valid solution, but with no guarantee of how
close it is to optimal. This is worthwhile if all else fails.
General search methods: powerful techniques for solving general combinatorial optimization problems,
e.g. branch-and-bound, A* search, simulated annealing, and genetic algorithms.
Approximation algorithms: an approximation algorithm runs in polynomial time (ideally) and produces a
solution that is within a guaranteed factor of the optimal solution.
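As a concrete illustration of a guaranteed factor, here is the classic 2-approximation for vertex cover (a problem not covered in these notes; the technique is standard, not the course's own):

```python
def vertex_cover_2approx(edges):
    """Greedy matching-based 2-approximation for vertex cover:
    repeatedly pick an uncovered edge and take both of its endpoints.
    The picked edges form a matching, and any cover must contain at least
    one endpoint of each, so the result is at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover
```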