
Time Complexity

In computer science, the time complexity is the computational complexity that describes the amount of time
it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary
operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of
time to perform. Thus, the amount of time taken and the number of elementary operations performed by the
algorithm are taken to differ by at most a constant factor.
Since an algorithm's running time may vary among different inputs of the same size, one commonly
considers the worst-case time complexity, which is the maximum amount of time required for inputs of a
given size. Less common, and usually specified explicitly, is the average-case complexity, which is the
average of the time taken on inputs of a given size (this makes sense because there are only a finite
number of possible inputs of a given size). In both cases, the time complexity is generally expressed as
a function of the size of the input. Since this function is generally difficult to compute exactly, and the
running time for small inputs is usually not consequential, one commonly focuses on the behavior of the
complexity when the input size increases—that is, the asymptotic behavior of the complexity.

Notation for worst-case time = O (Big O)

Notation for average-case time = Θ (Theta)

Notation for best-case time = Ω (Omega)
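For example, a linear search over n elements has best-case time Ω(1) (the key is the first element), average-case time Θ(n), and worst-case time O(n) (the key is last or absent). A minimal Python sketch:

# Linear search: best case Ω(1), average case Θ(n), worst case O(n).
def linear_search(arr, key):
    for i, value in enumerate(arr):    # one comparison per element visited
        if value == key:
            return i                   # best case: key found at index 0
    return -1                          # worst case: all n elements compared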

Data Structures
Trees, heaps, graphs.

Design Strategies
Divide-and-conquer, greedy, dynamic programming.
Divide and Conquer
Divide: split the problem into a small number of pieces.
Conquer: solve each piece by applying divide and conquer to it recursively.
Combine: put the pieces together into a global solution (see the merge sort sketch below).
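As an illustration of the three steps, here is a minimal merge sort sketch in Python (merge sort also appears in the tables below):

# Merge sort as divide and conquer:
#   Divide: split the array into two halves.
#   Conquer: sort each half recursively.
#   Combine: merge the two sorted halves.
def merge_sort(a):
    if len(a) <= 1:                      # base case: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])           # conquer the left piece
    right = merge_sort(a[mid:])          # conquer the right piece
    merged, i, j = [], 0, 0              # combine: merge the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged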

Slow Algorithms

Name            Best (Ω)   Average (Θ)   Worst (O)   Stable   Method
Bubble Sort     n          n²            n²          Yes      Exchanging
Selection Sort  n²         n²            n²          No       Selection
Insertion Sort  n          n²            n²          Yes      Insertion

In-place, Stable Sorting


An in-place sorting algorithm is one that uses no additional array for storage. A sorting algorithm is
stable if duplicate elements remain in the same relative position after sorting.
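For example, insertion sort (see the tables above and below) is both in-place and stable. A minimal Python sketch:

# Insertion sort: in-place (no extra array) and stable
# (the strict '>' comparison never moves an equal element past another).
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # strict > keeps equal keys in their original order
            a[j + 1] = a[j]            # shift larger elements one position right
            j -= 1
        a[j + 1] = key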

O(n log n) Algorithms


These algorithms are faster than the slow algorithms above.

Name        Best (Ω)   Average (Θ)   Worst (O)   Stable   Method
Merge Sort  n log n    n log n       n log n     Yes      Merging
Heap Sort   n log n    n log n       n log n     No       Selection
Quick Sort  n log n    n log n       n²          No       Divide and conquer

Time Taken to Build Heap = Θ(n)


Heapify Procedure
There is one principal operation for maintaining the heap property. It is called Heapify. (In other books it is sometimes called sifting down.) The idea is that we are given an element of the heap which we suspect may not be in valid heap order, but we assume that all of the other elements in the subtree rooted at this element are in heap order. In particular, this root element may be too small. To fix this we "sift" it down the tree by swapping it with one of its children. Which child? We should take the larger of the two children to satisfy the heap ordering property. This continues recursively until the element is either larger than both its children or until it falls all the way to the leaf level. Here is the algorithm. It is given the heap in the array A, the index i of the suspected element, and m, the current active size of the heap.
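A minimal Python sketch of this sift-down for a max-heap, assuming 0-based array indexing (course pseudocode often uses 1-based indexing):

# Heapify / sift-down for a max-heap stored in A[0..m-1].
# Assumes both subtrees of index i already satisfy the heap property.
def heapify(A, i, m):
    while True:
        left, right = 2 * i + 1, 2 * i + 2    # children of i (0-based)
        largest = i
        if left < m and A[left] > A[largest]:
            largest = left
        if right < m and A[right] > A[largest]:
            largest = right
        if largest == i:                      # i is larger than both children: done
            return
        A[i], A[largest] = A[largest], A[i]   # swap with the larger child
        i = largest                           # continue sifting down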

BuildHeap
We can use Heapify to build a heap as follows. First we start with a heap in which the elements are not in heap order. They are just in the same order that they were given to us in the array A. We build the heap by starting at the leaf level and then invoking Heapify on each node. (Note: we cannot start at the top of the tree. Why not? Because the precondition that Heapify assumes is that the entire tree rooted at node i is already in heap order, except for i.) Actually, we can be a bit more efficient. Since we know that each leaf is already in heap order, we may as well skip the leaves and start with the first non-leaf node, which is at position n/2.
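A minimal sketch of BuildHeap using the heapify above; with 0-based indexing the first non-leaf node sits at index n//2 - 1 (the n/2 in the text assumes 1-based indexing):

# BuildHeap: call heapify on every non-leaf node, bottom-up.
# Total cost is Theta(n), as noted above.
def build_heap(A):
    n = len(A)
    for i in range(n // 2 - 1, -1, -1):   # first non-leaf node down to the root
        heapify(A, i, n)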

Quicksort
Our next sorting algorithm is Quicksort. It is one of the fastest sorting algorithms known and is the
method of choice in most sorting libraries. Quicksort is based on the divide and conquer strategy. Here is
the algorithm:
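A minimal Python sketch with a randomly chosen pivot, matching the randomized analysis below (the course pseudocode partitions in place and may differ in detail):

import random

# Quicksort with a random pivot: expected O(n log n), worst case O(n^2).
def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                    # pick the pivot at random
    less    = [x for x in a if x < pivot]       # divide around the pivot
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)   # conquer and combine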

Analysis of Quicksort
The running time of quicksort depends heavily on the selection of the pivot. If the rank of the pivot is very large or very small then the partition will be unbalanced. Since the pivot is chosen randomly in our algorithm, the expected running time is O(n log n). The worst-case time, however, is O(n²). Luckily, this happens rarely.
Efficient Algorithms

Name               Best   Average (Θ)   Worst (O)   Stable
Radix Sort         -      n·k/d         n·k/d       Yes (in-place MSD radix sort also available)
Bucket / Bin Sort  -      n + k         n²·k        Yes
Counting Sort      -      n + r         n + r       Yes
Pigeonhole Sort    -      n + 2^k       n + 2^k     Yes

Bucket / Bin Sort
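Bucket (bin) sort scatters the n keys into k buckets, sorts each bucket, and concatenates the results; performance degrades when all keys land in the same bucket. A minimal Python sketch, assuming the keys are floats in [0, 1):

# Bucket sort for keys in [0, 1): fast on average when keys spread evenly
# over the buckets; degrades if all keys fall into one bucket.
def bucket_sort(a, k=10):
    buckets = [[] for _ in range(k)]
    for x in a:
        buckets[min(int(x * k), k - 1)].append(x)   # scatter into k bins
    result = []
    for b in buckets:
        result.extend(sorted(b))                    # sort each bin, then concatenate
    return result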


Time Complexity of Different Algorithms

Name                 Best (Ω)   Average (Θ)   Worst (O)                          Stable                       Method
Insertion Sort       n          n²            n²                                 Yes                          Insertion
Bubble Sort          n          n²            n²                                 Yes                          Exchanging
Quick Sort           n log n    n log n       n²                                 No (stable versions exist)   Partitioning
Selection Sort       n²         n²            n²                                 No                           Selection
In-place Merge Sort  -          -             n (log n)² (n log n in hybrids)    Yes                          Merging
Merge Sort           n log n    n log n       n log n                            Yes                          Merging
Binary Tree Sort     n log n    n log n       n log n                            Yes                          Insertion
Heap Sort            n log n    n log n       n log n                            No                           Selection

Non-Comparison Sorts

Name             Best   Average (Θ)   Worst (O)   Stable
Pigeonhole Sort  -      n + 2^k       n + 2^k     Yes
Bucket Sort      -      n + k         n²·k        Yes
Counting Sort    -      n + r         n + r       Yes
Radix Sort       -      n·k/d         n·k/d       Yes (in-place MSD radix sort also available)

Other Algorithms

Name                         Average           Worst (O)       Detail                                   Method
0-1 Knapsack                 O(n·W)            O(n·W)          -                                        Dynamic Programming
Edit Distance                O(n²)             O(|S1|·|S2|)    -                                        Dynamic Programming
Chain Matrix Multiplication  O(n³)             -               -                                        Dynamic Programming
Coin Change                  O(kn)             -               Table grows too large when n is large    Dynamic Programming
Floyd-Warshall               O(V³)             -               -                                        Dynamic Programming
Coin Change                  O(k)              -               -                                        Greedy Algorithm
Huffman Encoding             -                 -               -                                        Greedy Algorithm
Activity Selection           O(n log n)        -               Optimization problem                     Greedy Algorithm
Kruskal                      O(E log E)        -               O(E log V) if the graph is sparse        Greedy Algorithm
Prim's                       O((V + E) log V)  -               -                                        Greedy Algorithm
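As one illustration of the dynamic-programming entries above, the O(n·W) bound for 0-1 knapsack comes from filling an (n+1) × (W+1) table. A minimal Python sketch:

# 0-1 knapsack by dynamic programming: O(n * W) time and space,
# where n is the number of items and W the capacity.
def knapsack(values, weights, W):
    n = len(values)
    dp = [[0] * (W + 1) for _ in range(n + 1)]   # dp[i][w]: best value using the first i items
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]              # option 1: skip item i-1
            if weights[i - 1] <= w:              # option 2: take item i-1, if it fits
                dp[i][w] = max(dp[i][w], dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][W]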

Graphs

Hamiltonian cycle: a cycle that visits every vertex of the graph exactly once.

Graph algorithms, their complexities, and details:

Breadth First Search & Depth First Search (adjacency list): O(V + E), where V = vertices and E = edges (a dense graph has many edges, a sparse graph has few).

BFS & DFS (adjacency matrix): O(V²).

Precedence-constraint graph / Topological sort: O(V + E). Uses a DAG (directed acyclic graph); an acyclic graph is a graph that contains no cycle. If an acyclic graph is connected it is like a tree; if it is not connected it is like a forest. Topological sort uses DFS.

Strongly connected components: also computed with DFS.

Minimum spanning tree: greedy algorithms (Kruskal and Prim's).

Single-source shortest path problem (Dijkstra's algorithm): the running time depends on the implementation: O(V²) with a simple array, O(E log V) with an adjacency list and a binary heap. Does not allow negative weights; no negative-cost cycles. Correctness of Dijkstra's algorithm is proved by induction.

Single-source shortest path with negative weights (Bellman-Ford algorithm): O(VE). Allows negative weights, but no negative-cost cycles.

All-pairs shortest path problem (Floyd-Warshall algorithm, dynamic programming): O(V³) time with an adjacency matrix, O(V²) space. Allows negative weights, but no negative-cost cycles.
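A minimal Python sketch of Dijkstra's algorithm with an adjacency list and a binary heap, the O(E log V) variant from the list above (assumes non-negative edge weights):

import heapq

# Dijkstra's single-source shortest paths: O(E log V) with a binary heap.
# graph: dict mapping every vertex (including sinks) to a list of (neighbor, weight) pairs.
def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                      # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                     # stale queue entry, skip it
            continue
        for v, w in graph[u]:               # relax every outgoing edge of u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist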
Time complexity and its explanation:

https://en.wikipedia.org/wiki/Time_complexity#Table_of_common_time_complexities

http://bigocheatsheet.com/

NPC (NP-complete problems)
3-coloring
Clique cover
Boolean satisfiability

Use brute-force search: even on the fastest parallel computers this approach is viable only for the smallest instances of these problems.
Heuristics: a heuristic is a strategy for producing a valid solution, but there are no guarantees on how close it is to optimal. This is worthwhile if all else fails.
General search methods: powerful techniques for solving general combinatorial optimization problems, such as branch-and-bound, A*-search, simulated annealing, and genetic algorithms.
Approximation algorithms: an approximation algorithm runs in polynomial time (ideally) and produces a solution that is within a guaranteed factor of the optimal solution.
