
1. Differentiate between best, average, and worst case complexities.

The running time of any algorithm can be expressed, for a given input, in terms of best, average, and worst case. We can count how many steps the algorithm takes on any given input instance by executing it on that input.

The worst-case complexity of the algorithm is the function defined by the maximum number of
steps taken on any instance of size n.
Worst case = slowest time to complete the execution of an algorithm.

The best-case complexity of the algorithm is the function defined by the minimum number of steps
taken on any instance of size n.
Best case = fastest time to complete the execution of an algorithm.
The average-case complexity of the algorithm is the function defined by the average number of
steps taken on any instance of size n.
Average case = arithmetic mean, i.e. the average time to complete the execution of an algorithm.
2. What are the Problems associated with Linear probing?
A problem with the linear probing method is that blocks of occupied slots tend to form as collisions are resolved. This is known as primary clustering.
It means that any key that hashes into the cluster will require several probes to resolve the collision, and each such insertion extends the cluster further.

For example, insert the keys 89, 18, 49, 58, and 69 into a hash table that holds 10 items using the division method h(k) = k mod 10:
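A minimal Python sketch of this example (the helper name is mine, not standard): keys go into a 10-slot table with h(k) = k % 10, and linear probing resolves the collisions, piling the colliding keys up at the start of the table:

```python
def linear_probe_insert(table, key):
    """Division-method hash h(k) = k % len(table); on a collision,
    probe successive slots (wrapping around) until one is empty."""
    n = len(table)
    i = key % n
    while table[i] is not None:
        i = (i + 1) % n          # linear probe: try the next slot
    table[i] = key

table = [None] * 10
for k in [89, 18, 49, 58, 69]:
    linear_probe_insert(table, k)
print(table)
# [49, 58, 69, None, None, None, None, None, 18, 89]
# 49, 58 and 69 all collided and ended up in slots 0-2: primary clustering
```

Note how 58 needed four probes (slots 8, 9, 0, 1) and 69 needed four (9, 0, 1, 2) because each earlier collision had already grown the cluster.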

3. Discuss the Problems associated with Quadratic probing.

Limitation: at most half of the table can be used as alternative locations to resolve collisions.
This means that once the table is more than half full, it is difficult to find an empty slot. A further problem, known as secondary clustering, is that elements that hash to the same home slot will always probe the same sequence of alternative cells.
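Sketched with the same keys as in the linear probing example (the function name is illustrative): the i-th probe tries (h(k) + i²) mod table size, so keys with the same home slot follow the same probe sequence:

```python
def quadratic_probe_insert(table, key):
    """Division-method hash; the i-th probe tries (h(k) + i*i) % n."""
    n = len(table)
    h = key % n
    for i in range(n):
        slot = (h + i * i) % n   # quadratic probe sequence: h, h+1, h+4, h+9, ...
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("no empty alternative cell found")

table = [None] * 10
for k in [89, 18, 49, 58, 69]:
    quadratic_probe_insert(table, k)
print(table)
# [49, None, 58, 69, None, None, None, None, 18, 89]
# 49 and 69 both hash to slot 9, so both probed the same cells 9, 0, 3, ...:
# secondary clustering
```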

Discuss the advantages of hashing.
1. Hashing provides a reliable and flexible method of data retrieval, often outperforming other data structures.
2. It is faster than searching arrays and lists. An element can be found in O(1) average time.
3. In the same space, it can retrieve in about 1.5 probes what would otherwise take log n probes in a tree.
Discuss the disadvantages of hashing.
1. Implementation is difficult.
2. It is not suitable when sorted output is needed, because the elements are not stored in order.
What are the advantages of AVL trees?
1. Search is O(log N) since AVL trees are always balanced.
2. Insertions and deletions are also O(log N).
3. The height balancing adds no more than a constant factor to the speed of insertion.
What are the disadvantages of AVL trees?
1. Difficult to program and debug; extra space is needed for the balance factor.
2. Asymptotically faster, but rebalancing costs time.
3. Most large searches are done in database systems on disk and use other structures (e.g. B-trees).
4. It may be acceptable to have O(N) for a single operation if the total run time for many consecutive operations is fast (e.g. splay trees).
What are the applications of Hashing?
Applications:
1. File management: managing files that store records.
2. Comparing complex values.
3. Cryptography: creating digital signatures, e.g. MD5, message authentication codes (MACs), etc.
4. Securing passwords: operating systems store hash values of passwords instead of the originals.
5. Detecting virus-infected files: the hash value of a file is stored in external memory such as a
pen drive, and is used to verify whether the file has been infected.
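A minimal sketch of applications 4 and 5 above using Python's standard hashlib module (the sample password and file contents are made up for illustration):

```python
import hashlib

# Application 4: store only the password's hash; at login, hash the
# attempt and compare digests, never the plain text.
stored = hashlib.sha256("s3cret".encode()).hexdigest()
attempt = hashlib.sha256("s3cret".encode()).hexdigest()
print(stored == attempt)          # True: login succeeds

# Application 5: file integrity. Keep the digest of the original file;
# any change to the contents produces a different digest.
digest_before = hashlib.md5(b"file contents").hexdigest()
digest_after = hashlib.md5(b"file contents (modified)").hexdigest()
print(digest_before != digest_after)   # True: tampering is detected
```

Note that MD5 is shown here only because the text mentions it; for real password storage a dedicated slow hash (e.g. via a password-hashing library) would be used.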

What is a max heap?

A max-heap is a complete binary tree in which the value in each internal node is greater than or
equal to the values in the children of that node.
What is a min heap?
A min-heap is a complete binary tree in which the data contained in each node is less than or equal to the data in that node's children.
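A small sketch using Python's standard heapq module, which implements a min-heap on a list; simulating a max-heap by negating the keys is a common workaround, not part of heapq itself:

```python
import heapq

nums = [5, 1, 9, 3]

# Min-heap: the smallest element sits at the root (index 0).
heap = []
for x in nums:
    heapq.heappush(heap, x)
print(heap[0])                 # 1 -- smallest element at the root

# Max-heap via negation: the most negative value is the largest original.
max_heap = [-x for x in nums]
heapq.heapify(max_heap)
print(-max_heap[0])            # 9 -- largest element at the root
```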

Differences between AVL tree and Binary search tree.

AVL tree:
1. An AVL tree is a self-balancing binary search tree invented by Adelson-Velsky and Landis. The tree is named AVL in honour of its inventors.
2. In an AVL tree, the heights of the two sub-trees of a node may differ by at most one. Due to this property, the AVL tree is also known as a height-balanced tree.
3. In an AVL tree, the balance factor of each and every node must be -1, 0, or 1. The balance factor is the difference between the heights of the left and right subtrees, calculated as:
   balance factor = height of left subtree - height of right subtree
   For example, the balance factor of node 45 in figure (a) = 3 (height of left subtree of 45) - 2 (height of right subtree of 45) = 1.
4. The cost of AVL tree operations is O(log n) in all cases (best case, average case, and worst case).

Binary search tree:
1. A binary search tree, also known as an ordered binary tree, is a variant of binary trees in which the nodes are arranged in an order.
2. A tree is called a binary search tree if it satisfies the following two conditions:
   a. All nodes have at most two children (binary tree).
   b. All the nodes in the left sub-tree have values less than that of the root node; correspondingly, all the nodes in the right sub-tree have values equal to or greater than the root node.
3. In the worst case, the cost of a binary search tree operation is O(n); in the average case it is O(log n).
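A minimal sketch of why the plain BST has an O(n) worst case (the insert and height helpers are mine): inserting keys in sorted order produces a degenerate, linked-list-like tree, while an AVL tree would keep the height logarithmic:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Plain BST insert with no rebalancing."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    """Height counted in nodes; empty tree has height 0."""
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

root = None
for k in [1, 2, 3, 4, 5, 6, 7]:   # sorted input: the BST worst case
    root = insert(root, k)
print(height(root))  # 7 -- a chain of 7 nodes; an AVL tree with 7 keys has height 3
```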

What are the Applications of minimum cost spanning trees?

Building a connected network. There are scenarios where we have a limited set of possible
routes, and we want to select a subset that will make our network (e.g., electrical grid,
computer network) fully connected at the lowest cost.

Clustering. If you want to cluster a bunch of points into k clusters, then one approach is to

compute a minimum spanning tree and then drop the k-1 most expensive edges of the MST.
This separates the MST into a forest with k connected components; each component is a
cluster.

Traveling salesman problem. There's a straightforward way to use the MST to get a tour that is within a factor of two of the optimal solution to the (metric) traveling salesman problem.
Write the Differences between spanning tree and minimum spanning tree.
Spanning tree:
A spanning tree is a sub-graph of a graph which connects all the vertices without any cycle.

Minimum spanning tree:

A minimum spanning tree is a spanning tree whose total edge weight is minimal among all spanning trees of the graph.
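A minimal sketch of computing an MST with Kruskal's algorithm (the example graph and its weights are made up for illustration): sort the edges by weight and add each edge that joins two different components, using union-find to detect cycles:

```python
def kruskal(n, edges):
    """edges is a list of (weight, u, v); returns (mst_edges, total_weight)."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge joins two components: keep it
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst, total = kruskal(4, edges)
print(total)   # 6 -- edges (0,2), (1,3), (1,2) form the MST
```

A spanning tree of this 4-vertex graph always has 3 edges; Kruskal's algorithm picks the set of 3 whose total weight is smallest.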

Define stable sorting.

A sorting algorithm is called stable if, after sorting, elements with equal keys appear in the same relative order in which they appeared in the input.
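The definition can be seen directly with Python's built-in sorted(), which is guaranteed stable (the sample records are made up):

```python
# Records with equal keys (the score) keep their original relative order.
records = [("alice", 2), ("bob", 1), ("carol", 2), ("dave", 1)]
by_score = sorted(records, key=lambda r: r[1])
print(by_score)
# [('bob', 1), ('dave', 1), ('alice', 2), ('carol', 2)]
# bob still precedes dave, and alice still precedes carol
```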
Define In-place sorting.
Sorting algorithms may require some extra space for comparisons and temporary storage of a few data elements. When an algorithm needs only a constant amount of such extra space and rearranges the elements within the array itself, the sorting is said to happen in place. Bubble sort is an example of in-place sorting.
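A minimal bubble sort sketch showing the in-place property: the list is rearranged within itself, with only O(1) extra space for the swaps:

```python
def bubble_sort(a):
    """In-place bubble sort: repeatedly swap adjacent out-of-order pairs."""
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):        # the largest i items are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]

data = [5, 2, 9, 1]
bubble_sort(data)        # sorts within the same list, no auxiliary array
print(data)              # [1, 2, 5, 9]
```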
Compare merge sort and quick sort.

Merge sort:
1. Merge sort is a divide and conquer sorting algorithm.
2. It uses the merge operation: given two sorted sub-arrays of n elements each, merge uses an auxiliary array of size n to combine them into a single sorted array.
3. The merge operation can be performed in linear time, i.e. O(n).
4. The time complexity of merge sort is O(n log n) in all cases.
5. Merge sort requires extra memory for execution; the amount of extra memory required is O(n), i.e. directly proportional to the input size n. It is therefore NOT an in-place algorithm.
6. Merge sort is a stable sorting algorithm.

Quick sort:
1. Quick sort is also a divide and conquer sorting algorithm.
2. It uses the partition operation: given an array of n elements, partition takes one element as the pivot and places it in its correct position. At that point the array is not yet sorted, but all the elements less than the pivot are to its left and all the elements greater than the pivot are to its right.
3. The partition operation can be performed in linear time, i.e. O(n).
4. The time complexity of quick sort is O(n log n) in the best and average cases and O(n^2) in the worst case.
5. Quick sort requires no auxiliary array (only O(log n) stack space for recursion), so it is an in-place algorithm.
6. Quick sort is NOT a stable sorting algorithm.
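Minimal sketches of the two core operations compared above (helper names are mine; the partition shown is the Lomuto scheme, one common variant):

```python
def merge(left, right):
    """Merge two sorted lists using an auxiliary list (O(n) extra space)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the merge stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def partition(a, lo, hi):
    """Lomuto partition: place pivot a[hi] in its final position, with
    smaller elements to its left and larger to its right, in place."""
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

print(merge([1, 4, 7], [2, 3, 9]))    # [1, 2, 3, 4, 7, 9]
a = [3, 8, 2, 5, 4]
p = partition(a, 0, len(a) - 1)
print(a[p])                            # 4 -- the pivot, now in its final position
```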

Compare quick sort and radix sort.

Quick sort:
1. Quick sort is a comparison-based algorithm: it compares the elements against each other.
2. Quick sort can sort any set of comparable elements.
3. The running time is O(n log n) on average.
4. It is heavily recursive.

Radix sort:
1. Radix sort is not a comparison-based algorithm: it sorts elements digit by digit from their representation, and the elements as a whole are never compared.
2. Radix sort can sort only integers (or keys with a fixed digit representation).
3. The running time is O(kn), where k is the number of digits in the largest number, which is very fast.
4. It uses little or no recursion.
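A minimal least-significant-digit radix sort sketch for non-negative integers: each pass distributes the numbers into ten buckets by one decimal digit, and the per-digit placement is stable, so k passes fully sort the list without ever comparing two elements:

```python
def radix_sort(nums):
    """LSD radix sort for non-negative integers: one stable bucket pass
    per decimal digit, least significant digit first."""
    if not nums:
        return nums
    exp = 1
    while max(nums) // exp > 0:
        buckets = [[] for _ in range(10)]
        for x in nums:
            buckets[(x // exp) % 10].append(x)   # stable per-digit placement
        nums = [x for b in buckets for x in b]
        exp *= 10
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```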

What are the applications of Graphs?

1. Social network graphs: to tweet or not to tweet. Graphs that represent who knows whom,
who communicates with whom, who influences whom or other relationships in social
structures
2. Transportation networks. In road networks vertices are intersections and edges are the
road segments between them, and for public transportation networks vertices are stops and
edges are the links between them. Such networks are used by many map programs such as
Google maps, Bing maps and now Apple IOS 6 maps to find the best routes between locations
3. Utility graphs. The power grid, the Internet, and the water network are all examples of
graphs where vertices represent connection points, and edges the wires or pipes between
them
4. Protein-protein interaction graphs. Vertices represent proteins and edges represent
interactions between them that carry out some biological function in the cell. These graphs
can be used, for example, to study molecular pathways (chains of molecular interactions)
in a cellular process.
5. Network packet traffic graphs. Vertices are IP (Internet protocol) addresses and edges are
the packets that flow between them. Such graphs are used for analyzing network security,
studying the spread of worms, and tracking criminal or non-criminal activity

What are tries? Give their advantages.

A trie, also called a digital tree and sometimes a radix tree or prefix tree (as they can be searched by prefixes), is a kind of search tree: an ordered tree data structure that is used to store a dynamic set or associative array where the keys are usually strings.

A trie is a data structure that stores strings by decomposing them into characters. The characters
form a path through a tree until su
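The decomposition described above can be sketched with a minimal trie (the class and method names are illustrative): each node maps a character to a child node, and a flag marks where a stored string ends:

```python
class Trie:
    def __init__(self):
        self.children = {}   # character -> child Trie node
        self.is_end = False  # True if a stored word ends at this node

    def insert(self, word):
        node = self
        for ch in word:      # follow/create the path of characters
            node = node.children.setdefault(ch, Trie())
        node.is_end = True

    def search(self, word):
        node = self
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_end   # a prefix alone does not count as a match

t = Trie()
t.insert("tea")
t.insert("ten")
print(t.search("tea"), t.search("te"))   # True False -- "te" is only a prefix
```

Because "tea" and "ten" share the prefix "te", the trie stores that prefix only once, which is the source of its space and prefix-search advantages.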