

MAY 2011
Master of Computer Application (MCA), Semester 4
MC0080 Analysis and Design of Algorithms, 4 Credits
(Book ID: B0891)
Assignment Set 1 (60 Marks)
Answer the following: 6 x 10 = 60

1. Explain the structure of Fibonacci heaps. Ans 1:


A Fibonacci heap is a collection of trees satisfying the minimum-heap property, that is, the key of a child is always greater than or equal to the key of its parent. This implies that the minimum key is always at the root of one of the trees. Compared with binomial heaps, the structure of a Fibonacci heap is more flexible: the trees do not have a prescribed shape, and in the extreme case the heap can have every element in a separate tree. This flexibility allows some operations to be executed in a "lazy" manner, postponing the work for later operations. For example, merging heaps is done simply by concatenating the two lists of trees, and the decrease-key operation sometimes cuts a node from its parent and forms a new tree.

However, at some point some order needs to be introduced to the heap to achieve the desired running time. In particular, degrees of nodes (here degree means the number of children) are kept quite low: every node has degree at most O(log n), and the size of a subtree rooted in a node of degree k is at least F_{k+2}, where F_k is the k-th Fibonacci number. This is achieved by the rule that we can cut at most one child of each non-root node. When a second child is cut, the node itself needs to be cut from its parent and becomes the root of a new tree (see the proof of degree bounds). The number of trees is decreased in the delete-minimum operation, where trees are linked together.

As a result of this relaxed structure, some operations can take a long time while others are done very quickly. In the amortized running-time analysis we pretend that very fast operations take a little bit longer than they actually do. This additional time is then later subtracted from the actual running time of slow operations. The amount of time saved for later use is measured at any given moment by a potential function. The potential of a Fibonacci heap is given by

Potential = t + 2m
where t is the number of trees in the Fibonacci heap, and m is the number of marked nodes. A node is marked if at least one of its children was cut since this node was made a child of another node (all roots are unmarked).


Thus, the root of each tree in a heap has one unit of time stored. This unit of time can be used later to link this tree with another tree at amortized time 0. Also, each marked node has two units of time stored. One can be used to cut the node from its parent. If this happens, the node becomes a root and the second unit of time will remain stored in it as in any other root.
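For illustration only (this sketch is not from the study material, and the class and field names are assumptions), a Fibonacci heap node with its mark bit, the lazy merge of root lists, and the potential function t + 2m can be written in Python as follows:

class FibNode:
    """One node of a Fibonacci heap (illustrative field names)."""
    def __init__(self, key):
        self.key = key        # stored key
        self.parent = None    # parent node, None for roots
        self.children = []    # list of child nodes
        self.mark = False     # True if a child was cut since this node
                              # became a child of another node

class FibHeap:
    """Minimal skeleton: only the parts needed to show the potential."""
    def __init__(self):
        self.roots = []       # list of tree roots (t = len(self.roots))
        self.min_node = None  # root holding the minimum key

    def merge(self, other):
        # "Lazy" melding: simply concatenate the two root lists.
        self.roots.extend(other.roots)
        if other.min_node is not None and (
            self.min_node is None or other.min_node.key < self.min_node.key
        ):
            self.min_node = other.min_node

    def potential(self):
        # Potential = t + 2m, where t = number of trees and
        # m = number of marked nodes.
        marked = 0
        stack = list(self.roots)
        while stack:
            node = stack.pop()
            if node.mark:
                marked += 1
            stack.extend(node.children)
        return len(self.roots) + 2 * marked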

2. What do you mean by reduction, NP-complete and NP-hard problems? Ans 2:
Reduction
Formally, NP-completeness is defined in terms of "reduction", which is just a complicated way of saying one problem is easier than another. We say that A is easier than B, and write A < B, if we can write down an algorithm for solving A that uses a small number of calls to a subroutine for B (with everything outside the subroutine calls being fast, polynomial time). There are several minor variations of this definition depending on the detailed meaning of "small" -- it may be a polynomial number of calls, a fixed constant number, or just one call. Then if A < B, and B is in P, so is A: we can write down a polynomial algorithm for A by expanding the subroutine calls to use the fast algorithm for B. So "easier" in this context means that if B can be solved in polynomial time, so can A. It is possible for the algorithms for A to be slower than those for B, even though A < B. As an example, consider the Hamiltonian cycle problem. Does a given graph have a cycle visiting each vertex exactly once? Here's a solution, using longest path as a subroutine:
for each edge (u,v) of G
    if there is a simple path of length n-1 from u to v
        return yes   // path + edge form a cycle
return no

This algorithm makes m calls to a longest path subroutine, and does O(m) work outside those subroutine calls, so it shows that Hamiltonian cycle < longest path. (It doesn't show that Hamiltonian cycle is in P, because we don't know how to solve the longest path subproblems quickly.) As a second example, consider a polynomial time problem such as the minimum spanning tree. Then for every other problem B, B < minimum spanning tree, since there is a fast algorithm for minimum spanning trees using a subroutine for B. (We don't actually have to call the subroutine, or we can call it and ignore its results.)
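For illustration (not part of the original text), the reduction can be spelled out in Python as follows; the brute-force helper has_simple_path_of_length is an assumed stand-in for a longest-path subroutine:

def has_simple_path_of_length(adj, u, v, length):
    # Return True if there is a simple path with exactly `length` edges
    # from u to v. Brute-force search; it merely stands in for the
    # longest-path subroutine that the reduction assumes.
    def dfs(node, visited, edges_used):
        if node == v and edges_used == length:
            return True
        if edges_used >= length:
            return False
        for nxt in adj[node]:
            if nxt not in visited and dfs(nxt, visited | {nxt}, edges_used + 1):
                return True
        return False
    return dfs(u, {u}, 0)

def hamiltonian_cycle_via_longest_path(adj):
    # Hamiltonian cycle < longest path: for each edge (u, v), ask the
    # subroutine whether a simple path of n-1 edges runs from u to v;
    # such a path plus the edge (v, u) closes a Hamiltonian cycle.
    n = len(adj)
    for u in adj:
        for v in adj[u]:
            if has_simple_path_of_length(adj, u, v, n - 1):
                return True
    return False

# Example: a 4-cycle a-b-c-d-a, given as an adjacency-list dictionary.
graph = {'a': ['b', 'd'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['a', 'c']}
print(hamiltonian_cycle_via_longest_path(graph))   # True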

What is NP?
NP is the set of all decision problems (questions with a yes-or-no answer) for which the 'yes' answers can be verified in polynomial time (O(n^k), where n is the problem size and k is a constant) by a deterministic Turing machine. Polynomial time is sometimes used as the definition of fast or quickly.

Definition of NP-completeness
A decision problem C is NP-complete if:
1. C is in NP, and
2. Every problem in NP is reducible to C in polynomial time.



C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time.

A problem p is reducible to C if there is a polynomial-time many-one reduction, a deterministic algorithm which transforms any instance x of p into an instance y of C, such that the answer to y is yes if and only if the answer to x is yes. To prove that an NP problem A is in fact an NP-complete problem, it is sufficient to show that an already known NP-complete problem reduces to A. Note that a problem satisfying condition 2 is said to be NP-hard, whether or not it satisfies condition 1. A consequence of this definition is that if we had a polynomial-time algorithm (on a UTM, or any other Turing-equivalent abstract machine) for C, we could solve all problems in NP in polynomial time.
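To make the "verified in polynomial time" condition concrete, here is an illustrative Python verifier (an addition to this write-up, with assumed names) for the Hamiltonian cycle problem: given a candidate ordering of the vertices, it checks in polynomial time whether that ordering is a Hamiltonian cycle, even though finding such an ordering is not known to be possible in polynomial time.

def verify_hamiltonian_cycle(adj, candidate):
    # Polynomial-time verifier: `candidate` is a proposed ordering of the
    # vertices; accept iff it lists every vertex exactly once and
    # consecutive vertices (including last-to-first) are joined by edges.
    n = len(adj)
    if len(candidate) != n or set(candidate) != set(adj):
        return False
    for i in range(n):
        u, v = candidate[i], candidate[(i + 1) % n]
        if v not in adj[u]:
            return False
    return True

graph = {'a': ['b', 'd'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['a', 'c']}
print(verify_hamiltonian_cycle(graph, ['a', 'b', 'c', 'd']))   # True
print(verify_hamiltonian_cycle(graph, ['a', 'c', 'b', 'd']))   # False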

What is NP-Hard?
NP-hard problems are problems that are at least as hard as the hardest problems in NP. Note that NP-complete problems are also NP-hard. However, not all NP-hard problems are in NP (or even decision problems), despite having 'NP' as a prefix. That is, the 'NP' in NP-hard does not mean 'non-deterministic polynomial time'. Yes, this is confusing, but its usage is entrenched and unlikely to change.

3. Explain the concept of bubble sort and also write the algorithm for bubble sort. Ans 3:
Bubble Sort
The bubble sort algorithm for sorting n numbers, represented by an array A[1..n], proceeds by scanning the array from left to right. At each stage, it compares adjacent pairs of numbers at positions A[i] and A[i+1], and whenever a pair of adjacent numbers is found to be out of order, the positions of the numbers are exchanged. The algorithm then repeats the process for the numbers at positions A[i+1] and A[i+2]. Thus in the first pass, after scanning once all the numbers in the given list, the largest number will reach its destination, but the other numbers in the array may not be in order. In each subsequent pass, one more number reaches its destination.

Example: In the following, in each line, pairs of adjacent numbers, shown in bold, are compared, and if the pair of numbers is not found in proper order, then the positions of these numbers are exchanged. The list to be sorted has n = 6, as shown in the first row below:


In the second pass, the next-to-maximum element of the list, viz. 81, reaches the 5th position from the left. In the next pass, the list of the remaining (n - 2) = 4 elements is taken into consideration.

Algorithm
for i = 1:n
    swapped = false
    for j = n:i+1
        if a[j] < a[j-1]
            swap a[j], a[j-1]
            swapped = true
    // invariant: a[1..i] is in its final position
    break if not swapped


end
{ A[1..n] is in increasing order }
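A runnable Python version of the above pseudocode (added for illustration; the function name and the sample list are arbitrary) bubbles the smallest remaining element toward the front on each pass and stops early when a pass makes no swap:

def bubble_sort(a):
    # Sorts the list `a` in place into increasing order.
    n = len(a)
    for i in range(n):
        swapped = False
        # Walk from the right end down to position i+1, swapping
        # out-of-order neighbours; the minimum of a[i:] ends at a[i].
        for j in range(n - 1, i, -1):
            if a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]
                swapped = True
        # Invariant: a[0..i] holds the i+1 smallest elements in order.
        if not swapped:
            break   # no swaps in this pass, the list is already sorted
    return a

print(bubble_sort([80, 32, 31, 110, 50, 40]))   # [31, 32, 40, 50, 80, 110]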


Ans 4:


5. Explain briefly the concept of breadth-first search (BFS). Ans 5:


Breadth-First Search
Breadth-first search, as the name suggests, first discovers all vertices adjacent to a given vertex before moving to the vertices farther ahead in the search graph. If G(V, E) is a graph having vertex set V and edge set E and a particular source vertex s, breadth-first search finds or discovers every vertex that is reachable from s. First it discovers every vertex adjacent to s, then systematically, for each of those vertices, it finds all the vertices adjacent to them, and so on. In doing so, it computes the distance and the shortest path, in terms of the fewest number of edges, from the source vertex s to each reachable vertex. Breadth-first search also produces a breadth-first tree with root vertex s while searching or traversing the graph.

For recording the status of each vertex, we note whether it is still unknown, whether it has been discovered (or found), or whether all of its adjacent vertices have also been discovered; the vertices are termed unknown, discovered and visited respectively. So if (u, v) ∈ E and u is visited, then v will be either discovered or visited, i.e., either v has just been discovered or the vertices adjacent to v have also been found or visited. As breadth-first search forms a breadth-first tree, if vertex v is discovered in the adjacency list of an already discovered vertex u (i.e., through the edge (u, v)), then we say that u is the parent or predecessor vertex of v. Each vertex is discovered only once.

The data structure we use in this algorithm is a queue to hold the vertices. In this algorithm we assume that the graph is represented using an adjacency-list representation. front[Q] is used to represent the element at the front of the queue. The procedure empty() returns true if the queue is empty, otherwise it returns false. The queue is represented as Q. The procedures enqueue() and dequeue() are used to insert and delete an element from the queue respectively. The data structure status[ ] is used to store the status of each vertex as unknown, discovered or visited.

Algorithm of Breadth First Search


1. for each vertex u ∈ V − {s}
2.     status[u] = unknown
3. status[s] = discovered
4. enqueue(Q, s)
5. while empty(Q) = false
6.     u = front[Q]
7.     for each vertex v adjacent to u
8.         if status[v] = unknown
9.             status[v] = discovered
10.            parent[v] = u
11.            enqueue(Q, v)
12.    end for
13.    dequeue(Q)
14.    status[u] = visited
15.    print u is visited
16. end while

The algorithm works as follows. Lines 1-2 initialize each vertex to unknown. Because we have to start searching from vertex s, line 3 gives the status discovered to vertex s. Line 4 inserts the initial vertex s in the queue. The while loop contains the statements from line 5 to the end of the algorithm. The while loop runs as long as there remain discovered vertices in the queue, and we can see that the queue will only contain discovered vertices.


Line 6 takes the element u at the front of the queue, and in lines 7 to 11 the adjacency list of vertex u is traversed: for each unknown vertex v in the adjacency list of u, its status is marked as discovered, its parent is marked as u and then it is inserted in the queue. In line 13, vertex u is removed from the queue. In lines 14-15, when there are no more elements in the adjacency list of u, the status of u is changed to visited and u is printed as visited.

The algorithm given above can also be improved by storing the distance of each vertex u from the source vertex s, using an array distance[ ], and also by permanently recording the predecessor or parent of each discovered vertex in the array parent[ ]. In fact, the distance of each reachable vertex from the source vertex, as calculated by BFS, is the shortest distance in terms of the number of edges traversed. So next we present the modified algorithm for breadth-first search.

Modified Algorithm
Program BFS(G, s)
1. for each vertex u ∈ V − {s}
2.     status[u] = unknown
3.     parent[u] = NULL
4.     distance[u] = infinity
5. status[s] = discovered
6. distance[s] = 0
7. parent[s] = NULL
8. enqueue(Q, s)
9. while empty(Q) = false
10.    u = front[Q]
11.    for each vertex v adjacent to u
12.        if status[v] = unknown
13.            status[v] = discovered
14.            parent[v] = u
15.            distance[v] = distance[u] + 1
16.            enqueue(Q, v)

17.    dequeue(Q)
18.    status[u] = visited
19.    print u is visited


In the above algorithm, the newly inserted line 3 initializes the parent of each vertex to NULL, line 4 initializes the distance of each vertex from the source vertex to infinity, line 6 initializes the distance of the source vertex s to 0, line 7 initializes the parent of the source vertex s to NULL, line 14 records the parent of v as u, and line 15 calculates the shortest distance of v from the source vertex s as the distance of u plus 1.
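For illustration (not part of the original text), the modified algorithm can be written in runnable Python roughly as follows; the dictionary-based graph representation and the status strings are assumptions made for this sketch, and the comments refer to the numbered lines above:

from collections import deque

def bfs(adj, s):
    # adj: dict mapping each vertex to a list of adjacent vertices.
    # Returns (status, parent, distance) dictionaries, mirroring the
    # status[], parent[] and distance[] arrays of the pseudocode.
    status = {u: 'unknown' for u in adj}       # lines 1-2
    parent = {u: None for u in adj}            # line 3
    distance = {u: float('inf') for u in adj}  # line 4
    status[s] = 'discovered'                   # line 5
    distance[s] = 0                            # line 6
    Q = deque([s])                             # line 8: enqueue(Q, s)
    while Q:                                   # line 9: queue not empty
        u = Q[0]                               # line 10: front of the queue
        for v in adj[u]:                       # line 11
            if status[v] == 'unknown':         # line 12
                status[v] = 'discovered'       # line 13
                parent[v] = u                  # line 14
                distance[v] = distance[u] + 1  # line 15
                Q.append(v)                    # line 16: enqueue(Q, v)
        Q.popleft()                            # line 17: dequeue(Q)
        status[u] = 'visited'                  # line 18
        print(u, 'is visited')                 # line 19
    return status, parent, distance

# Small usage example with an arbitrary graph.
g = {'s': ['a', 'b'], 'a': ['s', 'c'], 'b': ['s'], 'c': ['a']}
bfs(g, 's')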

Example:
In the figure given below, we can see the graph given initially, in which only the source s is discovered.

Figure (a): Initial Input Graph


Figure (b): After we visit s

We take the unknown (i.e., undiscovered) adjacent vertices of s and insert them in the queue, first a and then b. The values of the data structures are modified as given below. Next, after completing the visit of a, we get the figure and the data structures as given below:

Figure (c): After we visit a

Figure (d): After we visit b


Figure (e): After we visit c

Figure (f): After we visit d

Figure (g): After we visit e



Figure (h): After we visit f

Figure (i): After we visit g

Figure (a): Initial input graph.
Figure (b): We take the unknown (i.e., undiscovered) adjacent vertices of s and insert them in the queue.
Figure (c): Now the gray vertices in the adjacency list of u are b, c and d, and we can visit any of them depending upon which vertex was inserted in the queue first. As in this example we inserted b first, which is now at the front of the queue, next we will visit b.
Figure (d): As there is no undiscovered vertex adjacent to b, no new vertex is inserted in the queue; only the vertex b is removed from the queue.


Figure (e): Vertices e and f are discovered as adjacent vertices of c, so they are inserted in the queue, and then c is removed from the queue and is visited.
Figure (f): Vertex g is discovered as the adjacent vertex of d, and after that d is removed from the queue and its status is changed to visited.
Figure (g): No undiscovered vertex adjacent to e is found, so e is removed from the queue and its status is changed to visited.
Figure (h): No undiscovered vertex adjacent to f is found, so f is removed from the queue and its status is changed to visited.
Figure (i): No undiscovered vertex adjacent to g is found, so g is removed from the queue and its status is changed to visited. Now, as the queue becomes empty, the while loop stops.

6. Explain Kruskal's algorithm. (10 marks) Ans 6:
Kruskal's Algorithm

Next, we discuss another method of finding a minimal spanning tree of a given weighted graph, which was suggested by Kruskal. In this method, we stress the choice of edges of minimum weight from amongst all the available edges, subject to the condition that the chosen edges do not form a cycle. The connectivity of the chosen edges, at any stage, in the form of a subtree, which was emphasized in Prim's algorithm, is not essential here. We briefly describe Kruskal's algorithm for finding a minimal spanning tree of a given weighted and connected graph as follows:
i) First of all, order all the weights of the edges in increasing order. Then repeat the following two steps till a set of edges is selected containing all the vertices of the given graph.
ii) Choose an edge having the weight which is the minimum of the weights of the edges not selected so far.
iii) If the new edge forms a cycle with any subset of the earlier selected edges, then drop it; else, add the edge to the set of selected edges.
We illustrate Kruskal's algorithm through the following:

Example: Let us consider the following graph, for which the minimal spanning tree is required. Figure

Let Eg denote the set of edges of the graph that are chosen up to some stage. According to step (i) above, the weights of the edges are arranged in increasing order as the set

{1, 3, 4.2, 5, 6}


In the first iteration, the edge (a, b) is chosen, which is of weight 1, the minimum of all the weights of the edges of the graph. As a single edge does not form a cycle, the edge (a, b) is selected, so that Eg = ((a, b)). After the first iteration, the graph with the selected edges in bold is as shown below:

Figure

Second Iteration
Next, the edge (c, d), of weight 3, is the minimum among the remaining edges. Also, the edges (a, b) and (c, d) do not form a cycle, as shown below. Therefore, (c, d) is selected, so that Eg = ((a, b), (c, d)). Thus, after the second iteration, the graph with the selected edges in bold is as shown below:

Figure

It may be observed that the selected edges do not form a connected subgraph or subtree of the given graph.

Third Iteration
Next, the edge (a, d) is of weight 4.2, the minimum for the remaining edges. Also, the edges in Eg along with the edge (a, d) do not form a cycle. Therefore, (a, d) is selected, so that the new Eg = ((a, b), (c, d), (a, d)). Thus, after the third iteration, the graph with the selected edges in bold is as shown below:

Figure



Fourth Iteration
Next, the edge (a, c) is of weight 5, the minimum for the remaining edges. However, the edge (a, c) forms a cycle with two edges in Eg, viz. (a, d) and (c, d). Hence (a, c) is not selected, and hence not considered as a part of the to-be-found spanning tree.

Figure

At the end of fourth iteration, the graph with selected edges in bold remains the same as at the end of the third iteration, as shown below:

Figure

Fifth Iteration
Next, the edge (e, d), the only remaining edge that can be considered, is considered. As (e, d) does not form a cycle with any of the edges in Eg, the edge (e, d) is put in Eg. The graph at this stage, with the selected edges in bold, is as follows:

Figure



At this stage, we find that each of the vertices of the given graph is a vertex of some edge in Eg. Further, we observe that the edges in Eg form a tree, and hence form the required spanning tree. Also, from the choice of the edges in Eg, it is clear that the spanning tree is of minimum weight. Next, we consider a semi-formal definition of Kruskal's algorithm.

ALGORITHM Spanning-Kruskal (G)


// The algorithm constructs a minimum spanning tree by choosing successively edges
// of minimum weights out of the remaining edges.
// The input to the algorithm is a connected graph G = (V, E), in which V is the set of
// vertices and E the set of edges, and the weight of each edge is also given.
// The output is the set of edges, denoted by ET, which constitutes a minimum
// spanning tree of the graph G.
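For illustration (this is not the textbook's own listing), the procedure described above can be sketched in runnable Python as follows, using a simple union-find structure, whose names are assumptions, for the cycle test of step (iii):

def kruskal(vertices, edges):
    # vertices: iterable of vertex names.
    # edges: list of (weight, u, v) tuples of an undirected weighted graph.
    # Returns ET, the set of edges of a minimum spanning tree.
    parent = {v: v for v in vertices}      # union-find forest for cycle detection

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    ET = []
    for weight, u, v in sorted(edges):     # step (i): edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # step (iii): skip edges that close a cycle
            parent[ru] = rv                # merge the two components
            ET.append((u, v, weight))
        if len(ET) == len(parent) - 1:     # a spanning tree has |V| - 1 edges
            break
    return ET

# The example graph discussed above, as recovered from the text:
# weights 1, 3, 4.2, 5 and 6 on edges (a,b), (c,d), (a,d), (a,c) and (e,d).
edges = [(1, 'a', 'b'), (3, 'c', 'd'), (4.2, 'a', 'd'), (5, 'a', 'c'), (6, 'e', 'd')]
print(kruskal(['a', 'b', 'c', 'd', 'e'], edges))
# [('a', 'b', 1), ('c', 'd', 3), ('a', 'd', 4.2), ('e', 'd', 6)]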



