
This is a Java Program to implement Heap Sort on an integer array.
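The program's original source is not reproduced here, but its core can be sketched as a minimal array-based heap sort (class and method names below are illustrative, not the original's):

```java
import java.util.Arrays;

// Minimal heap sort sketch. HeapSortSketch is an illustrative name,
// not the class from the original program.
public class HeapSortSketch {
    // Sift the element at index i down to its place in the max-heap a[0..size-1].
    static void siftDown(int[] a, int i, int size) {
        while (true) {
            int largest = i, l = 2 * i + 1, r = 2 * i + 2;
            if (l < size && a[l] > a[largest]) largest = l;
            if (r < size && a[r] > a[largest]) largest = r;
            if (largest == i) return;
            int tmp = a[i]; a[i] = a[largest]; a[largest] = tmp;
            i = largest;
        }
    }

    public static void sort(int[] a) {
        // Build a max-heap in place: O(n).
        for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, i, a.length);
        // Repeatedly move the maximum to the end and restore the heap: O(n log n).
        for (int end = a.length - 1; end > 0; end--) {
            int tmp = a[0]; a[0] = a[end]; a[end] = tmp;
            siftDown(a, 0, end);
        }
    }

    public static void main(String[] args) {
        int[] a = {488, 667, 634, 380, 944};
        sort(a);
        System.out.println(Arrays.toString(a)); // [380, 488, 634, 667, 944]
    }
}
```

The sort is in place: the heap occupies the front of the array and the sorted suffix grows from the back, which is why heapsort needs no extra storage but cannot be stable.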

Heapsort is a comparison-based sorting algorithm that produces a sorted array (or list), and is part of the selection sort family. Although somewhat slower in practice on most machines than a well-implemented quicksort, it has the advantage of a more favorable worst-case O(n log n) runtime. Heapsort is an in-place algorithm, but it is not a stable sort.

Worst case performance: O(n log n)
Best case performance: O(n log n)
Average case performance: O(n log n)

Heap Sort Test
Enter number of integer elements: 20
Enter 20 integer elements: 488 667 634 380 944 594 783 584 550 665 721 819 285 344 503 807 491 623 845 300
Elements after sorting: 285 300 344 380 488 491 503 550 584 594 623 634 665 667 721 783 807 819 845 944

Heap Sort Test
Enter number of integer elements: 20
Enter 20 integer elements: 57 205 342 200 197 946 631 92 66 581 345 220 398 249 329 87 186 144 462 431
Elements after sorting: 57 66 87 92 144 186 197 200 205 220 249 329 342 345 398 431 462 581 631 946

Heap Sort Test
Enter number of integer elements: 20
Enter 20 integer elements: 802 327 219 415 648 783 250 891 232 822 604 123 138 505 883 224 86 681 51 310
Elements after sorting: 51 86 123 138 219 224 232 250 310 327 415 505 604 648 681 783 802 822 883 891

DFS
Assume G is undirected (similar properties hold when G is directed). DFS(v) visits all vertices in the connected component of v. The discovery edges form a tree: the DFS tree of v.

Justification: a vertex is never visited twice, so the discovery edges contain no cycles. We can keep track of the DFS tree by storing, for each vertex w, its parent. The non-discovery (non-tree) edges always lead back to an ancestor in the DFS tree. If G is given as an adjacency list, then DFS(v) takes O(|V|+|E|) time.

DFS: putting it all together.
Proposition: Let G=(V,E) be an undirected graph represented by its adjacency list. A DFS traversal of G can be performed in O(|V|+|E|) time and can be used to solve the following problems:
- testing whether G is connected
- computing the connected components (CC) of G
- computing a spanning tree of the CC of v
- computing a path between 2 vertices, if one exists
- computing a cycle, or reporting that there are no cycles in G

13.1 Graph Terminology
A graph G = (V, E) is an ordered pair of finite sets V and E, where V(G) is the set of vertices in G and E(G) is the set of edges in G. The edge connecting vertices v_i and v_j is represented as (v_i, v_j). The number of vertices in G is given as |V| and the number of edges as |E|. The edges in an undirected graph are bidirectional; therefore, they can be traversed in both directions. Graph G1, shown in Figure 13.1 (a), is undirected; the edge connecting vertices B and E allows you to traverse from B to E and from E to B. The edges in a directed graph (also called a digraph) are unidirectional, from the source vertex to the destination vertex. Graph G2, shown in Figure 13.1 (b), allows traversal from T to Z (note the direction of the arrow on the edge), but not from Z to T. Similarly, you can travel from Z to R, but not from R to Z. Bidirectionality between adjacent vertices in a digraph is achieved by providing a pair of edges. Digraph G3, shown in Figure 13.1 (c), allows the same connectivity as the undirected graph G1; note that this is easily accomplished by turning every undirected edge into a pair of directed edges traveling in opposite directions. We will disallow a loop, also called a self-edge, which is an edge from a vertex to itself.
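The DFS traversal described earlier can be made concrete. This recursive sketch records each vertex's DFS-tree parent over an adjacency-list graph; all names here are illustrative:

```java
import java.util.*;

// DFS sketch over an adjacency-list graph. Vertices are 0..n-1.
public class DfsSketch {
    // parent[w] records the DFS-tree parent of w; -1 marks "not yet visited".
    static void dfs(List<List<Integer>> adj, int v, int[] parent) {
        for (int w : adj.get(v)) {
            if (parent[w] == -1) {         // discovery edge: w first seen via v
                parent[w] = v;
                dfs(adj, w, parent);
            }                              // otherwise a non-tree edge; skip it
        }
    }

    // Visits every vertex in v's connected component in O(|V|+|E|) time
    // and returns the DFS tree as a parent array (root is its own parent).
    public static int[] dfsTree(List<List<Integer>> adj, int v) {
        int[] parent = new int[adj.size()];
        Arrays.fill(parent, -1);
        parent[v] = v;
        dfs(adj, v, parent);
        return parent;
    }
}
```

A vertex whose parent entry is still -1 after the call lies outside v's component, which is how DFS tests connectivity and extracts connected components.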

There are several ways to traverse a binary tree. The most common include:
- pre-order: Visit the root, visit all of the nodes in the root's left subtree, and then visit all of the nodes in the root's right subtree.
- in-order: Visit all of the nodes in the root's left subtree, visit the root, and then visit all of the nodes in the root's right subtree.
- post-order: Visit all of the nodes in the root's left subtree, visit all of the nodes in the root's right subtree, and then visit the root.
- level-order: Visit the nodes by level. The nodes in each level are visited from left to right. This method is actually the most difficult and is rarely used.

For example, using the tree above, these traversal schemes would visit the nodes in the following orders:
pre-order: A - B - D - C - E - G - H - F
in-order: D - B - A - G - E - H - C - F
post-order: D - B - G - H - E - F - C - A
level-order: A - B - C - D - E - F - G - H
Which ordering should be used? That depends on the particular application of the tree.
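These traversal schemes can be sketched as follows; the Node class is a minimal stand-in (not a class from these notes), and the eight-node tree from the example above reproduces exactly the orders just listed:

```java
import java.util.*;

// Traversal sketch for a minimal binary tree node. Names are illustrative.
public class TraversalSketch {
    static class Node {
        char key; Node left, right;
        Node(char k, Node l, Node r) { key = k; left = l; right = r; }
    }

    // Pre-order: root, then left subtree, then right subtree.
    static void preOrder(Node n, StringBuilder out) {
        if (n == null) return;
        out.append(n.key);
        preOrder(n.left, out);
        preOrder(n.right, out);
    }

    // In-order: left subtree, then root, then right subtree.
    static void inOrder(Node n, StringBuilder out) {
        if (n == null) return;
        inOrder(n.left, out);
        out.append(n.key);
        inOrder(n.right, out);
    }

    // Post-order: left subtree, then right subtree, then root.
    static void postOrder(Node n, StringBuilder out) {
        if (n == null) return;
        postOrder(n.left, out);
        postOrder(n.right, out);
        out.append(n.key);
    }

    // Level-order: visit nodes level by level, left to right, using a queue.
    static void levelOrder(Node root, StringBuilder out) {
        Deque<Node> q = new ArrayDeque<>();
        if (root != null) q.add(root);
        while (!q.isEmpty()) {
            Node n = q.remove();
            out.append(n.key);
            if (n.left != null) q.add(n.left);
            if (n.right != null) q.add(n.right);
        }
    }
}
```

Note that level-order is the one scheme that does not fall out of plain recursion; it needs an explicit queue, which is why the notes call it the most difficult.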

As with linked lists and arrays, it is often necessary to "visit" and do something with each node in the tree. With linked lists and arrays, each node had a unique successor, so the natural way to traverse the data structure was from the first item to the last. With a binary tree, there really is no first, second, ..., or last item (i.e., trees are not "ordered" data structures in the same way the linear data structures were).

Binary trees are introduced in terms of a binary tree ADT (Java interface) and a linked implementation which takes the form of an abstract class and two extensions of this class. Lots of concepts are discussed. Binary search trees are discussed conceptually and concretely. The approach to creating trees by means of the "add" method is distinctive: the code is "pure recursion".

Minimum Cost Spanning Tree
Let G=(V,E) be a connected graph where every edge (u,v) in E has a cost C[u,v]. A graph is connected if every pair of vertices is connected by a path. A connected acyclic graph is also called a free tree. A spanning tree for G is a free tree that connects all vertices in G. The cost of the spanning tree is the sum of the costs of all edges in the tree. We usually want to find a spanning tree of

minimum cost. Example applications:
- Computer networks: vertices in the graph might represent computer installations, and edges represent connections between computers. We want to allow messages from any computer to get to any other, possibly with routing through an intermediate computer, with minimum cost in connections.
- Airline routes: vertices in the graph are cities, and edges are routes between cities. We want to service a connected set of cities with minimum cost.

Kruskal's Algorithm
Prim's algorithm requires O(N^2) time. We can do better using Kruskal's algorithm if E << N^2. Kruskal's algorithm constructs an MCST incrementally. Initially, each node is in its own MCST, consisting of that node and no edges. At each step in the algorithm, the two MCSTs that can be connected together with the least cost are combined, adding the lowest-cost edge that links a vertex in one tree with a vertex in another tree. When there is only one MCST that includes all vertices, the algorithm terminates. Since we must consider the edges in order of their cost, we must sort them, which requires O(E log E). The merging can also be implemented in O(E log E) total. Kruskal's algorithm is therefore O(E log E), which is better than Prim's algorithm if the graph is not dense (i.e., E << N^2).

B-Trees
A b-tree has a minimum number of allowable children for each node, known as the minimization factor. If t is this minimization factor, every node must have at least t - 1 keys. Under certain circumstances, the root node is allowed to violate this property by having fewer than t - 1 keys. Every node may have at most 2t - 1 keys or, equivalently, 2t children. Since each node tends to have a large branching factor (a large number of children), it is typically necessary to traverse relatively few nodes before locating the desired key. If access to each node requires a disk access, then a b-tree will minimize the number of disk accesses required. The minimization factor is usually chosen so that the total size of each node corresponds to a multiple of the block size of the underlying storage device. This choice simplifies and optimizes disk access. Consequently, a b-tree is an ideal data structure for situations where all data cannot reside in primary storage and accesses to secondary storage are comparatively expensive (or time consuming).

Height of B-Trees
For n >= 1, the height h of any n-key b-tree T with minimum degree t >= 2 satisfies h <= log_t((n + 1) / 2). For example, with t = 1001 and n = 10^9 keys, this gives h <= log_1001((10^9 + 1) / 2) ≈ 2.9, so the height is at most 2.

For a proof of the above inequality, refer to Cormen, Leiserson, and Rivest, pages 383-384. The worst-case height is O(log n). Since the "branchiness" of a b-tree can be large compared to many other balanced tree structures, the base of the logarithm tends to be large; therefore, the number of nodes visited during a search tends to be smaller than required by other tree structures. Although this does not affect the asymptotic worst-case height, b-trees tend to have smaller heights than other trees with the same asymptotic height.
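Returning briefly to Kruskal's algorithm from the spanning-tree section: the merge step is commonly implemented with a disjoint-set (union-find) structure. The following is a minimal sketch under that assumption; all names are illustrative:

```java
import java.util.*;

// Kruskal sketch: edges are {cost, u, v} triples over vertices 0..n-1.
public class KruskalSketch {
    // Find the representative of x's tree, with simple path halving.
    static int find(int[] parent, int x) {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }

    // Returns the total cost of a minimum spanning tree of a connected graph.
    public static int mstCost(int n, int[][] edges) {
        Arrays.sort(edges, (a, b) -> a[0] - b[0]); // O(E log E): cheapest first
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i; // each vertex its own tree
        int total = 0, used = 0;
        for (int[] e : edges) {
            int ru = find(parent, e[1]), rv = find(parent, e[2]);
            if (ru != rv) {                        // edge links two different trees
                parent[ru] = rv;                   // merge the trees
                total += e[0];
                if (++used == n - 1) break;        // spanning tree is complete
            }
        }
        return total;
    }
}
```

Edges whose endpoints already share a representative would create a cycle and are skipped, which is exactly the "combine two MCSTs with the least-cost edge" rule stated above.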

An important special kind of binary tree is the binary search tree (BST). In a BST, each node stores some information, including a unique key value and perhaps some associated data. A binary tree is a BST iff, for every node n in the tree: all keys in n's left subtree are less than the key in n, and all keys in n's right subtree are greater than the key in n.
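A lookup that exploits this ordering invariant can be sketched as follows (not the notes' actual class, just an illustration):

```java
// BST lookup sketch: at each node, the ordering invariant tells us which
// subtree could contain the key, so a single root-to-leaf path is followed.
public class BstLookupSketch {
    static class Node {
        int key; Node left, right;
        Node(int k) { key = k; }
    }

    public static boolean lookup(Node n, int key) {
        if (n == null) return false;                   // fell off the tree
        if (key < n.key) return lookup(n.left, key);   // must be on the left
        if (key > n.key) return lookup(n.right, key);  // must be on the right
        return true;                                   // key == n.key: found it
    }
}
```

Each comparison discards one whole subtree, which is where the O(log N) cost for balanced trees comes from.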

Note: if duplicate keys are allowed, then nodes with values that are equal to the key in node n can be either in n's left subtree or in its right subtree (but not both). In these notes, we will assume that duplicates are not allowed. Here are some BSTs in which each node just stores an integer key:

These are not BSTs:

We assume that duplicates are not allowed (an attempt to insert a duplicate value causes an exception). The public insert method uses an auxiliary recursive "helper" method to do the actual insertion. The node containing the new value is always inserted as a leaf in the BST. The public insert method returns void, but the helper method returns a BSTnode. It does this to handle the case when the node passed to it is null. In general, the helper method is passed a pointer to a possibly empty tree. Its responsibility is to add the indicated key to that tree and return a pointer to the root of the modified tree. If the original tree is empty, the result is a one-node tree. Otherwise, the result is a pointer to the same node that was passed as an argument.

In the left one 5 is not greater than 6. In the right one 6 is not greater than 7. Note that more than one BST can be used to store the same set of key values. For example, both of the following are BSTs that store the same set of integer keys:

There are several things to note about this code: As for the lookup and insert methods, the BST delete method uses an auxiliary, overloaded delete method to do the actual work. If k is not in the tree, then eventually the auxiliary method will be called with n == null. That is not considered an error; the tree is simply unchanged in that case. The auxiliary delete method returns a value (a pointer to the updated tree). The reason for this is explained below.

If the search for the node containing the value to be deleted succeeds, there are three cases to deal with:
1. The node to delete is a leaf (has no children).
2. The node to delete has one child.
3. The node to delete has two children.

The reason binary search trees are important is that the following operations can all be implemented efficiently using a BST: inserting a key value, determining whether a key value is in the tree, removing a key value from the tree, and printing all of the key values in sorted order.

Where should a new item go in a BST? The answer is easy: it needs to go where you would have found it using lookup! If you don't put it there, then you won't find it later.
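An insert along these lines can be sketched as follows. The notes' own BSTnode class is not reproduced here; this stand-in follows the same pattern of a void public method delegating to a helper that returns the (possibly new) subtree root:

```java
// BST insert sketch using the "helper returns the root of the modified
// subtree" pattern described in the text. Names are illustrative.
public class BstInsertSketch {
    static class Node {
        int key; Node left, right;
        Node(int k) { key = k; }
    }

    Node root;

    public void insert(int key) {
        root = insert(root, key);   // handles an initially empty tree too
    }

    // Adds key to the tree rooted at n and returns the root of the modified
    // tree; an empty tree becomes a one-node tree, otherwise the same node
    // that was passed in is returned.
    private Node insert(Node n, int key) {
        if (n == null) return new Node(key);            // new value becomes a leaf
        if (key < n.key) n.left = insert(n.left, key);
        else if (key > n.key) n.right = insert(n.right, key);
        else throw new IllegalStateException("duplicate key: " + key);
        return n;
    }
}
```

Returning a Node from the helper is what lets the null case work: the parent simply reassigns its child pointer to whatever comes back.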

When the node to delete is a leaf, we want to remove it from the BST by setting the appropriate child pointer of its parent to null (or by setting root to null if the node to be deleted is the root and it has no children). Note that the call to delete was one of the following:

    root = delete(root, key);
    n.setLeft( delete(n.getLeft(), key) );
    n.setRight( delete(n.getRight(), key) );

So in all three cases, the right thing happens if the delete method just returns null. Here's what happens when the node containing the value 15 is removed from the example BST:
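Putting the three cases together, a recursive delete along the lines described might look like this sketch. Names are illustrative, and the two-child case shown here (copy the smallest key of the right subtree into the node, then delete that key from the right subtree) is one common variant:

```java
// Recursive BST delete sketch. Each call returns the root of the updated
// subtree, so parents re-link via assignment, mirroring the
// n.setLeft(delete(...)) pattern in the text.
public class BstDeleteSketch {
    static class Node {
        int key; Node left, right;
        Node(int k) { key = k; }
    }

    public static Node delete(Node n, int key) {
        if (n == null) return null;                    // key not found: no change
        if (key < n.key) { n.left = delete(n.left, key); return n; }
        if (key > n.key) { n.right = delete(n.right, key); return n; }
        // Found the node. Cases 1 and 2: at most one child -> return the other
        // child (null for a leaf), and the parent's pointer skips this node.
        if (n.left == null) return n.right;
        if (n.right == null) return n.left;
        // Case 3: two children. Overwrite n's key with the smallest key in the
        // right subtree, then delete that key from the right subtree (that node
        // has no left child, so the recursive delete hits case 1 or 2).
        Node min = n.right;
        while (min.left != null) min = min.left;
        n.key = min.key;
        n.right = delete(n.right, min.key);
        return n;
    }
}
```

Usage mirrors the text's call sites: `root = delete(root, key);` handles deletion at the root with no special casing.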

When the node to delete has one child, we can simply replace that node with its child by returning a pointer to that child. As an example, let's delete 16 from the BST just formed.

A binary search tree can be used to store any objects that implement the Comparable interface (i.e., that define the compareTo method). A BST can also be used to store Comparable objects plus some associated data. The advantage of using a binary search tree (instead of, say, a linked list) is that, if the tree is reasonably balanced (shaped more like a "full" tree than like a "linear" tree), the insert, lookup, and delete operations can all be implemented to run in O(log N) time, where N is the number of stored items. For a linked list, although insert can be implemented to run in O(1) time, lookup and delete take O(N) time in the worst case.
