
1) Briefly explain the linked representation of a circular queue and its insertion and deletion operations.


A queue is a particular kind of abstract data type or collection in which the entities in the
collection are kept in order and the principal (or only) operations on the collection are the
addition of entities at the rear terminal position and the removal of entities from the front
terminal position. This makes the queue a First-In-First-Out (FIFO) data structure: the first
element added to the queue will be the first one to be removed. Equivalently, once an element
is added, all elements that were added before it must be removed before the new element can
be removed. A queue is an example of a linear data structure.
Queues provide services in computer science, transport, and operations research where various
entities such as data, objects, persons, or events are stored and held to be processed later. In these
contexts, the queue performs the function of a buffer.
Queues are common in computer programs, where they are implemented as data structures
coupled with access routines, as an abstract data structure or in object-oriented languages as
classes. Common implementations are circular buffers and linked lists.

A queue can be represented by an array that holds the elements of the queue and a counter that
marks the rear end of the queue. In this representation, as in the physical model, after each
deletion all remaining elements must be moved forward so that the front of the queue stays at
the same position. This proves quite inefficient, because every element must be moved,
especially when the queue is long.

An improvement over the physical model is the sequential representation of the queue, in which
the queue elements are held in a one-dimensional array with two counters, or indices,
representing the front and rear of the queue. The counter rear points to the last element of the
queue and the counter front points to the first element. In this method both the front and rear of
the queue are tracked without any movement of the elements. To add an element to the queue,
the counter rear is increased by one and the element is placed in that position. To remove an
element from the queue, it is taken from the front and the counter front is increased by one. The
drawback of this method is that as elements are removed, the front of the queue moves down the
array; the storage at the beginning of the array is discarded and never used again. This condition
is illustrated in Fig.3.2.1

When the queue is empty, the counter rear holds a value one less than front: initially rear is set
to -1 and front is set to 0, and the queue is empty whenever rear < front. At any point of time,
the number of elements in the queue is given by rear - front + 1.

The array representation of a queue may therefore reach a state where, in spite of free space at
the beginning of the array (the queue may even be empty), a full condition is indicated and no
more elements can be inserted.
Operations on queue

The operations associated with a queue are: create a queue, add an element to the queue, delete
an element from the queue, find the front element of the queue, and determine whether the queue
is empty. The following routines give program modules for these operations.

Circular queues

One solution to the problem of not being able to insert elements in spite of the queue having
free space is to shift the elements to the front after each removal. This method is particularly
cumbersome when the number of elements to be shifted is large. A more efficient method of
representation of queue is obtained by viewing the array holding the queue elements as a circle
instead of a straight line. The first element of the array is viewed as immediately following the
last element. This implies that even if the last element position in the array is occupied, the next
element can be inserted behind it in the first array position provided it is empty.

Positions in the circle are numbered from 0 to max - 1. The number of occupied positions in the
array equals the number of elements in the queue.

Circular queues can be implemented using linear arrays. The different boundary conditions under
this implementation are seen as follows:
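A Python sketch of the circular-array implementation follows. The boundary-condition convention used here, sacrificing one array slot so that "empty" (front == rear) and "full" ((rear + 1) % max == front) can be told apart, is one common choice; keeping an explicit element count is another. Names are illustrative.

```python
class CircularQueue:
    """Circular-array queue. One slot is kept unused so the two
    boundary conditions can be distinguished:
        empty -> front == rear
        full  -> (rear + 1) % max == front"""

    def __init__(self, capacity):
        self.max = capacity + 1          # one extra, always-unused slot
        self.data = [None] * self.max
        self.front = 0                   # position of the first element
        self.rear = 0                    # position one past the last element

    def is_empty(self):
        return self.front == self.rear

    def is_full(self):
        return (self.rear + 1) % self.max == self.front

    def enqueue(self, item):
        if self.is_full():
            raise OverflowError("queue full")
        self.data[self.rear] = item
        self.rear = (self.rear + 1) % self.max   # wrap around the circle

    def dequeue(self):
        if self.is_empty():
            raise IndexError("queue empty")
        item = self.data[self.front]
        self.front = (self.front + 1) % self.max
        return item
```

Because the indices wrap with the modulus, the first array position can be reused as soon as it is freed, which is exactly the improvement the circular view provides.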
Implementation of queues using linked lists is very similar to the implementation of stacks,
except that in this case items join the queue at the back and leave at the front. If the queue is
represented by the list [5, 2], adding a new item 3 gives the list [5, 2, 3]. In other words, new
items are added to the end of the list. Removing an item from the queue is the same as for
stacks: it comes off the front of the list.

For efficiency, we need to keep track of the last item in the queue, assuming there is one. This
will be referred to by the variable back as shown in the diagram.

The code for the linked list implementation of the Queue interface is shown in Figure 4.3.

import java.util.NoSuchElementException;

public class QueueList implements Queue {

    private ListNode front;
    private ListNode back;

    public QueueList() {
        front = null;
        back = null;
    }

    public boolean isEmpty() {
        return front == null;
    }

    public void enqueue(Object item) {
        if (front == null) {
            front = back = new ListNode(item, null);
        } else {
            back = back.next = new ListNode(item, null);
        }
    }

    public Object dequeue() {
        if (front == null) {
            throw new NoSuchElementException();
        } else {
            Object item = front.data;
            front = front.next;
            return item;
        }
    }
}

A list implementation of the Queue interface.
The code for dequeue is the same as the pop method for stacks. The enqueue method needs to
check first whether or not the list is empty. In that case a new node is created and
the front and back are the same. If the list is not empty, a new node is added at the back of the
old list, and this becomes the back of the new list.

Note that back refers to the last element of the list if the list is non-empty. If the list is empty,
meaning that front = null, then back may refer anywhere. But if the list is empty, no use can be
made of the current reference of back, so its value is not significant.

2) Explain the sorting and searching techniques briefly


When data items are stored in a collection such as a list, we say that they have a linear or
sequential relationship. Each data item is stored in a position relative to the others. In Python
lists, these relative positions are the index values of the individual items. Since these index
values are ordered, it is possible for us to visit them in sequence. This process gives rise to our
first searching technique, the sequential search.
Figure 1 shows how this search works. Starting at the first item in the list, we simply move from
item to item, following the underlying sequential ordering until we either find what we are
looking for or run out of items. If we run out of items, we have discovered that the item we were
searching for was not present.

Figure 1: Sequential Search of a List of Integers

The Python implementation for this algorithm is shown in CodeLens 1. The function needs the
list and the item we are looking for and returns a boolean value as to whether it is present. The
boolean variable found is initialized to False and is assigned the value True if we discover the
item in the list.

def sequentialSearch(alist, item):
    pos = 0
    found = False
    while pos < len(alist) and not found:
        if alist[pos] == item:
            found = True
        else:
            pos = pos + 1
    return found

testlist = [1, 2, 32, 8, 17, 19, 42, 13, 0]
print(sequentialSearch(testlist, 3))
print(sequentialSearch(testlist, 13))

The Binary Search

It is possible to take greater advantage of the ordered list if we are clever with our comparisons.
In the sequential search, when we compare against the first item, there are at most n−1 more
items to look through if the first item is not what we are looking for. Instead of searching the list
in sequence, a binary search will start by examining the middle item. If that item is the one we
are searching for, we are done. If it is not the correct item, we can use the ordered nature of the
list to eliminate half of the remaining items. If the item we are searching for is greater than the
middle item, we know that the entire lower half of the list as well as the middle item can be
eliminated from further consideration. The item, if it is in the list, must be in the upper half.
We can then repeat the process with the upper half. Start at the middle item and compare it
against what we are looking for. Again, we either find it or split the list in half, therefore
eliminating another large part of our possible search space. Figure 3 shows how this algorithm
can quickly find the value 54. The complete function is shown in CodeLens 3.

Figure 3: Binary Search of an Ordered List of Integers

def binarySearch(alist, item):
    first = 0
    last = len(alist) - 1
    found = False
    while first <= last and not found:
        midpoint = (first + last) // 2
        if alist[midpoint] == item:
            found = True
        else:
            if item < alist[midpoint]:
                last = midpoint - 1
            else:
                first = midpoint + 1
    return found

testlist = [0, 1, 2, 8, 13, 17, 19, 32, 42]
print(binarySearch(testlist, 3))
print(binarySearch(testlist, 13))


Sorting means arranging a group of elements in a particular order, be it ascending or
descending, numerical or alphabetical, or variations thereof. The resulting ordering possibilities
are limited only by the type of the source elements.

Quicksort is an algorithm of the divide and conquer type. In this method, to sort a set of
numbers, we reduce it to two smaller sets, and then sort these smaller sets.

This can be explained with the help of the following example:

Suppose A is a list of the following numbers:

In the reduction step, we find the final position of one of the numbers. In this case, let us assume
that we have to find the final position of 48, which is the first number in the list.

To accomplish this, we adopt the following method. Begin with the last number, and move from
right to left. Compare each number with 48. If the number is smaller than 48, we stop at that
number and swap it with 48.

In our case, the number is 24. Hence, we swap 24 and 48.

The numbers 96 and 72 to the right of 48, are greater than 48. Now beginning with 24, scan the
numbers in the opposite direction, that is from left to right. Compare every number with 48 until
you find a number that is greater than 48.

In this case, it is 60. Therefore we swap 48 and 60.

Note that the numbers 12, 24 and 36 to the left of 48 are all smaller than 48. Now, start scanning
numbers from 60, in the right to left direction. As soon as you find a lesser number, swap it
with 48.

In this case, it is 44. Swap it with 48. The final result is:

Now, beginning with 44, scan the list from left to right, until you find a number greater than 48.

Such a number is 84. Swap it with 48. The final result is:

Now, beginning with 84, traverse the list from right to left, until you reach a number less than
48. We do not find such a number before reaching 48. This means that all the numbers in the list
have been scanned and compared with 48. Also, we notice that all numbers less than 48 are to
the left of it, and all numbers greater than 48 are to its right.

The final partitions look as follows:

Therefore, 48 has been placed in its proper position, and now our task is reduced to sorting the
two partitions. The above step of creating partitions can be repeated with every partition
containing 2 or more elements. As we can process only a single partition at a time, we must be
able to keep track of the other partitions for future processing.

This is done by using two stacks, called LOWERBOUND and UPPERBOUND, to temporarily
store these partitions. The addresses of the first and last elements of each partition are pushed
onto the LOWERBOUND and UPPERBOUND stacks respectively. The reduction step is applied
to a partition only after its boundary values are popped from the stacks.

We can understand this from the following example:

Take the above list A with 12 elements. The algorithm starts by pushing the boundary values of
A, that is 1 and 12 into the LOWERBOUND and UPPERBOUND stacks respectively. Therefore
the stacks look as follows:


To perform the reduction step, the values of the stack top are popped from the stack. Therefore,
both the stacks become empty.


Now, the reduction step causes 48 to be fixed to the 5th position and creates two partitions, one
from position 1 to 4 and the other from position 6 to 12. Hence, the values 1 and 6 are pushed
into the LOWERBOUND stack and 4 and 12 are pushed into the UPPERBOUND stack.


For applying the reduction step again, the values at the stack tops are popped: 6 and 12. The
stacks then look like:


The reduction step is now applied to the second partition, that is from the 6th to 12th element.
After the reduction step, 98 is fixed in the 11th position. So, the second partition has only one
element. Therefore, we push the upper and lower boundary values of the first partition onto the
stack. So, the stacks are as follows:


The processing proceeds in the following way and ends when the stacks do not contain any upper
and lower bounds of the partition to be processed, and the list gets sorted.
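The procedure described above might be sketched in Python as follows. The partition function uses the same two-way scan as the worked example (right-to-left for a smaller element, then left-to-right for a larger one), and the two lists named lowerbound and upperbound play the role of the LOWERBOUND and UPPERBOUND stacks. Note that indices here are 0-based, unlike the 1-based positions in the text.

```python
def partition(a, low, high):
    """Place a[low] (the pivot) in its final position by scanning
    alternately from the right and from the left; return that position."""
    pivot = a[low]
    i, j = low, high
    while i < j:
        # scan right-to-left for an element smaller than the pivot
        while i < j and a[j] >= pivot:
            j -= 1
        a[i] = a[j]
        # scan left-to-right for an element greater than the pivot
        while i < j and a[i] <= pivot:
            i += 1
        a[j] = a[i]
    a[i] = pivot
    return i

def quicksort(a):
    """Iterative quicksort driven by two stacks of partition bounds,
    mirroring the LOWERBOUND/UPPERBOUND stacks in the text."""
    if not a:
        return a
    lowerbound = [0]
    upperbound = [len(a) - 1]
    while lowerbound:
        low = lowerbound.pop()
        high = upperbound.pop()
        if low < high:
            p = partition(a, low, high)
            # push both sub-partitions for later processing
            lowerbound.append(low)
            upperbound.append(p - 1)
            lowerbound.append(p + 1)
            upperbound.append(high)
    return a
```

The loop ends exactly when both stacks are empty, which is the termination condition stated above: no partition bounds remain to be processed, and the list is sorted.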

3) Explain the basic terminology of trees and graphs and write algorithms for their traversal.

A tree is a non-empty set, one element of which is designated the root of the tree, while the
remaining elements are partitioned into non-empty sets, each of which is a subtree of the root.

Binary: Each node has zero, one, or two children. This restriction makes many tree operations
simple and efficient.
Binary Search: A binary tree where any left child node has a value less than its parent node and
any right child node has a value greater than or equal to that of its parent node.


Many problems require that we visit* the nodes of a tree in a systematic way: tasks such as
counting how many nodes exist or finding the maximum element. Three different methods are
possible for binary trees: preorder, postorder, and in-order, which all do the same three things:
recursively traverse both the left and right subtrees and visit the current node. The difference is
when the algorithm visits the current node:

preorder: Current node, left subtree, right subtree (DLR)

postorder: Left subtree, right subtree, current node (LRD)

in-order: Left subtree, current node, right subtree (LDR)

levelorder: Level by level, from left to right, starting from the root node.

* Visit means performing some operation involving the current node of a tree, like incrementing
a counter or checking if the value of the current node is greater than any other recorded.
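The four traversal orders above can be sketched for a small binary-tree node type; the Node class and the visit callback are illustrative, not from the original text.

```python
from collections import deque

class Node:
    """Minimal binary-tree node for demonstrating the traversals."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node, visit):
    if node:                     # DLR: current node, left, right
        visit(node.value)
        preorder(node.left, visit)
        preorder(node.right, visit)

def inorder(node, visit):
    if node:                     # LDR: left, current node, right
        inorder(node.left, visit)
        visit(node.value)
        inorder(node.right, visit)

def postorder(node, visit):
    if node:                     # LRD: left, right, current node
        postorder(node.left, visit)
        postorder(node.right, visit)
        visit(node.value)

def levelorder(node, visit):
    q = deque([node] if node else [])
    while q:                     # level by level, left to right
        n = q.popleft()
        visit(n.value)
        if n.left:
            q.append(n.left)
        if n.right:
            q.append(n.right)
```

On the binary search tree with root 2, left child 1, and right child 3, in-order visits 1, 2, 3, which is the sorted order: the defining property of a BST under LDR traversal.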


A graph is an abstract data type that is meant to implement the graph and hypergraph concepts
from mathematics.
A graph data structure consists of a finite (and possibly mutable) set of ordered pairs,
called edges or arcs, of certain entities called nodes or vertices. As in mathematics, an edge (x,y)
is said to point or go from x to y. The nodes may be part of the graph structure, or may be
external entities represented by integer indices or references.
A graph data structure may also associate to each edge some edge value, such as a symbolic
label or a numeric attribute (cost, capacity, length, etc.).

A labeled graph of 6 vertices and 7 edges.

Representation of graphs

Different data structures for the representation of graphs are used in practice:
Adjacency list
Vertices are stored as records or objects, and every vertex stores a list of adjacent vertices. This
data structure allows the storage of additional data on the vertices.
Incidence list
Vertices and edges are stored as records or objects. Each vertex stores its incident edges, and
each edge stores its incident vertices. This data structure allows the storage of additional data on
vertices and edges.
Adjacency matrix
A two-dimensional matrix, in which the rows represent source vertices and columns represent
destination vertices. Data on edges and vertices must be stored externally. Only the cost for one
edge can be stored between each pair of vertices.
Incidence matrix
A two-dimensional Boolean matrix, in which the rows represent the vertices and the columns
represent the edges. The entries indicate whether the vertex at a row is incident to the edge at a
column.
The following table gives the time complexity cost of performing various operations on graphs,
for each of these representations. In the matrix representations, the entries encode the
cost of following an edge. The cost of edges that are not present is assumed to be infinite.
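As a small illustration of the adjacency-list and adjacency-matrix representations, consider the following sketch. The vertex and edge sets here are invented for the example, not those of the labeled figure above.

```python
# A small undirected graph stored in two representations.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Adjacency list: each vertex maps to the list of its neighbours.
adj_list = {v: [] for v in vertices}
for x, y in edges:
    adj_list[x].append(y)
    adj_list[y].append(x)      # undirected: record both directions

# Adjacency matrix: rows are source vertices, columns are destinations.
n = len(vertices)
adj_matrix = [[0] * n for _ in range(n)]
for x, y in edges:
    adj_matrix[x][y] = 1
    adj_matrix[y][x] = 1
```

The list form makes it cheap to enumerate a vertex's neighbours; the matrix form answers "is (x, y) an edge?" in constant time, at the cost of O(n²) space.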

Graph traversal is the problem of visiting all the nodes in a graph in a particular manner,
updating and/or checking their values along the way. Tree traversal is a special case of graph
traversal.

Graph traversal
Note: If each node in a graph is to be traversed by a tree-based algorithm (such as DFS or BFS),
then the algorithm must be called at least once for each entirely distinct subgraph of the graph.
This is easily accomplished by iterating through all the nodes of the graph, performing the
algorithm on each node that is still unvisited when examined.
Depth-First Search
A depth-first search (DFS) is an algorithm for traversing a finite graph. DFS visits the child
nodes before visiting the sibling nodes; that is, it traverses the depth of any particular path before
exploring its breadth. A stack (oftentimes the program's call stack via recursion) is generally used
when implementing the algorithm.
The algorithm begins with a chosen "root" node; it then iteratively transitions from the current
node to an adjacent, unvisited node, until it can no longer find an unexplored node to transition
to from its current location. The algorithm then backtracks along previously visited nodes, until it
finds a node connected to yet more uncharted territory. It will then proceed down the new path as
it had before, backtracking as it encounters dead-ends, and ending only when the algorithm has
backtracked past the original "root" node from the very first step.
DFS is the basis for many graph-related algorithms, including topological sorts and planarity
testing.
Input: A graph G and a vertex v of G
Output: A labeling of the edges in the connected component of v as discovery edges and back
edges

procedure DFS(G,v):
    label v as explored
    for all edges e in G.adjacentEdges(v) do
        if edge e is unexplored then
            w ← G.adjacentVertex(v,e)
            if vertex w is unexplored then
                label e as a discovery edge
                recursively call DFS(G,w)
            else
                label e as a back edge

Breadth-First Search


A breadth-first search (BFS) is another technique for traversing a finite graph. BFS visits the
sibling nodes before visiting the child nodes. Usually a queue is used in the search process.
Input: A graph G and a root v of G

procedure BFS(G,v):
    create a queue Q
    enqueue v onto Q
    mark v
    while Q is not empty:
        t ← Q.dequeue()
        if t is what we are looking for:
            return t
        for all edges e in G.adjacentEdges(t) do
            o ← G.adjacentVertex(t,e)
            if o is not marked:
                mark o
                enqueue o onto Q
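Both traversals can be sketched in Python over an adjacency-list dictionary. The graph shape and function names are illustrative; BFS follows the queue-based pseudocode above, and DFS uses recursion (the program's call stack standing in for an explicit stack).

```python
from collections import deque

def dfs(graph, v, visited=None):
    """Recursive depth-first search; returns vertices in the order
    they are first visited (children before siblings)."""
    if visited is None:
        visited = []
    visited.append(v)
    for w in graph[v]:
        if w not in visited:
            dfs(graph, w, visited)
    return visited

def bfs(graph, v):
    """Breadth-first search with an explicit queue; returns vertices
    in visiting order (siblings before children)."""
    marked = {v}
    order = []
    q = deque([v])
    while q:
        t = q.popleft()
        order.append(t)
        for o in graph[t]:
            if o not in marked:
                marked.add(o)
                q.append(o)
    return order
```

On the graph {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}, DFS from 0 visits 0, 1, 3, 2 (descending before backtracking), while BFS visits 0, 1, 2, 3 (level by level), which makes the depth-versus-breadth contrast concrete.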

4) Implement a generic skip list with an example.


A skip list is a data structure for storing a sorted list of items using a hierarchy of linked lists that
connect increasingly sparse subsequences of the items. These auxiliary lists allow item lookup
with efficiency comparable to balanced binary search trees (that is, with number of probes
proportional to log n instead of n).

Each link of the sparser lists skips over many items of the full list in one step, hence the
structure's name. These forward links may be added in a randomized way with
a geometric/negative binomial distribution.[3] Insert, search and delete operations are performed
in logarithmic expected time.

Insertion and deletion involve searching the SkipList to find the insertion or deletion point, then
manipulating the references to make the relevant change.
When inserting a new element, we first generate a node that has had its level selected randomly.
The SkipList has a maximum allowed level set at construction time. The number of levels in the
header node is the maximum allowed. For convenience in searching, the SkipList keeps track of
the maximum level actually in the list. There is no need to search levels above this actual
maximum.

findInsertPoint Method

In an ordinary linked list, insertion and deletion require having a pointer to the previous node.
Insertion is done after this previous node, deletion deletes the node following the previous node.
We call the point in the list after the previous node, the insertion point, even if we are doing
deletions. Finding the insertion point is the first step in doing an insertion or a deletion.

In Skip Lists, we need pointers to all the see-able previous nodes between the insertion point and
the header. Imagine standing at the insertion point, looking back toward the header. All the nodes
you can see are the see-able nodes. Some nodes may not be see-able because they are blocked by
higher nodes. Figure 5 shows an example.

Figure 5: Update Node Example

In the figure, the insertion point is between nodes 6 and 7. Looking back toward the header,
the nodes you can see at the various levels are

level node seen

0 6
1 6
2 4
3 header

We construct a backLook node that has its forward pointers set to the relevant see-able nodes.
This is the type of node returned by the findInsertPoint method.
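A minimal generic skip list in Python, following the description above: levels are chosen randomly with a geometric distribution, find_insert_point returns the array of see-able previous nodes (the backLook/update array, one per level), and insertion splices the new node's forward pointers. This is a sketch under those assumptions, covering search and insert only; deletion would use the same find_insert_point and then unsplice the pointers.

```python
import random

class SkipNode:
    def __init__(self, item, level):
        self.item = item
        self.forward = [None] * (level + 1)   # one forward pointer per level

class SkipList:
    """Minimal generic skip list (search and insert)."""

    def __init__(self, max_level=4, p=0.5):
        self.max_level = max_level            # maximum allowed level
        self.p = p
        self.header = SkipNode(None, max_level)
        self.level = 0                        # maximum level actually in use

    def random_level(self):
        # geometric distribution: keep flipping a biased coin
        lvl = 0
        while random.random() < self.p and lvl < self.max_level:
            lvl += 1
        return lvl

    def find_insert_point(self, item):
        """Return the array of see-able previous nodes, one per level,
        looking back from the insertion point toward the header."""
        update = [self.header] * (self.max_level + 1)
        node = self.header
        for lvl in range(self.level, -1, -1):
            while node.forward[lvl] and node.forward[lvl].item < item:
                node = node.forward[lvl]
            update[lvl] = node
        return update

    def insert(self, item):
        update = self.find_insert_point(item)
        lvl = self.random_level()
        if lvl > self.level:
            self.level = lvl                  # track the actual maximum level
        node = SkipNode(item, lvl)
        for i in range(lvl + 1):              # splice in at every level
            node.forward[i] = update[i].forward[i]
            update[i].forward[i] = node

    def search(self, item):
        node = self.header
        for lvl in range(self.level, -1, -1):
            while node.forward[lvl] and node.forward[lvl].item < item:
                node = node.forward[lvl]
        node = node.forward[0]
        return node is not None and node.item == item
```

The list is generic in the sense that any mutually comparable items (ints, strings, tuples) can be stored; search drops a level each time the next forward node would overshoot, which is what gives the expected logarithmic probe count.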