Bachelor of Technology
in
Computer Science and Engineering
Submitted By
Debobrata Podder
(05CS3003)
This is to certify that the thesis entitled “Buffer size estimation and simulation of a congestion free M/M/1 queue of an optical fiber network”, submitted by Debobrata Podder of the Department of Computer Science and Engineering in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology, is a bona fide record of work carried out by him under my supervision and guidance. The thesis has fulfilled all the requirements as per the regulations of this institute. The results embodied in this thesis have not been submitted to any other university or institute for the award of any degree.
ACKNOWLEDGEMENT
Debobrata Podder
DECLARATION
I certify that
• The work contained in this thesis is original and has been done by me under the
guidance of my supervisors.
• The work has not been submitted to any other Institute for any degree or diploma.
• I have followed the guidelines provided by the Institute in preparing the thesis.
• I have conformed to the norms and guidelines given in the Ethical Code of
Conduct of the Institute.
• Whenever I have used materials (data, theoretical analysis, figures, and text) from
other sources, I have given due credit to them by citing them in the text of the thesis
and giving their details in the references.
Debobrata Podder
Department of Computer Science & Engineering
Indian Institute of Technology,
Kharagpur, 721302, India
CONTENTS
Chapter 1 Introduction
Chapter 2 Optical Buffers and Queuing Systems
Chapter 3 Buffer Size Calculation
3.1 Theorem
3.2 Simulation Result
3.3 Xiaohua’s Method
3.4 Comparison of the Two Methods
Chapter 4 Simulation of a Congestion Free M/M/1 Queue System
Chapter 5 Discussion & Reference
5.1 Discussion
5.2 References
LIST OF FIGURES
1. Physical implementation of an optical buffer
2. Packets on the time line with predefined service times
3. Simulation table
4. Buffer size–load intensity plots (simulation and Xiaohua’s method)
5. Simulation framework
6. Thread layers
7. Snapshot of simulation
Chapter 1
Introduction
1.1 Brief Overview
Since the dawn of modern civilization, one of the principal interests of human beings has been to devise communication systems for sending information from one place to another distant place. Modern communication systems have enabled the human race to transfer information even from a place like the icy, frozen Antarctica. The internet, cell phones, fax machines and pagers are a way of life in modern society, and all of these technologies rely on optical fiber communication systems.
One may certainly ask why optical fibers, or how they supersede conventional cables. The answer is speed, accuracy, security and economy in data transfer. The following properties of optical fiber meet all of these requirements.
• Low transmission loss- With ultra-low-loss fibers and erbium-doped silica fibers as optical amplifiers, one can achieve almost lossless transmission.
• Immunity to crosstalk- Since optical fibers are dielectric waveguides, they are free from electromagnetic interference (EMI) and radio frequency interference (RFI).
• Signal security- The signal transmitted through a fiber does not radiate. Unlike with copper cables, a transmitted signal cannot be tapped from a fiber without tampering with it.
• Low cost and availability- Fibers are made of silica, which is available in abundance. Hence there is no shortage of material, and optical fibers offer the potential for low-cost communication.
• Electrical isolation- Optical fibers are made from silica, which is an electrical insulator. Therefore they do not pick up electromagnetic waves or high currents from lightning. They are also suitable for explosive environments.
• Small size and weight- The diameter of a fiber ranges from 10 micrometers to 50 micrometers, which is very small, and the space occupied by a fiber cable is negligibly small compared to conventional electrical cables. Optical fibers are also light in weight. These advantages make them especially effective in aircraft and satellites.
• Ruggedness and flexibility- A fiber cable can easily be bent or twisted without damage. Further, fiber cables are superior to copper cables in terms of handling, installation, storage, transportation, maintenance, strength and durability.
Still, absolute efficiency is only a technical term: congestion still occurs in optical fiber networks. What promotes its occurrence is the inability of the network to accommodate fast-arriving packets in its service queue and to serve each packet within the required interval.
1.2 Thesis Outline
In the first phase of our study we focus on the buffer requirement for the normal flow of data through the network. Two different methods are discussed, and a comparison is drawn between them on the basis of the resulting data sets. Finally, a congestion free M/M/1 queue is simulated in Java using its multithreaded programming model.
Chapter 2
Optical Buffers and Queuing Systems
2.1 Optical Buffers-WDM FDLs
In an optical packet switching network, optical buffers are integral to resolving contention by exploiting the time domain. When two or more packets arriving on different inputs try to leave the switch fabric on the same wavelength at the same output port simultaneously, output contention occurs: only one packet is transmitted while the others are directed to optical buffers. To date, fiber delay lines (FDLs) are commonly used to realize optical buffers by emulating electrical memory, but their realistic buffering capacity is limited to only a few tens of packets. In a synchronous, fixed-packet-length network scenario, the delay unit of the FDLs is naturally designed to equal the length of a packet, and accordingly the FDL buffer is equivalent to a normal queue.
The optical WDM network considered here is a mesh network, where optical links between optical switching nodes carry packet traffic on W different wavelengths, λ1, λ2, . . . , λW.
In order to route each packet from its source to its destination, the task of the optical switching nodes is to switch incoming packets to a specified output link and select a proper time and wavelength on the link to send them. The WDM packet switch comprises four main parts: the scheduler, the wavelength conversion part, the optical switch part, and the optical output buffer part. When a packet arrives, its header is first extracted and analyzed, and the acquired information is used by the scheduler to control the other three parts. Sequentially, the packet is converted to the proper wavelength, switched to the right input of the buffer, and finally delayed for the proper time and put onto the output link.
The buffer is composed of N FDLs, which introduce delays in sequential multiples of D: the first FDL introduces a delay of 0 μs, the second of D μs, the third of 2D μs, and the Nth of DM = (N-1)D μs. Every FDL carries W wavelengths, acting as W servers for incoming packets. From another point of view, the W wavelengths share a unique set of FDLs that realizes the integrated set of buffers, and each FDL collects packets from all inputs, so each packet can be addressed to any FDL. For convenience, D and the packet length are measured in time units (ms or μs); the latter is the number of bits in a packet divided by the link bit rate.
In the WDM buffer, every wavelength has its own queue with N seats, 0, 1, 2, . . . , N-1. If a packet is seated on seat i at time t, it is certain to be served at time t + iD. Since
every wavelength is identical, an arriving packet can be assigned to any one of the W buffers. In favor of system efficiency and packet delay, it can always be assigned to the shortest queue, measured by the time at which the new packet would be served; the completion time of the last packet in every queue must therefore be recorded. Based on this recorded information and the routing information, the scheduler controls the tunable wavelength converters (TWCs) to put the packet onto the proper wavelength in the proper buffer. For example, suppose a new packet arrives at time ta, the completion times T1, T2 and T3 have been recorded, and T2 < T1 < T3. The shortest queue is then on wavelength λ2, so the new packet is inserted into the queue of λ2 with a predefined time tp at which it is going to be served. In this way, packets in the queues are served according to the predefined time sequence.
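The scheduling rule just described — record the completion time of the last packet on each wavelength and assign every arrival to the wavelength whose queue drains earliest — can be sketched in Java. This is my own illustration; the class and method names are not from the thesis.

```java
// Sketch of the shortest-queue wavelength scheduler described above.
// completionTime[w] plays the role of T_w: when queue w becomes empty.
public class ShortestQueueScheduler {
    private final double[] completionTime;

    public ShortestQueueScheduler(int wavelengths) {
        completionTime = new double[wavelengths]; // all queues start empty at time 0
    }

    // Assign a packet arriving at time ta with service time s to the
    // wavelength whose last queued packet finishes earliest; returns the
    // chosen wavelength index and updates that queue's completion time.
    public int schedule(double ta, double s) {
        int best = 0;
        for (int i = 1; i < completionTime.length; i++)
            if (completionTime[i] < completionTime[best]) best = i;
        // The predefined service start tp is the later of the arrival time
        // and the time the chosen queue drains.
        double tp = Math.max(ta, completionTime[best]);
        completionTime[best] = tp + s;
        return best;
    }

    public double getCompletionTime(int w) { return completionTime[w]; }
}
```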
[Figure] Fig-1. Physical implementation of an optical buffer: N FDLs with delays 0×D, 1×D, 2×D, …, (N-1)×D feeding the output link. D is the delay unit; N is the number of FDLs.
Traffic is smoothed by processing at the local access node. In the case of optical packet switching networks, multiple IP packets are encapsulated into a larger optical packet at the access node, so optical packet traffic is smoother than pure IP traffic. And if the optical packet lengths and arrivals are independent of the content of the packets, the arrivals of sequential optical packets are independent. Therefore, we can assume the arrivals form a Poisson process.
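Under this Poisson assumption, arrival instants can be generated by accumulating exponential inter-arrival gaps. A minimal Java sketch (my own illustration, not code from the thesis):

```java
import java.util.Random;

// Poisson arrivals: inter-arrival gaps are exponential with mean 1/lambda,
// so arrival instants are generated by accumulating such gaps.
public class PoissonArrivals {
    public static double[] arrivalTimes(int n, double lambda, long seed) {
        Random rng = new Random(seed);
        double[] t = new double[n];
        double clock = 0.0;
        for (int i = 0; i < n; i++) {
            // Inverse-transform sampling: -ln(1 - U)/lambda is Exp(lambda).
            clock += -Math.log(1.0 - rng.nextDouble()) / lambda;
            t[i] = clock;
        }
        return t;
    }
}
```

With lambda = 1000 packets/sec, the average gap between successive arrivals comes out near 1 ms, matching the rate used later in the thesis simulation.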
[Figure] Fig-2. Packets placed on the time line at their predefined service times; the maximum delay is DM = (N-1)×D.
According to the discussion in the previous section, the W wavelengths in this case act as W servers, and the W queues combine to form a central queue with an FCFS scheduling policy. The packet loss performance depends not on the number of packets in the queue, as in basic queuing theory, but on the whole length (total service time) of the packets in the queue: when the whole length exceeds DM, a newly arriving packet is dropped. Consequently, the number of packets queued is variable and uncertain, which makes the states of the queue intractable in those terms. The model of the WDM FDL buffer is therefore an M/M/W queuing system.
Here W is the number of servers (wavelengths), λ the Poisson arrival rate, μ the service rate per server, and ρ = λ/(Wμ) the load intensity, and:

W_Q : average waiting time of a packet in the queue
N : average number of packets in the system
T : average time a packet spends in the system
N_Q : average number of packets waiting in the queue
P_n : probability of n packets in the system, n = 0, 1, 2, …

Local balance between adjacent states of the birth–death chain gives

\[ P_n = P_{n-1}\,\frac{\lambda}{\min(n,W)\,\mu}. \tag{1} \]

From these probabilities, normalization yields

\[ P_0 = \left[\, \sum_{n=0}^{W-1} \frac{(W\rho)^n}{n!} + \frac{(W\rho)^W}{W!\,(1-\rho)} \right]^{-1}. \tag{2} \]

Equations (1) and (2) together finally give

\[ P_n = \begin{cases} P_0\, \dfrac{(W\rho)^n}{n!}, & n \le W, \\[1.5ex] P_0\, \dfrac{W^W \rho^n}{W!}, & n > W. \end{cases} \]

The probability that an arrival will find all servers busy and will be forced to wait in the queue (the Erlang C formula) is

\[ P_Q = \sum_{n=W}^{\infty} P_n = \frac{P_0\,(W\rho)^W}{W!\,(1-\rho)}. \]

We can now calculate the average number of packets waiting in the queue in steady state:

\[ N_Q = \sum_{n=W}^{\infty} (n-W)\,P_n = P_Q\,\frac{\rho}{1-\rho}. \]

And finally, by Little's theorem, the average waiting time, system time and average number in the system are

\[ W_Q = \frac{N_Q}{\lambda}, \qquad T = W_Q + \frac{1}{\mu}, \qquad N = \lambda T. \]

We will use this result to calculate the buffer size for the chosen WDM FDL optical buffer.
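As a numerical check of the Erlang C waiting probability, the sum of terms for P_0 can be accumulated directly. The following Java sketch is my own rendering of the standard formula (it assumes ρ < 1):

```java
// Erlang C: probability that an arrival to an M/M/W queue finds all
// W servers busy and must wait. a = lambda/mu is the offered load W*rho.
public class ErlangC {
    public static double waitProbability(double lambda, double mu, int w) {
        double a = lambda / mu;          // offered load W*rho
        double rho = a / w;              // utilisation per server (must be < 1)
        double term = 1.0, sum = 1.0;    // n = 0 term of the P0 sum
        for (int n = 1; n < w; n++) {
            term *= a / n;               // a^n / n!
            sum += term;
        }
        double last = term * a / w;      // a^W / W!
        double p0 = 1.0 / (sum + last / (1.0 - rho));
        return last * p0 / (1.0 - rho);  // Erlang C probability
    }
}
```

For W = 1 this reduces to ρ (the M/M/1 probability that an arrival must wait), and for W = 2, a = 1 it gives 1/3, both standard sanity checks.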
Chapter 3
Buffer Size Calculation
3.1 Theorem
The WDM FDL buffer with infinite capacity can be modeled as a single-queue, multi-server system (SQ-MS). If the former always schedules packets to the shortest queue, the corresponding queuing discipline in the latter is FCFS. This differs from a system composed of W independent single-queue, single-server systems (SQ-SS).
The FDL buffer arranges in advance the times at which packets are to be served, while the scheduling times of packets in the SQ-MS case are determined at the moment a server becomes idle, the first-arriving packet being served at once. The sum of the queue lengths over all the queues in the former case equals the length of the single queue in the latter case, and since the servers are all identical, the length of each wavelength's queue is 1/W of the single queue's length. For example, assume Poisson arrivals, exponential packet lengths, and an infinite queue, i.e. B = ∞. When D is infinitesimal, the distribution of the service time is exponential, and thus the SQ-MS becomes the M/M/W model. With λ the average packet arrival rate, 1/μ the average service time, and ρ = λ/(Wμ) the load intensity per queue, the average queue size Q_L is:

\[ Q_L = \frac{N_Q}{W} = \frac{P_Q\,\rho}{W\,(1-\rho)}, \]

where P_Q is the Erlang C probability that an arrival must wait.
The above buffer size calculation procedure has been simulated by a program written in C under Linux (Ubuntu) on an Intel x86 machine. Buffer sizes have been calculated for different combinations of arrival and service rates.
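Although the thesis program was written in C, the calculation itself is compact enough to sketch here in Java. The formula Q_L = P_Q·ρ/(W(1−ρ)) and the rounding up to a whole number of FDL seats are my reading of the method above, not the thesis program itself:

```java
// Buffer-size estimate for the M/M/W model of the WDM FDL buffer.
public class BufferSize {
    // Erlang C: probability an arrival must wait; a = lambda/mu (assumes a/w < 1).
    static double erlangC(double a, int w) {
        double rho = a / w, term = 1.0, sum = 1.0;
        for (int n = 1; n < w; n++) { term *= a / n; sum += term; }
        double last = term * a / w;
        double p0 = 1.0 / (sum + last / (1.0 - rho));
        return last * p0 / (1.0 - rho);
    }

    // Average queue size per wavelength, rounded up to whole FDL seats.
    public static int estimate(double lambda, double mu, int w) {
        double rho = lambda / (w * mu);
        double ql = erlangC(lambda / mu, w) * rho / (w * (1.0 - rho));
        return (int) Math.ceil(ql);
    }
}
```

For W = 1 and ρ = 0.9 this yields an average queue of 8.1 packets, i.e. a 9-seat buffer, illustrating how quickly the requirement grows as ρ approaches 1.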
3.2 Simulation Result
Simulation result table:
3.3 Xiaohua’s Method
When D is finite, a packet may have to wait up to one extra delay unit before its FDL slot begins, so it is equivalent to increasing the service time of the new packet. Under the assumption that the arrivals and the lengths of sequential packets are each statistically independent, independent of each other, and independent of the state of the queue, this extra delay is uniformly distributed between 0 and D, and its average is thus D/2.
Previously established (the Erlang C waiting probability):

\[ P_Q = \frac{P_0\,(W\rho)^W}{W!\,(1-\rho)}. \tag{2} \]

Considering no packet loss, and letting P_Q be the probability that a packet is queued and s the average real length (service time) of a packet, the average equivalent length (service time) of a packet is

\[ s_{eq} = s + P_Q\,\frac{D}{2}. \tag{3} \]

Consequently, the equivalent load intensity (the equivalent load on each wavelength channel) on the queue is

\[ \rho_{eq} = \frac{\lambda\, s_{eq}}{W}. \tag{4} \]

Putting together (2)–(4), the value of ρ_eq can be obtained by iteration, starting with ρ_eq = ρ. By substituting ρ_eq for ρ in (1) and (2), we get the average queue size Q_L in the case that D has a finite value. In this case, Q_L is related to ρ_eq, λ and μ, not only to ρ_eq as in the case before.
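The iteration can be sketched in Java as follows. The stopping tolerance and iteration cap are my choices, and the Erlang C helper assumes the load stays below 1 throughout:

```java
// Fixed-point iteration for the equivalent load rho_eq:
// start from rho = lambda*s/W and repeat
//   s_eq  = s + P_Q(rho_eq) * D/2
//   rho_eq = lambda * s_eq / W
// until the value stabilises.
public class EquivalentLoad {
    static double erlangC(double a, int w) {
        double rho = a / w, term = 1.0, sum = 1.0;
        for (int n = 1; n < w; n++) { term *= a / n; sum += term; }
        double last = term * a / w;
        double p0 = 1.0 / (sum + last / (1.0 - rho));
        return last * p0 / (1.0 - rho);
    }

    public static double rhoEq(double lambda, double s, double d, int w) {
        double rhoEq = lambda * s / w;          // initial guess: rho
        for (int iter = 0; iter < 1000; iter++) {
            double pq = erlangC(rhoEq * w, w);  // wait probability at current load
            double sEq = s + pq * d / 2.0;      // equivalent service time
            double next = lambda * sEq / w;
            if (Math.abs(next - rhoEq) < 1e-12) return next;
            rhoEq = next;
        }
        return rhoEq;
    }
}
```

For W = 1 the update is linear (P_Q = ρ), so the fixed point can be solved in closed form and used to check the code: with λ = 0.5, s = 1 and D = 0.2 it is 0.5/0.95.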
3.4 Comparison of the Two Methods
[Figure] Fig-4(a) Buffer size vs. load intensity (simulation). Fig-4(b) Buffer size vs. load intensity (Xiaohua’s method).
Both graphs have a similar slope, and in both cases the buffer size is ≥ 1 when the load intensity ρ ≥ 0.94, which strengthens our assumption that taking the delay as infinitesimal does not affect the buffer size much.
Chapter 4
Simulation of a Congestion Free M/M/1 Queue System
Congestion in a network has the following effects:
1. Queue overflow at switching nodes.
2. Performance degradation.
3. Multiple packet loss.
4. Low link utilization (low throughput).
5. High queuing delay.
6. Congestive collapse: a situation in which congestion becomes so great that throughput drops to a low level and little useful communication occurs. It can be a stable state at the same intrinsic load level that would, by itself, not produce congestion.
The prevention of network congestion and collapse requires two major components:
1. A mechanism in routers to reorder or drop packets under overload;
2. End-to-end flow control mechanisms, designed into the end points, that respond to congestion and behave appropriately. The correct end-point behavior is usually to retransmit dropped information, but to progressively slow the rate at which information is retransmitted. Provided all end points do this, the congestion lifts, the network is used efficiently, and the end points all get a fair share of the available bandwidth. Other strategies, such as slow start, ensure that new connections do not overwhelm the router before congestion detection can kick in. The most common router mechanisms used to prevent congestive collapse are fair queuing and other scheduling algorithms, and random early detection (RED), in which packets are dropped randomly and proactively, triggering the end points to slow transmission before congestion collapse actually occurs. Fair queuing is most useful in routers at choke points with a small number of connections passing through them; larger routers must rely on RED. Some end-to-end protocols behave better under congested conditions than others; TCP is perhaps the best behaved, which is why we have used TCP in our simulation.
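The RED idea mentioned above can be illustrated with a short sketch. The thresholds and maximum drop probability below are illustrative parameters of my choosing, not values from the thesis:

```java
import java.util.Random;

// RED sketch: the drop probability rises linearly from 0 to maxP as the
// average queue length moves between minTh and maxTh; beyond maxTh every
// packet is dropped.
public class RedDropper {
    private final double minTh, maxTh, maxP;
    private final Random rng;

    public RedDropper(double minTh, double maxTh, double maxP, long seed) {
        this.minTh = minTh; this.maxTh = maxTh; this.maxP = maxP;
        this.rng = new Random(seed);
    }

    public double dropProbability(double avgQueue) {
        if (avgQueue < minTh) return 0.0;          // below minTh: never drop
        if (avgQueue >= maxTh) return 1.0;         // above maxTh: always drop
        return maxP * (avgQueue - minTh) / (maxTh - minTh);
    }

    public boolean shouldDrop(double avgQueue) {
        return rng.nextDouble() < dropProbability(avgQueue);
    }
}
```

The early, probabilistic drops are what signal TCP senders to slow down before the queue actually overflows.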
Our aim is to simulate a congestion free M/M/1 queue of an optical fiber network. A network node in a Poisson packet distribution environment using TCP to transfer packets can adopt this mechanism to avoid congestion in the network. For this purpose the network nodes must have a buffer whose size is calculated by the method discussed previously.
4.3 Simulation
[Figure] Fig-5. Simulation framework: a TCP sender transmits TCP packets to the TCP packet receiver, which distributes them to Client 1 and Client 2.
4.3.2 Simulation Components
Clients:-
Each client acts as both a TCP server and a TCP client, and functions like the TCP packet receiver.
The simulation has been done in Java, using its multithreaded programming model, in the Eclipse software development kit.
4.3.5 Multithread programming
Multithreading as a widespread programming and execution model allows multiple threads
to exist within the context of a single process. These threads share the process' resources but
are able to execute independently. The threaded programming model provides developers
with a useful abstraction of concurrent execution. However, perhaps the most interesting
application of the technology is when it is applied to a single process to enable parallel
execution on a multiprocessor system.
In our simulation there are three threads which run simultaneously: the TCP packet receiver thread and its two client threads.
4.4 Simulation Details
4.4.1 Threads
In our simulation we have used three layers of threads. The top-level thread is the main thread, i.e. the main program. It spawns a second layer of threads consisting of the TCP packet receiver and its packet distributing unit. The packet distributing unit spawns a third layer of two TCP client threads, which receive the packets it distributes.
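A condensed Java sketch of this thread structure follows. The names are mine, the distributor work is done here by the calling thread, and bounded `BlockingQueue`s stand in for the receiver's buffers:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Condensed sketch of the thread layers: the caller plays the role of the
// distributor, handing packets alternately to two client threads through
// bounded queues (buffer size 10).
public class ThreadLayers {
    // Distributes `packets` packets to the two clients and returns how
    // many each received.
    public static int[] run(int packets) {
        BlockingQueue<Integer> q1 = new ArrayBlockingQueue<>(10);
        BlockingQueue<Integer> q2 = new ArrayBlockingQueue<>(10);
        int[] counts = new int[2];
        Thread c1 = new Thread(() -> counts[0] = drain(q1));
        Thread c2 = new Thread(() -> counts[1] = drain(q2));
        c1.start(); c2.start();
        try {
            for (int p = 0; p < packets; p++)
                (p % 2 == 0 ? q1 : q2).put(p);  // blocks when a buffer is full
            q1.put(-1); q2.put(-1);             // end-of-stream markers
            c1.join(); c2.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return counts;
    }

    // Client thread body: count packets until the end-of-stream marker.
    static int drain(BlockingQueue<Integer> q) {
        int count = 0;
        try {
            for (int p = q.take(); p != -1; p = q.take()) count++;
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return count;
    }
}
```

The bounded `put` blocks when a client's buffer is full, which is exactly the back-pressure a congestion free queue relies on.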
[Figure] Fig-6. Thread layers: the main thread spawns the receiver/distributor, which spawns Client 1 and Client 2.
4.4.3 Achieving Packet Arrival & Packet service rate
To achieve the desired packet arrival and service rates for the Poisson packet distribution process, we used the System.nanoTime() method defined in Java, which returns the current value of the most precise available system timer, in nanoseconds. An arrival rate of 1000 packets/sec and a service rate of 1020 packets/sec were taken.
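Pacing emissions with System.nanoTime() can be sketched as follows. This is my own illustration of the technique (Thread.onSpinWait() requires Java 9+), not the thesis code:

```java
// Rate pacing with System.nanoTime(): compute each packet's due time in
// nanoseconds relative to the start and busy-wait until the clock reaches it.
public class RatePacer {
    // Emits `n` ticks at `ratePerSec`; returns the elapsed time in seconds.
    public static double pace(int n, double ratePerSec) {
        long start = System.nanoTime();
        double gapNanos = 1e9 / ratePerSec;
        for (int i = 1; i <= n; i++) {
            long due = start + (long) (i * gapNanos);
            while (System.nanoTime() < due) {
                Thread.onSpinWait(); // busy-wait until the i-th tick is due
            }
            // ... emit packet i here ...
        }
        return (System.nanoTime() - start) / 1e9;
    }
}
```

Computing due times from the fixed start (rather than from the previous tick) prevents timing error from accumulating across packets.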
2. Class Check
Checks all types of conditions.
4.4.6 Snapshot of the Simulation
[Figure] Fig-7. Snapshot of the simulation.
4.5. Analysis of the Simulation
The M/M/1 queue of the TCP packet receiver used in our simulation has been analyzed with the help of JMT (Java Modelling Tools). It shows how the queue grows for a given arrival rate and service time (i.e. service rate).
The Java Modelling Tools (JMT) is a free, open source suite consisting of six tools for performance evaluation, capacity planning, workload characterization, and modelling of computer and communication systems. The suite implements several state-of-the-art algorithms for the exact, asymptotic and simulative analysis of queueing network models, either with or without product-form solution. Models can be described either through wizard dialogs or with a user-friendly graphical interface. The workload analysis tool is based on clustering techniques. The suite incorporates an XML data layer that enables full reusability of the computational engines. The JMT suite is composed of the following tools.
4.5.2 JMVA
4.5.3 JMCH
It applies a simulation technique to solve a single-station model with a finite (M/M/1/k) or infinite (M/M/1) queue, and shows the underlying Markov chain. The arrival rate and service time of the system can be changed dynamically.
4.5.4 Analysis
Arrival rate = 98 customers/sec and service time = 0.01 sec.
Chapter 5
Discussion & Reference
5.1 Discussion
Our simulation was quite consistent with the simulation of the M/M/1 queue and station done in JMT.
In the JMT simulation, the maximum number of customers at the station was approximately 10 after processing 836527 customers, which is consistent with the growth of an M/M/1 queue. In our simulation the buffer size of the TCP packet receiver was 10, and no congestion was detected after processing 1115371 packets.
As the graph makes clear, the number of customers tends to stabilize as time grows, so there was no possibility of future congestion.
5.2 References
[7] P. W. Glynn and D. L. Iglehart, “Simulation Methods for Queues: An Overview”.
[12] Eclipse Helios documentation, http://help.eclipse.org/helios/index.jsp