
Buffer Size Estimation and Simulation of a Congestion-Free
M/M/1 Queue of an FDL Optical Fiber Network

Thesis submitted in Partial Fulfillment of


the requirements for the Degree of

Bachelor of Technology
in
Computer Science and Engineering
Submitted By
Debobrata Podder
(05CS3003)

Under the guidance of


Prof. Indranil Sengupta
Department of Computer Science and Engineering
Indian Institute of Technology
Kharagpur-721302
India
CERTIFICATE

This is to certify that the thesis entitled “Buffer Size Estimation and Simulation of a
Congestion-Free M/M/1 Queue of an Optical Fiber Network”, submitted by Debobrata Podder
of the Department of Computer Science and Engineering in partial fulfillment of the
requirements for the award of the degree of Bachelor of Technology, is a bona fide record of
work carried out by him under my supervision and guidance. The thesis has fulfilled all the
requirements as per the regulations of this institute. The results embodied in this thesis have
not been submitted to any other university or institute for the award of any degree.

Dr. Indranil Sengupta


Department of Computer Science & Engineering
Indian Institute of Technology
Kharagpur 721302, India

ACKNOWLEDGEMENT

I thank my supervisor, Prof. Indranil Sengupta, Department of Computer Science and
Engineering, Indian Institute of Technology, Kharagpur, for providing invaluable insights,
timely encouragement, guidance, and support during my project work.
I thank all the people of the Department of Computer Science & Engineering for creating a
healthy working atmosphere and for their support.
I thank all my friends for being a part of my stay here.
I want to express my deepest gratitude to my parents, for their love and support
throughout my life.

Debobrata Podder

DECLARATION

I certify that
• The work contained in this thesis is original and has been done by me under the
guidance of my supervisor.
• The work has not been submitted to any other Institute for any degree or diploma.
• I have followed the guidelines provided by the Institute in preparing the thesis.
• I have conformed to the norms and guidelines given in the Ethical Code of
Conduct of the Institute.
• Whenever I have used materials (data, theoretical analysis, figures, and text) from
other sources, I have given due credit to them by citing them in the text of the thesis
and giving their details in the references.

Debobrata Podder
Department of Computer Science & Engineering
Indian Institute of Technology
Kharagpur 721302, India

CONTENTS

Chapter 1 Introduction
    1.1 Brief Overview
    1.2 Thesis Outline

Chapter 2 Optical Buffers and Queuing Systems
    2.1 Optical Buffers – WDM FDLs
    2.2 Buffer Size Estimation
    2.3 The M/M/1 Queue
    2.4 M/M/m from M/M/1

Chapter 3 Buffer Size Calculation
    3.1 Theorem
    3.2 Simulation Result
    3.3 Xiaohua’s Method
    3.4 Comparison of the Two Methods

Chapter 4 Simulation of a Congestion-Free M/M/1 Queue System
    4.1 Congestion in Optical Fiber Networks
    4.2 Traditional Congestion Avoidance Mechanisms
    4.3 Simulation
    4.4 Simulation Details
    4.5 Analysis of the Simulation

Chapter 5 Discussion & References
    5.1 Discussion
    5.2 References

LIST OF FIGURES

1. Physical implementation of an optical buffer

2. Poisson traffic of a WDM optical buffer

3. Simulation table

4. 4(a) Buffer size vs. load intensity plot (simulation)
   4(b) Buffer size vs. load intensity plot (Xiaohua’s method)

5. Simulation framework

6. Thread layers

7. Snapshot of the simulation

8. Growth of the M/M/1 queue

9. Growth of the number of packets in the TCP packet receiver buffer

Chapter 1
Introduction
1.1 Brief Overview
Since the dawn of modern civilization, one of the principal interests of human beings
has been to devise communication systems for sending information from one place to
another, distant place. Modern communication systems have enabled the human race to
transfer information even from places as remote as the icy, frozen Antarctic. The
internet, cell phones, fax machines, and pagers are a way of life in modern society, and all
of these technologies rely on optical fiber communication systems.
One may ask why optical fiber, and how it supersedes conventional cable. The answer is
speed, accuracy, security, and economy in data transfer. The following properties of
optical fiber meet all of these requirements.

• Enormous bandwidth – The information-carrying capacity of a transmission system is
directly proportional to the carrier frequency of the transmitted signals. The optical
carrier frequency is in the range of 10^14 Hz.

• Low transmission loss – Thanks to ultra-low-loss fibers and erbium-doped silica fibers
used as optical amplifiers, one can achieve almost lossless transmission.

• Immunity to crosstalk – Since optical fibers are dielectric waveguides, they are free
from any electromagnetic interference (EMI) and radio frequency interference (RFI).

• Signal security – The signal transmitted through the fiber does not radiate. Unlike in
copper cables, a transmitted signal cannot be tapped from a fiber without tampering
with it.

• Low cost and availability – Fibers are made of silica, which is available in abundance,
so there is no shortage of material, and optical fibers offer the potential for low-cost
communication.

• Electrical isolation – Optical fibers are made from silica, which is an electrical
insulator. Therefore they do not pick up electromagnetic waves or high-current
lightning. They are also suitable for explosive environments.
• Small size and weight – Fiber diameters range from 10 to 50 micrometers, which is
extremely small. The space occupied by a fiber cable is negligible compared to
conventional electrical cables, and optical fibers are light in weight. These advantages
make them especially effective in aircraft and satellites.

• Ruggedness and flexibility – An optical fiber cable can be easily bent or twisted
without damage. Further, fiber cables are superior to copper cables in terms of
handling, installation, storage, transportation, maintenance, strength, and durability.

Still, absolute efficiency is only a technical term: congestion still occurs in optical fiber
networks. What promotes its occurrence is the network’s inability to accommodate
fast-arriving packets in its service queue and its inability to serve each packet within the
required interval.
1.2 Thesis Outline
In the first phase of our study we focus on the buffer size required for the normal flow of
data through the network. Two different methods are discussed, and a comparison is
drawn between them on the basis of the resulting data sets. Finally, a congestion-free
M/M/1 queue is simulated in Java using its multithreaded programming model.

Chapter 2
Optical Buffers and Queuing Systems
2.1 Optical Buffers-WDM FDLs

In an optical packet switching network, optical buffers are an integral part of resolving
contention by exploiting the time domain. When two or more packets arriving on different
inputs try to leave the switch fabric on the same wavelength at the same output port
simultaneously, output contention occurs. Only one packet is transmitted, while the others
are diverted to optical buffers. To date, fiber delay lines (FDLs) are commonly used to
realize optical buffers by emulating electrical memory, but their realistic buffering capacity
is limited to only a few tens of packets. In a synchronous, fixed-packet-length network
scenario, the delay unit of the FDLs is naturally designed to equal the length of a packet,
and accordingly the FDL buffer behaves as a normal queue.

The optical WDM network considered here is a mesh network, in which optical links between
optical switching nodes carry packet traffic on W different wavelengths, λ1, λ2, . . . , λW.
To route each packet from its source to its destination, the task of the optical switching
nodes is to switch incoming packets to a specified output link and to select a proper time and
wavelength on that link for sending them. The WDM packet switch comprises four main parts:
the scheduler, the wavelength conversion part, the optical switch part, and the optical output
buffer part. When a packet arrives, its header is first extracted and analyzed; the acquired
information is then used by the scheduler to control the other three parts. Sequentially, the
packet is converted to the proper wavelength, switched to the right input of the buffer, and
finally delayed for the proper time and put onto the output link.

The buffer is composed of N FDLs, which introduce delays that are sequential multiples of
D: the first FDL introduces a delay of 0 μs, the second of D μs, the third of 2D μs, and the
Nth of DM = (N−1)D μs. Every FDL carries W wavelengths, acting as W servers for incoming
packets. From another point of view, the W wavelengths share a single set of FDLs that
realizes the integrated set of buffers; each FDL collects packets from all inputs, so each
packet can be addressed to any FDL. For convenience, D and the packet length are both
measured in time units (ms or μs); the latter is the number of bits in a packet divided by
the link bit rate.

In the WDM buffer, every wavelength has its own queue with N seats, 0, 1, 2, . . . , N − 1.
If a packet is seated at seat i at time t, it is certain to be served at time t + iD. Since
every wavelength is identical, an arriving packet can be assigned to any one of the W
buffers. In favor of system efficiency and packet delay time, it can always be assigned to
the shortest queue in terms of the time at which a packet joining the queue will be served,
so the completion time of the last packet in every queue must be recorded.
Based on this recorded information and the routing information, the scheduler controls the
tunable wavelength converters (TWCs) to put the packet onto the proper wavelength in the
proper buffer. For example, suppose a new packet arrives at time ta, the completion times
T1, T2, T3 have been recorded, and T2 < T1 < T3, so the shortest queue is on wavelength
λ2. The new packet is then inserted into the queue of λ2 with a predefined time tp at which
it is going to be served. In this way, packets in the queues are served according to the
predefined time sequence.

Fig. 1. Physical implementation of an optical buffer: N FDLs with delays 0×D, 1×D, 2×D,
3×D, . . . , (N−1)×D feeding a common output link. D is the delay unit; N is the number of FDLs.

2.2 Buffer size estimation

Traffic is smoothed through processing by the local access node. In the case of optical packet
switching networks, multiple IP packets are encapsulated into a larger optical packet at the
access node, so optical packet traffic is smoother than pure IP traffic. If the optical packet
lengths and arrivals are independent of the content of the packets, the arrivals of sequential
optical packets are independent. Therefore, we can assume the arrival process is Poisson.

Fig. 2. Poisson traffic of a WDM optical buffer: arrivals on a time line, each assigned a
predefined departure time within the window DM = (N−1)×D.
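The Poisson assumption above is easy to reproduce in a simulation: inter-arrival gaps of a Poisson process with rate λ are exponentially distributed with mean 1/λ. A minimal sketch in Java, the language used for the simulation in Chapter 4 (class and method names are illustrative, not from the thesis code):

```java
import java.util.Random;

public class PoissonArrivals {
    // Returns an exponential variate with rate lambda (mean 1/lambda),
    // i.e. one inter-arrival gap of a Poisson arrival process.
    static double nextInterArrival(Random rng, double lambda) {
        return -Math.log(1.0 - rng.nextDouble()) / lambda;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double lambda = 1000.0;          // packets per second
        double total = 0.0;
        int n = 100_000;
        for (int i = 0; i < n; i++) total += nextInterArrival(rng, lambda);
        // The mean gap should be close to 1/lambda = 1 ms.
        System.out.printf("mean inter-arrival gap = %.6f s%n", total / n);
    }
}
```

Summing the gaps gives the arrival instants; the number of arrivals falling in any fixed window is then Poisson distributed.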

According to the discussion in the previous section, the W wavelengths in this case are W
servers, and the W queues combine to form a central queue with an FCFS scheduling policy.
The packet loss performance depends not on the number of packets in the queue, as in basic
queuing theory, but on the total length/service time of the packets in the queue: when the
total length exceeds DM, a newly arriving packet is dropped. Consequently, the number of
packets queued is variable and uncertain, which makes the states of the queue intractable.
Therefore, the WDM FDL buffer is modeled as an M/M/W queuing system.

2.3 The M/M/1 Queue


The M/M/1 queuing system consists of a single queuing station with a single server. Packets
arrive according to a Poisson process with rate λ, and the probability distribution of the
service time is exponential with mean 1/μ seconds.
Via Little’s theorem it has been established that N = λT and NQ = λW, where

W: average waiting time of a packet in the queue
N: average number of packets in the system
T: average time a packet spends in the system
NQ: average number of packets waiting in the queue
pn: probability of n packets in the system, n = 0, 1, 2, . . .

From the balance equations of the Markov chain,

    p(n+1) = ρ·pn,  n = 0, 1, . . . , where ρ = λ/μ,

and it follows that

    p(n+1) = ρ^(n+1)·p0,  n = 0, 1, 2, . . .                (1)

If ρ < 1, the probabilities pn are all positive and add up to unity, so

    p0 = 1 − ρ.                                             (2)

Equations (1) and (2) together finally give

    pn = (1 − ρ)·ρ^n.

We can now calculate the average number of packets in the system in steady state:

    N = Σ n·pn = ρ/(1 − ρ) = λ/(μ − λ),

and finally, by Little’s theorem, the average queue length is

    NQ = ρ²/(1 − ρ).
2.4 M/M/m from M/M/1

The M/M/m system is identical to the M/M/1 queuing system except that there are m
servers (or channels of a transmission line, in a data communication context). A packet at
the head of the queue is routed to any server that is available. The state transition
diagram is as follows.

Fig. 3. Discrete-time Markov chain for the M/M/m system.

By writing down the equilibrium equations for the steady-state probabilities pn and taking
ρ = λ/(mμ), we obtain

    pn = p0 · (mρ)^n / n!,        n ≤ m,
    pn = p0 · m^m · ρ^n / m!,     n > m,

where ρ is given by

    ρ = λ/(mμ) < 1.

From the normalization condition Σ pn = 1 we obtain

    p0 = [ Σ_{k=0}^{m−1} (mρ)^k / k!  +  (mρ)^m / (m!(1 − ρ)) ]^(−1).

The probability that an arrival will find all servers busy and will be forced to wait in the
queue (the Erlang C formula) is

    PQ = p0 · (mρ)^m / (m!(1 − ρ)),

and finally, the expected number of packets waiting in the queue is given by

    NQ = PQ · ρ / (1 − ρ).

We will use this result to calculate the buffer size for the chosen WDM FDL optical buffer.
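The M/M/m formulas above reduce to a short numerical routine. The sketch below (illustrative names) computes the Erlang C probability PQ and the expected queue length NQ; for m = 1 it reproduces the M/M/1 results of Section 2.3, which is a useful sanity check:

```java
public class Mmm {
    // Erlang C: probability an arrival must wait, for m servers and
    // per-server load rho = lambda / (m * mu), with rho < 1.
    static double erlangC(int m, double rho) {
        double a = m * rho;            // offered load, m*rho
        double term = 1.0, sum = 1.0;  // running a^k/k!, starting at k = 0
        for (int k = 1; k < m; k++) {
            term *= a / k;
            sum += term;
        }
        double tail = term * a / m / (1.0 - rho); // a^m / (m! (1 - rho))
        return tail / (sum + tail);               // = p0 * tail
    }

    // Expected number of packets waiting: N_Q = P_Q * rho / (1 - rho).
    static double meanQueueLength(int m, double rho) {
        return erlangC(m, rho) * rho / (1.0 - rho);
    }

    public static void main(String[] args) {
        System.out.println("P_Q(m=1, rho=0.5)  = " + erlangC(1, 0.5));
        System.out.println("N_Q(m=4, rho=0.95) = " + meanQueueLength(4, 0.95));
    }
}
```

For m = 1 the routine gives PQ = ρ, the probability that the single server is busy, exactly as expected.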

Chapter 3
Buffer Size Calculation

3.1 Theorem

The WDM FDL buffer with infinite capacity can be modeled by a single-queue, multi-server
system (SQ-MS). If the former always schedules packets to the shortest queue, the
corresponding queuing discipline in the latter is FCFS. This is different from a system
composed of W independent single-queue, single-server systems (SQ-SS).

The FDL buffer arranges in advance the times at which packets are to be served, while in
the SQ-MS case the scheduling time of a packet is determined at the moment a server
becomes idle, and the first-arriving packet is served at once. The sum of the queue lengths
over all the queues in the former case is equivalent to the length of the single queue in the
latter case; since the servers are all identical, the length of each queue in the former case
is 1/W of that in the latter case. For example, assume Poisson arrivals and exponential
packet lengths, and let the system have an infinite queue, i.e. B = ∞. When D is
infinitesimal, the distribution of the service time is exponential, and the SQ-MS becomes an
M/M/W model. With λ the average packet arrival rate, 1/μ the average service time, and
ρ = λ/(Wμ) the load intensity per queue, the average queue size QL is

    QL = NQ / W = PQ · ρ / (W(1 − ρ)),

where PQ is the Erlang C probability derived in Section 2.4.

3.2 Simulation result

The above buffer size calculation procedure has been simulated by a program written in C,
in a Linux (Ubuntu) environment on an Intel x86 machine. Buffer sizes have been calculated
for different combinations of arrival and service rates.

Simulation result table:

3.3 Xiaohua’s Method

In the above procedure for calculating the buffer size, the delay unit was taken as
infinitesimal. Xiaohua Ma at Alcatel Shanghai Bell, China, has calculated the buffer size
for a finite, fixed delay unit.
He argues that if the delay unit D is nonzero, the FDL buffer has finite resolution and thus
the delay time is discrete. For example, in the figure below, the period of time Δ is wasted,

    Δ = ⌈(tp − ta)/D⌉ · D − (tp − ta) ≥ 0,

during which the server is idle and the new packet cannot be served.

It is therefore equivalent to increasing the service time of the new packet by Δ. Under the
assumption that the arrivals and the lengths of sequential packets are statistically
independent, independent of each other, and independent of the state of the queue, the
distribution of Δ is uniform between 0 and D, and thus its average is D/2.

Previously established:

Considering no packet loss, and letting Pqueue be the probability that a packet is queued
and s be the average real length/service time of a packet, we get the average equivalent
length/service time of a packet:

    seq = s + Pqueue · D/2.

Consequently, the equivalent load intensity (the equivalent load on each wavelength
channel) of the queue is

    ρeq = λ · seq / W.

Putting together (2)–(4), the value of ρeq can be obtained by iteration, starting with
ρeq = ρ. By substituting ρ in (1) and (2) with ρeq, we get the average queue size QL for
the case where D has a finite value. In this case QL depends on ρeq, λ, and μ, not only on
ρeq as before.

The procedure is clearly iterative.
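For the single-wavelength case (W = 1), where the probability that an arrival is queued is simply ρeq, the iteration can be sketched as follows. This is a hedged illustration, not Xiaohua Ma's exact code; it assumes the equivalent service time seq = s + Pqueue·D/2 derived above, and the rate values are illustrative:

```java
public class EquivalentLoad {
    // Fixed-point iteration for rho_eq with W = 1: each pass sets
    // s_eq = s + rho_eq * D/2 (the average wasted time is D/2) and
    // recomputes rho_eq = lambda * s_eq, starting from rho_eq = rho.
    static double iterate(double lambda, double s, double d) {
        double rhoEq = lambda * s;                       // rho_eq = rho initially
        for (int i = 0; i < 1000; i++) {
            double next = lambda * (s + rhoEq * d / 2.0);
            if (Math.abs(next - rhoEq) < 1e-12) return next;  // converged
            rhoEq = next;
        }
        return rhoEq;
    }

    public static void main(String[] args) {
        // lambda = 800 pkt/s, mean service time s = 1 ms, delay unit D = 0.2 ms
        System.out.println("rho_eq = " + iterate(800.0, 0.001, 0.0002));
    }
}
```

In this W = 1 case the fixed point has the closed form ρeq = λs / (1 − λD/2), which the iteration approaches geometrically; substituting ρeq into the queue-length formula then yields QL for finite D.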


3.4 Comparison of the two methods
The comparison of the two methods is based on the buffer size vs. load intensity plots for
both methods.

Fig. 4(a) Buffer size vs. load intensity plot (simulation). Fig. 4(b) Buffer size vs. load intensity plot (Xiaohua’s method).

Both graphs have similar slopes, and in both cases the buffer size is ≥ 1 when the load
intensity ρ ≥ 0.94, which strengthens our assumption that taking the delay as infinitesimal
does not affect the buffer size much.

Chapter 4
Simulation of a Congestion Free M/M/1 Queue System

4.1 Congestion in optical fiber network

Network congestion is somewhat analogous to road congestion. It is a situation in which an
increase in data transmissions results in a proportionately smaller increase, or even a
reduction, in throughput. Throughput is the amount of data that passes through the network
per unit of time, such as the number of packets per second. Packets are the fundamental unit
of data transmission on the Internet and all other TCP/IP (Transmission Control
Protocol/Internet Protocol) networks, including most LANs (local area networks).
Congestion results from applications sending more data than the network devices (e.g.,
routers and switches) can accommodate, causing the buffers on such devices to fill up
and possibly overflow.
When more packets are sent than the intermediate routers can handle, the routers discard
many packets, expecting the end points of the network to retransmit the information.
However, early TCP implementations had very bad retransmission behavior: when packet loss
occurred, the end points sent extra packets repeating the lost information, doubling the
sending rate, exactly the opposite of what should be done during congestion. This pushed
entire networks into ‘congestion collapse’, where most packets were lost and the resulting
throughput was negligible.

Effects:
1. Queue overflow at switching nodes.
2. Performance degradation.
3. Multiple packet losses.
4. Low link utilization (low throughput).
5. High queuing delay.
6. Congestive collapse: a situation in which congestion becomes so great that throughput
drops to a low level and thus little useful communication occurs. It can be a stable state
even at an intrinsic load level that would not by itself produce congestion.

4.2 Traditional Congestion Avoidance Mechanism

The prevention of network congestion and collapse requires two major components:

1. A mechanism in routers to reorder or drop packets under overload.
2. End-to-end flow control mechanisms designed into the end points, which respond to
congestion and behave appropriately. The correct end-point behavior is usually still to
repeat dropped information, but to progressively slow the rate at which it is repeated.
Provided all end points do this, the congestion lifts, the network is put to good use, and
the end points all get a fair share of the available bandwidth.

Other strategies, such as slow start, ensure that new connections do not overwhelm a router
before congestion detection can kick in. The most common router mechanisms used to prevent
congestive collapse are fair queuing and other scheduling algorithms, and random early
detection (RED), in which packets are randomly dropped proactively, triggering the end
points to slow transmission before congestion collapse actually occurs. Fair queuing is most
useful in routers at choke points with a small number of connections passing through them;
larger routers must rely on RED. Some end-to-end protocols behave better under congested
conditions than others, and TCP is perhaps the best behaved. That is why we use TCP in our
simulation.

Our aim is to simulate a congestion-free M/M/1 queue of an optical fiber network. A network
node in a Poisson packet distribution environment using TCP to transfer packets can adopt
this mechanism to avoid congestion in the network. For this purpose the network nodes must
have a buffer whose size is calculated using the method discussed previously.

4.3 Simulation

4.3.1 Simulation frame-work:

Fig. 5. Simulation framework: the TCP sender streams TCP packets to the TCP packet
receiver, which distributes them to Client 1 and Client 2.
4.3.2 Simulation Components

TCP packet sender:

The TCP packet sender is a simple TCP server that sends packets at a rate determined by the
user. In my program I used the system clock (System.nanoTime()) to fix the packet sending
rate. Since packet transfer between two ports on the same machine takes negligible time, the
desired packet sending rate can be achieved without any tangible deviation. The packet size
can be selected by the user.

TCP packet receiver:

The TCP packet receiver is both a TCP client and a TCP server. It accepts packets sent by
the TCP packet sender and also forwards those packets to its own clients. It has an
associated buffer whose size is calculated by the method discussed earlier in this thesis.

Clients:

The clients also act as both TCP servers and TCP clients, and function like the TCP packet
receiver.

4.3.3 Brief Overview of the simulation Process

The simulation process is implemented as follows. The TCP packet sender sends packets to
the connected TCP packet receiver at a fixed, user-defined rate. On the arrival of a packet
at its listening port, the receiver node processes the packet and places it in its queue.
The TCP packet receiver node also transfers packets to its clients according to a Poisson
process, choosing a client at random for each packet as it empties the queue; enqueueing and
dequeueing of the buffer happen simultaneously. The clients also transfer packets to each
other. In our implementation the buffer size of the TCP packet receiver node is just enough
to avoid congestion; that is, if we decreased the buffer size, congestion would occur.
4.3.4 Simulation Platform

The simulation has been done in Java, using its multithreaded programming model, in the
Eclipse software development kit.

4.3.5 Multithread programming
Multithreading, as a widespread programming and execution model, allows multiple threads to
exist within the context of a single process. These threads share the process’s resources
but are able to execute independently. The threaded programming model provides developers
with a useful abstraction of concurrent execution; perhaps its most interesting application
is enabling parallel execution of a single process on a multiprocessor system.
In our simulation three threads run simultaneously: the TCP packet receiver thread and its
two client threads.

There are two ways to create a thread in Java:

Create a class that extends Thread:
• Override the run() method
• Instantiate the class
• Call start()

Create a class that implements Runnable:
• Implement the run() method
• Instantiate your class
• Instantiate a Thread, passing your class to its constructor
• Call start()
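The Runnable approach, the one used for the simulation's thread classes, looks like this in miniature (the class name and printed messages are illustrative):

```java
public class RunnableDemo implements Runnable {
    private final String name;

    RunnableDemo(String name) { this.name = name; }

    @Override
    public void run() {
        // Each thread executes this method independently.
        System.out.println(name + " running in " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws InterruptedException {
        // Instantiate the Runnable, wrap it in a Thread, call start().
        Thread receiver = new Thread(new RunnableDemo("receiver"));
        Thread client = new Thread(new RunnableDemo("client"));
        receiver.start();
        client.start();
        receiver.join();  // wait for both threads to finish
        client.join();
    }
}
```

Implementing Runnable rather than extending Thread leaves the class free to extend something else and separates the task from the thread that runs it.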

4.3.6 Eclipse SDK

Eclipse is a multi-language software development environment comprising an integrated
development environment (IDE) and an extensible plug-in system. It is written mostly in
Java and can be used to develop applications in Java.
Eclipse employs plug-ins to provide all of its functionality on top of (and including) the
runtime system, in contrast to applications where functionality is typically hard-coded.
The runtime system of Eclipse is based on Equinox, an OSGi-standard-compliant
implementation.
The Eclipse SDK includes the Eclipse Java Development Tools (JDT), offering an IDE with a
built-in incremental Java compiler and a full model of the Java source files. This allows
for advanced refactoring techniques and code analysis. The IDE also makes use of a
workspace, in this case a set of metadata over a flat file space, allowing external file
modifications as long as the corresponding workspace resource is refreshed afterwards. In my
program I used the Eclipse 3.6 (Helios) SDK environment.

4.4 Simulation Details

4.4.1 Threads
In our simulation we have used three layers of threads. The top-level thread is the main
thread, i.e. the main program. This thread spawns a second layer of threads consisting of
the TCP packet receiver and its packet distributing unit. The packet distributing unit
spawns another layer of threads: two TCP clients that receive the packets it distributes.

Fig. 6. Thread layers: the main thread spawns the TCP packet receiver and the packet
distributing unit, which in turn spawns Client 1 and Client 2.

4.4.2 Implementation of Buffer

To implement the buffer, which acts as an M/M/1 queue, we used the
ArrayBlockingQueue<E>(capacity) data structure, a bounded blocking queue backed by an
array. This queue orders elements FIFO (first in, first out): the head of the queue is the
element that has been in the queue the longest, the tail is the element that has been in it
the shortest time, new elements are inserted at the tail, and retrieval operations obtain
elements from the head. This is a classic “bounded buffer”, in which a fixed-size array
holds elements inserted by producers and extracted by consumers; once created, the capacity
cannot be increased. Attempts to put an element into a full queue block, and attempts to
take an element from an empty queue block similarly. The operations performed on this queue
were the non-blocking ‘offer’ and ‘poll’.
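Since the simulation uses the non-blocking offer/poll pair rather than the blocking put/take, a full buffer shows up as offer() returning false, which is exactly the congestion signal. A minimal sketch:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BoundedBufferDemo {
    public static void main(String[] args) {
        ArrayBlockingQueue<String> buffer = new ArrayBlockingQueue<>(2);
        System.out.println(buffer.offer("pkt-1"));  // true: inserted at the tail
        System.out.println(buffer.offer("pkt-2"));  // true
        System.out.println(buffer.offer("pkt-3"));  // false: buffer full (congestion)
        System.out.println(buffer.poll());          // "pkt-1": FIFO head leaves first
        System.out.println(buffer.poll());          // "pkt-2"
        System.out.println(buffer.poll());          // null: buffer empty
    }
}
```

A capacity-2 buffer is used here only to make the overflow visible; the simulation sizes the buffer with the estimate from Chapter 3.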

4.4.3 Achieving Packet Arrival & Packet service rate

To achieve the desired packet arrival and service rates for the Poisson packet distribution
process, we used the System.nanoTime() method defined in Java, which returns the current
value of the most precise available system timer, in nanoseconds. An arrival rate of 1000
packets/sec and a service rate of 1020 packets/sec were used.
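Pacing sends with System.nanoTime() amounts to waiting until the next due instant, one packet every 10^9/rate nanoseconds. A hedged sketch (the actual sender class in the thesis differs; sendAtRate and the stub below are illustrative):

```java
public class RatePacer {
    // Send 'count' packets at 'ratePerSec' by busy-waiting on the clock.
    static void sendAtRate(int count, long ratePerSec, Runnable send) {
        long gapNs = 1_000_000_000L / ratePerSec;  // 1000 pkt/s -> 1,000,000 ns
        long next = System.nanoTime();
        for (int i = 0; i < count; i++) {
            while (System.nanoTime() < next) { /* spin until the packet is due */ }
            send.run();
            next += gapNs;
        }
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        sendAtRate(100, 1000, () -> { /* packet send stub */ });
        System.out.printf("sent 100 packets in ~%d ms%n",
                (System.nanoTime() - t0) / 1_000_000);
    }
}
```

Advancing `next` by a fixed gap rather than re-reading the clock after each send keeps the long-run rate exact even if an individual send is slightly late.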

4.4.4 Defined Classes


1. Class bufferestim
   Calculates the desired buffer size for a specified arrival rate and service rate.

2. Class check
   Checks all types of conditions.

3. Class serverthread1 implements Runnable
   • TCP packet receiver thread class.
   • Connects to the external TCP packet sender.
   • Runs the packet distribution unit and receives packets.
   • Puts them in the queue and sets a timer.
   • If the distribution unit cannot send a packet before the timer expires, removes one
     packet from its queue.
   • If congestion occurs, notifies the external TCP packet sender.

4. Class Threadclient1 implements Runnable
   • TCP client thread class.
   • Connects to the distribution unit of the TCP packet receiver and receives packets.

5. Class Threadclient2 implements Runnable
   • TCP client thread class.
   • Connects to the distribution unit of the TCP packet receiver and receives packets.

6. Class Distributor implements Runnable
   • TCP server thread class.
   • Acts as the distribution unit for packets received by the TCP packet receiver.
   • Randomly selects a client and sends packets to it.
   • If it cannot send a packet within a time limit, notifies the TCP packet receiver.

7. Class TCPsender
   • Connects to the TCP packet receiver.
   • Sends packets at a rate of 1000 packets/sec.
   • Terminates the whole program if the TCP packet receiver signals congestion.

4.4.5 Defined Methods

1. synchronized String polling(ArrayBlockingQueue<String>)
   • Checks whether the queue is empty.
   • Calls poll() to extract a string from the head of the queue.
   • Notifies others that the task is complete.
   • Returns a string.

2. synchronized void putinqueue(ArrayBlockingQueue<String>, DataInputStream)
   • Checks whether the queue is full.
   • Calls offer() to put a string received from the network stream at the tail of the
     queue.
   • Notifies others that the task is complete.

3. void polls(ArrayBlockingQueue<String>, Boolean flag)
   • Sets a timer and waits for the flag to become true.
   • Calls poll() to remove a string from the head of the queue.
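The two synchronized helpers can be sketched as a guarded producer/consumer pair. This is a hedged reconstruction: the thesis' actual signatures take a DataInputStream and use a timer, both omitted here, and the names are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class QueueOps {
    // Consumer side: wait until the queue is non-empty, take the head,
    // then notify waiting producers that a slot has been freed.
    synchronized String polling(ArrayBlockingQueue<String> queue) {
        while (queue.isEmpty()) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        String packet = queue.poll();  // extract from the head
        notifyAll();                   // task complete: wake producers
        return packet;
    }

    // Producer side: try to append at the tail; a false return means the
    // buffer is full, i.e. the congestion condition has been reached.
    synchronized boolean putInQueue(ArrayBlockingQueue<String> queue, String packet) {
        boolean accepted = queue.offer(packet);  // insert at the tail
        if (accepted) {
            notifyAll();                         // wake waiting consumers
        }
        return accepted;
    }
}
```

Guarding both operations with the same monitor is what makes simultaneous enqueueing and dequeueing by different threads safe.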

4.4.6 Snapshot of the Simulation

Fig. 7. Snapshot of the simulation program.

4.5. Analysis of the Simulation
The M/M/1 queue of the TCP packet receiver used in our simulation has been analyzed with
the help of JMT (Java Modelling Tools). The analysis shows how the queue grows for a given
arrival rate and service time (i.e., service rate).

4.5.1 JMT(Java Modelling Tool)

The Java Modelling Tools (JMT) is a free, open-source suite consisting of six tools for
performance evaluation, capacity planning, workload characterization, and modelling of
computer and communication systems. The suite implements several state-of-the-art
algorithms for the exact, asymptotic, and simulative analysis of queueing network models,
with or without a product-form solution. Models can be described either through wizard
dialogs or with a user-friendly graphical interface. The workload analysis tool is based on
clustering techniques, and the suite incorporates an XML data layer that enables full
reusability of the computational engines. The JMT suite is composed of the following tools.

4.5.2 JMVA

JMVA provides exact and approximate analytical solutions of product-form queueing network
models by means of Mean Value Analysis (MVA) algorithms.

4.5.3 JMCH

JMCH applies a simulation technique to solve a single-station model with a finite
(M/M/1/k) or infinite (M/M/1) queue, and shows the underlying Markov chain. The arrival
rate and service time of the system can be changed dynamically.

4.5.4 Analysis
Arrival rate = 98 packets/sec and service time = 0.01 sec.

Fig. 8. Growth of the M/M/1 queue.

Fig. 9. Growth of the number of packets in the TCP packet receiver buffer.

Chapter 5
Discussion & References

5.1 Discussion

Our simulation was quite consistent with the JMT simulation of the M/M/1 queue and station.
As per the simulation done in JMT, the maximum number of customers at the station was
approximately 10 after processing 836,527 customers, which is consistent with the growth of
an M/M/1 queue. In our simulation the buffer size of the TCP packet receiver was 10, and no
congestion was detected after processing 1,115,371 packets.
As the graph makes clear, the number of customers tends to stabilize as time grows, so
there was no possibility of future congestion.

5.2 References

[1] Data Networks – Dimitri Bertsekas and Robert Gallager.

[2] Dynamic queue length thresholds for shared-memory packet switches – Abhijit K.
Choudhury and Ellen L. Hahne.

[3] Estimation of buffer size of internet gateway server via G/M/1 model – Dr. L. K.
Singh, Dr. R. M. L., and Riktesh Srivastava.

[4] Modeling and design of optical buffers in asynchronous and variable-length optical
packet switches – Xiaohua Ma.

[5] Introduction to JMT – G. Serazzi, Politecnico di Milano, Italy.

[6] Optical Fiber Communications – Gerd Keiser.

[7] Simulation Methods for Queues: An Overview – P. W. Glynn and D. L. Iglehart.

[8] The Single Server Queue – J. W. Cohen.

[9] Computer Networks and Internets – Douglas E. Comer.

[10] Java 2: The Complete Reference – Herbert Schildt.

[11] Complete Java 2 – Philip Heller and Simon Roberts.

[12] http://help.eclipse.org/helios/index.jsp

