
Course Code : CS-68

Course Title : Computer Networks


Assignment Number : BCA(5)CS-68/Assignment/2010
Maximum Marks : 25
Last Date of Submission : 30th April, 2010 (For January Session)
30th October, 2010 (For July Session)

Answer all the questions. You may use illustrations and diagrams to enhance your explanations.

Question 1: (i) Show the complete TCP/IP protocol suite and write their important features.
(10 Marks)

The Internet Protocol Suite (commonly known as TCP/IP) is the set of communications protocols used for the
Internet and other similar networks. The TCP/IP protocol suite maps to a four-layer conceptual model known as the
DARPA model, which was named after the U.S. government agency that initially developed TCP/IP. The four
layers of the DARPA model are: Application, Transport, Internet, and Network Interface. Each layer in the DARPA
model corresponds to one or more layers of the seven-layer OSI model.

Network Interface Layer


The Network Interface layer (also called the Network Access layer) sends TCP/IP packets on the network medium
and receives TCP/IP packets off the network medium. TCP/IP was designed to be independent of the network access
method, frame format, and medium. Therefore, you can use TCP/IP to communicate across differing network types
that use LAN technologies—such as Ethernet and 802.11 wireless LAN—and WAN technologies—such as Frame
Relay and Asynchronous Transfer Mode (ATM). By being independent of any specific network technology, TCP/IP
can be adapted to new technologies.

Internet Layer
The Internet layer responsibilities include addressing, packaging, and routing functions. The Internet layer is
analogous to the Network layer of the OSI model.
The core protocols for the IPv4 Internet layer consist of the following:
• The Address Resolution Protocol (ARP) resolves the Internet layer address to a Network Interface layer
address such as a hardware address.
• The Internet Protocol (IP) is a routable protocol that addresses, routes, fragments, and reassembles packets.
• The Internet Control Message Protocol (ICMP) reports errors and other information to help you diagnose
unsuccessful packet delivery.
• The Internet Group Management Protocol (IGMP) manages IP multicast groups.

Transport Layer
The Transport layer (also known as the Host-to-Host Transport layer) provides the Application layer with session
and datagram communication services. The Transport layer encompasses the responsibilities of the OSI Transport
layer. The core protocols of the Transport layer are TCP and UDP.
• TCP provides a one-to-one, connection-oriented, reliable communications service. TCP establishes
connections, sequences and acknowledges packets sent, and recovers packets lost during transmission.
• In contrast to TCP, UDP provides a one-to-one or one-to-many, connectionless, unreliable communications
service. UDP is used when the amount of data to be transferred is small (such as the data that would fit into
a single packet), when an application developer does not want the overhead associated with TCP
connections, or when the applications or upper-layer protocols provide reliable delivery.
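The contrast between the two services can be seen directly in code. The following is a minimal sketch using Python's standard socket module: a single UDP datagram is sent over the loopback interface with no connection setup at all, which is exactly the "fire and forget" behaviour described above.

```python
import socket

# Minimal sketch: one UDP datagram sent over the loopback interface.
# No handshake is needed -- UDP is connectionless.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", port))   # no connection setup

data, addr = recv_sock.recvfrom(1024)       # receive the datagram
print(data)                                  # b'hello'

send_sock.close()
recv_sock.close()
```

A TCP exchange would instead require connect() on the client and listen()/accept() on the server (the three-way handshake) before any data could flow, and the TCP stack itself would sequence, acknowledge and retransmit segments.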

Application Layer
The Application layer allows applications to access the services of the other layers, and it defines the protocols that
applications use to exchange data. The Application layer contains many protocols, and more are always being
developed.
The most widely known Application layer protocols help users exchange information:
• The Hypertext Transfer Protocol (HTTP) transfers files that make up pages on the World Wide Web.
• The File Transfer Protocol (FTP) transfers individual files, typically for an interactive user session.
• The Simple Mail Transfer Protocol (SMTP) transfers mail messages and attachments.

Additionally, the following Application layer protocols help you use and manage TCP/IP networks:
• The Domain Name System (DNS) protocol resolves a host name, such as www.microsoft.com, to an IP
address and copies name information between DNS servers.
• The Routing Information Protocol (RIP) is a protocol that routers use to exchange routing information on
an IP network.
• The Simple Network Management Protocol (SNMP) collects and exchanges network management
information between a network management console and network devices such as routers, bridges, and
servers.
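As a small illustration of the DNS service described above, an application can invoke the system's resolver through Python's standard library. The name "localhost" is used here because it resolves locally without contacting an external DNS server; a public name such as www.microsoft.com would be resolved through the same call but requires network access.

```python
import socket

# The stub resolver turns a host name into an IPv4 address.
# "localhost" resolves locally, so no DNS server is contacted.
addr = socket.gethostbyname("localhost")
print(addr)            # typically 127.0.0.1
```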

(ii) List and describe the features of broadband and baseband coaxial cables.
(5 Marks)

Baseband coaxial cable uses digital signaling: the cable carries a single digital signal at a time. The original
Ethernet scheme used baseband coaxial cable, which is why it is often referred to as Ethernet cable. Baseband
installations use a special 50-ohm cable rather than the 75-ohm cable typical of CATV. In baseband transmission, a
digital line code such as Manchester encoding carries data along the channel by means of voltage fluctuations.
Baseband coaxial cable also passes DC voltage, which is necessary for collision detection in Ethernet networks.

Broadband coaxial cable is the type of cable used in cable television systems; standard cable television uses
75-ohm coaxial cable, and this is what is called broadband. In broadband transmission, the digital data is modulated
onto different frequency channels separated by frequency guard bands. Because of the wider bandwidth and larger
number of frequency channels, broadband transmission can support a mixture of signals such as voice and video.
Analog signaling is used at radio and television frequencies. This type of system is more expensive and more
difficult to install than baseband coaxial cable. Broadband coaxial technology typically transmits analog signals
and is capable of transmitting multiple frequencies of data simultaneously. The cables in broadband networks can
often be used up to 450 MHz and can run for nearly 100 km, because analog signaling is much less demanding than
digital signaling. To transmit a digital signal on an analog network, the outgoing bit stream must be converted to
an analog signal and the incoming analog signal back to a bit stream. One bit per second occupies roughly 1 Hz of
bandwidth; at higher frequencies, many bits per Hz are possible using advanced modulation techniques. Broadband
systems are divided into multiple channels, frequently the 6-MHz channels used for television broadcasting. Each
channel can be used for analog TV, CD-quality audio (1.4 Mbps) or a digital bit stream at, say, 3 Mbps. Television
and data can be mixed on the same cable.

(iii) How is data rate related to bandwidth? (5 Marks)

Bandwidth may refer to bandwidth capacity or available bandwidth in bit/s, which typically means the net bit rate,
channel capacity or the maximum throughput of a logical or physical communication path in a digital
communication system. For example, bandwidth test implies measuring the maximum throughput of a computer
network. The data transfer rate (DTR) is the amount of digital data that is moved from one place to another in a
given time. The data transfer rate can be viewed as the speed of travel of a given amount of data from one place to
another. In general, the greater the bandwidth of a given path, the higher the data transfer rate.

Bandwidth is defined as the amount of data that can be transferred in a given time period (typically measured in
seconds), whereas Data Transfer refers to the actual amount of data transferred between two points or computers or
in other words, the traffic generated. Thus Bandwidth is the rate of data transfer for a given computer or device.
As an illustration, consider an expressway/highway in your part of the world. Bandwidth refers to the number of
vehicles that can possibly travel from one place to another in a given time, and Data Transfer refers to the actual
number of vehicles that completed the journey.
Just as a bigger expressway with many lanes will allow more vehicles to pass in lesser time, similarly a high
capacity data pipe or internet connection will allow a higher bandwidth. Also, if there are greater numbers of
vehicles on the road, the speed reduces overall. Similarly, if a lot of data transfer is taking place simultaneously, the
bandwidth will reduce.
Just as bigger vehicles such as rigs and trailers tend to lower the average speed on the road, bigger data packets
for multimedia applications result in a slower downloading/uploading rate. The amount of bandwidth deemed adequate
or required has changed considerably over the last few years, as internet applications have moved from simple text
transfers to full-blown multimedia, interactive applications and high-graphics games.
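The qualitative relationship above is commonly quantified with the Nyquist and Shannon formulas, which bound the achievable data rate for a given bandwidth. The sketch below uses textbook-style figures (a 3000 Hz telephone channel, 4 signal levels, 30 dB SNR) purely for illustration:

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """Noiseless channel bound: C = 2 * B * log2(M)."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_linear):
    """Noisy channel bound: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3000 Hz channel with 4 signal levels (noiseless bound):
print(nyquist_capacity(3000, 4))            # 12000.0 bits per second

# The same channel with an SNR of 30 dB (S/N = 1000):
print(round(shannon_capacity(3000, 1000)))  # about 29902 bps
```

Both formulas make the point of this section concrete: for a fixed signaling scheme and noise level, the achievable data rate grows in direct proportion to the bandwidth.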

Question 2: (i) How is LLC different from the MAC sublayer? Describe the types of services provided by the data
link layer to the network layer. List three MAC sublayer protocols.
(10 Marks)

The MAC sub layer controls how a computer on the network gains access to the data and permission to transmit it.
The LLC layer controls frame synchronization, flow control and error checking.

Logical Link Control sublayer


The uppermost sublayer is Logical Link Control (LLC). This sublayer multiplexes protocols running atop the Data
Link Layer, and optionally provides flow control, acknowledgment, and error notification. The LLC provides
addressing and control of the data link. It specifies which mechanisms are to be used for addressing stations over the
transmission medium and for controlling the data exchanged between the originator and recipient machines.

Media Access Control sublayer


The sublayer below it is Media Access Control (MAC). Sometimes this refers to the sublayer that determines who is
allowed to access the media at any one time (usually CSMA/CD). Other times it refers to a frame structure with
MAC addresses inside. There are generally two forms of media access control: distributed and centralized. Both of
these may be compared to communication between people. In a network made up of people speaking, i.e. a
conversation, we look for clues from our fellow talkers to see if any of them appear to be about to speak. If two
people speak at the same time, they will back off and begin a long and elaborate game of saying "no, you first".

The Media Access Control sublayer also determines where one frame of data ends and the next one starts -- frame
synchronization. There are four means of frame synchronization: time based, character counting, byte stuffing and
bit stuffing.
• The time based approach simply puts a specified amount of time between frames. The major drawback of
this is that new gaps can be introduced or old gaps can be lost due to external influences.
• Character counting simply notes the count of remaining characters in the frame's header. This method,
however, is easily disturbed if this field gets faulty in some way, thus making it hard to keep up
synchronization.
• Byte stuffing precedes the frame with a special byte sequence such as DLE STX and ends it with DLE
ETX. Appearances of DLE (byte value 0x10) in the data have to be escaped with another DLE. The start and stop
marks are detected at the receiver and removed, as are the inserted DLE characters.
• Similarly, bit stuffing replaces these start and end marks with a flag consisting of a special bit pattern (e.g. a
0, six 1 bits and a 0). Occurrences of this bit pattern in the data to be transmitted are avoided by inserting a
bit. To use the example where the flag is 01111110, a 0 is inserted after 5 consecutive 1's in the data
stream. The flags and the inserted 0's are removed at the receiving end. This allows arbitrarily long
frames and easy synchronization for the recipient. Note that the stuffed bit is added even if the following
data bit is 0, which could not be mistaken for a sync sequence, so that the receiver can unambiguously
distinguish stuffed bits from normal bits.
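The bit-stuffing rule in the last bullet can be sketched directly. This is an illustrative implementation of the HDLC-style rule (insert a 0 after every run of five consecutive 1s), operating on bit strings rather than raw bits for clarity:

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit, added even if the next data bit is 0
            run = 0
    return "".join(out)

def bit_unstuff(bits):
    """Remove the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is a stuffed 0: drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

data = "0111111111"               # nine 1s in a row would mimic the flag
stuffed = bit_stuff(data)
print(stuffed)                    # 01111101111 -- a 0 broke up the run
print(bit_unstuff(stuffed) == data)   # True: the receiver recovers the data
```

Because the stuffed stream can never contain six consecutive 1s, the flag 01111110 is guaranteed to appear only at frame boundaries.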

List of Data Link Layer services


• Encapsulation of network layer data packets into frames
• Frame synchronization
• Logical link control (LLC) sublayer:
o Error control (automatic repeat request, ARQ), in addition to the ARQ provided by some Transport layer
protocols, the forward error correction (FEC) techniques provided at the Physical Layer, and the
error detection and discarding of erroneous packets provided at all layers, including the network layer.
Data link layer error control (i.e. retransmission of erroneous packets) is provided in wireless networks
and V.42 telephone network modems, but not in LAN protocols such as Ethernet, since bit errors are
so uncommon on short wires. In that case, only error detection and discarding of erroneous packets
are provided.
o Flow control, in addition to the one provided on the Transport layer. Data link layer error control is
not used in LAN protocols such as Ethernet, but in modems and wireless networks.
• Media access control (MAC) sublayer:
o Multiple access protocols for channel-access control, for example CSMA/CD protocols for collision
detection and retransmission in Ethernet bus networks and hub networks, or the CSMA/CA
protocol for collision avoidance in wireless networks.
o Physical addressing (MAC addressing)
o LAN switching (packet switching) including MAC filtering and spanning tree protocol
o Data packet queuing or scheduling
o Store-and-forward switching or cut-through switching
o Quality of Service (QoS) control
o Virtual LANs (VLAN)

(ii) List the advantages of the sliding window mechanism in comparison to the stop-and-wait
mechanism. Also explain the operation of the Go-Back-N mechanism with the help of an example.
(10 Marks)

Sliding window is a flow control technique which belongs to the Data Link layer of the OSI model. It solves the
problem of missing frames during data transmission between two upper layers, so that they can send and receive
frames in order. This flow control comes to the rescue where the buffer size is limited and pre-established. During a
typical communication between a sender and a receiver the receiver allocates buffer space for n frames, where n is
the buffer size in frames. Thus, the receiver can accept n frames and the sender can send n frames without waiting
for an acknowledgment. To keep track of which frames have been acknowledged each is labelled with a sequence
number. The receiver acknowledges a frame by sending an acknowledgement that includes the sequence number of
the next frame expected. This acknowledgement also explicitly announces that the receiver is ready to receive n
frames, beginning with the number specified. Both the sender and receiver maintain what is called a window. The
size of the window is less than or equal to the buffer size.
As compared to stop-and-wait flow control the sliding window flow control has a far better performance. This is
because in a wireless environment data rates are very low and noise level is very high, so waiting for an
acknowledgement for every packet that is transferred does not seem feasible. Thus, transferring data in bulk (once
the medium is allocated) yields better performance in terms of higher throughput. Hence the choice of sliding
window flow control.
Sliding window flow control is a point-to-point protocol which assumes that no other entity tries to communicate
until the current data transfer is complete. The window maintained by the sender indicates which frames it can
send. The sender sends all the frames in the window and waits for an acknowledgement. On receiving an
acknowledgement indicating the next frame expected, the sender shifts the window to the corresponding sequence
number, indicating that frames within the window starting from the current sequence number can be sent.
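The Go-Back-N variant of this scheme retransmits a lost frame and every frame after it, since the receiver only accepts frames in order. The simulation below is an illustrative model under simplifying assumptions (instant cumulative ACKs, each frame lost at most once), not a real protocol implementation:

```python
def go_back_n(frames, window, lost):
    """Simulate a Go-Back-N sender over a channel that loses the frames
    whose sequence numbers are in `lost` (each is lost only once).
    Returns the sequence numbers in the order they were transmitted."""
    sent_log = []
    base = 0                      # first unacknowledged frame
    lost = set(lost)
    while base < len(frames):
        end = min(base + window, len(frames))
        for seq in range(base, end):
            sent_log.append(seq)  # transmit the whole window
        for seq in range(base, end):
            if seq in lost:
                lost.discard(seq) # lost once; the retransmission succeeds
                break             # receiver discards everything after the loss
            base = seq + 1        # cumulative ACK advances the window
    return sent_log

# 6 frames, window of 3, frame 1 lost once:
log = go_back_n(list(range(6)), 3, lost=[1])
print(log)   # [0, 1, 2, 1, 2, 3, 4, 5]
```

Note that frame 2 is transmitted twice even though its first copy arrived intact: after the loss of frame 1 the sender "goes back" and resends the entire window from frame 1 onwards. This wasted retransmission is the characteristic cost of Go-Back-N compared with selective repeat.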

Flow control also covers the transfer of data between a PC and a radio modem. While the PC is transferring
data to the modem, if the modem detects an incoming reception, the PC-to-radio communication must be paused,
giving higher priority to the incoming signal.

Question 3: (i) List and describe services provided by Network layer, Transport layer and Application layer
(6 Marks)
Application Layer
The Application layer provides three basic services to applications:
• It makes sure the resources needed to carry out a session are present.
• It matches the application to the appropriate communication protocol or service.
• It synchronizes the transmissions of data between the application and its protocol.

The Application layer is used to support the following services:


• File services - store, move control access to, and retrieve files
• Print services - send data to local or network printers
• Message services - transfer text, graphics, audio, and video over a network
• Application services - process applications locally or through distributed processing
• Database services - allow a local computer to access network databases

Transport Layer
There is a long list of services that can be optionally provided by the Transport Layer. None of them are
compulsory, because not all applications require all available services.
• Connection-oriented: This is normally easier to deal with than connection-less models, so where the
Network layer only provides a connection-less service, often a connection-oriented service is built on top of
that in the Transport Layer.
• Same Order Delivery: The Network layer doesn't generally guarantee that packets of data will arrive in the
same order that they were sent, but often this is a desirable feature, so the Transport Layer provides it. The
simplest way of doing this is to give each packet a number, and allow the receiver to reorder the packets.
• Reliable data: Packets may be lost in routers, switches, bridges and hosts due to network congestion, when
the packet queues are filled and the network nodes have to delete packets. Packets may be lost or corrupted
in Ethernet due to interference and noise, since Ethernet does not retransmit corrupted packets. Packets may
be delivered in the wrong order by an underlying network. Some Transport Layer protocols, for example
TCP, can fix this. By means of an error detection code, for example a checksum, the transport protocol may
check that the data is not corrupted, and verify that by sending an ACK message to the sender. Automatic
repeat request schemes may be used to retransmit lost or corrupted data. By introducing segment
numbering in the Transport Layer packet headers, the packets can be sorted in order. Of course, error free is
impossible, but it is possible to substantially reduce the numbers of undetected errors.
• Flow control: The amount of memory on a computer is limited, and without flow control a larger computer
might flood a smaller one with so much information that it can't hold it all before dealing with it. Nowadays,
this is not a big issue, as memory is cheap while bandwidth is comparatively expensive, but in earlier times
it was more important. Flow control allows the receiver to respond before it is overwhelmed. Sometimes
this is already provided by the network, but where it is not, the Transport Layer may add it on.
• Congestion avoidance: Network congestion occurs when a queue buffer of a network node is full and starts
to drop packets. Automatic repeat request may keep the network in a congested state. This situation can be
avoided by adding congestion avoidance to the flow control, including slow-start. This keeps the bandwidth
consumption at a low level in the beginning of the transmission, or after packet retransmission.
• Byte orientation: Rather than dealing with things on a packet-by-packet basis, the Transport Layer may add
the ability to view communication just as a stream of bytes. This is nicer to deal with than random packet
sizes, however, it rarely matches the communication model which will normally be a sequence of messages
of user defined sizes.
• Ports: (Part of the Transport Layer in the TCP/IP model, but of the Session Layer in the OSI model) Ports
are essentially ways to address multiple entities in the same location. For example, the first line of a postal
address is a kind of port, and distinguishes between different occupants of the same house. Computer
applications will each listen for information on their own ports, which is why you can use more than one
network-based application at the same time.

Network Layer
• The Network layer has the responsibility for routing packets to the correct destination.
• The Network layer provides both packet-oriented (connectionless) and connection-oriented services for the Transport layer.
• Services should be independent of subnet technology.
• The Transport layer should be shielded from the number, type and topology of the subnets present.
• The network addresses made available to the Transport layer should use a uniform numbering plan, even
across LANs and WANs.
• The choice between connection-oriented and connectionless (datagram) service.

(ii) Make a comparison between virtual circuit and datagram subnet


(4 Marks)

Virtual Circuit Packet Switching


1. Virtual circuits allow packets to contain a circuit number instead of the full destination address, so less router
memory and bandwidth are required. Cost-wise it is therefore cheaper.
2. A virtual circuit requires a setup phase, which takes time and consumes resources.
3. In a virtual circuit, the router just uses the circuit number to index into a table to find out where the packet goes.
4. Virtual circuits have some advantages in avoiding congestion within the subnet,
because resources can be reserved in advance, when the connection is established.
5. Virtual circuits also have a vulnerability problem: if a router crashes and loses its memory, even if it comes back
up a second later, all the virtual circuits passing through it will have to be aborted.
6. The loss of, or a fault on, a communication line destroys the virtual circuits passing through it.
7. In a virtual circuit a fixed path is used during transmission, so traffic throughout the subnet cannot be balanced.
This can cause congestion problems.
8. A virtual circuit is an implementation of connection-oriented service.

Datagram Packet Switching


1. Datagram subnets require packets to contain the full destination address instead of a circuit number, so each
packet carries a significant amount of overhead and hence wastes bandwidth. It is therefore more costly.
2. A datagram subnet does not require a setup phase, so no resources are consumed in advance.
3. In a datagram subnet, a more complicated procedure is required to determine where each packet goes.
4. In a datagram subnet, congestion avoidance is more difficult.
5. In a datagram subnet, if a router goes down, only those users whose packets were queued up in the router at the
time will suffer.
6. The loss of, or a fault on, a communication line can be easily compensated for in a datagram subnet.
7. Datagrams allow the routers to balance the traffic throughout the subnet, since routes can be changed halfway
through a connection.

Compare Virtual Circuit and Datagram Subnet For:


1. Router Memory Space and Bandwidth: Virtual circuits allow packets to contain a circuit number instead of the
full destination address. If packets tend to be fairly short, a full destination address in every packet
may represent a significant amount of overhead, and hence wasted bandwidth.
2. Setup Time and Address Parsing Time: A virtual circuit requires a setup phase, which takes time and
consumes resources. Forwarding a data packet in a virtual-circuit subnet is easy: the router just uses the circuit
number for a table lookup to find out where the packet goes. In a datagram subnet, a more complicated
procedure is required to determine where the packet goes.
3. Congestion: Virtual circuits have some advantage in avoiding congestion within the subnet,
because resources can be reserved in advance, at the time of connection establishment. In a datagram subnet,
congestion avoidance is more difficult.
4. Router/Communication Line Crash: Virtual circuits also have a vulnerability problem. If a router crashes and
loses its memory, even if it comes back up a second later, all the virtual circuits passing through it will
have to be aborted. If a datagram router goes down, only those users whose packets were queued up in the
router at the time will suffer, and perhaps not even all of those, depending on whether their packets have
already been acknowledged.
5. Traffic Balance: Datagrams also allow the routers to balance the traffic throughout the subnet, since routes
can be changed halfway through a connection.

(iii) Differentiate between flow and error control.


(9 Marks)
Flow Control: -
Flow control is the process, in communications, of adjusting the flow of data from one device to another to ensure
that the receiving device can handle all of the incoming data. This is particularly important where the sending
device is capable of sending data much faster than the receiving device can receive it.

Error Control: -
Error control is a method that can be used to recover the corrupted data whenever possible. There are two basic
types of error control which are backward error control and forward error control. In backward error control, the
data is encoded so that the encoded data contains additional redundant information which is used to detect the
corrupted blocks of data that must be resent. In contrast, in forward error control (FEC), the data is encoded so
that it contains enough redundant information to recover from some communications errors without retransmission.
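A minimal sketch of backward error control is a single even-parity bit: the receiver can detect (but not correct) a one-bit error and would then request retransmission. Real systems use much stronger codes such as CRCs; the parity bit is used here purely for illustration:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + str(bits.count("1") % 2)

def check_parity(codeword):
    """Return True if the codeword passes the even-parity check."""
    return codeword.count("1") % 2 == 0

word = add_parity("1011001")   # four 1s -> parity bit 0
print(word)                     # 10110010
print(check_parity(word))       # True: accept the block

corrupted = "10110011"          # last bit flipped in transit
print(check_parity(corrupted))  # False: detect the error, request a resend
```

This is backward error control in miniature: the redundancy is only enough to detect the error, so the corrupted block must be retransmitted rather than repaired at the receiver.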

Question 4: (i) What is the purpose of a bridge? Explain the functionality of a bridge with the help of an
example. (10 Marks)

A network bridge is a device which connects two parts of a network together at the data link layer (layer 2 of the
OSI model). Network bridges work similarly to network switches, but the traffic is managed differently. A bridge
will only send traffic from one side to the other if it is going to a destination on the other side. This is different
from a layer 1 device such as a hub or repeater, which repeats all traffic to every segment. Sometimes network
bridges are called layer 2 switches.
Bridging is a forwarding technique used in packet-switched computer networks. Unlike routing, bridging makes no
assumptions about where in a network a particular address is located. Instead, it depends on flooding and
examination of source addresses in received packet headers to locate unknown devices. Once a device has been
located, its location is recorded in a table where the MAC address is stored so as to preclude the need for further
broadcasting. The utility of bridging is limited by its dependence on flooding, and is thus only used in local area
networks. Bridges are similar to repeaters or network hubs, devices that connect network segments at the physical
layer; however, with bridging, traffic from one network is managed rather than simply rebroadcast to adjacent
network segments. Bridges are more complex than hubs or repeaters. Bridges can analyze incoming data packets to
determine if the bridge is able to send the given packet to another segment of the network.

A bridge uses a forwarding database to send frames across network segments. The forwarding database is
initially empty and entries in the database are built as the bridge receives frames. If an address entry is not found in
the forwarding database, the frame is flooded to all other ports of the bridge, i.e. to all segments except the one
on which it arrived. By means of these flooded frames, the destination device will respond, and a forwarding
database entry will be created.
As an example, consider three hosts, A, B and C and a bridge. The bridge has three ports. A is connected to
bridge port 1, B is connected bridge port 2, C is connected to bridge port 3. A sends a frame addressed to B to the
bridge. The bridge examines the source address of the frame and creates an address and port number entry for A in
its forwarding table. The bridge examines the destination address of the frame and does not find it in its forwarding
table so it floods it to all other ports: 2 and 3. The frame is received by hosts B and C. Host C examines the
destination address and ignores the frame. Host B recognizes a destination address match and generates a response
to A. On the return path, the bridge adds an address and port number entry for B to its forwarding table. The bridge
already has A's address in its forwarding table so it forwards the response only to port 1. Host C or any other hosts
on port 3 are not burdened with the response. Two-way communication is now possible between A and B without
any further flooding.

Note that both source and destination addresses are used in this algorithm. Source addresses are recorded in entries
in the table, while destination addresses are looked up in the table and matched to the proper segment to send the
frame to.
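The learning-and-forwarding algorithm just described can be sketched as follows. This is an illustrative model using the port numbers and host names from the example above, not a real bridge implementation:

```python
class LearningBridge:
    """Sketch of transparent bridging: learn source addresses,
    forward to a known port, flood when the destination is unknown."""

    def __init__(self):
        self.table = {}                  # MAC address -> port number

    def receive(self, src, dst, in_port, all_ports):
        self.table[src] = in_port        # learn where the sender lives
        if dst in self.table:
            out = self.table[dst]
            # filter if destination is on the arrival port, else forward
            return [] if out == in_port else [out]
        # unknown destination: flood to every port except the source
        return [p for p in all_ports if p != in_port]

bridge = LearningBridge()
ports = [1, 2, 3]                        # A on port 1, B on 2, C on 3
print(bridge.receive("A", "B", 1, ports))   # [2, 3]  B unknown: flood
print(bridge.receive("B", "A", 2, ports))   # [1]     A learned: forward only
```

After the two frames, the table holds {"A": 1, "B": 2}, so subsequent traffic between A and B never reaches port 3, exactly as in the three-host example above.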

(ii) Describe the following terms:


Cascaded Hub network, Half-Duplex and Full-Duplex Ethernet, and Switching Hubs
(6 Marks)

Cascaded hubs are the network configuration in which hubs are connected to other hubs. A network hub or repeater
hub is a device for connecting multiple twisted pair or fiber optic Ethernet devices together and making them act as
a single network segment. Hubs work at the physical layer (layer 1) of the OSI model. The device is a form of
multiport repeater. Repeater hubs also participate in collision detection, forwarding a jam signal to all ports if a
collision is detected.

A half-duplex system provides for communication in both directions, but only one direction at a time (not
simultaneously). Typically, once a party begins receiving a signal, it must wait for the transmitter to stop
transmitting, before replying. An example of a half-duplex system is a two-party system such as a "walkie-talkie"
style two-way radio, wherein one must use "Over" or another previously-designated command to indicate the end of
transmission, and ensure that only one party transmits at a time, because both parties transmit on the same
frequency. A good analogy for a half-duplex system would be a one-lane road with traffic controllers at each end.
Traffic can flow in both directions, but only one direction at a time, regulated by the traffic controllers.

A full-duplex, or sometimes double-duplex, system allows communication in both directions and, unlike half-duplex,
allows this to happen simultaneously. Land-line telephone networks are full-duplex, since they allow both callers to
speak and be heard at the same time. A good analogy for a full-duplex system would be a two-lane road with one
lane for each direction. Examples: Telephone, Mobile Phone, etc. Two-way radios can be, for instance, designed as
full-duplex systems, which transmit on one frequency and receive on a different frequency. This is also called
frequency-division duplex. Frequency-division duplex systems can be extended to farther distances using pairs of
simple repeater stations, because the communications transmitted on any one frequency always travel in the same
direction.
Full-duplex Ethernet connections work by making simultaneous use of two physical pairs of twisted cable (which
are inside the jacket), wherein one pair is used for receiving packets and one pair is used for sending packets (two
pairs per direction for some types of Ethernet), to a directly-connected device. This effectively makes the cable itself
a collision-free environment and doubles the maximum data capacity that can be supported by the connection. There
are several benefits to using full-duplex over half-duplex. First, time is not wasted, since no frames need to be
retransmitted, as there are no collisions. Second, the full data capacity is available in both directions because the
send and receive functions are separated. Third, stations (or nodes) do not have to wait until others complete their
transmission, since there is only one transmitter for each twisted pair.

A switching hub is a special type of hub that forwards packets to the appropriate port based on the packet's address.
Conventional hubs simply rebroadcast every packet to every port. Since switching hubs forward each packet only to
the required port, they provide much better performance. Most switching hubs also support load balancing, so that
ports are dynamically reassigned to different LAN segments based on traffic patterns. Some newer switching hubs
support both traditional Ethernet (10 Mbps) and Fast Ethernet (100 Mbps) ports. This enables the administrator to
establish a dedicated, Fast Ethernet channel for high-traffic devices such as servers.
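The forwarding behaviour described above can be sketched as a small address-learning table; the class and sample addresses are illustrative, not any particular vendor's implementation:

```python
# Minimal sketch of how a switching hub forwards frames by address,
# learning which port each source address lives on as traffic arrives.
class SwitchingHub:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # source address -> port it was seen on

    def receive(self, in_port, src, dst):
        """Return the list of ports the frame is forwarded to."""
        self.mac_table[src] = in_port          # learn the sender's port
        if dst in self.mac_table:              # known destination:
            return [self.mac_table[dst]]       # forward to that port only
        # unknown destination: flood, like a conventional hub would
        return [p for p in range(self.num_ports) if p != in_port]

hub = SwitchingHub(4)
print(hub.receive(0, "A", "B"))   # B not learned yet -> flood: [1, 2, 3]
print(hub.receive(1, "B", "A"))   # A was learned on port 0 -> [0]
```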
(iii) What is the purpose of OSPF? (4 Marks)
Open Shortest Path First (OSPF) is a dynamic routing protocol for use in Internet Protocol (IP) networks.
Specifically, it is a link-state routing protocol and falls into the group of interior gateway protocols, operating within
a single autonomous system (AS).
OSPF was the first widely deployed routing protocol that could converge a network in a few seconds and guarantee loop-free paths. It has more features than IS-IS for imposing policies on route propagation: keeping routes local where appropriate, load sharing, and selective route importing. IS-IS, in contrast, can be tuned for lower overhead in a stable network, the sort more common in ISP than in enterprise networks. Historical accidents made IS-IS the preferred IGP for ISPs, but ISPs today may well choose to use the features of the now-efficient implementations of OSPF, after first considering the pros and cons of IS-IS in service-provider environments.
OSPF can provide better load-sharing on external links than other IGPs. When the default route to an ISP is injected into OSPF from multiple ASBRs as a Type I external route with the same external cost specified, each router will send traffic to the ASBR with the least path cost from its location. This can be tuned further by adjusting the external cost. In contrast, if the default routes from different ISPs are injected with different external costs as Type II external routes, the lower-cost default becomes the primary exit and the higher-cost one serves only as a backup.

Question 5: (i) What are the two types of ISDN services? Describe in detail. (10 Marks)

Integrated Services Digital Network (ISDN) is a set of communications standards for simultaneous digital
transmission of voice, video, data, and other network services over the traditional circuits of the public switched
telephone network. The key feature of ISDN is that it integrates speech and data on the same lines, adding features
that were not available in the classic telephone system. There are several kinds of access interfaces to ISDN defined
as Basic Rate Interface (BRI), Primary Rate Interface (PRI) and Broadband ISDN (B-ISDN).

Basic Rate Interface: The entry-level interface to ISDN is the Basic Rate Interface (BRI), a 144 kbit/s service
delivered over a pair of standard telephone copper wires. The 144 kbit/s rate is broken down into two 64 kbit/s
bearer channels ('B' channels) and one 16 kbit/s signalling channel ('D' channel or delta channel), leaving
128 kbit/s of usable bearer capacity.
BRI is sometimes referred to as 2B+D.
The interface specifies the following network interfaces:
• The U interface is a two-wire interface between the exchange and a network terminating unit, which is
usually the demarcation point in non-North American networks.
• The T interface is a serial interface between a computing device and a terminal adapter, which is the digital
equivalent of a modem.
• The S interface is a four-wire bus that ISDN consumer devices plug into; the S & T reference points are
commonly implemented as a single interface labelled 'S/T' on an NT1.
• The R interface defines the point between a non-ISDN device and a terminal adapter (TA) which provides
translation to and from such a device.
BRI-ISDN is very popular in Europe but is much less common in North America. It is also common in Japan,
where it is known as INS64.
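The 2B+D arithmetic can be checked directly from the channel rates quoted above:

```python
# BRI channel arithmetic (2B+D), using the rates given in the text.
B_CHANNEL_KBPS = 64   # each bearer channel
D_CHANNEL_KBPS = 16   # signalling (delta) channel

total_kbps = 2 * B_CHANNEL_KBPS + D_CHANNEL_KBPS  # gross interface rate
usable_kbps = 2 * B_CHANNEL_KBPS                  # bearer (user-data) rate

print(total_kbps, usable_kbps)  # 144 128
```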

Primary Rate Interface: ISDN PRI service is used primarily by large organizations with intensive communications
needs. In North America, PRI service is delivered over one or more high-speed DS1 (T1) circuits of 1544 kbit/s
(24 channels): 23 64 kbit/s 'B' channels and one 64 kbit/s 'D' channel for signalling, a configuration commonly
written 23B+D. (Japan uses a similar circuit called a J1.) In most other parts of the world, PRI is carried over an
E1 (2048 kbit/s), which comprises 30 'B' channels of 64 kbit/s, one 'D' channel of 64 kbit/s, and a timing and
alarm channel of 64 kbit/s, giving the 30B+D configuration.
In North America, NFAS allows two or more PRIs to be controlled by a single D channel, and is sometimes called
"23B+D + n*24B". D-channel backup allows for a second D channel in case the primary fails. One popular use of
NFAS is on a T3. PRI-ISDN is popular throughout the world, especially for connection of PSTN circuits to PBXs.
Even though many network professionals use the term "ISDN" to refer to the lower-bandwidth BRI circuit, in North
America by far the majority of ISDN services are in fact PRI circuits serving PBXs.
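The T1 and E1 channel counts above can likewise be verified with simple arithmetic (the 8 kbit/s T1 framing overhead is the standard figure that brings 24 x 64 kbit/s up to the 1544 kbit/s line rate):

```python
# PRI channel arithmetic for the North American (T1, 23B+D) and
# European (E1, 30B+D) configurations described above.
B = D = 64  # kbit/s per channel

t1_payload = 23 * B + D        # 23B+D -> 1536 kbit/s of channels
t1_line_rate = t1_payload + 8  # plus 8 kbit/s framing = 1544 kbit/s
e1_payload = 30 * B + D + 64   # 30B+D plus timing/alarm -> 2048 kbit/s

print(t1_line_rate, e1_payload)  # 1544 2048
```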

(ii) Explain the following concepts: (10 Marks)


Circuit Switching, Packet switching, Frame relay, Cell Relay

Circuit switching is defined as a mechanism applied in telecommunications (mainly in PSTN) whereby the user is
allocated the full use of the communication channel for the duration of the call. That is, if two parties wish to
communicate, the calling party first dials the number of the called party. Once that number is dialed, the
originating exchange finds a path to the terminating exchange, which in turn finds the called party. After the
circuit or channel has been set up, communication takes place, and once the parties are finished the channel is
cleared. This mechanism is referred to as being connection-oriented.
Advantages of Circuit Switching:
• Once the circuit has been set up, communication is fast and essentially error-free.
• It is highly reliable.
Disadvantages:
• Involves a lot of overhead during channel set-up.
• Wastes a lot of bandwidth, especially in speech, where a user is sometimes listening and not talking.
• Channel set-up may take a long time.
To overcome the disadvantages of circuit switching, packet switching was introduced, and instead of dedicating a
channel to only two parties for the duration of the call it routes packets individually as they are available. This
mechanism is referred to as being connectionless.

Packet switching is a digital networking communications method that groups all transmitted data – irrespective of
content, type, or structure – into suitably-sized blocks, called packets. Packet switching features delivery of variable-
bit-rate data streams (sequences of packets) over a shared network. When traversing network adapters, switches,
routers and other network nodes, packets are buffered and queued, resulting in variable delay and throughput
depending on the traffic load in the network. Packet switching contrasts with another principal networking
paradigm, circuit switching, a method which sets up a limited number of dedicated connections of constant bit rate
and constant delay between nodes for exclusive use during the communication session. In case of traffic fees, for
example in cellular communication, circuit switching is characterized by a fee per time unit of connection time, even
when no data is transferred, while packet switching is characterized by a fee per unit of information.
Two major packet switching modes exist: connectionless packet switching, also known as datagram
switching, and connection-oriented packet switching, also known as virtual circuit switching. In the first case
each packet includes complete addressing or routing information. The packets are routed individually, sometimes
resulting in different paths and out-of-order delivery. In the second case a connection is defined and pre-allocated in
each involved node before any packet is transferred. The packets include a connection identifier rather than address
information, and are delivered in order.
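The difference between the two modes can be sketched by what each packet has to carry; the classes, addresses, and payloads below are invented for illustration:

```python
# Datagram mode: every packet carries complete addressing information
# and is routed independently.
def make_datagram(src, dst, payload):
    return {"src": src, "dst": dst, "data": payload}

# Virtual-circuit mode: a connection is set up first and assigned a short
# identifier; subsequent packets carry only that identifier.
class VirtualCircuitSwitch:
    def __init__(self):
        self.next_vc = 0
        self.vc_table = {}  # vc id -> (src, dst), fixed at setup time

    def setup(self, src, dst):
        vc = self.next_vc
        self.next_vc += 1
        self.vc_table[vc] = (src, dst)
        return vc

    def make_packet(self, vc, payload):
        return {"vc": vc, "data": payload}  # no addresses needed

dg = make_datagram("10.0.0.1", "10.0.0.2", b"hello")
switch = VirtualCircuitSwitch()
vc = switch.setup("10.0.0.1", "10.0.0.2")
pkt = switch.make_packet(vc, b"hello")
print(sorted(dg), sorted(pkt))  # ['data', 'dst', 'src'] ['data', 'vc']
```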

Frame Relay is a protocol standard for LAN internetworking which provides a fast and efficient method of
transmitting information from a user device to LAN bridges and routers.
The Frame Relay protocol uses a frame structure similar to that of LAPD, except that the frame header is replaced
by a 2-byte Frame Relay header field. The Frame Relay header contains the user-specified DLCI field, which is the
destination address of the frame. It also contains congestion and status signals which the network sends to the user.
Advantages of Frame Relay
Frame Relay offers an attractive alternative to both dedicated lines and X.25 networks for connecting LANs to
bridges and routers. The success of the Frame Relay protocol is based on the following two underlying factors:

• Because virtual circuits consume bandwidth only when they transport data, many virtual circuits can exist
simultaneously across a given transmission line. In addition, each device can use more of the bandwidth as
necessary, and thus operate at higher speeds.
• The improved reliability of communication lines and increased error-handling sophistication at end stations
allows the Frame Relay protocol to discard erroneous frames and thus eliminate time-consuming error-
handling processing.
These two factors make Frame Relay a desirable choice for data transmission; however, they also necessitate testing
to determine that the system works properly and that data is not lost.
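The 2-byte header described above can be sketched as a parser. The bit positions follow the standard Q.922 layout (upper 6 DLCI bits plus C/R and EA in the first byte; lower 4 DLCI bits plus the FECN/BECN/DE congestion and discard signals in the second); the sample bytes are made up:

```python
# Parse a 2-byte Frame Relay (Q.922) header into its DLCI address and
# congestion/status bits.
def parse_fr_header(hdr):
    b0, b1 = hdr[0], hdr[1]
    return {
        "dlci": ((b0 >> 2) << 4) | (b1 >> 4),  # 10-bit destination address
        "fecn": (b1 >> 3) & 1,  # forward explicit congestion notification
        "becn": (b1 >> 2) & 1,  # backward explicit congestion notification
        "de":   (b1 >> 1) & 1,  # discard eligibility
    }

# DLCI 16 with no congestion bits set: upper 6 DLCI bits = 1 -> b0 = 0x04,
# lower 4 DLCI bits = 0 and EA = 1 -> b1 = 0x01.
print(parse_fr_header(bytes([0x04, 0x01])))
# {'dlci': 16, 'fecn': 0, 'becn': 0, 'de': 0}
```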

Cell relay is the communication of information in short, fixed length cells. A concept in which data is sent over a
network in relatively small, fixed-size packets, or cells (ATM is based on cell relay). This contrasts with networks
based on frames, which may be chunks of data of variable lengths. The frame relay specifications, for instance, set a
maximum frame length of about 4,000 octets. This can cause problems with transmissions where delay might cause
distortion or even dropped information at the receiving end, for example, in voice calls.
In a cell relay network, if the cells are sufficiently small, then so are the odds that one cell will occupy the
transmission facilities long enough to cause an unacceptable delay. With cell relay it's important to note that the
cells may be of any size as long as all of the cells within that closed system are the same size. And the slower the
transmission speed, the smaller the optimal cell size. Using relatively short cells makes it easier to mix data, which
is not sensitive to minor delays, with highly delay-sensitive voice. Cell relay transmission rates usually are between
56 kbit/s and several gigabits per second. ATM, a particularly popular form of cell relay, is most commonly used for
home DSL connections, which often run between 128 kbit/s and 1.544 Mbit/s (DS1), and for high-speed backbone
connections (OC-3 and faster).
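Segmentation into fixed-size cells can be sketched as follows, using ATM's standard 48-byte cell payload (an ATM cell is 53 bytes: a 5-byte header plus 48 bytes of payload); the sample data is arbitrary:

```python
# Sketch of cell-relay segmentation: chop a variable-length message into
# fixed 48-byte payloads, padding the final cell so every cell is the
# same size, as the text requires.
CELL_PAYLOAD = 48

def segment(data):
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        chunk = data[i:i + CELL_PAYLOAD]
        chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")  # pad the last cell
        cells.append(chunk)
    return cells

cells = segment(b"x" * 100)  # 100 bytes -> 3 fixed-size cells
print(len(cells), all(len(c) == CELL_PAYLOAD for c in cells))  # 3 True
```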
