
E1 Standard

E1 Line Signal

E1 (or E-1) is a European digital transmission format devised by the ITU-T and named by the
European Conference of Postal and Telecommunications Administrations (CEPT). It is the equivalent of
the North American T-carrier system format. E2 through E5 are carriers in increasing multiples of the
E1 format. The E1 signal format carries data at a rate of 2.048 million bits per second and can carry 32
channels of 64 kbps each. E1 carries data at a somewhat higher rate than T-1 (which carries 1.544
million bits per second) because, unlike T-1, it does not do bit-robbing and all eight bits per channel are
used to code the signal. E1 and T-1 can be interconnected for international use. E2 (E-2) is a line that
carries four multiplexed E1 signals with a data rate of 8.448 million bits per second. E3 (E-3) carries 16
E1 signals with a data rate of 34.368 million bits per second. E4 (E-4) carries four E3 channels with a
data rate of 139.264 million bits per second. E5 (E-5) carries four E4 channels with a data rate of
565.148 million bits per second.
An E1 link operates over two separate sets of wires, usually twisted pair cable. A nominal 3
Volt peak signal is encoded with pulses using a method that avoids long periods without polarity
changes. The line data rate is 2.048 Mbit/s (full duplex, i.e. 2.048 Mbit/s downstream and 2.048 Mbit/s
upstream) which is split into 32 timeslots, each being allocated 8 bits in turn. Thus each timeslot sends
and receives an 8-bit PCM sample, usually encoded according to the A-law algorithm, 8000 times per
second (8 x 8000 x 32 = 2,048,000). This is ideal for voice telephone calls, where the voice is sampled
into an 8-bit number at that rate and reconstructed at the other end. The timeslots are numbered
from 0 to 31. One timeslot (TS0) is reserved for framing purposes and, in alternate frames, transmits a fixed
pattern. This allows the receiver to lock onto the start of each frame and match up each channel in turn.
The standards allow for a full Cyclic Redundancy Check to be performed across all bits transmitted in
each frame, to detect if the circuit is losing bits (information), but this is not always used.
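The rate arithmetic quoted above can be checked directly. The short Python snippet below is illustrative only; it simply recomputes the line rate and the per-timeslot rate from the figures given in the text.

```python
# Illustrative check of the E1 rate arithmetic given above.
BITS_PER_TIMESLOT = 8
TIMESLOTS_PER_FRAME = 32
FRAMES_PER_SECOND = 8000

line_rate = BITS_PER_TIMESLOT * TIMESLOTS_PER_FRAME * FRAMES_PER_SECOND
timeslot_rate = BITS_PER_TIMESLOT * FRAMES_PER_SECOND

print(f"E1 line rate: {line_rate} bit/s")           # 2,048,000 bit/s
print(f"Per-timeslot rate: {timeslot_rate} bit/s")  # 64,000 bit/s
```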
The E1 line signal is coded using the High-Density Bipolar 3 (HDB3) coding
rules. The HDB3 coding format is an improvement of the alternate mark inversion
(AMI) code. In the AMI format, “ones” are alternately transmitted as positive and
negative pulses, whereas “zeros” are transmitted as a zero voltage level. The AMI
format cannot transmit long strings of “zeros”, because such strings do not carry
timing information.
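To make the line-coding rules above concrete, here is a small Python sketch (a simplified illustration, not production E1 line-interface code) that applies the AMI alternating-polarity rule and the HDB3 substitution of every run of four zeros with 000V or B00V, so that successive violation pulses alternate polarity.

```python
def hdb3_encode(bits):
    """Simplified HDB3 encoder sketch.

    AMI rule: ones alternate +1/-1, zeros are sent as 0.
    HDB3 rule: every run of four zeros is replaced by 000V or B00V,
    where V is a 'violation' pulse (same polarity as the previous pulse)
    and B is a normal balancing pulse. B00V is chosen when the number of
    ones since the last violation is even, so violations alternate polarity.
    """
    out = []
    last_pulse = -1            # polarity of the most recent non-zero pulse
    ones_since_violation = 0   # parity decides 000V vs B00V
    zero_run = 0

    for bit in bits:
        if bit == 1:
            last_pulse = -last_pulse
            out.append(last_pulse)
            ones_since_violation += 1
            zero_run = 0
        else:
            out.append(0)
            zero_run += 1
            if zero_run == 4:
                if ones_since_violation % 2 == 0:
                    # B00V: balancing pulse, two zeros, violation
                    last_pulse = -last_pulse
                    out[-4:] = [last_pulse, 0, 0, last_pulse]
                else:
                    # 000V: violation repeats the previous polarity
                    out[-4:] = [0, 0, 0, last_pulse]
                ones_since_violation = 0
                zero_run = 0
    return out

print(hdb3_encode([1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]))
# [1, 0, 0, 0, 1, -1, 0, 0, -1, 1, -1]
```

The substituted runs still contain pulses, which is how HDB3 preserves the timing information that plain AMI loses on long strings of zeros.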

E1 Channels
The E1 signal is composed of three channels, called A, B, and C. E1-A (meaning the A channel within
E1) is a restricted-access signal; its ranging codes and navigation data are encrypted. The data signal is
E1-B and the data-free signal is E1-C. A data-free signal is also called a pilot signal: it is made of a
ranging code only, not modulated by a navigation data stream. The E1 signal has a code length of 4092
chips with a 1.023 MHz chipping rate, giving it a repetition period of 4 ms; on the pilot signal a secondary code of
length 25 chips extends the repetition interval to 100 ms.

E1 Signal Structure
The E1 line operates at a nominal rate of 2.048 Mbps. The data transferred over the E1 line is organized
in frames. Each E1 frame includes 256 bits. The 256 bits are arranged in 32 timeslots of eight bits each
that carry the data payload. The frame repetition rate is 8,000 per second, and therefore the data rate
supported by each timeslot is 64 kbps. The number of timeslots available for user data is at most 31,
because timeslot 0 is always reserved.

Timeslot 0 is used for two main purposes:


• Delineation of frame boundaries: For this purpose, in every second frame timeslot 0 carries a
fixed pattern, called the frame alignment signal (FAS). Frames carrying the FAS are defined as even
frames, as they are assigned the numbers 0, 2, 4, etc. when larger structures (multiframes) are used.
The receiving equipment searches for this fixed pattern in the data stream using a special algorithm,
a process called frame synchronization. Once this process is successfully completed, the equipment
can identify each bit in the received frames.
• Transmission of housekeeping information: In every frame without FAS (odd frames), timeslot 0
carries housekeeping information. This information includes (a short parsing sketch follows this list):
  - Bit 1: the international (I) bit. Its main use is for error detection using the optional CRC-4 function.
  - Bit 2: always set to 1, a fact used by the frame alignment algorithm.
  - Bit 3: used as a remote alarm indication (RAI), to notify the equipment at the other end that the
    local equipment has lost frame alignment or does not receive an input signal.
  - The other bits, identified as Sa4 through Sa8, are designated national bits and are available to the
    users, provided agreement is reached as to their use.
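As a rough illustration of this bit layout, the following Python sketch (illustrative only; it assumes Bit 1 is the most significant bit of the octet, matching the Bit 1..Bit 8 numbering above) unpacks a non-FAS timeslot 0 octet into the I, RAI and Sa4-Sa8 bits.

```python
def parse_nfas_octet(octet):
    """Split a non-FAS timeslot-0 octet into its housekeeping bits.

    Bit 1 is taken as the most significant bit of the octet, matching the
    Bit 1..Bit 8 numbering used in the text (an assumption for this sketch).
    """
    bits = [(octet >> (7 - i)) & 1 for i in range(8)]  # bits[0] is Bit 1
    return {
        "I": bits[0],            # international bit (CRC-4 use)
        "fixed_1": bits[1],      # Bit 2, always 1 in a non-FAS octet
        "RAI": bits[2],          # remote alarm indication (A bit)
        "Sa4..Sa8": bits[3:8],   # national bits
    }

print(parse_nfas_octet(0b01011111))  # example: RAI not set, Sa bits all 1
```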

E1 Alarm Conditions
• Excessive bit error rate: The bit error rate is measured on the frame alignment signal. The
alarm threshold is an error rate higher than 10⁻³ that persists for 4 to 5 seconds. The alarm
condition is canceled when the error rate decreases below 10⁻⁴ for 4 to 5 consecutive seconds.
• Loss of frame alignment (also called loss of synchronization): This condition is declared
when too many errors are detected in the frame alignment signal (FAS), e.g., when 3 or 4 FAS
errors are detected in the last 5 frames. Loss of frame alignment is cleared after no FAS errors are
detected in two consecutive frames. The loss of frame alignment is reported by means of the A bit (a simple detection sketch follows this list).
• Loss of multiframe alignment (applicable only when 256S multiframes are used): This condition is
declared when too many errors are detected in the multiframe alignment signal (MAS), as for loss
of frame alignment. The loss of multiframe alignment is reported by means of the Y bit.
• Alarm indication signal (AIS): The AIS signal is an unframed "all-ones" signal,
and is used to maintain line signal synchronization in case of loss of input signal, e.g., because an
alarm condition occurred in the equipment that supplies the line signal. Note that equipment
receiving an AIS signal loses frame synchronization.
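As a rough illustration of how loss of frame alignment might be declared and cleared, here is a minimal Python sketch assuming the example thresholds quoted above (3 FAS errors within the last 5 FAS frames to declare the alarm, 2 consecutive error-free FAS frames to clear it); real equipment follows the exact criteria of the relevant ITU-T recommendations.

```python
# Minimal sketch of a loss-of-frame-alignment detector, using the example
# thresholds quoted in the text (3 FAS errors within the last 5 FAS frames
# to declare the alarm, 2 consecutive error-free FAS frames to clear it).
from collections import deque

class FrameAlignmentMonitor:
    def __init__(self, window=5, declare_errors=3, clear_ok=2):
        self.recent = deque(maxlen=window)  # history of FAS check results
        self.declare_errors = declare_errors
        self.clear_ok = clear_ok
        self.consecutive_ok = 0
        self.loss_of_alignment = False

    def on_fas_frame(self, fas_ok):
        """Feed the result of one FAS comparison (True = pattern matched)."""
        self.recent.append(fas_ok)
        if not self.loss_of_alignment:
            if self.recent.count(False) >= self.declare_errors:
                self.loss_of_alignment = True
                self.consecutive_ok = 0
        else:
            self.consecutive_ok = self.consecutive_ok + 1 if fas_ok else 0
            if self.consecutive_ok >= self.clear_ok:
                self.loss_of_alignment = False
                self.recent.clear()
        return self.loss_of_alignment

mon = FrameAlignmentMonitor()
for ok in [True, False, True, False, False, True, True]:
    print(mon.on_fas_frame(ok))   # alarm declared on the 5th frame, cleared after two good frames
```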

DSLAM

A DSLAM (Digital Subscriber Line Access Multiplexer) is a network device, usually at a telephone
company central office, that receives signals from multiple customer Digital Subscriber Line (DSL)
connections and puts the signals on a high-speed backbone line using multiplexing techniques.
Depending on the product, DSLAM multiplexers connect DSL lines with some combination of
asynchronous transfer mode (ATM), frame relay, or Internet Protocol networks. A DSLAM enables a
phone company to offer business or home users the fastest phone-line technology (DSL) with the fastest
backbone network technology (ATM).

DSL Connectivity diagram

The DSLAM equipment at the telephone company (Telco) collects the data from its many modem ports
and aggregates their voice and data traffic into one complex composite "signal" via multiplexing.
Depending on its device architecture and setup, a DSLAM aggregates the DSL lines over its
Asynchronous Transfer Mode (ATM), frame relay, and/or Internet Protocol network (i.e., an IP-
DSLAM using a PTM-TC [Packet Transfer Mode - Transmission Convergence] protocol stack). The
aggregated traffic is then directed to the Telco's backbone switch via an access network (AN), also called
a Network Service Provider (NSP), at data rates of up to 10 Gbit/s.
A DSLAM may or may not be located in the telephone company's central office, and may
also serve multiple data and voice customers within a neighborhood Serving Area Interface (SAI),
sometimes in conjunction with a digital loop carrier. DSLAMs are also used by hotels, lodges,
residential neighborhoods, and other businesses operating their own private telephone exchange.

The DSLAM acts like a network switch since its functionality is at Layer 2 of the OSI model. Therefore
it cannot re-route traffic between multiple IP networks, only between ISP devices and end-user
connection points. The DSLAM traffic is switched to a Broadband Remote Access Server where the end
user traffic is then routed across the ISP network to the Internet. Customer Premises Equipment that
interfaces well with the DSLAM to which it is connected may take advantage of enhanced telephone
voice and data line signaling features and the bandwidth monitoring and compensation capabilities it
supports.
In addition to being a data switch and multiplexer, a DSLAM is also a large collection
of modems. Each modem on the aggregation card communicates with a single subscriber's DSL modem.
This modem functionality is integrated into the DSLAM itself instead of being done via an external
device like a traditional computer modem. Like traditional voice-band modems, a DSLAM's integrated
DSL modems usually have the ability to probe the line and to adjust themselves to electronically or
digitally compensate for forward echoes and other bandwidth-limiting factors in order to move data at
the maximum connection rate capability of the subscriber's physical line. This compensation capability
also takes advantage of the better performance of "balanced line" DSL connections, providing
capabilities for LAN segments longer than physically similar unshielded twisted pair (UTP) Ethernet
connections, since the balanced line type is generally required for its hardware to function correctly.
This is due to the nominal line impedance (measured in ohms but comprising both resistance and
inductance) of balanced lines being somewhat lower than that of UTP, thus supporting 'weaker' signals
(although the solid-state electronics required to construct such digital interfaces are more costly).

DSLAM

Types of DSLAM

ATM DSLAM
ATM is a high-speed networking standard designed to support both voice and data communications.
ATM is normally utilized by Internet service providers on their private long-distance networks. ATM
operates at the data link layer (Layer 2 in the OSI model) over either fiber or twisted-pair cable. ATM
differs from more common data link technologies like Ethernet in several ways. For example, ATM
utilizes no routing. Hardware devices known as ATM switches establish point-to-point connections
between endpoints and data flows directly from source to destination. Additionally, instead of using
variable-length packets as Ethernet does, ATM utilizes fixed-size cells. Each ATM cell is 53 bytes
long, comprising 48 bytes of data and 5 bytes of header information.
The performance of ATM is often expressed in the form of OC (Optical Carrier) levels, written as "OC-
xxx." Performance levels as high as 10 Gbps (OC-192) are technically feasible with ATM. More
common performance levels for ATM are 155 Mbps (OC-3) and 622 Mbps (OC-12). ATM technology is
designed to improve utilization and quality of service (QoS) on high-traffic networks. Without routing
and with fixed-size cells, networks can much more easily manage bandwidth under ATM than under
Ethernet, for example. The high cost of ATM relative to Ethernet is one factor that has limited its
adoption to "backbone" and other high-performance, specialized networks.

ATM Communication
IP DSLAM

An IP DSLAM takes Internet Protocol (IP) traffic and extracts it so it can join the provider's IP network.
Most network traffic is now IP; even telephone conversations are digitized, encoded into IP, and sent
across the network. An IP DSLAM is a necessary part of this process, as it sorts the traffic coming from
users and sends it on its way. It can be thought of as the on-ramp to the IP highway.
Currently there are two main types of core network in the US: the older network, a mixture of analog
and digital technologies that carries most of the voice traffic, and the newer IP network. The older
network is a hybrid of old analog equipment coupled with newer digital equipment, all sharing the same
space. This network has been around for years, and is upgraded as money and technology allow.
Many carriers are building brand new networks based entirely on IP. These consist of large optical fibers
and routers that can carry hundreds of gigabytes of data every second, allowing traffic to be carried
across them quickly. The majority of traffic is digital, so these networks can be used to carry most of the
traffic as old networks are phased out. Digital traffic uses less space than analog, which
means that the network can cope with more users and more traffic. An IP DSLAM is an important part
of this process, as it takes digital signals early in the process and allows the same network to carry more
traffic, helping the carrier make more money.
As an analogy, say someone drove to work one morning during rush hour when the highway was busy,
and the car got stuck in traffic. It takes a while to get through, but eventually the person arrives at work.
The next day everybody rides a motorcycle instead of taking a car. Motorcycles take up less space, so
the highway can cope with more of them than cars, and everybody gets to their destination quicker.
The car is traditional analog traffic, and the motorcycle is IP or digital traffic.
Traditionally, a DSLAM would pass the IP traffic to the core network, where it would be extracted and
passed on to its destination. This meant every carrier needed many DSLAMs to cope with the demands
of their users. An IP DSLAM extracts the IP traffic at the first telephone exchange. As IP traffic takes up
less space than other traffic, each DSLAM can cope with more users. More users using less equipment
means more savings for the carrier, while the user gets a faster connection. The Internet Protocol Suite
(commonly known as TCP/IP) is the set of communications protocols used for the Internet and other
similar networks. It is named from two of the most important protocols in it: the Transmission Control
Protocol (TCP) and the Internet Protocol (IP), which were the first two networking protocols defined in
this standard. Today's IP networking represents a synthesis of several developments that began to evolve

in the 1960s and 1970s, namely the Internet and LANs (Local Area Networks), which emerged in the
mid- to late-1980s, together with the advent of the World Wide Web in the early 1990s.
The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer
solves a set of problems involving the transmission of data, and provides a well-defined service to the
upper layer protocols based on using services from some lower layers. Upper layers are logically closer
to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms
that can eventually be physically transmitted.
The Internet Protocol Suite resulted from research and development conducted by the Defense
Advanced Research Projects Agency (DARPA) in the early 1970s. After initiating the pioneering
ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972,
Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on
both satellite packet networks and ground-based radio packet networks, and recognized the value of
being able to communicate across both. In the spring of 1973, Vinton Cerf, the developer of the existing
ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture
interconnection models with the goal of designing the next protocol generation for the ARPANET.
By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the
differences between network protocols were hidden by using a common internetwork protocol, and,
instead of the network being responsible for reliability, as in the ARPANET, the hosts became
responsible. Cerf credits Hubert Zimmerman and Louis Pouzin, designer of the CYCLADES network,
with important influences on this design.
The design of the network included the recognition that it should provide only the functions of
efficiently transmitting and routing traffic between end nodes and that all other intelligence should be
located at the edge of the network, in the end nodes. Using a simple design, it became possible to
connect almost any network to the ARPANET, irrespective of their local characteristics, thereby solving
Kahn's initial problem. One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's
work, will run over "two tin cans and a string." A computer called a router (a name changed from
gateway to avoid confusion with other types of gateways) is provided with an interface to each network,
and forwards packets back and forth between them. Requirements for routers are defined in (Request for
Comments 1812).
The idea was worked out in more detailed form by Cerf's networking research group at Stanford in the
1973–74 period, resulting in the first TCP specification (RFC 675). (The early
networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of
which existed around the same period of time, was also a significant technical influence; people moved
between the two.)
DARPA then contracted with BBN Technologies, Stanford University, and University College
London to develop operational versions of the protocol on different hardware platforms. Four versions
were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability
with TCP/IP v4 — the standard protocol still in use on the Internet today.
In 1975, a two-network TCP/IP communications test was performed between Stanford and University
College London (UCL). In November 1977, a three-network TCP/IP test was conducted between sites
in the US, UK, and Norway. Several other TCP/IP prototypes were developed at multiple research
centers between 1978 and 1983. The migration of the ARPANET to TCP/IP was officially completed on
January 1, 1983, when the new protocols were permanently activated.
In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer
networking.

Layers in the Internet Protocol

Encapsulation of application data descending through the protocol stack.

TCP/IP stack operating on two hosts connected via two routers, and the corresponding layers used at each hop.

The TCP/IP suite uses encapsulation to provide abstraction of protocols and services. Such
encapsulation usually is aligned with the division of the protocol suite into layers of general
functionality. In general, an application (the highest level of the model) uses a set of protocols to send its
data down the layers, being further encapsulated at each level.
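As a rough illustration of this layering, the Python sketch below (illustrative only; the "headers" are simplified placeholders, not real protocol formats) wraps an application payload in transport, internet and link headers in the order described above.

```python
# Illustrative sketch of encapsulation: each layer prepends its own header
# to whatever it receives from the layer above. The headers here are
# simplified placeholders, not real TCP/IP header formats.
def encapsulate(app_data: bytes) -> bytes:
    transport_segment = b"TCP|" + app_data          # transport layer
    internet_packet = b"IP|" + transport_segment    # internet layer
    link_frame = b"ETH|" + internet_packet          # link layer
    return link_frame

frame = encapsulate(b"GET / HTTP/1.1")
print(frame)  # b'ETH|IP|TCP|GET / HTTP/1.1'
```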

The following table provides some examples of the protocols grouped in their respective layers:

Application | DNS, TFTP, TLS/SSL, FTP, Gopher, HTTP, IMAP, IRC, NNTP, POP3, SIP, SMTP, SMPP, SNMP, SSH, Telnet, Echo, RTP, PNRP, rlogin, ENRP. (Routing protocols like BGP and RIP, which run over TCP/UDP, may also be considered part of the Internet Layer.)

Transport | TCP, UDP, µTP, DCCP, SCTP, IL, RUDP, RSVP

Internet | IP (IPv4, IPv6), ICMP, IGMP, and ICMPv6. (OSPF for IPv4 was initially considered an IP-layer protocol since it runs per IP subnet, but has been placed on the Link Layer since RFC 2740.)

Link | ARP, RARP, OSPF (IPv4/IPv6), IS-IS, NDP

The Internet Layer is usually directly mapped into the OSI Model's Network Layer, a more general
concept of network functionality. The Transport Layer of the TCP/IP model, sometimes also described
as the host-to-host layer, is mapped to OSI Layer 4 (Transport Layer), sometimes also including aspects
of OSI Layer 5 (Session Layer) functionality. OSI's Application Layer, Presentation Layer, and the
remaining functionality of the Session Layer are collapsed into TCP/IP's Application Layer. The
argument is that these OSI layers do not usually exist as separate processes and protocols in Internet
applications.

Evolution of DSLAMS

Asynchronous Transfer Mode (ATM) represents a relatively recently developed communications
technology designed to overcome the constraints associated with traditional, and for the most part
separate, voice and data networks. ATM has its roots in the work of a CCITT (now known as ITU-T)
study group formed to develop broadband ISDN standards during the mid-1980s. In 1988, a cell
switching technology was chosen as the foundation for broadband ISDN, and in 1991, the ATM Forum
was founded.
The ATM Forum represents an international consortium of public and private equipment vendors, data
communications and telecommunications service providers, consultants, and end users established to
promote the implementation of ATM. To accomplish this goal, the ATM Forum develops standards
with the ITU and other standards organizations.
The first ATM Forum standard was released in 1992. Various ATM Forum working groups are busy
defining additional standards required to enable ATM to provide a communications capability for the
wide range of LAN and WAN transmission schemes it is designed to support. This standardization effort
will probably remain in effect for a considerable period due to the comprehensive design goal of the
technology, which was developed to support voice, data, and video on both local and wide area
networks.
ATM can be considered to represent a unifying technology because it was designed to transport voice,
data, and video (including graphics images) on both local and wide area networks. Until the
development of ATM, networks were normally developed based on the type of data to be transported.
Thus, circuit-switched networks, which included the public switched telephone network and high-speed
digital transmission facilities, were primarily used to transport delay-sensitive information, such as voice
and video. In comparison, on packet-based networks, such as X.25 and Frame Relay, information can
tolerate a degree of delay. Network users can select a networking technology to satisfy a specific
communications application, but most organizations support a mixture of applications. Thus, most
organizations are forced to operate multiple networks, resulting in a degree of inefficiency and
escalating communications costs. By combining the features from both technologies, ATM enables a
single network to support voice, data, and video.
ATM is designed to be scalable, enabling its 53-byte cell to be transported from LAN to LAN via WAN,
as well as for use on public and private wide area networks at a range of operating rates. On LANs,
ATM support is currently offered at 25 and 155Mbps, whereas access to WAN-based ATM carrier

networks can occur at T1 (1.544 Mbps), at T3 (45 Mbps), or via different SONET facilities at data rates
up to 622 Mbps, all based on the transport of 53-byte cells.
The use of a fixed-length cell enables low-cost hardware to be developed to perform required cell
switching based on the contents of the cell header, without requiring more complex and costly software.
Thus, ATM can be considered to represent a unifying technology that will eventually become very
economical to implement when its development expenses are amortized over the growing production
cycle of ATM communications equipment.
Although many organizations merged voice and data through the use of multiplexers onto a common
circuit, this type of merger is typically not end-to-end. For example, traffic from a router connected to a
LAN might be fed into a port on a high-speed multiplexer with another connection to the multiplexer
from the company PBX. Although this type of multiplexing enables a common WAN circuit to be used
for voice and data, it represents an interim and partial solution to the expense associated with operating
separate voice and data networks. In addition, the emergence of multimedia applications requiring the
transmission of video can wreak havoc with existing LANs and WANs due to their requirement for high
bandwidth for short periods. ATM represents an emerging technology designed to provide support for
bandwidth-on-demand applications, such as video, as well as voice and data. A comparison of the key
features associated with each technology can give you an appreciation for ATM technology in
comparison to conventional data communications- and telecommunications-based technology. Table
14.1 compares nine features of data communications and telecommunications networks with those of an
ATM network.
In a data communications environment, the network can range in scope from a token-ring LAN to an
X.25 or Frame Relay WAN. Thus, although some features are common to both LAN and WAN
environments, there is also some variability. In general, a data communications network transports data
by using variable-length packets. Although many WAN protocols are connection-oriented, some are
connectionless. Similarly, many LAN protocols are connectionless, whereas others are connection-oriented.
A DSLAM carries both data and voice; the subscriber cable and the E1 cable are attached to the DSLAM.
There are two ADLA cards and one MEIA card in the DSLAM for control purposes. Because data
communications networks were designed to transport files, records, and screens of data, transmission
delay or latency, if small, does not adversely affect users. In comparison, in a telecommunications
network, a similar amount of latency that is acceptable on a data network could wreak havoc with a
telephone conversation. Recognizing the differences among voice, video, and data transportation, ATM
was designed to adapt to the time sensitivity of different applications. It includes different classes of
service that enable the technology to match delivery to the time sensitivity of the information it
transports.
Comparison of Network Features

Feature | Data Communications | Telecommunications | ATM
Traffic support | Data | Voice | Data, voice, video
Transmission unit | Packet | Frame | Cell
Transmission length | Variable | Fixed | Fixed
Switching type | Packet | Circuit | Cell
Connection type | Connectionless or connection-oriented | Connection-oriented | Connection-oriented
Time sensitivity | None to some | All | Adaptive
Delivery | Best effort | Guaranteed | Defined class or guaranteed
Media and operating rate | Defined by protocol | Defined by class | Scalable
Media access | Shared or dedicated | Dedicated | Dedicated

Thus, ATM provides a mechanism for merging voice, data, and video onto LANs and WANs. You can
gain an appreciation for how ATM accomplishes this by learning about its architecture.

Architecture of DSLAM

ATM is based on the switching of 53-byte cells, in which each cell consists of a 5-byte header and a
payload of 48 bytes of information. Figure 14.1 illustrates the format of the ATM cell, including the
explosion of its 5-byte header to indicate the fields carried in the header.

The 53-byte ATM cell.

The 4-bit Generic Flow Control (GFC) field is used as a mechanism to regulate the flow of traffic in an
ATM network between the network and the user. The use of this field is currently under development.
As we will shortly note, ATM supports two major types of interfaces: the User-to-Network Interface (UNI)
and the Network-to-Network Interface (NNI). When a cell flows from the user to the network or from the network to the
user, it will carry a GFC bit value. However, when it flows within a network or between networks, the
GFC field is not used. Instead of being wasted, its space can be used to expand the length of the Virtual
Path Identifier field.
The 8-bit Virtual Path Identifier (VPI) field represents one half of a two-part connection
identifier used by ATM. This field identifies a virtual path that can represent a group of virtual circuits
transported along the same route. Although the VPI is eight bits long in a UNI cell, the field expands to
12 bits, occupying the space of the Generic Flow Control field, in an NNI cell. It is described in more detail later
in this chapter.

The Virtual Channel Identifier (VCI) is the second half of the two-part connection identifier carried in
the ATM header. The 16-bit VCI field identifies a connection between two ATM stations
communicating with one another for a specific type of application. Multiple virtual channels (VCs) can
be transported within one virtual path. For example, one VC could be used to transport a disk backup
operation, while a second VC is used to transport a TCP/IP-based application. The virtual channel
represents a one-way cell transport facility. Thus, for each of the previously described operations,
another series of VCIs is established from the opposite direction. You can view a virtual channel as an
individual one-way end-to-end circuit, whereas a virtual path that can represent a collection of virtual
channels can be viewed as a network trunk line. After data is within an ATM network, the VPI is used to
route a common group of virtual channels between switches by enabling ATM switches to simply
examine the value of the VPI. Later in this chapter, you will examine the use of the VCI.
The Payload Type Identifier (PTI) field indicates the type of information carried in the 48-byte data
portion of the ATM cell. Currently, this 3-bit field indicates whether payload data represents
management information or user data. Additional PTI field designators have been reserved for future
use.
The 1-bit Cell Loss Priority (CLP) field indicates the relative importance of the cell. If this field bit is set
to 1, the cell can be discarded by a switch experiencing congestion. If the cell cannot be discarded, the
CLP field bit is set to 0.
The last field in the ATM cell header is the 8-bit Header Error Control field. This field represents the
result of an 8-bit Cyclic Redundancy Check (CRC) code, computed only over the ATM cell header. This
field provides the capability for detecting all single-bit errors and certain multiple-bit errors that occur in
the 40-bit ATM cell header.
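Because the header fields sit at fixed bit positions, cell processing can be done with simple bit operations. The Python sketch below is a simplified illustration of unpacking a 5-byte UNI cell header into the fields described above (it does not verify the HEC CRC).

```python
# Simplified sketch: unpack a 5-byte UNI cell header into its fields.
# Field layout follows the description above (GFC 4 bits, VPI 8 bits,
# VCI 16 bits, PTI 3 bits, CLP 1 bit, HEC 8 bits). The HEC CRC itself
# is not checked here.
def parse_uni_header(header: bytes) -> dict:
    assert len(header) == 5
    b0, b1, b2, b3, b4 = header
    return {
        "GFC": b0 >> 4,
        "VPI": ((b0 & 0x0F) << 4) | (b1 >> 4),
        "VCI": ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4),
        "PTI": (b3 >> 1) & 0x07,
        "CLP": b3 & 0x01,
        "HEC": b4,
    }

# Example header with VPI=6, VCI=32, PTI=0, CLP=1, HEC=0 (illustrative values)
print(parse_uni_header(bytes([0x00, 0x60, 0x02, 0x01, 0x00])))
# {'GFC': 0, 'VPI': 6, 'VCI': 32, 'PTI': 0, 'CLP': 1, 'HEC': 0}
```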

Advantages of the Technology

The use of cell-switching technology in a LAN environment provides some distinct advantages over the
shared-medium technology employed by Ethernet, token-ring, and FDDI networks. Two of those
advantages are obtaining full bandwidth access to ATM switches for individual workstations and
enabling attaching devices to operate at different operating rates. Those advantages are illustrated in
Figure 14.2, which shows an ATM switch that could be used to support three distinct operating rates.
Workstations could be connected to the switch at 25Mbps, and a local server could be connected at
155Mbps to other switches either to form a larger local LAN or to connect to a communications carrier's
network via a different operating rate.
The selection of a 53-byte cell length results in a minimum of latency in comparison to
the packet length of traditional LANs, such as Ethernet, which can have a maximum 1526-byte frame
length. Because the ATM cell is always 53 bytes in length, cells transporting voice, data, and video can
be intermixed without the latency of one cell adversely affecting other cells. Because the length of each
cell is fixed and the position of information in each header is known, ATM switching can be
accomplished via the use of hardware. In comparison, on traditional LANs, bridging and routing
functions are normally performed by software or firmware, which executes more slowly than hardware-
based switching.
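To make the latency comparison above concrete, the short calculation below (illustrative, assuming a 155 Mbps link as mentioned earlier in this chapter) compares the time needed to serialize a 53-byte ATM cell against a maximum-length 1526-byte Ethernet frame.

```python
# Illustrative serialization-delay comparison on a 155 Mbps link (an
# assumption for this example): a 53-byte ATM cell vs. a maximum-length
# 1526-byte Ethernet frame.
LINK_RATE_BPS = 155_000_000

def serialization_delay_us(length_bytes, rate_bps=LINK_RATE_BPS):
    return length_bytes * 8 / rate_bps * 1e6  # microseconds

print(f"ATM cell (53 bytes): {serialization_delay_us(53):.2f} us")            # ~2.74 us
print(f"Ethernet frame (1526 bytes): {serialization_delay_us(1526):.2f} us")  # ~78.76 us
```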

ATM is based on the switching of 53-byte cells.

Two additional features of ATM that warrant discussion are its asynchronous operation and its
connection-oriented operation. ATM cells are intermixed via multiplexing, and cells from individual
connections are forwarded from switch to switch via a single-cell flow. However, the multiplexing of
ATM cells occurs via asynchronous transfer, in which cells are transmitted only when data is present to
send. In comparison, in conventional time division multiplexing, keep-alive or synchronization bytes are
transmitted when there is no data to be sent. Concerning the connection-oriented technology used by
ATM, this means that a connection between the ATM stations must be established before data transfer
occurs. The connection process results in the specification of a transmission path between ATM
switches and end stations, enabling the header in ATM cells to be used to route the cells on the required
path through an ATM network.

Cell Routing
The actual routing of ATM cells depends on whether a connection was pre-established or set up
as needed on a demand basis. The pre-established type of connection is referred to as a Permanent
Virtual Connection (PVC), and the other type is referred to as a Switched Virtual Connection (SVC).
Examine the 5-byte ATM cell header shown in Figure 14.1 and note the VCI and VPI fields. The VPI is
8 bits in length, whereas the VCI is 16 bits in length, enabling 256 virtual paths, each of which is
capable of accommodating up to 65,536 (2¹⁶) virtual connections.
By using VPs and VCs, ATM employs a two-level connection identifier that is used in its
routing hierarchy. A VCI value is unique only in a particular VPI value, whereas VPI values are unique
only in particular physical links. The VPI/VCI value assignment has only local significance, and those
values are translated at every switch a cell traverses between endpoints in an ATM network. The actual
establishment of a virtual path is based on ATM's network management and signaling operations.
During the establishment of a virtual path routing table, entries in each switch located between endpoints
map an incoming physical port and a Virtual Path Identifier pair to an outgoing pair. This initial
mapping process is known as network provisioning, and the change of routing table entries is referred to
as network reprovisioning.
Figure illustrates an example of a few possible table entries for a switch, where a virtual path
was established such that VPI=6 on port 1 and VPI=10 on port 8, representing two physical links in the
established connection.

Switch operations based on routing table entries.

Next, examine the entries in the routing table shown in Figure 14.3, and note that the table does not
include values for VCIs. This is by design because a VP in an ATM network can support up to 65,536
VC connections. Thus, only one table entry is required to switch up to 65,536 individual connections if
those connections all follow the same set of physical links in the same sequence. This method of
switching, which is based on the VPI and port number, simplifies the construction and use of routing
tables and facilitates the establishment of a connection through a series of switches. Although VCIs are
not used in routing tables, they are translated at each switch. To help you understand the rationale for
this technique, you must focus on their use. As previously noted, a VCI is unique within a VP and is
used at an endpoint to denote a different connection within a virtual path. Thus, the VPI/VCI pair used
between an endpoint and a switch has a local meaning and is translated at every switch; however, the
VCI is not used for routing between switches.
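A switch's VP routing table of the kind just described can be pictured as a simple mapping from an incoming (port, VPI) pair to an outgoing (port, VPI) pair. The Python sketch below is illustrative only; it uses the example entry from the figure (VPI=6 on port 1 mapped to VPI=10 on port 8), and for simplicity carries the VCI through unchanged, although, as noted above, in practice the VCI value is also translated at each switch.

```python
# Illustrative VP switching: the routing table maps an incoming (port, VPI)
# pair to an outgoing (port, VPI) pair; the VCI is not used for routing
# between switches, so this sketch simply passes it through.
routing_table = {
    (1, 6): (8, 10),   # example entry from the text: VPI=6 on port 1 -> VPI=10 on port 8
}

def switch_cell(in_port, vpi, vci):
    out_port, out_vpi = routing_table[(in_port, vpi)]
    return out_port, out_vpi, vci

print(switch_cell(1, 6, 32))  # (8, 10, 32)
```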
The establishment of a connection between two end stations is known as a Virtual
Channel Connection (VCC). To illustrate the routing of cells in an ATM network based on a VCC,
consider Figure 14.4, which represents a small two-switch–based ATM network. The VCC represents a
series of virtual channel links between two ATM endpoints. In Figure 14.4, one VCC could be
represented by VCI=1, VCI=3, and VCI=5, which collectively form a connection between workstations
at the two endpoints shown in the network. A second VCC could be represented by VCI=2, VCI=4, and
VCI=6. The second VCC could represent the transportation of a second application between the same
pair of endpoints or a new application between different endpoints served by the same pair of ATM
switches.
Connections in an ATM network


As indicated by the previous examples, each VC link consists of one or more physical links between the
location where a VCI is assigned and the location where it is either translated or removed. The
assignment of VCs is the responsibility of switches during the call setup process.

ONU(OPTICAL NETWORK UNIT)

An Optical Network Unit (ONU) converts optical signals transmitted via fiber to electrical
signals. These electrical signals are then sent to individual subscribers. ONUs are commonly used in
fiber-to-the-home (FTTH) or fiber-to-the-curb (FTTC) applications. Silicon Laboratories' proven SLIC
and Power over Ethernet solutions offer a one stop shop for high voltage ONU solutions.

The optical network unit here is used basically for voice purposes only, not for data. It contains
various cards and looks like an almirah (cabinet). The optical network unit (ONU) is the user-side
equipment in GEPON (Gigabit Ethernet Passive Optical Network) systems. It is used with an OLT and
provides users with many kinds of broadband services such as VoIP, HDTV, and videoconferencing.
The ONU is economical, highly efficient equipment and plays an important role in the FTTx fiber optic
network.

The GEPON system is telecom-grade FTTx broadband access equipment intended mainly for telecom
operators and large corporate users. It is characterized by high integration, flexible application, high
stability, easy management, and flexible extension and build-out of the network, and it provides QoS
functions. The optical network unit converts the fiber optic signal into an electrical signal at the user
side and enables reliable fiber optic Ethernet services to business and residential users through
fiber-based network infrastructure.

Types of ONU

ONU 160B
The ONU 160B supports 160 voice customers, i.e., connections can be given to 160 customers. It is used
only for voice, not for data. It contains various cards required for its functioning.

ONU 160B

FO2
It is an extended version of the ONU 160B. A maximum of four ONUs can be cascaded to increase the
capacity of the exchange, so an FO2 can support a maximum of 640 voice customers.

FO2

ONU and FO2 Cards

• AC to DC Card
It is used to convert the AC supply to DC, as the ONU works on a DC supply.
• DC to DC Card
It is used to convert rippled DC to a pure DC supply.
• ASU Card
It is used to give connections to PCO customers.
• ASL Card
It is used to connect the E1 signal to the card.
• A32 Card
It is used for home customers. It can support a maximum of 32 voice customers.
• Alarm Card
This card gives an indication of any fault, such as an E1 drop, by blinking a red signal.

ONU and FO2 Cards

OPTIX (METRO 100)

It is used for coding and decoding of the E1 signal. It operates over a voltage range of -48 V to -60 V.
OPTIX (METRO 100) has an inbuilt converter which converts the optical signal to an electrical signal, and this
signal is given to the modules through the subscriber cable. It contains four E1 ports, where each port can
support a maximum of 100 Mbps of data, so each OPTIX supports 400 Mbps of data. OPTIX is basically
used in Internet leased line connections where the data rate is high. In that case, the E1 is given directly to a modem,
which performs the modulation and demodulation function. Modulation is carried out at the node end, while
demodulation takes place at the client end. It raises an alarm signal in case of a fault such as an E1 drop.

OPTIX (METRO 100)

The OptiX 100 system developed by Huawei is a new generation of STM-1/STM-4 compatible multi-service
transmission equipment. It supports STM-1/STM-4 optical synchronous transmission and on-line upgrade from
STM-1 to STM-4. The OptiX 100 system provides abundant interfaces and powerful cross-connect capability.
Via its SDH interface, the OptiX 100 system can build a transmission network with OptiX
100, OptiX 2500+ (metro3000) and OptiX 10G systems. Via its PDH, ATM, Ethernet and SHDSL (single-pair high-bit-rate
digital subscriber line) interfaces, it can interwork with access network equipment, GSM base stations, ETS base stations,
exchanges and routers to form a communication network.
Through OptiX iManager T2000, a subnetwork-level integrated NMS for the transmission
network, a user can configure, maintain and monitor the equipment and the network. An authorized user can use
the OptiX iManager T2000 system to maintain the whole network from any NE or remote NMS center of
the transmission network.
Features of OPTIX 100

1. Interface

The OptiX 100 system provides abundant service interfaces and auxiliary interfaces.
SDH interfaces

A single OptiX 100 supports STM-1 SDH optical interfaces, STM-1 SDH electrical interfaces, STM-4 SDH
optical interfaces, or a combination of STM-1/STM-4 SDH interfaces.
The SDH electrical interface adopts an SMB coaxial connector, which is suitable for short-haul transmission.
PDH interfaces

The OptiX 100 provides PDH interfaces operating at E1, T1, E3 and T3 rates. A single OptiX 100 can
provide a maximum of 80 E1 interfaces, or 64 T1 interfaces, or 6 E3/T3 interfaces, or combination of
the above PDH interfaces.
ATM service interfaces
The OptiX 100 provides the STM-1 optical interfaces that can access ATM service.
Ethernet service interfaces

The OptiX 100 provides 10M/100M self-adaptive Ethernet electrical interfaces with VC-12 or VC-3 as
the mapping unit, or 100M Ethernet optical interfaces with VC-12 as the mapping unit.
Tone and data interfaces

OptiX 100 provides analog audio interfaces, RS-232 and RS-422 asynchronous data interfaces. These
interface functions enable direct transmission of sub-rate payloads on SDH transport networks. These
sub-rate payloads usually include paging service, storage data service, charging information, power
supply and environment monitoring information, microwave equipment monitoring information and NM
information of other vendors' transmission equipment.
OptiX 100 provides G.SHDSL interfaces whose characteristics comply with the various specifications
defined in ITU-T Recommendation G.991.2, thereby extending the transmission distance of the E1 signal
and N×64 kbit/s signals.
The N64 board of the OptiX 100 provides V.35/V.24/X.21/RS-449/EIA-530 interfaces and Framed E1
interfaces.

Environment monitoring unit interfaces

The OptiX 100 provides primary power voltage monitoring, environment temperature monitoring,
Boolean value signal input, Boolean value signal output, and RS-232 or RS-422 serial communication
interface.
Clock input/output interfaces

The OptiX 100 provides clock input interfaces and clock output interfaces, which can be set to 2MHz or
2Mbit/s mode.
Power input interfaces

The OptiX 100 provides two -48V DC or +24V DC power input interfaces.
Abundant auxiliary interfaces

The OptiX 155/622H provides several data interfaces for the user with its powerful overhead processing
capability:
• two-wire order wire interface, which provides order wire communication of regeneration
section and multiplex section
• RJ-45 Ethernet interface
• user-defined asynchronous RS-232 data interfaces
• Modem interface with X.25 characteristics
• NM interface
For two networks not connected together via optical fiber, inter-network DCC communication can be
established by interconnecting the Ethernet interfaces.

2. Services

The OptiX 155/622H supports the access of PDH signals, SDH signals, ATM service, Ethernet service,
SHDSL service and N×64K (multiple physical interface) service, which supports the V.35/X.21 protocols. In
addition, it supports the hybrid transmission of PDH signals, SDH signals, and ATM, Ethernet and
N×64 kbit/s services within the same equipment.

3. Networking and Protection
Flexible networking capability

With a large-capacity cross-connect matrix, the OptiX 155/622H can provide powerful networking
capability. It supports multiple network topologies such as point-to-point, chain, ring, hub, and mesh
networks.
Ideal protection mechanism

The OptiX 100 provides network protection, including linear 1+1 and 1:N Multiplex Section Protection
(MSP), ring Multiplex Section Protection, ring Path Protection (PP), Subnetwork Connection Protection
(SNCP), shared fiber virtual trail protection, and VP-Ring protection of an ATM ring network.
• Provides 4 ports of 10M/100M self-adaptive Ethernet electrical interfaces
• The minimum bandwidth granularity that can be allocated is VC-3 or VC-12. The
Ethernet bandwidth that can be mapped to the optical path is 1~12 VC-3, or 1~126 VC-12 plus 6 VC-3
• Supports the GFP (Generic Framing Procedure)/LAPS (Link Access Procedure - SDH)/HDLC
(High-level Data Link Control) mapping protocols; the mapping protocol is selectable
• Supports the LCAS (Link Capacity Adjustment Scheme) protocol
• Supports CAR (Committed Access Rate)

MAIN DISTRIBUTION FRAME (MDF)

The MDF provides the physical connection between the client end and the node end. It receives the dial tone
from the head office (Mohali) through optical cable. Each shelf of the MDF consists of four modules, where each
module contains eight pairs of wires; hence one shelf supports 32 customers. In telephony, a main distribution
frame (MDF or main frame) is a signal distribution frame for connecting equipment (inside plant) to
cables and subscriber carrier equipment (outside plant). The MDF is a termination point within the local
telephone exchange where exchange equipment and terminations of local loops are connected by jumper
wires at the MDF. All cable copper pairs supplying services through user telephone lines are terminated
at the MDF and distributed through the MDF to equipment within the local exchange e.g. repeaters and
DSLAM. Cables to intermediate distribution frames terminate at the MDF. Trunk cables may terminate
on the same MDF or on a separate trunk main distribution frame (TMDF).
The most common kind of large MDF is a long steel rack accessible from both sides. On
one side, termination blocks are arranged horizontally at the front of rack shelves. Jumpers lie on the
shelves and go through a steel hoop to run vertically to other termination blocks that are arranged
vertically. There is a hoop or ring at the intersection of each level and each vertical. Installing a jumper
requires two workers, one on each side. The shelves are shallow enough to allow the rings to be within
arm's reach, but the workers prefer to hang the jumper on a hook on a pole so their partner can pull it
through the ring. A fanning strip at the back of the termination block prevents the wires from covering
each other's terminals. With disciplined administration the MDF can hold over a hundred thousand
jumpers, changing dozens of them every day, for decades without tangling.

MDF
MODEM

A modem (modulator-demodulator) is a device that modulates an analog carrier signal to encode digital
information, and also demodulates such a carrier signal to decode the transmitted information. The goal
is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data.
Modems can be used over any means of transmitting analog signals, from light-emitting diodes to radio.
The most familiar example is a voice band modem that turns the digital data of a personal computer into
analog audio signals that can be transmitted over a telephone line, and once received on the other side, a
modem converts the analog data back into digital.

Modems are generally classified by the amount of data they can send in a given time, normally
measured in bits per second (bit/s, or bps). They can also be classified by Baud, the number of times the
modem changes its signal state per second. For example, the ITU V.21 standard used audio frequency-
shift keying, aka tones, to carry 300 bit/s using 300 baud, whereas the original ITU V.22 standard
allowed 1,200 bit/s with 600 baud using phase-shift keying.
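The bit-rate/baud-rate distinction above follows directly from the number of bits carried per signaling change. The short snippet below (illustrative) recomputes the two examples: V.21 at 1 bit per symbol and V.22 at 2 bits per symbol.

```python
# Illustrative: bit rate = baud (symbols per second) x bits per symbol.
def bit_rate(baud, bits_per_symbol):
    return baud * bits_per_symbol

print(bit_rate(300, 1))  # V.21: 300 baud, 1 bit/symbol  -> 300 bit/s
print(bit_rate(600, 2))  # V.22: 600 baud, 2 bits/symbol -> 1200 bit/s
```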
Faster modems are used by Internet users every day, notably cable modems and
ADSL modems. In telecommunications, wide-band radio modems transmit repeating frames of data at
very high data rates over microwave radio links. Narrow-band radio modems are used for low data rates,
up to 19.2 kbit/s, mainly in private radio networks. Some microwave modems transmit more than a hundred
million bits per second. Optical modems transmit data over optical fibers. Most intercontinental data
links now use optical modems transmitting over undersea optical fibers. Optical modems routinely have

data rates in excess of a billion (1×10⁹) bits per second. One kilobit per second (kbit/s, kb/s, or kbps) as
used in this article means 1,000 bits per second and not 1,024 bits per second. For example, a 56k
modem can transfer data at up to 56,000 bit/s (7 kB/s) over the phone line.

Aside from the transmission protocols that they support, the following characteristics distinguish one
modem from another:

• bps: How fast the modem can transmit and receive data. At slow rates, modems are measured in
terms of baud rates. The slowest rate is 300 baud (about 25 cps). At higher speeds, modems are
measured in terms of bits per second (bps). The fastest modems run at 57,600 bps, although they can
achieve even higher data transfer rates by compressing the data. Obviously, the faster the transmission
rate, the faster you can send and receive data. Note, however, that you cannot receive data any faster
than it is being sent. If, for example, the device sending data to your computer is sending it at 2,400 bps,
you must receive it at 2,400 bps. It does not always pay, therefore, to have a very fast modem. In
addition, some telephone lines are unable to transmit data reliably at very high rates.

• voice/data: Many modems support a switch to change between voice and data modes. In data mode,
the modem acts like a regular modem. In voice mode, the modem acts like a regular telephone. Modems
that support a voice/data switch have a built-in loudspeaker and microphone for voice communication.

• auto-answer: An auto-answer modem enables your computer to receive calls in your absence. This
is only necessary if you are offering some type of computer service that people can call in to use.

• data compression: Some modems perform data compression, which enables them to send data at
faster rates. However, the modem at the receiving end must be able to decompress the data using the
same compression technique.

• flash memory: Some modems come with flash memory rather than conventional ROM, which
means that the communications protocols can be easily updated if necessary.

• Fax capability: Most modern modems are fax modems, which means that they can send and receive
faxes.

CDMA Technology

Code Division Multiple Access (CDMA) is a radically new concept in wireless communications. It has
gained widespread international acceptance by cellular radio system operators as an upgrade that will
dramatically increase both their system capacity and the service quality. The majority of the winners of
the United States Personal Communications System spectrum auctions have likewise chosen it for
deployment.
CDMA is a form of spread-spectrum, a family of digital communication techniques that have been used
in military applications for many years. The core principle of spread spectrum is the use of noise-like
carrier waves, and, as the name implies, bandwidths much wider than that required for simple point-to-
point communication at the same data rate. Commercial applications became possible because of two
evolutionary developments. One was the availability of very low cost, high-density digital integrated
circuits, which reduce the size, weight, and cost of the subscriber stations to an acceptably low level.
The other was the realization that optimal multiple access communication requires that all user stations
regulate their transmitter powers to the lowest that will achieve adequate signal quality.
CDMA changes the nature of the subscriber station from a predominately analog device to a
predominately digital device. Old-fashioned radio receivers separate stations or channels by filtering in
the frequency domain. CDMA receivers do not eliminate analog processing entirely, but they separate
communication channels by means of a pseudo-random modulation that is applied and removed in the
digital domain, not on the basis of frequency. Multiple users occupy the same frequency band. This
universal frequency reuse is not fortuitous. On the contrary, it is crucial to the very high spectral
efficiency that is the hallmark of CDMA. Other discussions in these pages show why this is true.
CDMA is altering the face of cellular and PCS communication by:
• Dramatically improving the telephone traffic (Erlang) capacity: capacity increases of 8 to 10 times
that of an AMPS analog system and 4 to 5 times that of a GSM system
• Dramatically improving the voice quality and eliminating the audible effects of multipath fading
• Reducing the incidence of dropped calls due to handoff failures: improved call quality, with better
and more consistent sound as compared to AMPS systems
• Providing a reliable transport mechanism for data communications, such as facsimile and Internet
traffic
• Improving coverage characteristics, allowing for the possibility of fewer cell sites and reducing the
number of sites needed to support any given amount of traffic
• Simplifying site selection
• Enhanced privacy: increased privacy is inherent in CDMA technology. CDMA phone calls will be
secure from the casual eavesdropper since, unlike an analog conversation, a simple radio receiver will

not be able to pick individual digital conversations out of the overall RF radiation in a frequency band.
• Reducing deployment and operating costs because fewer cell sites are needed: simplified system
planning through the use of the same frequency in every sector of every cell
• Reducing average transmitted power, resulting in increased talk time for portables
• Reducing interference to other electronic devices, due to the reduced average transmit power from
the mobile station
• Reducing potential health risks, due to the reduced average transmit power from the mobile station.
In the final stages of the encoding of the radio link from the base station to the mobile, CDMA adds a
special "pseudo-random code" to the signal that repeats itself after a finite amount of time. Base stations
in the system distinguish themselves from each other by transmitting different portions of the code at a
given time. In other words, the base stations transmit time offset versions of the same pseudo-random
code. In order to assure that the time offsets used remain unique from each other, CDMA stations must
remain synchronized to a common time reference. The primary source of the very precise
synchronization signals required by CDMA systems is the Global Positioning System (GPS). GPS is a
radio navigation system based on a constellation of orbiting satellites. Since the GPS system covers the
entire surface of the earth, it provides a readily available method for determining position and time to as
many receivers as are required.
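As a rough sketch of the offset idea only (this uses a toy LFSR code, not the actual IS-95 short code, and the taps, lengths and offsets are made-up values), the following shows how a receiver could identify which base station it is hearing purely from the cyclic offset of a shared pseudo-random sequence:
```python
# Toy illustration of distinguishing base stations by time offsets of one
# shared pseudo-random (PN) sequence. The LFSR taps, seed, length and offset
# below are arbitrary assumptions, not the real IS-95 short code.

def lfsr_sequence(taps, seed, length):
    """Generate `length` chips from a simple Fibonacci LFSR."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
    return out

def offset_version(code, offset):
    """Each base station transmits the same code, cyclically shifted."""
    return code[offset:] + code[:offset]

def identify_offset(received, code):
    """Correlate against every shift and pick the strongest match."""
    def score(shift):
        candidate = offset_version(code, shift)
        return sum(1 if a == b else -1 for a, b in zip(received, candidate))
    return max(range(len(code)), key=score)

code = lfsr_sequence(taps=[0, 3], seed=[1, 0, 0, 1, 1], length=31)
rx = offset_version(code, 7)           # signal heard from the "offset-7" station
print(identify_offset(rx, code))       # prints 7
```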
CDMA cell coverage depends on how the system is designed. In fact, three primary system
characteristics - Coverage, Quality and Capacity - must be balanced against each other to arrive at the
desired level of system performance.
In a CDMA system these three characteristics are tightly inter-related. Higher capacity can be
achieved by accepting some degree of degradation in coverage and/or quality. Since these parameters are all
intertwined, operators cannot have the best of all worlds: three times wider coverage, 40 times the capacity,
and "CD" quality sound. For example, the 13 kbps vocoder provides better sound quality, but reduces
system capacity compared to an 8 kbps vocoder.
CDMA benefits
When implemented in a cellular telephone system, CDMA technology offers numerous benefits to the
cellular operators and their subscribers. The following is an overview of the benefits of CDMA. Each
benefit will be described in detail in the following subsections.
 Capacity increases of 8 to 10 times that of an AMPS analog system and 4 to 5 times that of a
GSM system
 Improved call quality, with better and more consistent sound as compared to AMPS systems
 Simplified system planning through the use of the same frequency in every sector of every cell

 Enhanced privacy
 Improved coverage characteristics, allowing for the possibility of fewer cell sites
 Increased talk time for portables

CDMA Capacity
CDMA offers an answer to the capacity problem. The key to its high capacity is the use of noise-like
carrier waves, as was first suggested decades ago by Claude Shannon. Instead of partitioning either
spectrum or time into disjoint "slots" each user is assigned a different instance of the noise carrier.
While those waveforms are not rigorously orthogonal, they are nearly so. Practical application of this
principle has always used digitally generated pseudo-noise, rather than true thermal noise. The basic
benefits are preserved, and the transmitters and receivers are simplified because large portions can be
implemented using high-density digital devices.
The major benefit of noise-like carriers is that the system sensitivity to interference is fundamentally
altered. Traditional time or frequency slotted systems must be designed with a reuse ratio that satisfies
the worst-case interference scenario, but only a small fraction of the users actually experience that
worst-case. Use of noise-like carriers, with all users occupying the same spectrum, makes the effective
noise the sum of all other-user signals. The receiver correlates its input with the desired noise carrier,
enhancing the signal to noise ratio at the detector. The enhancement overcomes the summed noise
enough to provide an adequate SNR at the detector. Because the interference is summed, the system is
no longer sensitive to worst-case interference, but rather to average interference. Frequency reuse is
universal: multiple users occupy each CDMA carrier frequency, each user uses the entire 1.25 MHz
passband, and that same passband is reused in every cell.
The balance between the required SNR for each user, and the spread spectrum processing gain
determines capacity. The figure of merit of a well-designed digital receiver is the dimensionless
signal-to-noise ratio (SNR)
\[
\frac{E_b}{N_t} = \frac{\text{energy per bit}}{\text{power spectral density of noise + interference}} \qquad (3.1)
\]
The "noise" part of the SNR, Nt, in a spread spectrum system is actually the sum of thermal noise and
the other-user interference and Eb is energy per bit. The SNR needed to achieve a particular error rate
depends on several factors, such as the forward error correction coding used, and the multipath and
fading environment. For the receivers typically used in commercial CDMA it ranges typically from
about 3 dB to 9 dB.
Energy per bit is related to signal power and data rate:
\[
E_b = \frac{P_s}{R} \qquad (3.2)
\]
where Ps is the signal power per subscriber and R is the transmission bit rate. The noise-plus-interference
term, Nt, is a power spectral density. If the spectrum of the signals is roughly rectangular, with a
bandwidth of W, then the noise-plus-interference power spectral density is
\[
N_t = F_N k_B T_o + W^{-1} \sum_{\text{other users}} P_i \qquad (3.3)
\]
where the first term represents the thermal noise level of the receiver (FN is the receiver noise figure, kB is
Boltzmann's constant and To is the noise temperature). Rewriting the SNR equation in terms of the data rate
and the spread-spectrum bandwidth shows where the magic lies:
\[
\left(\frac{E_b}{N_o + I_o}\right)_{j} = \frac{P_j / R}{N_o + W^{-1} \sum_{i \neq j} P_i} \qquad (3.4)
\]
where W is the transmission bandwidth, No is the thermal noise density, and Io is the interference density.
The interference in this equation is the sum of the signals from all users other than the one of interest.
This equation is the key to understanding why CDMA was not explored for use in terrestrial multiple
access systems. It is also the key to the innovation that led to commercial CDMA.
CDMA (and spread spectrum in general) was always dismissed as unworkable in the mobile radio
environment because of what was called the "near-far problem." It was always assumed that all the
stations transmitted constant power. In the mobile radio environment some users may be located
near the base station, others may be located far away. The propagation path loss difference between
those extreme users can be many tens of dB. Suppose, for example that only two users are present,
and that both are transmitting with enough power that the thermal noise is negligible. Then the SNR,
in dB, is
\[
\left(\frac{E_b}{N_t}\right)_{dB} = \left(\frac{W}{R}\right)_{dB} + P_j - P_i \qquad (3.5)
\]
If there is, say, a 30 dB difference between the largest and smallest path losses, then there is a 60 dB
difference between the SNR of the closest user and that of the farthest user, because these are the received
powers. To accommodate the farthest users, the spreading bandwidth would have to be perhaps 40
dB, or 10,000 times, the data rate. If the data rate were 10,000 b/s, then W = 100 MHz.
The spectral efficiency is abysmal, far worse than even the most inefficient FDMA or TDMA
system. Conversely, if a more reasonable bandwidth is chosen, then remote users receive no service.
This observation was, for years, the rationale for not even attempting any sort of spread spectrum in
any but geo-synchronous satellite environments, where the path loss spread was relatively small.
The key to the high capacity of commercial CDMA is extremely simple: If, rather than using
constant power, the transmitters can be controlled in such a way that the received powers from all
users are roughly equal, then the benefits of spreading are realized.
If the received power is controlled, then the subscribers can occupy the same spectrum, and the
hoped-for benefits of interference averaging accrue.
Assuming perfect power control, the noise plus interference is now
\[
N_o + I_o = N_o + (N-1)\frac{P_s}{W}, \qquad N_o = F_N k_B T_o \qquad (3.6)
\]
where N is the total number of users. The SNR becomes
\[
\frac{E_b}{N_o + I_o} = \frac{P_s / R}{N_o + (N-1) P_s / W} = \frac{W/R}{N_o W / P_s + N - 1} \qquad (3.7)
\]
Maximum capacity is achieved if we adjust the power control so that the SNR is exactly what it
needs to be for an acceptable error rate. If we set the left-hand side of (3.7) to that target SNR and
solve for N, we find the basic capacity equation for CDMA:
\[
N - 1 = \frac{W/R}{\left(E_b/(N_o + I_o)\right)_{\text{target}}} - \frac{N_o W}{P_s}
\;\xrightarrow{\;P_s \to \infty\;}\;
\frac{W/R}{\left(E_b/(N_o + I_o)\right)_{\text{target}}} \qquad (3.8)
\]
Using the numbers for IS-95A CDMA with the 9.6 kbps rate set, we find
\[
N \approx \left(\frac{W}{R}\right)_{dB} - \left(\frac{E_b}{N_o + I_o}\right)_{\text{target},\,dB} \approx 21.1 - 6\ \text{dB} = 15.1\ \text{dB} \qquad (3.9)
\]
or about N=32. The target SNR of 6 dB is a nominal estimate. Once power control is available, the
system designer and operator have the freedom to trade quality of service for capacity by adjusting
the SNR target. Note that capacity and SNR are reciprocal: 3 dB improvement in SNR incurs a
factor of two loss in capacity, and vice-versa.
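As a minimal numerical sketch of equations (3.8)-(3.9), using the nominal IS-95A figures quoted above (a 1.2288 Mcps spreading bandwidth, the 9.6 kbps rate set, and the nominal 6 dB target):
```python
import math

# Single-cell, interference-limited capacity estimate from (3.8)-(3.9),
# using nominal IS-95A numbers; other-cell interference is ignored here.
W = 1.2288e6           # spreading bandwidth, chips per second
R = 9600.0             # information rate, bits per second
target_db = 6.0        # required Eb/(No+Io), dB (nominal estimate)

processing_gain_db = 10 * math.log10(W / R)     # about 21.1 dB
n_db = processing_gain_db - target_db           # about 15.1 dB
N = 10 ** (n_db / 10)

print(f"W/R = {processing_gain_db:.1f} dB, N is roughly {N:.0f} users per carrier")
```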
We've neglected the difference between N and N-1 in (3.9). This is convenient in the capacity math,
and is usually reasonable because the capacity is so large.
There are factors we haven't taken into account yet. Some of the things we have not yet considered
actually help; others hurt. But on balance, there is a major improvement over the narrow-band
technologies.
The sustainable capacity is proportional to the processing gain, reduced by the required SNR. While
there are several considerations we have yet to look at, there is already a suggestion of the capacity
enhancement possible. With Eb/N0 in the 3-9 dB range, equation (3.9) gives a capacity in the
neighborhood of 16-64 users. In the same bandwidth, a single sector of a single AMPS cell has only
2 channels available.
The discussion leading to equation (3.9) assumes only a single cell, with no interference from
neighboring cells. One might ask what has been gained here. The capacity of an isolated AMPS cell
likewise is very high. In fact, there is nothing to stop you from using all the channels if there are no
neighbors; reuse is not needed. The capacity of that fully populated AMPS cell would be about 42
channels (1.25 MHz/ 30 kHz channel spacing). This is not greatly different than the number that we
just calculated for CDMA.
Other factors affecting the capacity are voice activity detection and sectorization. Voice activity
detection is another variable which helps to increase the capacity of a CDMA system. IS-95 CDMA
takes advantage of voice activity gain through its use of variable rate vocoders. In a typical phone
conversation a person is actively talking only about 35% of the time. The other 65% is spent
listening to the other party, or is quiet time when neither party is speaking. The principle behind the
variable rate vocoder is to have it run at high speed, providing the best speech quality, only when
voice activity is detected. When no voice activity is detected, the vocoder will drop its encoding
rate, because there is no reason to have high speed encoding of silence. The encoded rate can drop to
4, 2, or even 1 kbps. Thus the variable rate vocoder uses up channel capacity only as needed. Since
the level of "interference" created by all of the users directly determines system capacity, and voice
activity detection reduces the noise level in the system, capacity can be maximized.
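The text gives no formula for this, but a common back-of-the-envelope adjustment (an assumption of this sketch, not a result from the text) is to scale the other-user interference by the voice activity factor, which inflates the single-cell estimate roughly as follows:
```python
# Rough voice-activity adjustment to the N ~ 32 estimate of equation (3.9).
# Assumes interference scales with the fraction of time talkers are active;
# ignores other-cell interference and imperfect power control, both of which
# pull the real number back down.
base_capacity = 32       # single-cell estimate from (3.9)
voice_activity = 0.35    # talker active about 35% of the time (per the text)

adjusted = base_capacity / voice_activity
print(f"with voice activity gain: roughly {adjusted:.0f} users (very approximate)")
```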

Improved Call Quality


Cellular telephone systems using CDMA are able to provide higher quality sound and fewer dropped
calls than systems based on other technologies. A number of features inherent in the system produce
this high quality.
 Advanced error detection and error correction schemes greatly increase the likelihood that
frames are interpreted correctly.
 The vocoder offers high speed coding and reduces background noise.
 CDMA takes advantage of various types of diversity to improve speech quality:
 frequency diversity (protection against frequency selective fading)
 spatial diversity (two receive antennas)
 path diversity (rake receiver improves reception of a signal experiencing multipath
"interference," and actually enhances sound quality)
 time diversity (interleaving and coding)
 Soft Handoffs contribute to high voice quality by providing a "make before break"
connection. "Softer" Handoffs between sectors of the same cell provide similar benefits.
 Precise power control assures that all mobiles are very close to the optimum power level to
provide the highest voice quality possible.
 The voice quality for CDMA has been rated very high in mean opinion score (MOS) tests
which compare it to other technologies. (Please see the discussion of vocoders and MOS
scores in the Advanced Features section.)
A. Advanced Error Detection and Error Correction

The IS-95 CDMA air interface standard specifies powerful error detection and correction
algorithms. Corrupted voice data can be detected and either corrected or manipulated to
minimize the impact of data errors on speech quality.
B. Vocoders
PCM is the vocoding standard used in landline systems. It is simple, which was necessary in the
1960s, but not very efficient. It has the sound quality wireless would like to match. Wired
communications still uses PCM, since bandwidth has become rather inexpensive via fiber optic cable
and/or microwave links.
Wireless vocoders, on the other hand, are constrained by bandwidth. Several types of vocoding
standards currently exist, offering operators the choice between higher capacity and better voice
quality. Initial CDMA systems use an 8 kilobit per second (kbps) variable rate speech vocoder,
revision IS-96A. The vocoder transmits 8 kbps of voice information at 9.6 kbps, when overhead and
error correction bits are added. As a general rule, higher vocoder bit rates provide a more precise
representation of a voice signal. However, older, less sophisticated vocoder designs may be unable to
match the voice quality of newer vocoder designs, despite a higher bit rate.
The CDMA vocoder also increases call quality by suppressing background noise. Any noise that is
constant in nature, such as road noise, is eliminated. Constant background sound is viewed by the
vocoder as noise which does not convey any intelligent information, and is removed as much as
possible. This greatly enhances voice clarity in noisy environments, such as the inside of cars, or in
noisy public places.
C. Multiple Levels of Diversity
CDMA takes advantage of a number of types of diversity, all of which lead to improved speech
quality. The four types are frequency diversity, spatial diversity, path diversity and time diversity.
With radio, fades or "holes" in frequency will occur. Fades occur in a multi-path environment when
two or more signals combine and cancel each other out. Narrow band transmissions are especially
prone to this phenomenon. For wide band signals such as CDMA, this is much less of a problem.
The wide band signal is, of course, also subjected to frequency selective fading, but the majority of
the signal is unaffected and the overall effect is minimal.
As an example, consider what happens when there is a 12 dB deep, 400 kHz wide, frequency
selective fade. For a wide band CDMA signal which spans 1.25 MHz, this fade affects only about
1/3 of the entire signal's bandwidth. Since the energy of a phone call is spread across the entire
signal, the effect of the fade is looked at as an average, and represents an overall drop in signal of
approximately 2 dB.
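That ~2 dB figure can be checked with a simple average over the band (a sketch that treats roughly one third of the 1.25 MHz as attenuated by 12 dB and the rest as untouched):
\[
P_{\text{avg}} \approx \tfrac{2}{3}(1) + \tfrac{1}{3}\,10^{-12/10} \approx 0.667 + 0.021 \approx 0.69,
\qquad 10\log_{10}(0.69) \approx -1.6\ \text{dB} \approx 2\ \text{dB of loss.}
\]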

If this same 400 kHz, 12 dB fade falls on top of a narrow band 30 kHz signal, as in AMPS or IS-54
TDMA systems, the results are quite different. The entire 30 kHz signal is then affected by this fade.
The result will be an overall drop in signal of the full 12 dB. This is a much more serious hit to the
signal, and could lead to severe degradation in voice quality, or even a dropped call.
Spatial Diversity refers to the use of two receive antennas separated by some physical distance. The
principle of spatial diversity recognizes that when a mobile is moving about, it creates a pattern of
signal peaks and nulls. When one of these nulls falls on one antenna it will cause the received signal
strength to drop. However, if a second antenna is placed some physical distance away, it will be
outside of the signal null area and thus receive the signal at an acceptable signal level.
With radio communications, there is usually more than one RF path from the transmitter to the
receiver. Therefore, multiple versions of the same signal are usually present at the receiver.
However, these signals, which have arrived along different paths, are all time shifted with respect to
each other because of the differences in the distance each signal has traveled. This "multipath" effect
is created when a transmitted signal is reflected off of objects in the environment (buildings,
mountains, planes, trucks, etc.). These reflections, combined with the transmitted signal, create a
moving pattern of signal peaks and nulls.
When a narrow band receiver moves through these nulls there is a sudden drop in signal strength.
This fading will cause either lower, more noisy speech quality or, if the fading is severe enough, the
loss of signal and a dropped call.
Although multipath is usually detrimental to an analog or TDMA signal, it is actually an advantage
to CDMA, since the CDMA rake receiver can use multipath to improve a signal. The CDMA
receiver has a number of receive "fingers" which are capable of receiving the various multipath signals. The
receiver locks onto the three strongest received multipath signals, time shifts them, and then sums them
together to produce a signal that is better than any of the individual signal components. Adding the multipath
signals together enhances the signal rather than degrading it.
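A very simplified sketch of that combining step (real fingers despread with the PN code and weight by channel estimates; the delays and gains here are invented for illustration):
```python
# Toy rake combining: delay-align the strongest multipath copies and sum them.
# Path gains and delays below are made-up example values.

def rake_combine(paths, delays):
    """Strip each path's delay, then sum the aligned samples."""
    aligned = [samples[d:] for samples, d in zip(paths, delays)]
    n = min(len(a) for a in aligned)
    return [sum(a[i] for a in aligned) for i in range(n)]

chips = [1, -1, 1, 1, -1, -1, 1, -1]                 # transmitted chip signs
direct = [0.9 * c for c in chips]                    # direct path, gain 0.9
echo1 = [0.0, 0.0] + [0.5 * c for c in chips]        # reflection, 2 samples late
echo2 = [0.0, 0.0, 0.0] + [0.3 * c for c in chips]   # reflection, 3 samples late

combined = rake_combine([direct, echo1, echo2], [0, 2, 3])
print(combined)   # each chip's amplitude is now 1.7x, versus 0.9x from the direct path alone
```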
CDMA systems use a number of forward error correcting codes, followed by interleaving.
Error correction schemes are most effective when bit errors in the data stream are spread more
evenly over time. By separating the pieces of data over time, a sudden disruption in the CDMA data
will not cause a corresponding disruption in the voice signal. When the frames are pieced back
together by the decoder, any disrupted voice data will have been in small pieces over a relatively
longer stretch of the actual speech, reducing or eliminating the impact on the voice quality of the
call.

Interleaving, which is common to most digital communication systems, ensures that contiguous
pieces of data are not transmitted consecutively. Even if you lose one small piece of a word, chances
are great that the rest of the word will get through clearly.
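A minimal block-interleaver sketch makes the idea concrete (write by rows, read by columns; the 4x4 size is an arbitrary choice, not the IS-95 interleaver dimensions):
```python
# Write row-by-row, read column-by-column, so a burst of consecutive channel
# errors is scattered after de-interleaving. The 4x4 size is arbitrary.

def interleave(symbols, rows, cols):
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(16))                       # 16 symbols labelled 0..15
sent = interleave(data, 4, 4)                # [0, 4, 8, 12, 1, 5, 9, 13, ...]
received = sent[:4] + ['X'] * 4 + sent[8:]   # a burst wipes out 4 symbols in a row
print(deinterleave(received, 4, 4))          # the X's land in 4 separate 4-symbol groups
```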
D. Soft Handoff
With traditional hard handoffs, which are used in all other types of cellular systems, the mobile
drops a channel before picking up the next channel. When a call is in a soft handoff condition, a
mobile user is monitored by two or more cell sites and the transcoder circuitry compares the quality
of the frames from the two receive cell sites on a frame-by-frame basis. The system can take
advantage of the moment-by-moment changes in signal strength at each of the two cells to pick out
the best signal.
This ensures that the best possible frame is used in the CDMA decoding process. The transcoder can
literally toggle back and forth between the cell sites involved in a soft handoff on a frame-by-frame
basis, if that is what is required to select the best frame possible.

CDMA Soft Handoff Utilizes Two or More Cells

Soft handoffs also contribute to high call quality by providing a "make before break" connection.
This eliminates the short disruption of speech one hears with non-CDMA technologies when the RF
connection breaks from one cell to establish the call at the destination cell during a handoff. Narrow
band technologies "compete" for the signal, and when Cell B "wins" over Cell A, the user is dropped
by cell A (hard handoff). In CDMA the cells "team up" to obtain the best possible combined
information stream. Eventually, Cell A will no longer receive a strong enough signal from the
mobile, and the transcoder will only be obtaining frames from Cell B. The handoff will have been
completed, undetected by the user. CDMA handoffs do not create the "hole" in speech that is heard
in other technologies.

ISDN (INTEGRATED SERVICES DIGITAL NETWORK)

Integrated Services Digital Network (ISDN) comprises digital telephony and data-transport
services offered by regional telephone carriers. ISDN involves the digitalization of the telephone
network, which permits voice, data, text, graphics, music, video, and other source material to be
transmitted over existing telephone lines. The emergence of ISDN represents an effort to standardize
subscriber services, user/network interfaces, and network and internetwork capabilities. ISDN
applications include high-speed image applications (such as Group IV facsimile), additional telephone
lines in homes to serve the telecommuting industry, high-speed file transfer, and videoconferencing.
Voice service is also an application for ISDN. This chapter summarizes the underlying technologies and
services associated with ISDN.

Types of ISDN service


Basic Rate Interface (BRI)
Basic Rate Interface service consists of two data-bearing channels ('B' channels) and one signaling
channel ('D' channel) to initiate connections. The B channels operate at a maximum of 64 Kbps; however,
in the U.S. they can be limited to 56 Kbps. The D channel operates at a maximum of 16 Kbps. The two
channels can operate independently. For example, one channel can be used to send a fax to a remote
location, while the other channel is used as a TCP/IP connection to a different location.
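A quick arithmetic check of the usable BRI rate (2B+D, not counting framing overhead on the line):
\[
2 \times 64\ \text{kbps} + 16\ \text{kbps} = 144\ \text{kbps}.
\]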

Note:
ISDN service on the iSeries supports basic rate interface (BRI). You should keep in mind, however, that
ISDN service may not be available in your location. See Planning ISDN Service for further information.

Primary Rate Interface (PRI)


Primary Rate Interface service consists of a D channel and either 23 or 30 B channels (depending on the
country you are in). PRI is not supported on the iSeries. ISDN Primary Rate Interface (PRI) service
offers 23 B channels and one D channel in North America and Japan, yielding a total bit rate of 1.544
Mbps (the PRI D channel runs at 64 Kbps). ISDN PRI in Europe, Australia, and other parts of the world
provides 30 B channels plus one 64-Kbps D channel and a total interface rate of 2.048 Mbps. The PRI
physical-layer specification is ITU-T I.431.
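These aggregate rates follow from the channel counts (a worked check, assuming 64 kbps per channel, 8 kbps of framing overhead on the North American interface, and one 64 kbps framing timeslot on the European interface):
\[
23 \times 64 + 64 + 8 = 1544\ \text{kbps},
\qquad
30 \times 64 + 64 + 64 = 2048\ \text{kbps}.
\]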

ISDN Components
ISDN components include terminals, terminal adapters (TAs), network-termination devices, Line-
termination equipment and exchange-termination equipment. ISDN terminals come in two types.
Specialized ISDN terminals are referred to as terminal equipment type 1 (TE1). Non-ISDN terminals,
such as DTE, that predate the ISDN standards are referred to as terminal equipment type 2 (TE2). TE1s
connect to the ISDN network through a four-wire, twisted-pair digital link. TE2s connect to the ISDN
network through a TA. The ISDN TA can be either a standalone device or a board inside the TE2. If the
TE2 is implemented as a standalone device, it connects to the TA via a standard physical-layer interface.
Examples include EIA/TIA-232-C (formerly RS-232-C), V.24, and V.35.
Beyond the TE1 and TE2 devices, the next connection point in the ISDN network is the network
termination type 1 (NT1) or network termination type 2 (NT2) device. These are network-termination
devices that connect the four-wire subscriber wiring to the conventional two-wire local loop. In North
America, the NT1 is a customer premises equipment (CPE) device. In most other parts of the world, the
NT1 is part of the network provided by the carrier. The NT2 is a more complicated device that typically
is found in digital private branch exchanges (PBXs) and that performs Layer 2 and 3 protocol functions
and concentration services. An NT1/2 device also exists as a single device that combines the functions
of an NT1 and an NT2. ISDN specifies a number of reference points that define logical interfaces
between functional groupings, such as TAs and NT1s. ISDN reference points include the following:
• R—the reference point between non-ISDN equipment and a TA.
• S—the reference point between user terminals and the NT2.
• T—the reference point between NT1 and NT2 devices.
• U—the reference point between NT1 devices and line-termination equipment in the carrier network.
The U reference point is relevant only in North America, where the NT1 function is not provided by the
carrier network.
The figure illustrates a sample ISDN configuration and shows three devices attached to an ISDN
switch at the central office. Two of these devices are ISDN-compatible, so they can be attached through
an S reference point to NT2 devices. The third device (a standard, non-ISDN telephone) attaches
through the R reference point to a TA. Any of these devices also could attach to an NT1/2 device, which
would replace both the NT1 and the NT2. In addition, although they are not shown, similar user stations
are attached to the far right ISDN switch.

The ISDN Reference Configurations


You can't talk about ISDN without knowing about the reference configurations. This gives you the basic
vocabulary for talking about all of the pieces of ISDN. There are reference configurations for all
different pieces of the ISDN network, and lots of different configurations. The following diagram shows
two of the most commonly referred to configurations. The networks will actually look more complicated
than this; the diagram just serves to apply standard labels to the different parts of the network you'll
encounter.

Common reference configurations

Here's a quick glossary of some of the things shown:

TE1: Terminal Equipment type 1. This is the ISDN telephone or computer or ISDN FAX machine or
whatever it is that you've hooked up to the ISDN phone line.

TE2: Terminal Equipment type 2. This is the old analog telephone. Or old-style fax machine. Or
modem. Or whatever you used to hook up to the analog phone line. It can also be other communications
equipment that is handled by a TA (see below).

TA: Terminal Adaptor. This lets old, TE2 stuff talk to the ISDN network. It also adapts other kinds of
equipment, like ethernet interfaces to ISDN.

NT1: Network Termination type 1. This is the end of the line for the local phone company, and the
beginning of your house's phone network.

LT: Line Termination. This is the physical connection to the phone company.

ET: Exchange Termination. This is the local phone company's logical connection from your telephones
to "the phone network".
The difference between TE1 and TA is subtle but significant. If you buy an ISDN card for your
computer, and device drivers that tell it how to speak ISDN, you've turned your computer into a
TE1.However, if you buy an ISDN device that lets you plug your computers ethernet into an ISDN box,
then you're computer is a TE2, and the box you bought is a TA. However, the difference isn't in the
physical location, but more in the software. Specifically in whether there is any conversion going on
anywhere.
For instance, you could conceivably buy a card that plugs into your computer and utilizes the device
drivers for ethernet, and the card would convert the ethernet requests into an ISDN data stream. In this
case, the card would be a TA, and your computer would be a TE2. The card has to worry about
converting one protocol to another. Note the letters R, S, T, U, and V in the diagram. These are
reference points that everyone uses to talk about each of these parts of the network. For instance, the R
reference point is the interface between an old-style telephone and Terminal Adaptor equipment. Since
most homes won't have any NT2 equipment, the S and T reference points are usually one and the same,
and are sometimes called the S/T bus.
The point to all of this is that different things happen in different parts of the network. What goes on
along reference point U is completely different from what goes on at the S/T reference point - different wiring
requirements, different data speeds, different encoding, etc. Notice that reference point V, and the LT
and ET equipment are in the phone company's domain. I lied when I told you that ISDN defines only the
customer's part of the phone network, but I only lied a little. This portion of ISDN is seldom discussed,
and still largely left up to the telephone companies.

Features of ISDN
Hardware
This is layer 1 (the physical layer) of the S/T bus. This defines the physical network in your home. The
most obvious things this defines, as far as a customer is concerned, are wiring, connectors, and power,
so I'll talk about those first. ISDN uses a phone jack that looks just like the standard phone jacks in use
today, except that it is a bit wider. Instead of the older 4-pin jacks (which only used two wires), ISDN
uses an 8-pin jack (which only uses four wires). The CPI is based on a four-wire scheme, two wires for
transmitting and two for receiving (which means you'll probably have to rewire your house). These
wires are typically copper wiring of some sort, and can be longer than most users will ever need.
Typical CPI

If you are using ISDN with a single device (for instance, your computer is hooked up to ISDN, and your
phones are still hooked up the old way), then you can have up to a kilometer (thereabouts) in your home
for typical copper wiring. This is called a point-to-point configuration. But in most cases, you'll be using
ISDN to hook up several devices, as shown in Figure, above. This is a multipoint configuration. With
the standard ISDN equipment, up to eight different devices can be hooked up to the S/T bus. With this
configuration the total length can be about 200 meters, and each device can be connected to the bus with
up to 10 meters of wire. Devices can be placed anywhere on the bus under this setup. This can also be
modified somewhat, to extend the S/T bus up to about 500 meters. To do this, all of the devices must be
connected close to the bus termination end of the bus. Further, each device on the bus must be 25-50
meters apart. Eight devices might seem a bit low if you have an active imagination, but some of these
devices could actually be brokers for other things -- for instance, it is more likely that you'd have a single device that
could simultaneously control your microwave, furnace, A/C, alarm clock, and house lights. Even though
you can only hook up eight devices, you have an almost unlimited number of addresses (i.e. phone
number extensions) for each of those devices, so it is likely that one ISDN TE1 would be used for
several different purposes. On the other hand, you can't simultaneously use more devices than the
available number of B-channels; for most customers this means only 2 devices can be in operation at
once. In fact, with some ISDN provider's switches, you can only hook up two devices period, one

assigned to each B-channel. This isn't the way things are supposed to work, but that's how a particular
piece of phone company equipment works (specifically, the DMS-100 switch).
Power
One important issue of ISDN that we aren't used to worrying about is power. Currently the analog phone
system provides its own power - if the power goes out, your phone still works. However, ISDN requires
more power than the phone company is in the habit of providing. Because of this, each of your ISDN
devices must get its power some other way. Under normal circumstances, what will happen is that your
NT1 will be plugged in to your house's power. All the ISDN devices in your home will get power from
the NT1. This is one of the reasons that ISDN uses a four wire system for the network - it
allows separate lines for receiving and transmitting and at the same time allows for transmission of
power. Also, those other four unused wires in the 8-pin ISDN jack are specified in the standard to be
used for alternate power supplies. Whether these will actually be used remains to be seen, but it is
possible that a UPS (uninterruptible power supply) could be added to your NT1, and it could use these
auxiliary lines to provide guaranteed power. Note that one of these alternate power supplies is designed
to go from the TE to the NT.
If you are outside of North America, and your power DOES go out, you are still covered though. The
phone company will still provide the same power levels they used to. This should be sufficient to keep at
least one TE1 device in operation. The assumption is that this would be your telephone, so that you
could still call the power company and complain about your loss of power. The NT1 notifies all devices
on the S/T bus of the power failure by reversing the polarity of the receive and transmit line pairs. All
non-essential devices are supposed to respond by shutting themselves off. As I implied, this standard has
not been used in North America - if your power goes out here, you have no phone.

Network Operation
All traffic on the S/T bus flows in 48 bit frames, at a transmission rate of 192 Kbps. You might notice
that this is higher than the 160 Kbps that I said could be sent between you and the phone company. This
is because the CPI covers shorter distances, and is presumed to be more modern, and can therefore run
as fast as is needed. So 144 Kbps is used for the 2B+D channels, leaving 48Kbps for overhead. Since the
S/T bus has to worry about network contention in addition to other issues, it needs all of this extra
bandwidth to keep things running smoothly. The encoding on the S/T bus is a pseudo ternary line code,
known as modified alternate mark inversion (MAMI). In this encoding, ones are represented by a zero
voltage, and zeros are represented by a pulse, which is alternately either positive or negative:

MAMI Encoding
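A small sketch of that pseudo-ternary rule (ones as zero volts, zeros as pulses of alternating polarity; the pulse amplitude below is just an assumed value for illustration):
```python
# Modified AMI (pseudo-ternary) as described above: binary 1 -> 0 V,
# binary 0 -> a pulse whose polarity alternates with each successive 0.
# The 0.75 V amplitude is an assumed illustrative value.

def mami_encode(bits, pulse=0.75):
    levels, polarity = [], +1
    for b in bits:
        if b == 1:
            levels.append(0.0)              # ones carry no pulse
        else:
            levels.append(polarity * pulse)
            polarity = -polarity            # alternate on each zero
    return levels

print(mami_encode([1, 0, 0, 1, 0, 1, 1, 0]))
# -> [0.0, 0.75, -0.75, 0.0, 0.75, 0.0, 0.0, -0.75]
```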

Signaling
There are two different types of signaling used in ISDN. For communicating with your local phone
company, ISDN uses the Digital Subscriber Signaling System #1 (DSS1). DSS1 defines what format the
data goes in on the D-channel, how it is addressed, etc. It also defines message formats for a variety of
messages used for establishing, maintaining, and dropping calls, for instance SETUP messages,
SUSPEND and RESUME messages, and DISCONNECT messages.
Once your DSS1 signal makes it to the phone company, their own signaling system takes over
to pass the call information within their system, and between other phone companies. Signaling System
#7 (SS7) is supposed to be used for this. SS7 defines a communications protocol, and formats similar to
DSS1, however SS7 is designed in a broader, more general way. DSS1 is specific to ISDN; however
SS7 will handle the signaling needs of ISDN as well as other older signaling systems and (hopefully)
will adapt well to future needs. One important feature of SS7 is providing CCS (common channel signaling). This makes it harder for
malicious users of the phone network to put one over on the phone company. It also improves the
service, for instance by offering faster connection establishment. However, the phone companies haven't
yet fully converted their equipment to use CCS. Older equipment still looks for the signaling
information in the same channel as the voice, in the eighth bit of each piece of voice data. This is why
many parts of the country only offer 56Kbps B-channels - they've lost 1/8 of their bandwidth to the older
in-band signaling system.
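That 56 Kbps figure is just the lost eighth in arithmetic form:
\[
64\ \text{kbps} \times \tfrac{7}{8} = 56\ \text{kbps}.
\]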

Switching
With pure ISDN, switching shouldn't be a concern - it's basically the phone company's problem to solve
as they please. So far though, they don't have it completely solved, so we need to mention it here.
Traditional phone services are Circuit Switched Voice (CSV). Your voice goes through several switches
before reaching its final destination. The phone company is pretty good at this. For point-to-point data
connections, you need Circuit Switched Data (CSD) - the exact same thing with data instead of voice.
The phone companies aren't prepared yet to dynamically provide whatever service you need right from
the start, so they will want to know ahead of time what you are going to be using your ISDN channels
for.
If you are using CSV, they are free to route your call through any type of switch, even the
old analog switches (there are a few left here and there). Your digital channel may also be shared with
other channels, in the moments when there is silence on your phone line. And the digital parts of a CSV
call can go through noisy switches that might create an undetected error here or there - it's only voice
and you won't hear it. For CSD, they can't do any of these things - your call must be routed only on
pieces of equipment that will give dependable full time data channels. So even though the service in
ISDN is supposed to be transparent, for the time being you have to tell the phone company how you are
going to be using your B-channels. This seems to be more of a problem in the U.S. than in Europe.
Typically, each B-channel is set up for only one of these types of data. There are actually a standard set
of combinations defined for setting up BRIs. These are called National ISDN Interface Groups (NIIGs),
so there will be a limited menu of offerings available. Typically you can get both B-channels for data, or
one for voice and the other for data, or one for voice and the other for either voice or data. In order to
facilitate this, North American phone companies use an optional part of the ISDN standard to identify
each TE1 or TA you use. The phone company assigns a Service Profile Identifier (SPID) to each of
these devices, and you have to manually enter them into each device you use. The phone company then
stores this data somewhere, and when you connect your machine to the network, it sends its SPID to the
nearest phone company switch which identifies what type of connection the device needs and (therefore)
how to route its calls. Presumably, the SPIDs have to refer to a configuration that matches one of the
two B-channels you have.
By the way, the SPIDs are arbitrary numbers that refer to data stored by the phone
company. The phone company often includes the phone number in the SPID for their own convenience,
but in general you won't get anywhere trying to find significance in the patterns of SPIDs. One older
type of phone company switch, a DMS-100, was improperly designed with respect to the standards
relating to SPIDs (the standards may not have been complete when the DMS-100 was designed). This
switch misguidedly assigns one SPID to each B-channel that you use, rather than to each device.
Therefore if your nearest switch is a DMS-100, you will only be able to hook up two devices to your
CPI, rather than eight. If you are only going to be hooking up a single device to your ISDN (i.e. setting it
up in a point-to-point configuration), you might not need a SPID at all, as the phone company can

identify your ISDN line as one particular type, full time. This depends on what equipment they have -
the old DMS-100 switch will still require you to have a SPID.
Packet Switching
Another kind of switching is also available, Packet Switched Data (PSD). With Packet switched data,
each piece of data you send out might go to a different destination. This is used (or will be used) by the
D-channel data. Using your D-channel, it is possible to implement various low-bandwidth services for
communicating with other ISDN users. In addition you could also use PSD on the B-channels, although
this is generally only used for X.25 or something similar.

Bearer Service
The options of CSV, CSD, and PSD are broad categories of bearer services that the phone companies
can provide. Different bearer services provide different types of guarantees about the reliability and
synchronization of the data. There are currently ten different bearer services for circuit-mode, and three
services for packet mode. These bearer services are defined in terms of a number of attributes, which
include mode (circuit or packet), structure (bit-stream or octet-stream), transfer rate (e.g. 64Kbps),
transfer capability (basically, the content, for instance speech, 7 kHz audio, video, or unrestricted), and
several other attributes that specify protocols to use and other things.
The attributes of the bearer service are encoded into a Bearer Code, or BC, that is sent every time a
new connection is being set up. In theory, this allows the switches to dynamically choose from a variety
of different switching paths and techniques depending on requirements. In practice, as discussed before, the
SPID is used to determine what services are needed for switching, as this greatly simplifies things for
the telephone companies. The BC will not be completely ignored, however there are certain bearer
services that will be unavailable on your B-channels, based on how they are configured. It is important to
note that the BC is sent to the switch every time a connection is established. However, the SPID is only
sent to the switch when you physically attach your equipment to your phone line. At this time the switch
gives your device a Terminal Equipment Identifier (TEI) which is used from then on to identify all
connection requests from that piece of equipment. This allows the switch to look at the TEI and BC,
determine the SPID, and see if the BC and the SPID match up. Finally, there is a feature in some TAs
that allows you to use a CSV bearer service to carry data (perhaps because it is cheaper, or possibly CSV
is all that is available), which is called Data over Speech Bearer Service (DOSBS). This works by
providing additional end-to-end data guarantees that can't be relied upon from the speech Bearer
Service.
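Purely as an illustration of the kind of attributes being grouped together (the field names and values below are invented for this sketch; the real Bearer Capability information element has its own binary encoding):
```python
# Illustrative grouping of the bearer-service attributes described above.
# Names and values are invented for the sketch, not the actual encoding.
from dataclasses import dataclass

@dataclass
class BearerService:
    mode: str                 # "circuit" or "packet"
    structure: str            # "bit-stream" or "octet-stream"
    transfer_rate_kbps: int   # e.g. 64
    transfer_capability: str  # e.g. "speech", "7 kHz audio", "unrestricted digital"

csd_64k = BearerService("circuit", "octet-stream", 64, "unrestricted digital")
print(csd_64k)
```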

Rate Adaptation
Terminal Adaptors are designed to facilitate equipment with data rates lower than the 64Kbps per B-
channel. Because of this, standards have been developed (independent of ISDN, as they happen end-to-
end) to determine how this lower rate data will be merged into the higher speed stream. There are
standards for doing rate adaptation with a wide variety of other communications systems, including
standard serial interfaces (RS-232C), X.21, and X.25. Because this is an end-to-end issue, it will only
work if both end points can speak the same protocol.
Two common rate adaptation standards that are emerging as the most popular are V.110 and
V.120. V.110 is the earlier standard of the two, and is mainly concerned with synchronous
transmissions. It was designed for putting low-rate (2400 or 9600 bps, for instance) synchronous data
onto 56Kbps channels prior to the development of ISDN. V.110 does also support asynchronous data up
to 19.2Kbps. It does not have any error correction. V.120 is a frame-oriented protocol based on LAPD,
and also supports both synchronous and asynchronous data streams. Because of its use of LAPD, it
provides error correction. Both V.110 and V.120 support the multiplexing of several lower rate data
streams onto a single channel, although this feature isn't currently found in many products. V.110 is an
easier protocol to implement and is better suited to synchronous data, while V.120 is more suited to
asynchronous communications, and is more complicated to implement, especially if all V.120 features
are included in the implementation. V.110 is more commonly implemented, and V.120 is gaining
popularity. It is likely that for the near future vendors will try to support both protocols in their products,
but eventually one will win over the other.

Inverse Multiplexing
In addition to rate adaptation of lower speeds onto one B-channel, there are three common methods for
combining several B-channels to get speeds greater than the 64Kbps. This is called inverse multiplexing.
The most common method, BONDING (for Bandwidth ON Demand Interoperability Group), is
implemented by most vendors. The standard is still developing, and some vendors may have features
that others lack, so interoperability could still be a problem. BONDING is implemented outside of the
ISDN architecture, so only the end points know it is a single connection - the ISDN network thinks it is just several
separate phone calls. It is able to support up to 63 combined 56 or 64 Kbps B-channels.
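A rough feel for what that aggregation buys, assuming the 64 kbps per-channel rate:
\[
6 \times 64\ \text{kbps} = 384\ \text{kbps},
\qquad
63 \times 64\ \text{kbps} = 4032\ \text{kbps} \approx 4\ \text{Mbps (the BONDING maximum)}.
\]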
The second method, Multilink PPP, is used only when routing IP over ISDN. Under the PPP standard, it
is possible to have a single logical connection multiplexed across several physical connections, and this
method is widely implemented. As with BONDING, this works entirely outside of the ISDN
architecture.

ISDN Videoconferencing
ISDN (H.320) videoconferencing is a circuit-switched, fixed-pathway connection, sent through an
encoder/decoder, that lasts for the duration of the call. This system works a lot like a regular phone call
would, except you use multiple lines.
ISDN calls are sent out in sets of lines. Our rooms have 3 sets of lines available for ISDN calls.
Calls can be sent out on 2 lines (112 Kbps), 4 lines (224 Kbps) or 6 lines (384 Kbps). The more lines a
call is sent out on, the more bandwidth will be available for the call, resulting in better audio
and video quality. The drawback is that more long distance charges will apply (4 lines is 4x the
long distance charge).
videoconferencing standards at this time.

ISDN Testing
Roshan Enterprises supplies a Local Line/Broadband/ISDN Line Tester. Roshan Enterprises offers a wide
range of products: Cable Fault Locator, Route Locator, Cable Guard, Broadband Line Tester, Local Line Tester,
IPM/GD Tube Tester, Digital Earth Tester, and Artificial Load for testing battery, Monitoring system
for unmanned exchanges, etc. The Local Line/Broadband/ISDN Line Tester is ideal for use in various sectors
like telecommunication, industrial electronics and information technology. It is useful for testing both the
exchange and subscriber sides of the line. On the exchange side the user can check the
exchange voltage and dial tone. On the subscriber side the user can check the
presence of AC voltage, battery, earth, short-circuit and condenser click. The Local Line/Broadband/ISDN
Line Tester is available with a tone sending provision.

ISDN Tester
Salient Features of Local Line/Broadband/ISDN Line Tester are:
 Compatible with MDF Test board

 Easy to use operation. Any layman can use it

 Measures Exchange side as well as Line side parameters

 Provision to give ring & speech facility

 Provision to measure ISDN voltage

 Provision to check loop resistance for broadband line

 Provision to feed tone on line to detect particular pair

 Very low power requirement. Mains operated

 Compact & light weight

 Direct measurement

 Use of high speed circuitry

 Tests any type of metallic telecom cable

RF Lease Line
As shown in the figure below, RF modems are installed at the node end and the customer end. When the
antennas are aligned in the same direction, the modems get in sync. Ethernet output is given to the node-end
modem and is carried to the customer end through RF (electromagnetic) waves. The frequency used by HFCL is 2.4
GHz. The Ethernet output from the customer-end modem can then be fed to the customer's PC. Bandwidth
capping is done at the ISP end as required by the customer.

Figure: RF lease line setup - at the node end, an E1 input feeds an E1-to-Ethernet converter and switch connected to an RF modem; the RF link carries the traffic to the RF modem at the customer end, which delivers the Ethernet output.

Applications of Lease Lines:


Leased lines are used to build private networks, private telephone networks (by interconnecting
PBXs), or to access the internet or a partner network (extranet).

Site to site data connectivity


Terminating a leased line with two routers can extend network capabilities across sites. Leased lines
were first used in the 1970s by enterprises with proprietary protocols such as IBM Systems Network
Architecture and Digital Equipment DECnet, and with TCP/IP in university and research networks
before the Internet became widely available. Note that other Layer 3 protocols were also used, such as
Novell IPX on enterprise networks, until TCP/IP became ubiquitous in the 2000s. Today, point-to-point
data circuits are typically provisioned as TDM, Ethernet, or Layer 3 MPLS.
Site to network connectivity
As demand grew on data networks, telcos started to build more advanced networks using packet switching
on top of their infrastructure. Thus a number of telecommunication companies added ATM, Frame Relay
or ISDN offerings to their services portfolio. Leased lines were used to connect the customer site to the
telco network access point.

International Private Lease Circuit


An IPLC is an International Private Leased Circuit that functions as a point-to-point private line. IPLCs
are usually Time-division multiplexing (TDM) circuits that utilize the same circuit amongst many
customers. The nature of TDM requires the use of a CSU/DSU and a router. Usually the router will
include the CSU/DSU.
Then came the Internet (in the mid 1990s), and since then the most common application for a leased line has
been to connect a customer to its ISP point of presence. With the changes that the Internet brought to the
networking world, other technologies were developed to propose alternatives to Frame Relay or ATM
networks, such as VPNs (hardware and software) and MPLS networks (which are in effect an upgrade of
existing ATM/Frame Relay infrastructures to TCP/IP).
