
Network Infrastructures A.A. 2009-2010

Carrier Ethernet

Palamides Damiano, Pompei Alessio, Vernata Alessandro

I. Introduction

At present, the standard technology used to manage data transmission on carrier networks is SONET/SDH, a circuit-based system mainly intended for the transport of voice traffic. In recent years, new technologies have been developed to replace it, due to the rise of new needs among carriers' customers: the residential triple-play market (data, television, voice) requires high peak data bandwidths approaching gigabits per second, priority voice, and high-definition broadcast/on-demand video services. Residential access networks are evolving to fiber-to-the-premises (FTTP) technologies to support these bandwidth and QoS requirements, while metro core networks are being driven to a converged IP/Ethernet architecture, capable of prioritizing services while handling several Gbps of traffic. Many carriers are already offering Metro Ethernet services at a fraction of the cost of TDM services. Many are also in the midst of major network transformations involving Carrier Ethernet to gear up to NGN service and transport requirements. Metro Ethernet improves operational efficiency and can be a launch pad for newer services; from the carriers' point of view, it gives service providers the ability to offer higher-revenue services. Moreover, Ethernet has increasingly been "SONET-ized" and "ATM-ized" to acquire some of the proven carrier-grade characteristics of SONET/SDH and ATM technologies.

II. Carrier Ethernet Requirements

According to the Metro Ethernet Forum (MEF), Carrier Ethernet is a ubiquitous, standardized, carrier-class service defined by five attributes that distinguish it from familiar LAN-based Ethernet. In the next paragraphs these attributes, and the main challenges in putting them into practice, are analyzed.

II.1. Scalability
Providers require that the network be able to scale to serve millions of users and to deliver Ethernet services from low to high speeds (1 Mbps to 10 Gbps), configurable in granular increments. The scale involves the number of customer sites, the number of VPNs per customer site and the number of end-hosts in the VPNs. To obtain this, several standardized access methods are needed, together with a standard interface to connect them to the metro network; other constraints on scalability are the bottleneck of 4096 possible VLANs due to the 12-bit VLAN ID field (addressed by the PBB technology, see IV.4), and the need for automated provisioning, which can be provided by enhanced control plane solutions.

II.2. Standardized services


While service providers see a growing potential in Ethernet services, existing leased lines are still a significant revenue source for them, so they must be able to retain and seamlessly interwork with existing leased-line services as they migrate to a Carrier Ethernet network. Moreover, a standard is needed to define new service types such as E-Line (Point-to-Point), E-LAN (MultiPoint-to-MultiPoint) and E-Tree (Point-to-MultiPoint) supporting different classes of service and levels of QoS.

II.3. Reliability and resiliency


SONET/SDH is able to provide five 9s of network availability; to reach this level, Metro Ethernet has to be able to quickly detect and recover from network failures in a very complex environment: the Link Aggregation method provides redundancy but is only applicable between two adjacent nodes; service OAM (Operations, Administration and Maintenance) is not yet fully supported; finally, a new and more efficient Spanning Tree Protocol has to be created to overcome the difficulty of quickly managing a large network.

II.4. Hard Quality of Service (QoS)


To match application requirements, service providers must be able to offer customers classes of service (CoS) differentiated in terms of availability, frame delay, frame delay variation and data delivery ratio. QoS mechanisms provide the functionality to prioritize different traffic streams, but hard QoS ensures that the service-level parameters agreed for each level of service are guaranteed and enforced across the network; in this way customers are provided with the guaranteed, deterministic performance they receive from their existing leased-line services. This can be achieved by monitoring the traffic, working on automated provisioning, supporting multiple queues per port, and finally supporting both a single-CoS-per-Ethernet-Virtual-Connection model and a multi-CoS model.

II.5. Service management


Mature network and service management systems are required in order to deliver existing and new services and to monitor different parameters; in case of a fault, troubleshooting functionality is needed to locate it and react properly. Studies are being performed to equip Carrier Ethernet with link OAM and service OAM systems, as well as new performance monitoring protocols.

III. Business Aspects

Besides the technical issues, service providers are looking for ways to increase revenues: on the one hand by enticing customers to move to the Ethernet technology; on the other, by making the migration to a new infrastructure easier and cheaper. The first is a marketing issue: customers are looking for high-bandwidth services that fill the TDM bandwidth gaps. Even with attractive Carrier Ethernet pricing from competitive carriers, enterprise customers have not broadly accepted these services. Attractive pricing, although a benefit, is only one reason for an enterprise to adopt a new service; high availability and reliability are equally important. For the incumbent provider, a strategic approach would be to devise Carrier Ethernet pricing based on quality of service and bandwidth: a Platinum-level service can be defined complementary to existing Private Line business prices, and Gold and Economy levels complementary to existing Frame Relay and ATM pricing models. From the infrastructural point of view, carrier technologies offer a wide variety of services from a single network platform and thus would appear to give both economies of scale (the bigger the platform, the cheaper per unit) and economies of scope (the more services covered by the platform, the cheaper each service becomes). However, this does not continue forever: after a certain size, the growing complexity of managing the large scale and/or broad scope starts to show diseconomies of scale and scope. The primary way of avoiding this growing cost of complexity is to divide and conquer; that is, by partitioning out specific areas of functionality in such a way as to minimize the interdependence between these separated areas, the cost of complexity can be greatly reduced. Carrier networks have taken full advantage of this by separating switching from transmission: switching has focused on service-oriented features using signaling systems, whilst transmission has concentrated on the cost-effective management of bandwidth (Figure 1). This leads to a flexible system in which neither the growing user base nor further technical changes affect the management costs, thus granting higher revenues per user.

FIGURE 1: DATA TRANSPARENCY IN A DIVIDE AND CONQUER SCENARIO

IV. An implementation technology: road to PBB

Ethernet emerged as the preferred Layer 2 technology for the TCP/IP suite. Due to this emergence and to the desire of service providers to have a common Layer 2 protocol, Ethernet started evolving from a purely LAN service to carrier-grade WAN transport with MAC forwarding. The standard evolved from the initial VLAN (IEEE 802.1Q) service to Q-in-Q (IEEE 802.1ad) and then to MAC-in-MAC (IEEE 802.1ah). The latter separates network discovery and fault management functions from the MAC forwarding functions, making Ethernet carrier class; this culminates in PBB-TE. In the following, all these standards are briefly described (see Figure 2 for a visual reference of the frame structure) for a better understanding of the resultant Provider Backbone Bridge (PBB) technology.

IV.1. MAC bridging 802.1D


The IEEE 802.1D standard describes the behavior of bridges, introduced to segment LAN traffic and thereby limit the delays caused by the increasing number of users contending on CSMA/CD. Carrier Ethernet makes use of transparent bridging, in which the bridges update their forwarding table by examining the source MAC address of incoming frames. This standard also defines additional protocols related to bridging, like the Spanning Tree Protocol (STP), which eliminates loops in the network.
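
A minimal sketch of the transparent (learning) bridge behavior described above, assuming a simplified per-port frame abstraction; class and variable names are illustrative and not taken from any specific implementation:

```python
class LearningBridge:
    """Minimal 802.1D-style transparent bridge: learn on source MAC, forward or flood."""

    def __init__(self, ports):
        self.ports = ports            # e.g. [1, 2, 3]
        self.fdb = {}                 # forwarding database: MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        # Learning: remember which port the source address was last seen on.
        self.fdb[src_mac] = in_port
        # Forwarding: unicast to the known port, otherwise flood on every
        # port except the one the frame arrived on.
        out = self.fdb.get(dst_mac)
        if out is not None and out != in_port:
            return [out]
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive(1, "00:aa:bb:cc:dd:01", "ff:ff:ff:ff:ff:ff"))  # flood: [2, 3]
print(bridge.receive(2, "00:aa:bb:cc:dd:02", "00:aa:bb:cc:dd:01"))  # learned: [1]
```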

IV.2. VLAN 802.1Q


Based on MAC bridging, the Virtual LAN method was introduced to deliver a MultiPoint-to-MultiPoint E-LAN service: this standard creates VLANs across a common LAN infrastructure to enable logical separation of traffic while sharing the same physical network. Each VLAN is identified by a Q-tag (also known as a VLAN tag or VLAN ID) that identifies a logical partitioning of the network to serve the different communities of interest. IEEE 802.1Q works fine within the boundaries of a single organization, but is found to be inadequate when service providers attempt to deliver Ethernet services to multiple end users over a shared network infrastructure. Also, because the Q-tag carries a 12-bit VLAN ID, only up to 4094 service instances can be created. Although this is sufficient for an enterprise's LANs, it does not offer the scalability required to support Ethernet services in a large metropolitan area.
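
The 12-bit arithmetic behind the 4094-instance limit can be made explicit with a small sketch that packs and unpacks the 802.1Q Tag Control Information field (values used are purely illustrative):

```python
def pack_tci(pcp, dei, vid):
    """Build the 16-bit 802.1Q Tag Control Information: 3-bit PCP, 1-bit DEI, 12-bit VID."""
    assert 0 <= pcp < 8 and dei in (0, 1) and 0 <= vid < 4096
    return (pcp << 13) | (dei << 12) | vid

def unpack_tci(tci):
    return tci >> 13, (tci >> 12) & 0x1, tci & 0x0FFF

tci = pack_tci(pcp=5, dei=0, vid=100)
print(hex(tci), unpack_tci(tci))
# VIDs 0x000 and 0xFFF are reserved, hence 4094 usable service instances:
print(2**12 - 2)   # 4094
```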

IV.3. Q-in-Q Provider Bridge 802.1ad


In December 2005 this new standard was developed; it simply adds an additional service provider VLAN ID (S-Tag) to the customer's Ethernet frame. This identifies the service in the provider network, while the customer's VLAN ID (C-Tag) is not modified. Provider bridges use the S-Tag to identify the service to which a customer's Ethernet frame belongs, so each service instance requires a separate S-Tag. Because the S-Tag also consists of a 12-bit tag, provider bridges have the same scalability limitation as IEEE 802.1Q: only 4094 service instances can be created. Moreover, frames in the provider network still use the customer MAC

FIGURE 2: EVOLUTION OF THE ETHERNET FRAME STRUCTURE

addresses, thus further limiting system scalability and increasing the work done in the core: every new host MAC address has to be learned by the core bridges, and the work done by STP increases.

IV.4. MAC-in-MAC Provider Backbone Bridge 802.1ah


With the advent, in June 2008, of 802.1ah Provider Backbone Bridges (also known as MAC-in-MAC), Ethernet gains true hierarchical scaling, virtualization, and full isolation of the provider infrastructure from customer broadcast domains. PBB evolves the Ethernet frame by adding an additional MAC header dedicated to the service provider, composed of a backbone source and destination MAC address, a backbone VLAN ID (B-Tag) and a backbone service ID (I-Tag), completely independent from the customer's information, as described in Figure 3. The 24-bit service identifier in the I-Tag defines a maximum of 16 million service instances, thereby solving the scalability issue; the B-Tag is then used to segregate the PBB network (PBBN) into virtual networks, or regions, possibly exploiting different technologies (e.g. E-LAN, E-Tree or E-Line, with or without traffic engineering). Moreover, because of the additional MAC addresses, the core switches are no longer affected by changes occurring at the edge of the network, since such changes only involve the customer MAC addresses. Hierarchy is particularly useful when the customer base consists of a large number of relatively small communities of interest (the primary situation that carriers will face) that can be overlaid upon a common transport network. It reduces the amount of provisioning and forwarding state in the network core and correspondingly reduces the load and the ongoing cost of assuring service and managing faults.

FIGURE 3: PBB ENCAPSULATION
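
A simplified sketch of the MAC-in-MAC encapsulation just described (and shown in Figure 3): a backbone header (B-DA, B-SA, B-Tag, I-Tag carrying a 24-bit service instance ID) is prepended to the untouched customer frame. Field widths follow 802.1ah, but the byte layout here is schematic rather than a bit-exact frame builder:

```python
import struct

def mac(addr):
    """Convert a colon-separated MAC address string to 6 raw bytes."""
    return bytes(int(b, 16) for b in addr.split(":"))

def pbb_encapsulate(customer_frame, b_da, b_sa, b_vid, i_sid, pcp=0):
    """Prepend a schematic 802.1ah backbone header to an unmodified customer frame."""
    assert 0 <= b_vid < 4096 and 0 <= i_sid < 2**24    # 12-bit B-VID, 24-bit I-SID
    b_tag = struct.pack("!HH", 0x88A8, (pcp << 13) | b_vid)   # B-Tag: S-TAG ethertype + TCI
    i_tag = struct.pack("!HI", 0x88E7, (pcp << 29) | i_sid)   # I-Tag: ethertype + 24-bit I-SID
    return mac(b_da) + mac(b_sa) + b_tag + i_tag + customer_frame

# Dummy customer frame: C-DA, C-SA, ethertype, payload (contents are arbitrary).
customer = (b"\x00\x11\x22\x33\x44\x55" + b"\x00\x66\x77\x88\x99\xaa"
            + b"\x08\x00" + b"payload")
frame = pbb_encapsulate(customer, "02:00:00:00:00:01", "02:00:00:00:00:02",
                        b_vid=10, i_sid=0x123456)
print(len(frame), frame[:12].hex())   # backbone header wraps the customer frame
```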

IV.5. PBB-TE 802.1Qay


Hierarchical isolation of customers from carrier operations permits the carrier to use different forwarding modes by engineering the network: this is crucial for providers, which are thereby able to exercise deterministic management of traffic and performance. The application of traffic engineering to PBB is standardized in IEEE 802.1Qay, in which a new forwarding behavior is obtained by simply turning off some Ethernet functionalities like STP and MAC learning, thus keeping the same hardware and reducing infrastructural costs; this allows the manual setup of end-to-end, connection-oriented Ethernet paths with predictable bandwidth and delay. Considering that a VLAN ID (VID) usually identifies a loop-free multicast domain in which MAC addresses are flooded, if we configure loop-free MAC paths instead, the VLAN tag is freed up to be used for something else. PBB-TE uses a set of VIDs to identify specific paths through the network to a given destination MAC address: the combination of VID + MAC (60 bits) becomes globally unique. PBB-TE allocates a range of VID/MAC addresses whose forwarding tables are populated via the management or control plane instead of through the traditional flooding and learning techniques, resulting in a prescribed and predetermined path through the network and totally predictable network behavior under all circumstances.
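
A minimal sketch of the PBB-TE idea: forwarding is keyed on the 60-bit (B-VID, B-DA) pair, entries are provisioned by the management/control plane instead of being learned, and unknown traffic is dropped rather than flooded. Names and values are illustrative only:

```python
class PbbTeSwitch:
    """PBB-TE style forwarding: static (B-VID, B-DA) -> port entries, no learning, no flooding."""

    def __init__(self):
        self.table = {}   # (b_vid, b_da) -> out_port, written by the management plane

    def provision(self, b_vid, b_da, out_port):
        # Entries are installed explicitly along the engineered path.
        self.table[(b_vid, b_da)] = out_port

    def forward(self, b_vid, b_da):
        # Unknown traffic is dropped (no flooding), keeping behavior deterministic.
        return self.table.get((b_vid, b_da))

sw = PbbTeSwitch()
sw.provision(b_vid=44, b_da="02:00:00:00:00:10", out_port=7)   # engineered path
print(sw.forward(44, "02:00:00:00:00:10"))   # 7
print(sw.forward(44, "02:00:00:00:00:99"))   # None -> drop
```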

V. T-MPLS/MPLS-TP

V.1. Overview

Transport MPLS (T-MPLS) is a new formulation of MPLS, designed specifically for application in transport networks. It builds upon well-known and widely deployed IP/MPLS technology and standards, but offers a simpler implementation, in which features not relevant to connection-oriented applications are removed. The key enhancements to MPLS provided by T-MPLS, such as engineered point-to-point bidirectional LSPs and end-to-end LSP protection together with advanced OAM support, promise optimal control of transport network resources, leading to lower operational expenses. MPLS was originally developed by the IETF in order to address core IP router performance issues, but has since found strong application in carriers' converged IP/MPLS core networks, and as a platform for data services such as IP-VPN. With increasing packet networking, the ITU-T became interested in adapting MPLS to make it carrier class. The result is Transport MPLS (T-MPLS), a connection-oriented packet transport network based on MPLS that provides managed point-to-point connections to different client layer networks (such as Ethernet). Furthermore, unlike MPLS, it does not support a connectionless mode and is intended to be simpler in scope, less complex in operation and more easily managed (e.g. Layer 3 features have been eliminated and the control plane uses a minimum of IP). The ITU-T ceased work on T-MPLS in December 2008 in favor of MPLS-TP, a standardization of this protocol carried out jointly with the IETF.

V.2. Differences from MPLS


Key differences of T-MPLS/MPLS-TP compared with MPLS include:

- Use of bidirectional LSPs (Label Switched Paths). Whilst MPLS LSPs are unidirectional, transport networks conventionally provision bidirectional connections. T-MPLS therefore pairs the forward and backward LSPs to follow the same nodes and links.

- No PHP (Penultimate Hop Popping) option. PHP, by removing the MPLS label one node before the egress node, simplifies the egress processing required. Indeed, it comes from a historical legacy of wanting to minimize router processing requirements. However, the interface then has a mix of IP and MPLS packets, and the final node must perform an IP (or other payload) look-up instead. More importantly, OAM becomes more complex or even impossible, since the MPLS label context is lost.

- No LSP merging option. LSP merge means that all traffic forwarded along the same path to the same destination may use the same MPLS label. Whilst this may promote scalability, it makes effective OAM and Performance Monitoring (PM) difficult or even impossible, since the traffic source becomes ambiguous and unknown. It is thus not a connection-oriented concept.

- No ECMP (Equal Cost Multiple Path) option. ECMP allows traffic within one LSP to be routed along multiple network paths. Not only does this require additional IP header processing as well as MPLS label processing, but it also makes OAM more complex, since Continuity Check (CC) and PM flows may follow different paths. This concept is not needed in a connection-oriented network.

The following part of the paper describes, through a case study, how T-MPLS/MPLS-TP can be employed to effectively realize an NGN based on Carrier Ethernet.
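
As a brief aside before the case study, the co-routing constraint of the first point above can be sketched conceptually: a bidirectional T-MPLS/MPLS-TP LSP is modeled as a forward and a reverse unidirectional LSP forced to traverse the same nodes and links. This is a conceptual sketch only, not any vendor's data model:

```python
from dataclasses import dataclass, field

@dataclass
class BidirectionalLsp:
    """Pair of co-routed unidirectional LSPs, as required by T-MPLS/MPLS-TP."""
    nodes: list                        # ordered node IDs along the engineered path
    forward: list = field(init=False)
    reverse: list = field(init=False)

    def __post_init__(self):
        self.forward = list(self.nodes)            # e.g. PE1 -> P2 -> P3 -> PE4
        self.reverse = list(reversed(self.nodes))  # same nodes and links, opposite direction

lsp = BidirectionalLsp(nodes=["PE1", "P2", "P3", "PE4"])
print(lsp.forward, lsp.reverse)
```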

VI. The HIPT Project

Considering the increased interest in providing high-performance networks for delivering IPTV traffic, the HIPT project was founded with the objective of enhancing Carrier Ethernet transport for IPTV applications, by developing technology that can fulfill the increasing requirements in terms of bandwidth and quality and at the same time reduce the cost of network operation. The goal of HIPT is thus to provide a carrier-class Ethernet transport platform integrating control plane, traffic management, extended surveillance mechanisms and methods for protection, redundancy and resiliency. Let's now introduce the

Carrier Ethernet transport network architecture for IPTV designed in the HIPT project and see how it takes advantage of MPLS-TP as the Layer 2 technology.

VI.1. Carrier Ethernet architecture design


HIPT investigates a Layer 2 Carrier Ethernet transport network architecture compliant with the Next Generation Network framework, shown in the figure:

This is an NGN-based transport network architecture provisioned by three function blocks: the Service Control Functions (Service Control Function in the figure), the transport network control plane (RACF) and the transfer functions (the Layer 2 transport network). Carrier Ethernet resides in the lowest block, the Layer 2 transport network. Layer 3 dynamic routing in the metro/access domain is replaced by a scalable architecture with static tunnels by means of PBB-TE or MPLS-TP. Above the transfer functions there is a centralized transport control plane, the RACF (Resource Admission Control Functions), which deals with all the issues of transport network resource control and management. The functions in the Service Control Function block instead belong to the network Service Stratum, which is not connected directly to the transport network and deals with application layer signaling, resource reservation negotiation, access authentication, accounting, etc.

VI.1.1. MPLS-TP & NGN

In the HIPT project, MPLS-TP is chosen as the Layer 2 transport technology because its attributes are compliant with IP/MPLS, considered the main convergence technology within the core network. MPLS-TP avoids the IP routing procedure by setting up Layer 2 MPLS tunnels that provide the assured end-to-end quality of service for the transfer service. These networks distinguish different service classes by using different LSPs (Label Switched Paths). Moreover, in HIPT, in addition to this mechanism, different traffic flows belonging to the same service class in an LSP are assigned different bandwidth profiles. In other words, there are many LSPs, one per service class, inside the same MPLS tunnel, which indicates a transport service and a transport direction, and the traffic flow related to one of the LSPs is modeled using one bandwidth profile, which indicates the requirements for the bandwidth carried by the LSP. In this way, in order to grant QoS transport and flow-based policy in the MPLS-TP transport layer, two identifiers need to be associated with each traffic flow: the service/user ID and the flow ID. The former is treated as the indicator of both the LSP to the destination and the service class. The latter can be used as the indicator for traffic policing at the edge routers. These two identifiers are mapped onto two MPLS labels that are assigned to each traffic flow: one for the destination and the transport service class (mapping the service/user ID) and the other specifying the bandwidth profile (mapping the flow ID).
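
A schematic sketch of the two-identifier scheme just described: each traffic flow is bound to a label derived from the service/user ID (destination plus service class) and a label derived from the flow ID (bandwidth profile). All label values, destinations and class names below are invented for illustration; in HIPT they would be assigned by the control plane:

```python
# Hypothetical mapping tables; real label values would be assigned by the RACF.
SERVICE_LABELS = {                    # (destination, service class) -> label
    ("RNC-1", "RT"):         1001,
    ("RNC-1", "NearRT"):     1002,
    ("RNC-1", "BestEffort"): 1003,
}
FLOW_LABELS = {                       # flow ID -> (label, bandwidth profile in Mbps)
    "flow-A": (2001, 50),
    "flow-B": (2002, 20),
}

def labels_for(destination, service_class, flow_id):
    """Return the MPLS label pair pushed on a flow at the edge router."""
    service_label = SERVICE_LABELS[(destination, service_class)]
    flow_label, profile_mbps = FLOW_LABELS[flow_id]
    return service_label, flow_label, profile_mbps

print(labels_for("RNC-1", "RT", "flow-A"))   # (1001, 2001, 50)
```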

Therefore, in Carrier Ethernet MPLS-TP the network QoS assurance control mechanism is implemented within Layer 2 label switching accompanied by static label assignment. This is much more efficient than traditional IP/MPLS because it avoids IP-level signaling and discovery procedures (such as OSPF and RSVP). In this solution the traffic flow is assigned at the edge router the first label (inner label), which indicates the service and the bandwidth profile; then, inside the Layer 2 transport network, the second label (outer label) is used to distinguish the traffic flow and thus the service class. Functions such as MPLS label mapping and management, network resource control and policy decision are centralized inside the Carrier Ethernet control plane (RACF).

VI.1.2. How the RACF works

Let's now illustrate the RACF itself in detail. As said before, this is the control plane of the HIPT project and it can be considered the single connection point between the service control functions and the transport network, in such a way as to keep the underlying transport technology independent of the control technologies. Differently from GMPLS, which is considered the main implementation of a control plane for Carrier Ethernet, the RACF is not distributed but centralized on top of the traffic domain. The RACF contains two blocks, the transport network control plane and the core network control plane; they deal with different parts of the network while offering the same functionalities. The RACF implements two function entities: the PD-FE (Policy Decision Function Entity) and the TRC-FE (Transport Resource Control Function Entity). They provide the connection between the Service Control Functions in the service stratum and the Carrier Ethernet transport network. Upon receiving a request from the service layer, the PD-FE performs a resource availability check by inquiring the related TRC-FE and, once the service ID is obtained, tries to install the policy into the corresponding underlying routers according to the service-level QoS requirements. The control plane uses a QoS push model to handle QoS and network resource control for the transport network. The request by the CPE is initially sent to the Service Control Function, which generates the network service request for the RACF. The RACF performs network resource admission control and takes the final policy decision, consulting its own network management database or querying the relevant transport network, and basing the decision on the user profile. When the final policy decision is made, the QoS information, interpreted through the Service Control Functions and the RACF function components, is finally pushed into the CE devices (Carrier Ethernet switches), allowing the user to obtain his service.
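
A highly simplified sketch of the QoS push model described above: on a service request, the PD-FE consults the TRC-FE for resource availability and, if admission succeeds, pushes the resulting policy toward the Carrier Ethernet switches. The interfaces, capacities and class names are assumptions made purely for illustration:

```python
class TrcFe:
    """Transport Resource Control: tracks available bandwidth (illustrative)."""
    def __init__(self, capacity_mbps):
        self.free = capacity_mbps

    def check_and_reserve(self, requested_mbps):
        if requested_mbps <= self.free:
            self.free -= requested_mbps
            return True
        return False

class PdFe:
    """Policy Decision: admission control plus policy push toward the CE switches."""
    def __init__(self, trc, switches):
        self.trc, self.switches = trc, switches

    def handle_request(self, service_id, requested_mbps, qos_class):
        if not self.trc.check_and_reserve(requested_mbps):
            return "rejected"
        policy = {"service": service_id, "class": qos_class, "rate": requested_mbps}
        for sw in self.switches:          # push model: configuration flows downward
            sw.append(policy)
        return "installed"

switch_configs = [[], []]                 # stand-ins for two CE switches
racf = PdFe(TrcFe(capacity_mbps=100), switch_configs)
print(racf.handle_request("iptv-1", 40, "RT"))   # installed
print(racf.handle_request("iptv-2", 80, "RT"))   # rejected (only 60 Mbps left)
```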

VI.2. HIPT Carrier Ethernet test bed implementation


Now we can illustrate how the HIPT project was implemented in a test bed and the series of service performance tests carried out on this test bed. The following picture illustrates the architecture that has been implemented in this Carrier Ethernet test bed:

In this architecture the CPE (1) acts as the network service client and initiates the service requests, for all the different service classes, with the SIP signaling protocol. These requests are then handled by the SIP server in component (2), which acts as the Service Control Function of the NGN previously described. It communicates with the network control plane (RACF) to deal with the application-level QoS negotiation. The RACF is implemented in component (3) of the test bed, and the resource admission control functions, such as PD-FE and TRC-FE, reside in this plane. According to the network request, the control plane communicates with the Carrier Ethernet equipment to push the policy and QoS parameters that configure the LSPs along the MPLS-TP tunnels. Components (4) represent the MPLS-TP edge routers; they are configured with LSPs of different service classes and support multiple ways of identifying the unique traffic flow. The unique flow ID can be derived from the MPLS EXP bits, the VLAN priority bits or the IP DS bits. The Area Border router instead deals with traffic policy control. Finally, the Carrier Ethernet MPLS-TP switches (6) receive the QoS and policy configuration commands from the RACF, so that each flow can be mapped to a separate output queue. Before the session starts, each CE switch and edge router is configured manually with the MPLS LSP QoS and policy mapping information.
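
A sketch of the per-flow classification mentioned above: the service class is derived from whichever marking is available (MPLS EXP bits, VLAN priority bits, or the IP DS field) and mapped to a per-class output queue. The three-class mapping and threshold values are assumptions chosen to mirror the test setup, not values from the HIPT configuration:

```python
# Illustrative mapping of priority markings to the three service classes of the test bed.
CLASS_BY_EXP = {5: "RT", 3: "NearRT", 0: "BestEffort"}
QUEUE_BY_CLASS = {"RT": 0, "NearRT": 1, "BestEffort": 2}

def classify(exp_bits=None, vlan_pcp=None, dscp=None):
    """Derive a service class from whichever marking is available (EXP, PCP, or DSCP)."""
    if exp_bits is not None:
        return CLASS_BY_EXP.get(exp_bits, "BestEffort")
    if vlan_pcp is not None:
        return "RT" if vlan_pcp >= 5 else "NearRT" if vlan_pcp >= 3 else "BestEffort"
    if dscp is not None:
        return "RT" if dscp >= 40 else "NearRT" if dscp >= 24 else "BestEffort"
    return "BestEffort"

svc = classify(exp_bits=5)
print(svc, "-> queue", QUEUE_BY_CLASS[svc])   # RT -> queue 0
```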

VI.3. The Tests


Performance tests on this network were carried out as follows: the traffic generator generated three traffic flows with different IP addresses, which indicate the different service classes within the Carrier Ethernet network. All three flows were switched through the network and received back by the traffic generator, where the per-class service performance was measured. Accordingly, following the working procedure previously described, three LSPs indicating the different service classes and bandwidth profiles were also set up. The following tables show the bandwidth provision for the service classes and the LSP bandwidth profiles:

The results of the tests are summarized in three plots: 1. Seq Errors: counts the sequence number errors for the incoming packets of each traffic flow

2. Tx Test Throughput: shows the traffic throughput coming out from the traffic generator


3. Rx Test Throughput: shows the throughput of the incoming flows

From these three plots we can tell that the flow with the highest service class (RT) always gets its committed data rate, while the lowest one (Best Effort) is always sacrificed. In the "hill" part of the second and third plots, the Tx and Rx throughput follow the same trend but at different scales: the Best Effort traffic has to be dropped to let the other two flows keep their committed bandwidth profiles. However, the Seq Errors plot also shows packet drops in the two higher classes; this happens because the offered traffic far exceeds the bandwidth provisioned for those service classes. When the total bandwidth consumption is beyond the capacity of the network, even the first-class traffic needs to be dropped.

VII. Mobile Backhaul for Carrier Ethernet

Third- and fourth-generation mobile communication systems, in particular the Universal Mobile Telecommunication System (UMTS) and Long Term Evolution (LTE), are expected to grow intensively in the next few years, driven by a continuously increasing number of mobile subscribers and operational networks all over the world, as well as by dramatically growing traffic demand for data applications like video streaming, web and multimedia services. This in turn requires the Universal Terrestrial Radio Access Network (UTRAN) to offer much higher transport capacity, supporting the evolved UMTS radio interface and HSDPA (High Speed Downlink Packet Access) as well as HSUPA (High Speed Uplink Packet Access) services. But adding ATM capacity by leasing additional E1/T1 lines leads to a linear increase of the operational expense. It must also be considered that substantial expenditures have been invested in ATM-based transport networks, with numerous NodeBs equipped with ATM-based interfaces; thus a smooth introduction of IP calls for a gradual evolution towards IP. Therefore, an intermediate migration solution is needed to integrate cost-efficient IP-based transport alternatives, to reduce the cost per bit-rate within the radio access network, and to allow backward compatibility and interworking of RANs with different transport technologies. In this context, Carrier Ethernet has already established itself as a very cost-effective way of addressing the rapidly increasing bandwidth demands of new services. It is also a viable solution for converged fixed-mobile access networks, as well as a flexible and reliable way of enabling heterogeneous access networks and "all-IP" 3G and 4G mobile networks. Therefore, many solutions have been proposed; we now present the solution adopted by the MEF standard, Circuit Emulation Services, and the Pseudo-Wire backhaul approach standardized by the IETF.


VII.1. Circuit Emulation Services


CES is a major step in the industry's progression toward entirely converged networks and is focused on the transport of TDM services over Carrier Ethernet services. The MEF standard and certification were designed to meet these challenges: legacy voice traffic is transported via TDM and CES over Carrier Ethernet (CESoETH), while data growth is handled by Carrier Ethernet; in this way, the traffic is merged over time. Until now, cellular operators have relied on traditional T1/E1 leased lines from incumbents, which have caused provisioning delays. E-Line, TALS, and mixed-mode CESoETH enable a Metro Ethernet network to be used to backhaul infrastructure traffic from the cell site. CESoETH gateways can extend cellular base station T1/E1 circuits transparently over Metro Ethernet networks, eliminating the need for TDM leased lines. Implementing CESoETH also positions the cellular operator for future 3G and 4G network expansion.

VII.1.1. Technical challenges

Most, if not all, of the technical challenges facing CESoETH result from replicating a Constant Bit Rate (CBR) service over a Variable Bit Rate (VBR) Metro Ethernet Network (MEN). The performance of the MEN in terms of latency, errored and lost frames has a critical effect on the ability to support CESoETH, especially on the ability to synchronize both ends of the synchronous CBR service.

VII.1.2. Use cases

There are many possible deployment scenarios. The Implementation Agreement identifies four generic deployment scenarios that capture the main short-term and long-term deployment possibilities. Legacy means, in this case, non-packet RAN and non-packet transport.

Here an overlay MEN offloads bandwidth onto Ethernet services, while the legacy network continues to transport voice and deliver timing.

Next, RAN nodes with legacy interfaces transport all traffic over Ethernet services using emulation technologies.


In the most evolved case, dual-stack RAN nodes are equipped with both Ethernet and legacy interfaces: the overlay legacy network transports voice and delivers sync, while the MEN is used for bandwidth offloading.

Finally, new RAN nodes with native Ethernet interfaces transport all traffic over Ethernet services. This is the final step in the deployment of Carrier Ethernet for circuit emulation.

VII.1.3. Key Implementation Issues

In order to adopt this new technology, some key implementation issues must be addressed, in addition to the requirements derived from the implementation of Ethernet as a carrier technology.

Ethernet OAM, needed to verify connectivity, identify configuration faults and measure service performance, is based on the following already defined standards: IEEE 802.3ah (Link OAM), IEEE 802.1ag (Connectivity Fault Management) and ITU-T Y.1731 (Performance Monitoring).

Protection and fault recovery: here the requirements are driven by application needs, customer preference, and cost, but it is possible to focus on some main aspects: protection, restoration, access and service recovery. All these features are achieved by combining RAN application failure detection with RAN application protection actions.

Traffic separation: there is a need to define the CoS classes and the performance requirements for each CoS. In general we can identify the following traffic types: Sync, Voice, Near-RT, Control/Signaling, Legacy Data, Background. The standard, however, recommends three CoS classes.


Services: typically there are one or two RNC sites and hundreds to thousands of RBS sites, and services need to be scalable, flexible and cost effective. A Rooted-Multipoint service shows behavior similar to leased lines, but supports a simpler RAN BS and RAN NC solution, and multiplexing can be used for better traffic separation.

VII.2. Pseudo-Wire Ethernet


Here the deployment of Carrier Ethernet for the UTRAN is realized by establishing Pseudo-Wires in the backhaul network. This technique is standardized by the IETF's Pseudo Wire Emulation Edge-to-Edge (PWE3) working group, which defines various types of Pseudo-Wires to emulate traditional and emerging services such as ATM or Frame Relay over a Packet Switched Network (PSN). Despite the many technical advantages and low costs of implementing Carrier Ethernet as transport in the UTRAN, there are two major performance challenges that need further investigation: (1) Delay is often an issue of paramount importance in UTRAN networks, not only due to its impact on service quality, but also because some signaling and control protocols cannot tolerate additional delay. The transport network must deliver the frames on time to the base stations for transmission over the air; excessively delayed frames are discarded. This leads to strict delay and delay variation requirements on the UTRAN transport network. (2) The QoS challenge in Ethernet networks is mainly associated with the fact that Ethernet was designed as a connectionless technology. Therefore, predefining a path for a service and preallocating bandwidth along this path is considered impossible. Standard QoS mechanisms make it possible to prioritize packets belonging to different traffic classes, but this cannot really guarantee end-to-end QoS.

VII.2.1. Network Structure and Protocol Stack

According to the PWE3 reference model in the IETF draft, the network structure of the UTRAN is reorganized to deploy the Pseudo-Wire, replacing the conventional ATM transport network layer (TNL) with an Ethernet network, as illustrated in Figure 1. Both the NodeB and the RNC are Customer Edges (CEs), which are not aware of using an emulated ATM service over Ethernet. The NodeB and the RNC are connected to the transport Ethernet network via two intermediate PWE-capable routers (ATM_IP_router), which contain dual interfaces for ATM and Ethernet. Such routers are located at the edge of the ATM network and the Ethernet network, and are hence also called Provider Edges (PEs); they establish a tunnel emulating the ATM service over the Ethernet network for the corresponding CEs.


Between these routers, an Ethernet Pseudo-Wire is established. ATM cells coming from the CEs are encapsulated into Ethernet PDUs within the routers and then carried across the underlying Ethernet network. After the Ethernet packets arrive at the egress port of the Ethernet network, they are decapsulated into ATM cells and then forwarded to their destination. Figure 1 also shows the involved protocol layers. At the user plane of the RNC or NodeB, higher-layer data entering the UTRAN, e.g. packets of speech data from an AMR codec, is carried via Frame Protocol (FP) PDUs through the Iub interface. These FP PDUs are segmented into AAL2 packets and transmitted as ATM cells on the ATM links. In the ATM-IP router, the ATM cells are received from either the RNC or the NodeB through an ATM Virtual Circuit (VC). At the ATM interface of the router, the ATM cells are captured and delivered to the PWE layer. Here the ATM cells are concatenated into a PWE payload, and PWE protocol overheads and control information (e.g. specifying the ATM service to be emulated in this case) are added. The encapsulated PWE frames are then sent downwards through UDP, IP and Ethernet. Finally, Ethernet packets are created and transmitted via the Ethernet link to the other side. A reverse process occurs at the router at the other end, where the PWE payloads are retrieved and the carried ATM cells are extracted and sent via the ATM link to the destination node.
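
A schematic sketch of the encapsulation step just described: incoming 53-byte ATM cells are concatenated into a PWE payload behind a simplified control word, and the result is then handed to UDP/IP/Ethernet. The header format is deliberately reduced; the real PWE3 encapsulation carries additional fields:

```python
ATM_CELL_SIZE = 53   # bytes per ATM cell (header + payload)

def build_pwe_frame(cells, sequence_number):
    """Concatenate ATM cells behind a simplified 4-byte PWE control word."""
    assert all(len(c) == ATM_CELL_SIZE for c in cells)
    control_word = sequence_number.to_bytes(4, "big")   # real control word also carries flags
    return control_word + b"".join(cells)

cells = [bytes([i]) * ATM_CELL_SIZE for i in range(3)]  # three dummy cells
pwe = build_pwe_frame(cells, sequence_number=42)
print(len(pwe))   # 4 + 3 * 53 = 163 bytes, then carried over UDP/IP/Ethernet
```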

VII.2.2. PWE Parameters

Pseudo-Wire solutions allow network operators to control two parameters that affect the PWE frame size and the resulting delay and jitter: Nc, the maximum number of ATM cells allowed to be concatenated into one PWE frame, and Tc, the maximum waiting time for the concatenation of ATM cells into a PWE frame; the latter bounds the waiting time when fewer than Nc PDUs are in the buffer. Each Ethernet frame includes a dedicated header, so a large setting of Nc and Tc minimizes the overhead per service PDU, which in turn results in higher efficiency. Nevertheless, the larger these parameters are configured, the higher the additional delay and delay variation introduced by PWE. Thus, by means of these parameters, the overhead and the resulting quality impact have to be carefully balanced to achieve a suitable Iub delay that does not exceed its inherent delay boundary.
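
The Nc/Tc trade-off can be quantified with a small sketch: the packetization delay is bounded by the smaller of the time needed to collect Nc cells and Tc, while the per-frame overhead shrinks as more cells are concatenated. The header size and the roughly E1-like cell rate used below are assumptions chosen only to illustrate the trend:

```python
ATM_CELL_BYTES = 53
HEADER_BYTES = 18 + 20 + 8 + 4        # assumed Ethernet + IP + UDP + PWE control word

def pwe_tradeoff(nc, tc_ms, cell_rate_per_s):
    """Return (packetization delay in ms, payload efficiency) for given Nc/Tc settings."""
    fill_time_ms = 1000.0 * nc / cell_rate_per_s      # time to collect Nc cells
    delay_ms = min(fill_time_ms, tc_ms)               # Tc caps the waiting time
    cells_sent = min(nc, max(1, int(cell_rate_per_s * tc_ms / 1000.0)))
    payload = cells_sent * ATM_CELL_BYTES
    efficiency = payload / (payload + HEADER_BYTES)
    return round(delay_ms, 3), round(efficiency, 3)

# Roughly E1-like load (~4700 cells/s): larger Nc -> better efficiency but more delay.
for nc in (1, 4, 16):
    print(nc, pwe_tradeoff(nc, tc_ms=5.0, cell_rate_per_s=4700))
```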

VIII. Mobile Backhaul Synchronization Requirements

The exact requirements on synchronization depend on the mobile equipment used by the mobile operator and on the mobile services offered, and they are a general issue regardless of the implementation standard. The mobile backhaul networks connecting base stations with their base station controllers, or NodeBs with their radio network controllers, have to provide precise frequency synchronization mechanisms regardless of the transport used. The synchronization can be in frequency, in phase or in time. In the case of the MEN standard, synchronization can be approached with three different methods: distribution outside of the MEN, packet-based methods, and Synchronous Ethernet. The current approach, referred to in G.8261, focuses on phase and on packet-based timing methods.


