
215 86877 EACK TR Ed. 02, June 2005

System Description System Architecture Page 52

of the call engine VMs and will result in execution of the related interrupt routines when the VM is next scheduled.

Note: In Alcatel 5020 MGC the RTC is assumed to be 5 ms (but it must be variable, data driven).

In addition, a 2 ms interrupt is required for SysLoader. If a call engine VM is scheduled several times within a 2 ms interval, the 2 ms interrupt has to be executed exactly once inside each call engine VM. This is implemented in a similar way: exactly once, at the beginning of a 2 ms period, a 2 ms interrupt indication is set for each of the call engine VMs, which results in execution of the related interrupt routines when the VM is next scheduled.

Note: We are not forced to change any critical ASM instructions like CLI, STI, etc. in the call engine software to be compatible with Jaluna (but some rules have to be followed in the call engine software to respect the nano-kernel's working assumptions). If interrupts are disabled at the beginning of an RTC, processing continues without immediate impact, but the condition is detected after interrupts are enabled again, and all related tasks are then executed immediately.

Scheduling is triggered after a primary tick interrupt (PTI, see definitions below) and after the primary LINUX has processed any other interrupt (thus pre-empting the call engine secondaries). Such interrupts occur whenever Ethernet frames arrive at the physical controller, and whenever a packet-send action is triggered by a secondary via an NK2OSN trap. The PTI is the real timer interrupt that is set up in the primary LINUX and will be a configurable item at Jaluna software package production time. The definition of the value has to respect the following rules:

PTI <= RTC/NbrOfSecondaries
(to ensure that all call engine VMs will be triggered at least once within one RTC)

PTI = RTC/n
(where n is a positive integer to provide exactly the 5ms interval)

PTI = 2ms/n


(where n is a positive integer, to provide exactly the 2ms interval needed for SysLoader)

Under maximum load a secondary will be interrupted at the latest when this timer expires, and the next secondary will be scheduled. The PTI is not fixed and may vary according to findings about the real behaviour of the system under real load conditions. But to avoid an extreme scheduling impact on the processor load, a lower limit has to be fixed.

Note: In Alcatel 5020 MGC the PTI should never be less than 250 us (the initial default will be 1 ms).

Under normal load of the secondaries there is a further scheduling trigger: entering the VM's idle state. This is done by calling the related idle NK2OSN trap (which passes control to the nano-kernel, which then starts the scheduling procedure).

The scheduling sequence considers all secondaries at the same priority (if Fair Share scheduling is not used), applying a round-robin mechanism. This means that for any scheduling event (independent of the trigger) the next secondary in the list of secondaries is selected. If all secondaries return control to the nano-kernel within the same primary tick interval via the idle trap (thereby indicating that there is currently nothing else to do), and the primary goes idle too, then the processor runs into a HLT, from which it is released at the next interrupt (at the latest at the next primary tick interrupt).

The Fair Share scheduling feature makes it possible to pre-define, for each secondary, a time quantum within the given RTC which it will be assigned. This enables asymmetric VM time-shares even when several secondaries are overloaded. The concept allows pre-defined asymmetric load conditions, which the FS mechanism achieves within some tolerance within each 5 ms cycle. Note that without this mechanism the secondaries would get an unpredictable time-share.
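The PTI rules above can be expressed as a small validity check. This is an illustrative sketch only (the function name, constants and units are assumptions, not part of the Jaluna product):

```python
# Hypothetical sketch: checking a candidate PTI value against the rules
# stated above. All times are in microseconds.

RTC_US = 5000        # real-time clock period (5 ms, per the note above)
SYSLOADER_US = 2000  # SysLoader interrupt period (2 ms)
PTI_MIN_US = 250     # lower limit to avoid extreme scheduling overhead

def pti_is_valid(pti_us, nbr_of_secondaries):
    """Return True if pti_us satisfies the constraints stated in the text."""
    if pti_us < PTI_MIN_US:
        return False                           # below the fixed lower limit
    if pti_us > RTC_US // nbr_of_secondaries:
        return False                           # every VM must run once per RTC
    if RTC_US % pti_us != 0:
        return False                           # PTI = RTC/n for integer n
    if SYSLOADER_US % pti_us != 0:
        return False                           # PTI = 2ms/n for integer n
    return True

print([p for p in (250, 500, 1000, 2500) if pti_is_valid(p, 4)])
# -> [250, 500, 1000]; 2500 us exceeds RTC/NbrOfSecondaries for 4 VMs
```

Note how the initial default of 1 ms remains valid for up to 5 secondaries, since 5000/5 = 1000 us.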
It is indicated to Jaluna's nano-kernel/primary LINUX that fair-share scheduling is to be applied by using the primary LINUX command line parameter fs-sched=(<period>:[<quantum>[/<prio>]]{,[<quantum>[/<prio>]]}*)


We will not use the option to assign a specific background priority. Period and quantum are defined in microseconds [us]. This option can be used e.g. like:

fs-sched=(5000:1250,1250,1250,1250)  (all VMs get the same share, ~25%)
fs-sched=(5000:1000,1000,2000,1000)  (VM1=20%, VM2=20%, VM3=40%, VM4=20%)
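The fs-sched parameter format and the resulting shares can be illustrated with a short parser sketch. This is hypothetical helper code, not the actual Jaluna command-line parser:

```python
# Hypothetical sketch: parsing the fs-sched command line parameter and
# deriving the resulting VM time-shares. Period and quanta in microseconds.

import re

def parse_fs_sched(arg):
    """Parse 'fs-sched=(<period>:<quantum>[/<prio>],...)' into (period, quanta)."""
    m = re.fullmatch(r"fs-sched=\((\d+):(.+)\)", arg)
    if not m:
        raise ValueError("malformed fs-sched parameter")
    period = int(m.group(1))
    quanta = []
    for field in m.group(2).split(","):
        quantum, _, _prio = field.partition("/")   # optional /<prio> suffix
        quanta.append(int(quantum))
    if sum(quanta) > period:
        raise ValueError("quanta exceed the scheduling period")
    return period, quanta

period, quanta = parse_fs_sched("fs-sched=(5000:1000,1000,2000,1000)")
print([100 * q // period for q in quanta])   # -> [20, 20, 40, 20]
```

This reproduces the share figures given in the second example above.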

The accuracy with which the given quantum can be achieved depends mainly on the frequency of scheduling events. As explained above, this depends not only on the chosen granularity of the primary tick interval (PTI, see above). Using very short primary tick timers will improve the deviation figures but increase the scheduling overhead (the same effect is created by frequent hardware interrupts due to packet send/receive events, which are expected to contribute notably to the scheduling triggers). Using long primary tick timers will decrease the scheduling overhead, but the deviation from the assigned load-part will become unacceptable if no other scheduling events occur. Since the scheduling overhead must be minor anyhow (so as not to jeopardize this virtualisation approach), the impact of the chosen PTI value will not be a significant parameter.

As long as none of the secondaries has consumed its pre-defined quantum, the standard scheduling sequence is as described for basic scheduling above: all secondaries have the same foreground priority and a round-robin mechanism is applied, i.e. for any scheduling event (independent of the trigger) the next secondary in the list of secondaries is selected. But as soon as any secondary has consumed its pre-defined quantum, the FS scheduling implementation pushes its priority down to background (done at the next interrupt execution). This means that any subsequent scheduling execution first considers all secondaries that are still on foreground priority and schedules them in a round-robin sequence. But if all secondaries on the foreground level return control to the nano-kernel via the idle trap within the same primary tick interval (thereby indicating that there is currently nothing else to do), then the secondaries on background priority are scheduled (again in a round-robin sequence). If all these secondaries also return control to the nano-kernel via the idle trap within the same primary tick interval, and primary LINUX goes idle too, then the processor runs into a HLT, from which it is released by the next hardware interrupt (at the latest at the next primary tick interrupt).

The following figures show the principles of the RT behaviour and the scheduling policy; in addition they show the concept and consequences of the fair-share scheduling approach:
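The foreground/background round-robin behaviour described above can be sketched as a small tick-based simulation. The function name and the simplification that a secondary always runs for a full primary tick are illustrative assumptions, not the Jaluna implementation:

```python
# Hypothetical simulation sketch of the fair-share policy: round-robin over
# secondaries on foreground priority; a secondary that has consumed its
# quantum is demoted to background and only runs when no foreground
# secondary is runnable. Overload is assumed (no VM ever goes idle).

from collections import deque

def fair_share_cycle(quanta_us, pti_us=1000, rtc_us=5000):
    """Return per-VM runtime (us) accumulated over one RTC."""
    used = [0] * len(quanta_us)
    foreground = deque(range(len(quanta_us)))
    background = deque()
    t = 0
    while t < rtc_us:
        # demote any secondary that has consumed its pre-defined quantum
        for vm in list(foreground):
            if used[vm] >= quanta_us[vm]:
                foreground.remove(vm)
                background.append(vm)
        if foreground:
            vm = foreground[0]; foreground.rotate(-1)   # round-robin, foreground
        elif background:
            vm = background[0]; background.rotate(-1)   # all foreground consumed
        else:
            break                                       # HLT until next interrupt
        used[vm] += pti_us                              # runs one primary tick
        t += pti_us
    return used

print(fair_share_cycle([1000, 1000, 2000, 1000]))   # -> [1000, 1000, 2000, 1000]
```

Running the sketch with equal quanta of 1250 us yields an uneven result such as [2000, 1000, 1000, 1000], illustrating the "within some tolerance" remark above: demotion happens only at tick granularity.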


Figure 12. Scheduling concept (legend)

Figure 13. Conceptual view of scheduling policy (no FS-scheduling considered): overload case


Figure 14. Conceptual view of scheduling policy (no FS-sched. considered): average load case


Figure 15. Conceptual view of scheduling policy (no FS-scheduling considered): missed tick case


Figure 16. Real view of scheduling (full RTC - no FS-sched.): different scheduling request triggers


Figure 17. Real view of scheduling (full RTC - with FS-sched.): different scheduling request triggers

Figure 18. Flow / State Diagram of implemented Jaluna Scheduling Concept


Load monitoring

The basis for measurement in a call engine CE is the HRC (High Resolution Counter), which exists on the hardware blade. From that tick counter and an absolute time stamp, the call engine OSN can determine how much time has been spent on real code execution (e.g. for FMMs, Clocked Procedures and Event Handlers) and, in relation to that, how much idle time was left. In a non-virtualized call engine CE environment the result of this evaluation is the CE load, equivalent to the processor blade load.

In the virtualisation approach the HRC is virtualized. This means that in each VM this counter is only incremented as long as that VM runs. The remaining time, in order not to be wasted, is assigned to the other VMs running on the same blade. This behaviour leads to calculated load values which do not reflect the actual environment. The measurement system computes a dedicated VM/CE load as a percentage value. A total sum value is evaluated by correlating all VMs running on one blade, in order to give a load value for the processor blade (cTCA) or the CPU on a blade (cTCB). The information about which VM/CE runs on which blade/CPU is based on the RIT_id.
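The load calculation described above can be sketched as follows; the tick values, function names and the (busy, run) bookkeeping are illustrative assumptions, not the actual measurement system:

```python
# Hypothetical sketch: deriving per-VM/CE load from a virtualized HRC.
# Each VM's counter advances only while that VM runs, so the busy share is
# taken relative to the VM's own run time; the blade load correlates all
# VMs on the blade against the real measurement interval.

def vm_load_percent(busy_ticks, run_ticks):
    """Load of one VM/CE: busy time relative to its own (virtual) run time."""
    return 100.0 * busy_ticks / run_ticks if run_ticks else 0.0

def blade_load_percent(vms, interval_ticks):
    """Blade (cTCA) load: total busy time of all VMs over the real interval."""
    return 100.0 * sum(busy for busy, _ in vms) / interval_ticks

# Two VMs sharing one blade over a 10_000-tick measurement interval:
vms = [(3000, 4000), (2400, 6000)]    # (busy_ticks, run_ticks) per VM
print([vm_load_percent(b, r) for b, r in vms])   # -> [75.0, 40.0]
print(blade_load_percent(vms, 10_000))           # -> 54.0
```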

System Overview
The system uses:

- one hardware type (cTCA)
- two types of operating systems (call engine OSN, LINUX (MV and RH))
- one virtualisation layer: JALUNA's OSware/LINUX (different from the LINUX types mentioned above)
- four functional domains (call engine, ITCE, server, hardware-mgmt); the management platform, which could be considered a fifth domain, is an external domain

Functional Domains

call engine domain - CE-Types and functional Mapping


Operating system is call engine-OSN (in case of virtualisation: on top of JALUNAs OSware/LINUX)


- CE functions derived from the old product; functions are assigned to a few CE types only
- Some RCDS functions moved to the new EARS; the CE functions LSIF and PBX are now mainly in the new EAUS

Remaining processing nodes are:

1. The PLDA (merged PLCE - type: A/S) is the OAM processor that operates the call engine part (CEs with call engine OSN). It is responsible for the 2nd step of loading the call engine CEs and performs maintenance, DB backup, and the translation and passing-on of operator commands. It no longer has any real (physical) peripherals (HDs).
a. PLCE: see above
b. DFCE: centralized Exchange Maintenance and Defence functions
c. CTCE: this CE is no longer available, but some timing functions are now implemented in PLDA in a different way: an NTP client syncs PLDA with OAM; sync to all other call engine CEs is done via the LDTP server

d. SACP: the CP ACE handles all charging output related functions that are not executed on a per-call basis, e.g.:
- translation from charging DN to charging key and back
- Output Meter Block (OMB)
- Charging Error Handling
- Charging ORJs
e. SORJ: performs most ORJs related to trunk and routing management, to avoid overload of the master copy of the SCALSVT in large exchanges
f. EPOAM: connects an SMA/SMC to the call engine via ROMA over TCP/IP or the OSI-stack protocol over TCP/IP

g. MONI: MPTMON Master/Slave modules to be mapped to all CEs with call engine OSN

2. The TGWCS (type: A/A) is used for calls to or from a TGW (ISUP signalling, TGW controlled via H.248) or to/from a PRA (connected via cAGW, controlled via H.248; the signalling information (D-channel) is transported via SIGTRAN / IUA).

Note: Here we also have the interface to the affected CEs of the UNIX part for calls over SIP-I or to/from SIP terminals (no difference from an ISIG point of view). It plays the role of a TCE for SIP-I. TGWCS handles incoming and outgoing SIP-A/SIP-I calls.
a. STGW: see above

b. SCSVT: CALL SERVICE ACE Trunks; all modules (e.g. CHGC, CFCS) to be mapped to AGWCSXFC - except PATED, TRC and CHAN (mapped to RCDS)
c. SAIN: combines SSF, TCAP and SCCP functionality for all IN calls. The load-sharing table to address the SSF is to be populated with 1 entry (= own LCEID)

d. ISIG: provides the interface to the affected CEs of the UNIX part for calls over SIP-I or to/from SIP terminals (no difference from an ISIG point of view) and provides direct Ethernet connectivity. It plays the role of a TCE for SIP-I.
e. SLDC: this processor contains the Line/Trunk Local Data Collector function: events in the individual terminal control elements are collected in real time to update various statistical counters and collect call events. The Central Data Collector (CDC) in the PLDA polls the LDCs every 5 minutes. 5% of the processor capacity is reserved for the transfer of this polling information.
f. EPBS: performs the collection and buffering of charging records from the LCG function. Needed for USBS, EPM. Modules to be mapped to AGWCS and TGWCS, as Charging does not use the Middleware for communication.

g. MONI: MPTMON Master/Slave modules to be mapped to all CEs with call engine OSN

In addition we find here:
- the interface to the external MRF (announcements, conference), where the served trunk is controlled via TGWCS. With the introduction of the MRF interface signalling, the ARTA (AFHL) is removed (DIAM and CONF also no longer exist)
- the new LSH function, which is the call engine part remaining from the LSIF/PBX CE. This part covers the interface to CFCS, SIGNALLING and AGW-DH, the interface towards EAUS (ASN1 protocol stack support), and the remaining PARM functionality (queue service, access monitoring etc.)
- SATI functionality: IP-based lawful interception, for interception devices controlled by TGWCS trunk signalling systems.

3. The AGWCS (type: A/A) is used for calls to/from analogue and ISDN BAs (using SIGTRAN) connected via an AGW/RGW/IAD or RSU line GW. In order to provide feature continuity also for subscribers connected via an AGW, the call engine call handling functionality is re-used for these subscribers as far as their services are concerned. Here we also have the interface to the affected CEs of the UNIX part for calls over SIP-I or to/from SIP terminals (no difference from an ISIG point of view). It plays the role of a TCE for SIP-I. AGWCS handles outgoing SIP-A/SIP-I calls only.
a. SAGW: see above
b. SCSVL: CALL SERVICE ACE Lines; all modules (e.g. CHGC, CFCS) to be mapped to AGWCSXFC - except PATED, TRC and CHAN (mapped to RCDS)
c. SAIN: combines SSF, TCAP and SCCP functionality for all IN calls. The load-sharing table to address the SSF is to be populated with 1 entry (= own LCEID)

d. ISIG: provides the interface to the affected CEs of the UNIX part for calls over SIP-I or to/from SIP terminals (no difference from an ISIG point of view) and provides direct Ethernet connectivity. It plays the role of a TCE for SIP-I.
e. SLDC: this processor contains the Line/Trunk Local Data Collector function: events in the individual terminal control elements are collected in real time to update various statistical counters and collect call events. The Central Data Collector (CDC) in the PLDA polls the LDCs every 5 minutes. 5% of the processor capacity is reserved for the transfer of this polling information.
f. EPBS: performs the collection and buffering of charging records from the LCG function. Needed for USBS, EPM. Modules to be mapped to AGWCS and TGWCS, as Charging does not use the Middleware for communication.

g. MONI: MPTMON Master/Slave modules to be mapped to all CEs with call engine OSN

In addition we find here:
- the interface to the external MRF (announcements, conference), where the served trunk is controlled via TGWCS. With the introduction of the MRF interface signalling, the ARTA (AFHL) is removed (DIAM and CONF also no longer exist)
- the new LSH function, which is the call engine part remaining from the LSIF/PBX CE. This part covers the interface to CFCS, SIGNALLING and AGW-DH, the interface towards EAUS (ASN1 protocol stack support), and the remaining PARM functionality (queue service, access monitoring etc.)
- SATI functionality: IP-based lawful interception, for interception devices controlled by TGWCS trunk signalling systems.

4. The TRAMC (type: A/S) covers the Trunk Resource Allocation (TRA) function (former STRA).
a. STRA: see above

b. SBB: SACE Bulk Billing and Limit Of Credit. This means that the following ACE functions are to be mapped to TRAMC:
- MCC (Meter Count Collection)
- LOC (Limit of Credit counters)
- DOR (Division of Revenue): collection (CHRON), buffering (TCSSM) and intermediate transfer to the DORC (IDORC)
- DORC (Division Of Revenue Collector): centralized collection of DOR records coming from IDORC, and the update of the DOR counters
- TDFM: collecting (CHRON), buffering (TCSSM) and storing (TDFM) of taxation records on hard disk
- LTL (Local Tax Layouter) for DoR
c. MONI: MPTMON Master/Slave modules to be mapped to all CEs with call engine OSN

5. The TDC (type: A/S): the TDC (Traffic Destination Code) Controls System ACE performs centralized traffic control by means of call gapping. Also, traffic control via the CLBC (Centralized Leaky Bucket Control) mechanism is located in this processor (former STDC).
a. STDC: see above

b. SADM: contains the centralised collection and processing of statistical data, including trunk monitoring, line and trunk observation and network traffic management


c. CON7 (SAN7 and parts of SACP): Real Time Part, e.g. Traffic Destination Code

d. SWLS: White List Screening ACE; all modules for White-List to be mapped to TRAMC
e. MONI: MPTMON Master/Slave modules to be mapped to all CEs with call engine OSN

6. The RCDS (type: LSHR) merges/covers the following functions from former CEs:
a. NCCM: TCE functions; loading of OBC code, N7 tables and the data collector function (64 kbit/s variant of MCCM, which is for 2 Mbit/s)
Note: N7 signalling may be transported either via a TDM interface using MTP2/MTP3 or via an IP interface using SIGTRAN, i.e. SCTP / M2UA or M3UA
b. SCSVL: only the modules CHAN, PATED, TRC (the routing function of PATED and TRC for local and outgoing calls is now also available via EARS!)
c. SOSI: combines the OSI-stack processing, TCAP and SCCP functions. All modules (e.g. OSI, TCAP, SCCP, GTT) to be mapped to RCDS

d. SRCIC: the RANDOM CIC processor performs, per incoming N7 message, the translation of the CCSSN7 CIC code into the trunk circuit identity, in case there is no fixed relationship between the two. TRC modules are also to be mapped to RCDS.
e. MONI: MPTMON Master/Slave modules to be mapped to all CEs with call engine OSN.

7. The INTM (type: A/S) is the Traffic Management ACE; it implements the IN functions Service Filtering and Call Gapping (former CE-type: SATM).
a. SATM: see above

b. MONI MPTMON Master/Slave modules to be mapped to all CEs with call engine OSN

ITCE-Domain - CE-types and functional Mapping


Operating system is LINUX CGE 3.1

Note: OAM still has SOLARIS7.

CE functions migrated from MGC; processing nodes are:

1. The COCE (type: A/S) controls the packet network connection and performs H.248 functionality, including UDP/IP or TCP/IP. The COCO CEs are responsible for terminating the voice-over-IP connection control protocol and for inter-working with the call engine Call Handling processors. It manages the state of a large number of concurrent IP network connections.

2. The RM (type: A/S) provides the function to route messages from call engine based CEs to the appropriate ITCE (e.g. COCO distribution). Managing extend-trunks is no longer required because there is no local bearer traffic. The RM CE also hosts the destination selector function.

3. The IPACC (type: A/S) acts as a gateway to the external packet network, performing functions like IP forwarding, firewall, and layer 2 / layer 1 below IP.

4. The SIP (type: A/S) performs the inter-working between ISUP and SIP-I/SIP-U/SIP-A/SIP-M, performs the functions of a SIP user agent and hosts the SIP protocol stack.

5. The STRAN (type: A/S) processor type is responsible for terminating signalling protocol stack items related to SIGTRAN (i.e. it performs SCTP and IUA functions).

6. The OAM (type: A/S) is responsible for the administration and maintenance of all ITCEs in the MGC and for the interface to the hardware managers in the different chassis. In this role there are requirements for storage and additional Ethernet connectivity. The call engine part (CEs with call engine OSN) is operated via the call engine OAM interfaces (PLDA), but using physical connectivity to the CMC that is provided by the OAMCE.

7. The SIPRG (type: A/S) provides the SIP IAD Registry, which was earlier a part of the ALCATEL 5020 MGC.

8. The SLN7S (type: simplex [application] - A/S [platform]) performs E1 termination for N7 signalling links, including termination of the MTP layers, and provides IPPoE communication capabilities with the call engine domain for the transport of N7 payload. It also supports M2UA and M3UA in Alcatel 5020 MGC (incoming signalling info then arrives via IPACC; the function in this CE may replace STRAN in the future).

Hardware management Domain


Operating system is LINUX

A 3rd party product is used: CCPU's hardware chassis manager.

Processing nodes are:

1. The Flex Manager is used to supervise the hardware in the CCPU chassis, using actions according to PICMG 2.9 or 2.1. It provides an API to higher-layer system management SW in the OAMCE.

Server Domain - Server-Types and functional Mapping


Operating system is LINUX (details see below). EARS and EAUS are servers from a functional point of view, which means they provide former call engine functional parts, but via a state-of-the-art SW/platform/database. From the platform's point of view, however, they are considered to be ITCEs.

Processing nodes are:

1. The EABS (type: A/S) represents the hardware integration of the billing server into the cTCA shelves. It is handled as an independent black-box system, which has its own OAM, loading and maintenance mechanisms. The operating system is RH LINUX (RHL ES 3.0 Enterprise Solution). Other 3rd party SW used:

FreeDCE (Freeware), Release 1.1.0.7, RT
Failsafe (SGI), Release 0.6, RT
DRBD (Linbit), Release 1.0.4, RT

2. The EAUS (type: simplex/LSHR [application] - A/S [platform]) represents a major part of the functionality that was up to now in the call engine CEs LSIF (type: A/S) and/or PBX (type: A/S). EAUS contains the following subscriber related data:
- subscriber information (profiles)
- hunting information for PABX hunting groups
- busy-free status info for subscribers/PRA trunks belonging to a PABX hunting group
- manager/secretary notification data


Data available in both EAUS and the call engine CEs: replicated subscriber access settings (depending on necessity) and dynamic queue service info for hunting groups (a change of these data in EAUS also triggers a change of the same data in the call engine CEs).

EAUS mainly performs the following functions:
- profile retrieval (subscribers / groups / PABX)
- profile data merge
- busy-free status control
- hunting of PABX trunk groups
- notification to manager/secretary configurations
- subscriber control update functions

EAUS replaces the LSIF and PBX CEs. Parts of CFCS SCH/RSH and also Attendant and Alarm Call are taken over by EAUS as well. Note that some remaining functionality was placed in AGWCS/TGWCS (BCG customer billing in conjunction with LSH). EAUS takes over subscriber control functions (SDM, SCH, RSH) with respect to profile data, SS interaction, PW check, ...

The operating system is MV LINUX CGE 3.1. The available middleware is the Server Platform software, but it is not used by the server applications. The database used is Versant VDS 6.0.5.3. All subscriber data are stored in memory in an OO database (VERSANT), which has a disk backup copy on the OAMCE. The data will be provisioned from a master database in the network.

3. The EARS (type: simplex/LSHR [application] - A/S [platform]) covers:

- For the class 4 application: the routing data for outgoing destinations, which were previously populated in PATED/TRC data. (Depending on population/administration the routing task is done by EARS, but it could still be done by PATED if populated accordingly.) The PATED and TRC functions themselves are still located in the call engine CE-type RCDS, but now that an EARS is available, the data required to route a call has to be retrieved from there.
- For the class 5 application: EARS covers subscriber related data corresponding to the former LSIF tree/PNP tree, which was located in the LSIF CE before. This means that EARS now also handles local DNs.

In addition, EARS handles subscriber control codes (like *nn#) and evaluates the related service code string.

Furthermore there will be an LI server implemented on this server hardware, which is independent of the EARS functionality: Lawful Interception as a separate application (with its own DB) called EALS. The provisioning from SPS is done separately, but the hardware and platform features are shared. EALS indicates whether a lawful interception is active for a call.

The operating system is MV LINUX CGE 3.1. The available middleware is the Server Platform software, but it is not used by the server applications. The database used is Versant VDS 6.0.5.3. All routing data are stored in memory in an OO database (VERSANT), which has a disk backup copy on the OAMCE. The data will be provisioned from a master database in the network.


The following picture gives an overview of the merge activities related to call engine CEs, servers and ITCEs:

Figure 19. call engine - ITCE - Server types evolution overview


Inter-Domain Communication

Inter-domain communication is implemented as follows:

TGWCS and AGWCS need to communicate with some ITCE-domain CEs. This communication is performed via the communication middleware (see chapter Middleware (call engine CE - ITCE)). Therefore, the following CEs need the middleware interface:
call engine domain: TGWCS, AGWCS
ITCE domain: COCO, RM, SIGTRAN, SIPCE

TGWCS and AGWCS need to communicate with the billing server. This communication is performed via the EPI interface (see chapter EPI (call engine CE - EABS)). Therefore, the following CEs need the EPI interface:
call engine domain: TGWCS, AGWCS
Server domain: EABS

TGWCS and AGWCS need to communicate with the EARS. This communication is performed via the EDOTCP interface (see chapter EDOTCP (call engine CE - EAxS)). Therefore, the following CEs need the EDOTCP interface:
call engine domain: TGWCS, AGWCS
Server domain: EARS

TGWCS and AGWCS need to communicate with the EAUS. This communication is performed via the EDOTCP interface (see chapter EDOTCP (call engine CE - EAxS)). Therefore, the following CEs need the EDOTCP interface:
call engine domain: TGWCS, AGWCS
Server domain: EAUS

TGWCS and AGWCS need to communicate with the SLN7S. This communication is performed via the IPPoE interface (see chapter IPPoE (call engine CE - ITCE)). Therefore, the following CEs need the IPPoE interface:
call engine domain: TGWCS, AGWCS, RCDS, TDC
Server domain: SLN7S


PLDA needs to communicate with the OAMCE. This communication is performed via the EPG interface (see chapter EPG (call engine CE - ITCE)). Additionally, the EPG interface is used for MPTMON access from the OWP to all call engine CEs. Therefore, the following CEs need the EPG interface:
call engine domain: PLDA, TDC, TRAMC, RCDS, INTM, LSIF, PBX, AGWCS, TGWCS
ITCE domain: OAM

PLDA needs to communicate with the management system CMC. This communication is performed via the ROMA interface (see chapter ROMA (call engine CE - CMC)). The local operator console (OWP) uses GoGlobal for access to the CMC. Therefore, the following CEs need the ROMA interface:
call engine domain: PLDA
Management System domain: CMC

The OAMCE needs to communicate with the hardware-management system (Flex Manager). This communication is performed via the RegLib interface (see chapter RegLib (ITCE - FlexMgr)). Therefore, the following CEs need the RegLib interface:
ITCE domain: OAM
Hardware-Management domain: Flex Manager
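The interface assignments listed above can be summarised in a small lookup table; this is an illustrative restatement of the text, not configuration data from the product:

```python
# Illustrative summary of the inter-domain interfaces and the CEs that need
# them (names taken from the text above); a real system would derive this
# from configuration data, not a hard-coded table.

INTERFACES = {
    "Middleware": {"call engine": ["TGWCS", "AGWCS"],
                   "ITCE": ["COCO", "RM", "SIGTRAN", "SIPCE"]},
    "EPI":        {"call engine": ["TGWCS", "AGWCS"], "Server": ["EABS"]},
    "EDOTCP":     {"call engine": ["TGWCS", "AGWCS"], "Server": ["EARS", "EAUS"]},
    "IPPoE":      {"call engine": ["TGWCS", "AGWCS", "RCDS", "TDC"],
                   "Server": ["SLN7S"]},
    "EPG":        {"call engine": ["PLDA", "TDC", "TRAMC", "RCDS", "INTM",
                                   "LSIF", "PBX", "AGWCS", "TGWCS"],
                   "ITCE": ["OAM"]},
    "ROMA":       {"call engine": ["PLDA"], "Management System": ["CMC"]},
    "RegLib":     {"ITCE": ["OAM"], "Hardware-Management": ["Flex Manager"]},
}

def interfaces_for(ce):
    """Return the interfaces a given CE must implement, per the table."""
    return sorted(name for name, domains in INTERFACES.items()
                  if any(ce in ces for ces in domains.values()))

print(interfaces_for("TGWCS"))   # -> ['EDOTCP', 'EPG', 'EPI', 'IPPoE', 'Middleware']
```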

Multi-Domain Loading

Loading of the systems changes due to the fact that all parts are now on the cTCA hardware. For more details see Loading.

External interface to TDM network

The only physical interface to a TDM network is related to N7-signalling connections. There will be up to 4 E1 links available for each SLSN7 CE. For more details see Figure 29 and chapter Loading.

External interfaces to packet network

There will be some physical connections to the external/public packet network. Three types of processors have physically external connectivity in the Alcatel 5020 MGC:

OAM

IPACC
EABS

PLDA will communicate with the external world via the OAM CEs' physical interface (via a LINUX firewall). For more details on the physical connection of the OAM interfaces, incl. the local OWP, see chapter OA&M. The figure below gives an individual view of the external connectivity requirements of the Alcatel 5020 MGC:

Figure 20. External connectivity - individual requirements of MGC components


External connectivity requirements for the MM-rack components are shown for completeness as well; this has to be considered for real implementation configurations that include an MM-rack:

Figure 21. External connectivity - individual requirements of MM-rack components

External connectivity of EABS in Alcatel 5020 MGC Platform Architecture


The duplicated nodes of the EABS (active/standby) provide two plug-in connectors per node to the customer's management network. The primary connection is used as a data link towards:

IP Centrex (Sylantro Server) for CDR transfer => internal network to co-loc. MM-rack
Multimedia Server (A5020 CSC) => internal network to co-loc. MM-rack
Billing Centre (file- and message-based transfer of CDRs) => customer management network
System management centre (CMC) => customer management network

The secondary connection may be used as an alternative route to the Sylantro Server. This route will be used automatically to reach the Sylantro Server in case the primary route is not available.


A highly available network is strongly recommended for the billing data transfer from the Sylantro Server to the EABS nodes, because of the limited storage capacity on the Sylantro Server. The target network elements which do not require two independent paths for the connection to both EABS nodes (Billing Centre, Alcatel 5020 CSC, CMC) are physically connected via both external links too, but to reach these network elements only the primary link and the default gateway are used. Connections to the EABS can use both nodes (active and standby); therefore there are alternative routes for the CMC and the file-based Billing Centre to reach the EABS. Outgoing connections from the EABS (to the Alcatel 5020 CSC and the message-based Billing Centre) are only created on the active node.

External connectivity of OAMCE in Alcatel 5020 MGC Platform Architecture


The duplicated nodes of the OAMCE (active/standby) provide two plug-in connectors per node for data links towards:

- System management centre (CMC) => customer management network
- WBEM management => customer management network
- PLDA management => customer management network
- EARS/EAUS Versant synchronisation => customer management network

Each node will be connected to the external network via a single link. Connection to a high-availability router or two dedicated routers is NOT assumed (no high availability for OAM connections). Connection to two independent subnets is not supported (A-connections of the OAMCEs in subnet1 for SNMP and B-connections of the OAMCEs for management in subnet2 may be configured, but low-level UNIX commands are required). Due to the migration from SOLARIS7 to LINUX, the restriction of missing link-redundancy - which existed because SOLARIS7 was not using link-status as input for its routing-table-updates - is no longer valid. Although we could use the two external links in bonded mode, this would not really improve reliability (but only the MTBF of the connection) because we have no link supervision implemented, which means that we could never detect a faulty link unless both of them fail. Therefore it was decided to connect only one external link per OAM as long as we have no link-supervision and alarming-of-faulty-links feature. Note: There will be no fail-over of the OAM-nodes in case of an unavailability of the external link.


Note: The WBEM will execute on the active OAMCE only.

External connectivity of IPACC in Alcatel 5020 MGC Platform Architecture


The duplicated nodes of the IPACC (active/standby) provide two plug-in connectors per node for data links towards:

- Gateways (control protocol) => customer control network
- other softswitches / A5020 CSC (signalling) => customer control network
- media server - MRF (signalling) => customer control network
- IP Centrex - Sylantro Server (signalling) => customer control network
- SBC (session border control - gateway) => customer control network

It is required to:

- have high availability for control network access
- avoid the necessity of any IPACC- or ER switch-over due to link failures

Each IPACC-pair node will be connected to one external (sub-)network. There is an option to connect two external sub-networks via two IPACC-pairs. This will look as shown in the figure below:


Figure 22. Implementing two IPACC-pairs to support two external sub-networks

Each IPACC-node will be connected to the external network via two links that may be configured in different ways for different needs. The two links of an IPACC-node are operated in bonded mode (one IP-address). On an IPACC the two individual interfaces (eth2, eth3) will not have an IP configuration associated with them. The two interfaces are linked ("enslaved") to the virtual bonding interface, and the bonding interface (bond0) will be configured with the required IP parameters. The bonding interface will have only a single IP address associated with it. This address will be different on IPACC a and IPACC b (as it was before). Further details on LINUX bonding can be found in "Documentation/networking/bonding.txt", which is part of the LINUX kernel package.
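As a command-level sketch of such a set-up - using the interface names from the text (eth2, eth3, bond0), while the bonding mode, monitoring interval and IP parameters are illustrative assumptions:

```shell
# Load the bonding driver; mode and link-monitoring interval are
# illustrative assumptions (see Documentation/networking/bonding.txt)
modprobe bonding mode=active-backup miimon=100

# The individual interfaces get no IP configuration of their own
ifconfig eth2 down
ifconfig eth3 down

# Enslave both physical interfaces to the virtual bonding interface
ifenslave bond0 eth2 eth3

# Only bond0 carries the single IP address (different on IPACC a / b)
ifconfig bond0 10.0.1.1 netmask 255.255.255.0 up
```

Which bonding mode is actually used is not stated in the document; active-backup is shown here only because it needs no switch-side support.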

IPACC will be connected to two high-availability routers (which may be provided via VRRP-handling), which is required to enable high availability for IPACC connections. Connection will be provided via two switches that are interconnected by (at least) two links (port trunking) to ensure high availability. These links are also available for the routers to communicate via VRRP if no other connection is available. The two IPACC-links are used in crossed configuration to both switches (1Gbit/s physical interface). Note: In all configuration variants and for all external interfaces of IPACC, EABS and OAM the new 1Gbit ITF hardware will be available and in use. This should be supported by Gbit-cabling. By that we are Gbit-ready for external connectivity. But real usage of that capability (which then requires an upgrade of the Fabric Switches to Gbit/s as well) cannot be envisaged for Alcatel 5020 MGC before a specific product integration cycle for that purpose. Note: To limit the capacity of the external IPACC ITF we will set the options for auto-negotiation in the LINUX ethernet driver to a maximum of 100Mbit/s. This is valid as well for the EABS and OAM. This will allow Gbit-enabling by SW-means at any point in time after a successful integration cycle.
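Capping the external interfaces at 100Mbit/s via the driver's auto-negotiation options could be done along these lines; the interface name and the use of ethtool (rather than driver module parameters) are assumptions:

```shell
# Advertise at most 100baseT/Full during auto-negotiation
# (0x008 is the ethtool bitmap value for 100baseT/Full)
ethtool -s eth2 autoneg on advertise 0x008

# Gbit-enabling later by SW-means only: re-advertise 1000baseT/Full
ethtool -s eth2 autoneg on advertise 0x020
```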


Note: Enabling Gbit/s transmission will still face the fact that real throughput will be limited by the current CE-performance (PentiumM 1.6/1.8 GHz) to significantly less than 1Gbit/s. The figures below show the possible configurations:

Figure 23. IPACC configuration options for external connections

In case of unavailability of one of the used external links there will be no fail-over of the IPACC-nodes, because each IPACC-node is connected to the external network via a second link, which can then be used. In case of a switch- or link-failure there is as well always another possibility to route the traffic without the need to do an IPACC switch-over or ER switch-over. This means that the configuration requires an IPACC-swov only in case of IPACC-failure and an ER-swov only in case of ER-failure. This concept improves the external link availability because it:

- reduces the number of cases that require IPACC swov (and by that avoids the risk of an IPACC boot, which is associated with the swov and results in an outage if the 2nd IPACC boots as well before the first one is up again - which may take some minutes)

- reduces the detection time of a failure without coming into a situation that we get fooled by bad link-quality

- simplifies the external switching configuration, which allows failure detection directly via collocated physical links

The remaining swov-cases are demonstrated in the failure scenarios that are shown in the figures below:

Figure 24. failure cases and resulting activities for two links-C crossed configuration

Figure 25. failure cases and resulting activities for two links-C crossed configuration


Note: yellow lines show used links before failure; blue lines show used links after recovery; mixed-colour lines are used before and after failure. A highly available network is recommended to ensure control and signalling data transfer from/to gateways / other softswitches.

Connectivity Implementation in Alcatel 5020 MGC configurations with co-located MM-rack

In case of a co-located MM-rack, external connectivity is to be implemented as defined below:

- we assume a separate subnet for the external control network (IPACC connection), and we assume connection via a high availability router (implementation variant)

- we assume a separate subnet for the OAM-area; redundancy in the MGC is given via duplicated OAMs; each of them has two external connections

- for the gateway-connections etc. redundancy is given via duplicated IPACCs; each of them has duplicated external connections

- for the billing-connections redundancy is given via duplicated EABS; each of them has two external connections in addition


The IP Centrex server can be reached via one IP address in Alcatel 5020 MGC.

Figure 26. Overview: Customer implementation of external connectivity with co-located MM-rack

The switches needed to interconnect the IPACC-nodes and the ERs will be the ones in the MM-rack. We use the MM-rack switch, but the next-level product: e.g. ProCurve 2800, size 1U, 24-port Gbit switch (power dissipation 2x64W in addition to the one currently used). Note: Two of the Gbit links are used for inter-switch communication. For the list of requirements for external switching/routing equipment see the info in chapter Connectivity Implementation in Alcatel 5020 MGC stand-alone configurations (no MM-rack).


The figures below will show the possible switching configurations:

Figure 27. External connections switching configurations using MM-rack switch

Connectivity Implementation in Alcatel 5020 MGC stand-alone configurations (no MM-rack)

In these cases we define a reference configuration instead of a fixed implementation definition. The figure below shows this reference configuration and the paragraph below gives a list of recommended hardware items that may be used to implement such a configuration.


Note: The items in the area of external customer equipment are not further considered in the MGC architecture w.r.t. maintenance, hardware-integration, management, configuration etc.

Figure 28. Reference Configuration for a site without MM-rack

Recommended items list - selected switches should fulfill the following switch requirements:

for IPACC without MM Rack


- Ports: minimum 4x Gigabit + 2x 100 Mbit/s; recommended: 6x Gigabit
- Layer 2 Switching Features: IEEE 802.3ad Link Aggregation
- Management: SNMP V1, V2, V3
- Alcatel reference product: OmniStack 6300

for OAMCE+EABS:

- Ports: minimum 1x Gigabit + 7x 100 Mbit/s (2 spare for next-step enhancements)
- Management: SNMP V1, V2, V3
- Alcatel reference product: OmniStack 6300

Selected routers should fulfill the following Edge Router Requirements:

- The router must integrate into the Ethernet switch matrix so that communication is not affected by a single switch or single link failure. Note: multiple router links must not provide Layer 2 connectivity that would generate Ethernet loops.

- The router must be highly available by means of internal or external redundancy (e.g. dual router configuration with HA protocol support like VRRP or HSRP). The unavailability during switch-over following a failure must be limited to a duration of less than 2 seconds.

- The router must be dimensioned to handle the combined communication traffic of the installed MGC configuration. The router must be able to handle peak traffic without congestion and without introducing packet loss or significant delay or jitter.

- Alcatel reference product: ALCATEL 7750SR


Firewall Functionality

IPACC
There is a firewall on the IPACC implemented using the built-in LINUX-Ipfilter functionality to set up an ACL. By that, only packets from pre-defined IP-source-addresses on pre-defined ports will be forwarded (all others being discarded). Due to a notable number of residential GWs which are directly connected to the MGC, plus some other large SBCs, we assume that we have to support up to 60k rules in the IPtables. To solve the problem of time-consuming searches in case of a high number of entries in the forwarding chain (FW) of IPtables/netfilter of the IPACC-CE we configure the table in the following way:

- create a tree-style user-defined set of chains which has several levels

- each user-defined chain will contain on average 100-300 rules only

- set up user-defined chains offline in the default config. file of the IPtables. They are empty in the beginning and will be filled with rules by creating GWs via GUI and/or during the registration of the GWs
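A minimal sketch of such a tree-style chain layout is shown below; the chain names and the dispatch criterion (source subnet) are illustrative assumptions - the real chains come from the IPtables default config. file and are filled during GW creation/registration:

```shell
# First level: user-defined chains that partition the rule space so a
# packet traverses a few hundred rules instead of up to 60k
iptables -N GW_NET_A
iptables -N GW_NET_B

# Dispatch from the forwarding chain into the sub-chains by source subnet
iptables -A FORWARD -s 10.1.0.0/16 -j GW_NET_A
iptables -A FORWARD -s 10.2.0.0/16 -j GW_NET_B

# Second level: per-gateway accept rules (average 100-300 per chain),
# added when a GW is created via GUI or registers itself
iptables -A GW_NET_A -s 10.1.0.17 -p udp --dport 2944 -j ACCEPT

# Anything not matched by a pre-defined rule is discarded
iptables -A FORWARD -j DROP
```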

To solve the problem of time-consuming recovery of the IPtables rules in case of an IPACC-CE reload (up to 14 min with 40000 rules) we apply the following handling: Rules will be downloaded from the Security-Mgr server to the Security-Mgr client in the IPACC-CE. The Security-Mgr client will create the restore-file by inserting the rules in the relevant user-defined chain. Behaviour of the IPtables during restoration: there are no changes during restoration; at the end of the command execution all the restored rules will take effect in one go. In the meantime - which will be several minutes in case of a large number of rules - we have either no traffic at all (not acceptable for some minutes) or traffic according to specific IPtables rules that are inserted as defaults during initialisation and are active during the restoration phase. The latter allows all megaco/SIP/sigtran traffic to be passed for that intermediate phase (chosen concept).

Megaco/MGCP have default ports (MGCP: 2427/2727; Megaco: 2944) and others out of a range of temporary ports (11000-11999), which will be open for this short time period of restoration for all source IP-addresses. As known so far these ranges are preconfigured and cannot be changed. For SIP and sigtran-xUAs only default ports are used and only these are opened for all source IP-addresses. If other protocols are implemented in the future, they will impact this default IPtables file during initialisation as well.
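The default rules that stay active during the restoration phase could be sketched as follows; the MGCP/Megaco ports and the temporary range are taken from the text above, while the SIP/sigtran port choices and the restore-file path are assumptions:

```shell
# Initialisation defaults: pass all megaco/SIP/sigtran traffic from any
# source IP while the full rule set is being restored
iptables -A FORWARD -p udp --dport 2427 -j ACCEPT          # MGCP
iptables -A FORWARD -p udp --dport 2727 -j ACCEPT          # MGCP
iptables -A FORWARD -p udp --dport 2944 -j ACCEPT          # Megaco
iptables -A FORWARD -p udp --dport 11000:11999 -j ACCEPT   # temporary range
iptables -A FORWARD -p udp --dport 5060 -j ACCEPT          # SIP (assumed port)
iptables -A FORWARD -p sctp -j ACCEPT                      # sigtran-xUAs (assumption)

# Bulk restore of the full rule set: all restored rules take effect
# in one go at the end of the command (path is an assumption)
iptables-restore < /etc/iptables.restore
```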

OAMCE
The OAM-CE is protected by a firewall whose function is to limit management access to the MGC to the CMC and defined operator terminals. The firewall is based on the LINUX IP-Tables functionality. The configuration of the firewall is performed through a GUI, which allows the addition and removal of firewall rules. An individual rule combines a protocol specification with a destination. A set of commonly used OAM protocols is pre-configured; more can be added and removed through the GUI. Destinations can be entered to cover individual hosts or complete subnets. During initial installation of the MGC an IP address can be nominated to have full access to the OAM to allow detailed configuration of the firewall via GUI. Once the initial firewall set-up is complete, this IP address can be removed.

EABS
The LINUX operating system provides a built-in firewall, which is used by the USBS/EABS application for the public LAN. The firewall functionality controls remote access to the USBS/EABS via the configuration of ports.

Generally, all answers to own requests are allowed. All ports which are used for normal operation and maintenance activities are operational (icmp, telnet, rlogin, rsync, sunrpc, cdbd, snmp, emap, ntp, recp, ftp, ssh, rexec, xdmcp, ...).

System Configuration Overview

The following figure gives an overview of the Alcatel 5020 MGC system architecture w.r.t. network environment, functional domains, inter-domain communication (mapping of EPG into all call engine-CEs not shown due to


clarity), control elements, operating systems and switching/communication networks:

Figure 29. System Architecture Overview for Alcatel MGC 5020

Alcatel 5020 MGC

215 86877 EACK TR Ed. 01, June 2005

System Description Hardware Architecture Page 90

Hardware Architecture
Equipment practice - 3rd party product
The ALCATEL 5020 MGC consists of one hardware-family: commercial 3rd-party off-the-shelf cTCA-hardware. The hardware platform selected was requested to be PICMG 2.16, 2.9, 2.1 compliant, and the vendor is under commitment to make the platform cTCA-compliant as soon as this standard is officially valid (where we expect a grouping of standards like 2.16, 2.9, etc., probably in slightly enhanced versions). Nevertheless we are already using the internal working-title "cTCA-hardware" now. Continuous Computing (CCPU) provides the commercial cTCA hardware in units of Flex21 chassis. These chassis will be mounted into 19" racks where we have 47U for power input and cTCA components (1U = 1.75 inch = 44,45 mm). This results in a maximum of 3 Flex21-chassis in one rack. Each of these chassis

- is a completely independent system from hardware point of view. This means that each chassis has a pair of hardware-managers (called FlexManager, provided by CCPU), which maintain the chassis.

- has a total height of 13U and provides 21 6U-slots - 2 slots are reserved for the fabric switches, 19 slots are configurable node slots.

- has a dual power feed (-48V) and individual power-supplies for each node-slot (resp. dual-slot power-supplies for dual-slot applications) in a separate top shelf inside the chassis. This means that there are 21 3U PSU slots. Slots 1 and 21 house as well the redundant chassis-controllers called FlexManager.

- provides front-to-back air flow and is fully de-coupled from the other chassis.

- is designed to meet Class A EMC requirements. EMC screening is performed at chassis- and rack-level. (The complete system mounted inside a rack meets Class B.)

- has a pair of Ethernet-switches (fabric switches in slots 1 and 21) that serve the PICMG 2.16 Ethernet back-plane connections to the node slots. There are in addition 3 spare ports for external usage - via RTM.

The iron rack will provide mechanical mounting for each shelf and -48 volt DC power distribution with breakers/fuses. The fact that there is a 1:1 relation between PSUs and application-boards provides a high level of power-supply redundancy for a chassis, which means that a PSU-failure will not impact more than 1 node-position. There are 3 fan-trays at the bottom of the chassis, each containing 2 fans; a single fan failure does not impact the system function but will be compensated by speeding up the remaining fans. Cooling-redundancy is implemented in a way that even removing a fan-tray completely will have only limited impact on the system - depending on ambient air temperature (for details see Flex21 specifications). The node slots are used to plug in processor boards and backbone Ethernet switches. Peripheral SCSI disks are provided on specific RTMs. DVDs are no longer in the chassis; the DVD of the local OWP is used instead. All node slot equipment can be accessed and handled via IPMI - if the equipment is IPMI enabled. The cTCA processing blades (housing one or more processing nodes, which are called ITCEs or call engine CEs or EAxS) will all have two Ethernet interfaces via backplane, which are used for fault tolerance reasons.

call engine CEs on 3rd party hardware

From the legacy call engine part we inherit functionality / CE-types as shown below. They will all be implemented on cTCA boards (either 1.6GHz PentiumM or 1.8GHz dual-P). They have no connection to any hardware-clock distribution system - which does not exist any more. They all have communication capabilities via dual Ethernet connections via the backplane (PICMG 2.16):


1. PLDAs: the PLDAs (PLCEs) will be positioned at slot 2 of the OAM-chassis and will have an RTM incl. dedicated disk-peripherals. The PLDA-board will be used for a merged functionality of the still needed functionality of former PLCE, DFCE, CTCE and PTCE, etc.

2. ACEs: all ACEs that are available can be equipped on any of the slots that are not predefined for specific functionality. They have no RTMs.

Note: It is required to equip at least one call engine CE per chassis (recommended: 2 ... max) to enable full functionality of Ethernet link maintenance in that chassis (see chapter Ethernet Link Maintenance).

N7-solution on 3rd party hardware

There will be an element on cTCA-hardware that terminates the physical E1-links used for N7-signalling. The hardware used for that purpose is the standard 1.6GHz single-PentiumM cTCA board plus a specific PMC and an RTM plus a specific PIM. A detailed description of the N7-solution in Alcatel 5020 MGC can be found in chapter Ethernet Link Maintenance (ELM).

call engine Disk Usage

From the hardware point of view each PLDA (PLCE) has two SCSI disks equipped. They are placed on the specific PLDA RTM. The first one serves as system disk. The second magnetic disk takes over the role of the optical disk (MOD - used for init and backup). The assignment of disks to CEs and related SCSI-connectivity topics are shown in Chassis Configurations with CCs Flex21 - Top View. In chapter Backup and Data Restore Strategy of this document we find details w.r.t. the usage of those disks in the backup-concepts of this version.


Related Chassis Types


For Alcatel 5020 MGC we defined three chassis-types from hardware provisioning point of view: an OAM-chassis, a primary and a secondary call-handling chassis. All types are shown below to visualise all the items mentioned above:


Figure 1. Chassis Configurations with CC's Flex21 - front view


Figure 2. Chassis Configurations with CCs Flex21 - Top View

Figure 3. Chassis Configurations with CCs Flex21 - Top View


Rack Frame Description


Frame Configuration

The chassis-level equipment practice approach reduces the dependency on auxiliary support hardware mounted within the racks. The basic configuration of this rack consists of

- the 19" rack itself
- a Fuse Panel Unit
- all necessary mounting parts (to fix standard 19" equipment in the rack).

This configuration will be documented as RAU-LR01-A1. The rack frame has the following dimensions:

- Width = 600 mm
- Depth = 600 mm
- Height = 2200 mm

The inner height of the rack is 47U (= 2089,15 mm), but 1U at top and bottom are reserved due to the structural rack enforcement. Both cable tray and raised floor installations must be supported. Consequently, air must optionally also be provided from the floor. There is no specific EMC provisioning at rack structure level, i.e. no EMC shielding by the cabinet.

Grounding and Power


The top rack unit provides 12 circuit breakers for 36 to 72V DC, with 50 Ampere each. The top rack unit allows multi-point grounding (compliant to ETS 300 253). The grounding system within the racks results in a two-wire system (different ground systems are connected within the rack).


Hardware-Numbering-Scheme
It has been decided to apply an RSU-like documentation/addressing-scheme, which means ...

- each vendor chassis is considered as a separate maintenance-rack inside an iron-rack in hardware CAE (where both numberings occur)

- all cPCI-racks are assigned to a virtual row (that is row70 racks A-Z, row71 racks A-Z) independent of their location in the floor-plan (which is documented as well in offline hardware-CAE tables and will show the cPCI-racks in the real physical context of the total exchange); valid rack letters are A-Z without O, I, N, Q, X, Y - i.e. 20 racks per logical row

- each PBA is identified uniquely by the combination of chassis-ID/slot-NBR. Note: for dual-Pentium boards we have in addition the discriminator CPU-Id to identify a single processor-instance; for virtual machines we have in addition the discriminator VM-Id to identify a single virtual CE-instance, which is still seen as one call engine-CE as known from the legacy world.

- the chassis-ID is a composition of two information-parts: the cPCI-rack-NBR and the chassis-NBR inside that rack; both numberings start from zero - so the first chassis has number "00".

This type of RSU-like documentation/addressing-scheme has been selected to get more flexibility for future changes in iron-rack configurations; iron-rack configuration-variants which may be added in future will not lead to documentation-effort. To complete the geographical location identification scheme we can in addition identify uniquely the CPU on a dual-processor board; CPUs can determine their ID (either 0 or 1) by interrogating the IPMI controller and evaluating the assigned IPMI address.


To support the call engine-CE Virtualisation concept that is introduced in Alcatel 5020 MGC we provide in addition a VM-Id (0,1,2,3 - which means that the first call engine-VM has ID=0) that has to be passed to the VM by the VM-manager. For more detail w.r.t. how these address elements are used to form a unique PCE-Id see chapter Computing MAC-Addresses.

We stick to a fixed numbering scheme in the cTCA hardware structure (NO dynamic scheme), which means that

- we provide CE-redundancy as defined in the chapter above; the mate of each CE is in another chassis at the pre-defined slot-position / CPU-ID / VM-ID

- there is in addition a CAE assignment of each hardware-slot to a CE-function - documented in hardware-CAE (OAM-CE, IPACC-CE, fabric switches are generic)

- from Alcatel 5020 MGC onwards the EQUIP-GUI (for on-site installation of ITCEs; all other CE-types will not be equipped via GUI but are fixed by off-line data-definitions only) will check against a configuration-file which contains call engine-hardware-CAE-definitions that identify all call engine-CE slots, server-slots and peripheral-slots (file transferred offline). This will avoid any equipment of ITCEs on slots that are already equipped from another domain; in addition it will enable CHACO [see chapter Hardware Management Software (ChassisCoordinator, ChassisManager, clients)] to perform alarming for all these slots as well. The EQUIP-GUI will store the relevant info into the equipment database, which is implemented disk-backed in the iDM.

- ELM will fail if there is any on-site installation that deviates from the configuration that was defined off-line (due to the correlation of MAC-address assignment to geographical position).

The cTCA hardware requires a chassis-ID per vendor-chassis for location-identification. This chassis-ID

- correlates to the cPCI-iron-frame-sequence-nbr and the location inside this frame (figures on next pages)

- has to be set by installation staff according to the specific data-table provided in the installation-manual

- will be stored in the chassis in an EEPROM on the back-plane

- is used together with the slot-ID (and CPU-Id and VM-Id if applicable) and other elements (see above) as input for the defined algorithm to set up the PCE-ID associated to that slot - fixed off-line by hardware-CAE (the PCE-ID is used in turn to set up the MAC-address for that board-location; for the algorithm see chapter Computing MAC-Addresses).

- will be required in the boot-loader, which gets the chassis-ID from FlexManager using IPMI-protocol and then uses this info together with the retrieved slot-Id, CPU-Id and VM-Id (if applicable) to identify uniquely the requesting CE resp. the loading-configuration-file that is to be applied by OAM for that load-request.

OAM has to provide a unique view for commands/reports from PLDA- and OAMCE-side.

- This view presented to the operator will coincide with the hardware-documentation-view.

- This means that the call engine maintenance numbering scheme (virtual row/rack in rack/shelf/slot) will be used also by OAMCE-GUIs.

- This does NOT mean that there is a data synchronisation between call engine and OAMCE - this might be implemented in a future follow-on project.
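To make the composition concrete, the sketch below packs the address elements (chassis-ID from rack- and chassis-number, slot-NBR, CPU-Id, VM-Id) into one PCE-ID and derives a locally administered MAC address from it. The field widths, bit layout and MAC prefix are pure assumptions for illustration - the real algorithm is the one defined in chapter Computing MAC-Addresses:

```shell
# Illustrative address elements (all values assumed)
rack_nbr=0; chassis_nbr=1; slot_nbr=7; cpu_id=1; vm_id=2

# chassis-ID composed of rack number and chassis number inside the rack
# (max. 3 Flex21-chassis per rack, numbering starting from zero)
chassis_id=$(( rack_nbr * 3 + chassis_nbr ))

# Pack the elements into one PCE-ID (assumed bit layout)
pce_id=$(( (chassis_id << 9) | (slot_nbr << 4) | (cpu_id << 2) | vm_id ))

# Derive a locally administered MAC address from the PCE-ID
mac=$(printf '02:00:00:%02X:%02X:%02X' \
    $(( (pce_id >> 16) & 0xFF )) $(( (pce_id >> 8) & 0xFF )) $(( pce_id & 0xFF )))
echo "$mac"
```

For the assumed values above this prints 02:00:00:00:02:76; the point is only that the geographical position (chassis/slot/CPU/VM) deterministically yields a unique identity and MAC address.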


The figures below demonstrate the hardware-numbering scheme and introduce all related terms:

Figure 4. hardware-numbering-schemes - overall numbering and cTCA-rack related numbering


Figure 5. hardware-numbering-scheme - details of cTCA-chassis aspects & composition of chassis-ID

