PUBLICATION HISTORY
System release: GSM/BSS V14.3
July 2004, Issue 14.10/EN, Standard
Final cleanup and publication activities

July 2004, Issue 14.09/EN, Standard
Final cleanup and publication activities

June 2004, Issue 14.08/EN, Preliminary
Updated according to the following feature: 24961: S12000 dual band 850/1900 E1

December 2003, Issue 14.07/EN, Standard
Updated according to CR Q00732635-04 and CR Q00794268

November 2003, Issue 14.06/EN, Preliminary
For CR Q00767318, added a hot insertion note to Chapter 5 and a hot extraction note to Chapter 6. Updated Chapter 4 to resolve Q00767079.

April 2003, Issue 14.04/EN, Preliminary
Minor editorial update
January 2003, Issue 14.03/EN, Preliminary
The following features were integrated into this document for the V14.3 release:
SV713: AMR Full Rate
SV885: AMR Half Rate
SV1322: TTY support on BSC/TCU e3
AMR 850 MHz
The following changes were made in individual chapters to take the internal comments into account:
Chapter 1: removed the hubs outside the cabinets and the redundant Ethernet link onto the OMU module
Chapter 4: modified the OAM and CallP architecture; added the new upgrade type according to MIB content and BSC software changes
Chapter 8: updated the software description with the delivery package list
New functions (AMR, TTY) integrated in the TMG, DSP functions and TRM module (Chapters 4 and 7)
Minor editorial updates carried out in Chapters 1, 2 and 4

November 2002, Issue 14.02/EN, Preliminary
Creation. The following features were integrated into this document for the V14.3 release:
AR1209-4b1: BSC/TCU e3 SW marking consultation from OMC-R
AR1209-11: Build on-line on BSC e3
AR1209-17: Plug and play on BSC e3
AR1209-30a2: TCU e3 upgrade
PE/DCL/DD/0126 411-9001-126   Standard   14.10/EN   July 2004
PR439: TCU e3 product
PR440: BSC e3 product
PR440-8: BSC e3 Overload
PR440-13: BSC e3 Erlang capacity
PR440-20a: Support of two TCU e3 on one BSC e3
PR440-20b: Support of mix TCU 2G/e3 on one BSC e3
PR440-20c: Creation/deletion on-line on TCU e3
PR1062: Support of TCU 2G on BSC e3
UP1286-3b: BSC e3 upgrade
BSC e3 1000 TRX capacity
e-GSM support on BSC e3
June 2001, Issue 13.03/EN, Draft
Added GPRS features in Chapters 1, 2, 3, 4, 6 and 8. Added RPP and SPP features in Chapter 4. Added DTM and STCH features in Chapter 4.

October 2000, Issue 13.02/EN, Preliminary
Update after review

September 2000, Issue 13.01/EN, Draft
Creation
Table of contents

1  Hardware Description
1.1  Physical characteristics
1.2  Electric power supply
1.3  Mechanical structure
1.3.1  BSC e3 and TCU e3 frame overview
1.3.2  SAI frame overview
1.3.3  HUBs overview
1.4  Dual-shelf assemblies
       Power supply and alarm systems
       Cooling system
1.5  SAI frame description
1.5.1  CTU module description
1.6  BSC e3 and TCU e3 cabinet cabling
1.6.1  BSC e3 cabinet
1.6.2  TCU e3 cabinet

2  Physical architecture
2.1  Hardware structure
2.1.1  BSC e3
2.1.2  TCU e3
2.2
       Control Node
       Interface Node
       Transcoder Node
2.3
       Interface within the Control Node
       Interfaces between the Control Node and the Interface Node
2.3.3
2.3.4

3  Protocol Architecture
3.1
3.1.1  Protocol used for communication between the OMU modules and the OMC-R
3.1.2  Protocol used for communication between each node inside the BSC e3 and the TCU e3 and between each BSS product
3.1.3  Protocol used for communication between each node inside the BSC e3 and the PCUSN and between each BSS product
       Overview and conclusion
4  Functional Architecture
4.1  BSC e3 functional design
4.1.1  Overview
4.1.2  BSC e3 functional characteristics
4.2
       Overview
       TCU e3 functional characteristics
4.3
       OAM architecture
       CallP architecture
4.4
       Overview
       Standards compliancy and detailed requirements

5
5.1
5.2
5.3
       Electrical characteristics
       Functional description
       External interfaces
       Electrical characteristics
       Functional block description
       Hot swap removal request and action
       MMS modules

6
6.1
6.2
6.3
6.4
6.5
6.6

7
7.1
7.2
8  Software Description
8.1  Software architecture
8.2  Layered architecture presentation
8.3  Customer software package deliveries
8.3.1  Control Node
8.3.2  Interface Node
8.3.3  Transcoder Node

9  Dimensioning
List of figures

Figure 1-1   BSC e3 cabinet presentation
Figure 1-2   TCU e3 cabinet presentation
Figure 1-3   BSC e3 cabinet: component layout
Figure 1-4   TCU e3 cabinet: component layout
Figure 1-5   Example of two 8-port optional HUBs connection with BSC/TCU e3 cabinet
Figure 1-6   BSC e3 frame: front view
Figure 1-7   TCU e3 frame: front view
Figure 1-8   Control Node: common architecture inside each module
Figure 1-9   Interface Node or Transcoder Node: common architecture inside each RM
Figure 1-10  Generic module view
Figure 1-11  Module front panel indicators
Figure 1-12  FILLER module: hardware overview
Figure 1-13  PCIU: hardware overview (BSC e3)
Figure 1-14  PCIU: hardware overview (TCU e3)
Figure 1-15  ALM module: functional blocks
Figure 1-16  FMU module: functional blocks
Figure 1-17  SIM module: hardware overview
Figure 1-18  SIM module: functional blocks
Figure 1-19  Cooling air flow diagram inside the BSC e3 or TCU e3 frame assembly
Figure 1-20  Cooling unit: hardware overview
Figure 1-21  Fan unit: hardware overview
Figure 1-22  SAI: hardware overview
Figure 1-23  CTU module: left side view with a CTB and CTMPs (PCM E1 120 ohms)
Figure 1-24  CTU module: right side view with a CTB and CTMDs (PCM T1 110 ohms)
Figure 1-25  CTB physical representation
Figure 1-26  CTB component layout
Figure 1-27  SUBD 62-pin connector on CTB
Figure 1-28  CTMP board: hardware overview
Figure 1-29  CTMP board: components layout
Figure 1-30  CTMP board: SUBD 25-pin connector
Figure 1-31  CTMC board: hardware overview
Figure 1-32  CTMC board: components layout
Figure 1-33  CTMC board: SUBD 8-coax connector
Figure 1-34  CTMD board: hardware overview
Figure 1-35  CTMD board: component layout
Figure 1-36  CTMD board: SUBD 25-pin connector
Figure 1-37  BSC e3: optical fibers cabling
Figure 1-38  BSC e3: optical fiber cabling
Figure 1-39  ATM-SW module: optical fibers plug-in
Figure 1-40  ATM-RM module: optical fibers plug-in
Figure 1-41  BSC e3: PCM internal and external cabling for maximal configuration
Figure 1-42  BSC e3: -48 Vdc and alarms cabling
Figure 1-43  BSC e3: cabling to/from both optional HUBs
Figure 1-44  TCU e3: PCM internal and external cabling for maximal configuration
Figure 1-45  TCU e3: -48 Vdc and alarms cabling
Figure 2-1   BSC e3 cabinet: physical architecture
Figure 2-2   TCU e3 cabinet: physical architecture
Figure 2-3   BSC e3 frame: alarms cabling
Figure 2-4   Interface Node: S-Link distribution
Figure 2-5   TCU e3 frame: alarms cabling
Figure 3-1   Protocol architecture: between the OMC-R and the BSC e3 cabinet
Figure 3-2   Protocol architecture: between each node within a BSC e3 and a TCU e3
Figure 3-3   Protocol architecture between each BSS product within a BSC e3 and a TCU e3
Figure 3-4   Protocol architecture: between each node within a BSC e3 and a PCUSN
Figure 3-5   Protocol architecture between each BSS product within a BSC e3 and a PCUSN
Figure 3-6   Protocol architecture inside a BSS with a TCU e3: overview
Figure 3-7   Protocol architecture inside a BSS with a PCUSN: overview
Figure 4-1   BSC e3 and TCU e3: OAM architecture
Figure 4-2   BSC e3 (Control Node): OMC-Com functional group
Figure 4-3   BSC e3 (Control Node): AOM services functional group
Figure 4-4   BSC e3 (Control Node): supervision functional group
Figure 4-5   BSC e3 (Control Node): C-Node_OAM functional group organization
Figure 4-6   Example of a SWACT operation with three TMU modules
Figure 4-7   Example of a cell group with three TMU modules
Figure 4-8   BSC e3 (Interface Node): functional group organization
Figure 4-9   TCU e3 (Transcoder Node): functional group organization
Figure 4-10  BSC e3, TCU e3 and PCUSN: OA&M hierarchical architecture
Figure 4-11  BSC e3 and TCU e3: Call processing architecture
Figure 4-12  BSC e3 with PCUSN: Call processing architecture
Figure 4-13  BSC e3 (Control Node): TMG functional organization with a TCU e3
Figure 4-14  BSC e3 (Control Node): TMG functional organization with a PCUSN
Figure 4-15  BSC e3 and TCU e3: CallP organization
Figure 4-16  Wander: MTIE specifications
Figure 4-17  Wander: TDEV specifications
Figure 4-18  Jitter: maximum interface jitter specifications
Figure 4-19  Phase transient specifications
Figure 5-1   Control Node: physical architecture
Figure 5-2   OMU module: hardware overview
Figure 5-3   OMU module: functional blocks
Figure 5-4   TMU module: hardware overview
Figure 5-5   TMU module: functional blocks
Figure 5-6   ATM-SW module: hardware overview with OC3 optical fibers plug-in
Figure 5-7   ATM-SW module: hardware overview
Figure 5-8   ATM-SW (CC-1) module: functional blocks
Figure 5-9   MMS module: hardware overview
Figure 5-10  MMS module: functional blocks
Figure 5-11  SCSI bus transaction: OMU modules to/from MMS modules
Figure 6-1   BSC e3 frame (Interface Node): physical architecture
Figure 6-2   CEM module: hardware overview
Figure 6-3   CEM module: functional blocks
Figure 6-4   ATM-RM module: hardware overview
Figure 6-5   ATM-RM module: optical fibers plug-in
Figure 6-6   ATM-RM module: functional blocks
Figure 6-7   8K-RM module: hardware overview
Figure 6-8   8K-RM module: functional blocks
Figure 6-9   LSA-RC/CTU module: electrical architecture
Figure 6-10  LSA-RC module: hardware overview
Figure 6-11  LSA-RC module: front view
Figure 6-12  IEM module: functional blocks
Figure 6-13  RCM: components layout
Figure 6-14  TIM module layout
Figure 6-15  TIM module: 62-pin connector
Figure 7-1   TCU e3 (Transcoder Node): physical architecture
Figure 7-2   TRM module: hardware overview
TRM module: functional blocks
Position of the core system in the layered Control Node software architecture
Position of the core system in the layered Interface Node software architecture
Position of the core system in the layered Transcoder Node software architecture
List of tables

Table 1-1  Description of the visual indicators on the front panel of each module (except the MMS modules) in the BSC e3 and the TCU e3
Table 1-2  Description of the visual indicators on the front panel of each MMS module in the BSC e3
Table 4-1  Type of BSC e3 upgrade
Table 4-2  Interaction of BSC e3 upgrade
Table 4-3  Type of TCU e3 upgrade
Table 8-1  Presentation and description of the software packages inside the Control Node
Applicability
This document applies to the V14.3 BSS system release.
Audience
This document is for operations and maintenance personnel, and for other users who want a deeper knowledge of the BSC e3 and the TCU e3.
Prerequisites
It is recommended that readers also become familiar with the following documents:
<00>: BSS Product Documentation Overview
<01>: BSS Overview
<07>: BSS Operating Principles

Readers should also refer to:
<16>: TCU Reference Manual
<39>: BSS Maintenance Principles
<91>: PCUSN Reference Manual
<101>: Fault Number Description - Volume 1 of 6: BSC and TCU
<105>: Fault Number Description - Volume 5 of 6: Advanced Maintenance Procedures
<128>: OMC-R User Manual - Volume 1 of 3: Object and Fault menus
<129>: OMC-R User Manual - Volume 2 of 3: Configuration, Performance, and Maintenance menus
<130>: OMC-R User Manual - Volume 3 of 3: Security, Administration, SMS-CB, and Help menus
<131>: Fault Number Description: BSC/TCU e3
<132>: BSC/TCU e3 Maintenance Manual
<138>: GSM BSS Engineering Rules
<139>: TML (BSC/TCU e3) User Manual
The glossary is presented in NTP <00>.
Related Documents

The NTPs listed in the paragraphs above are cited throughout this document.
Chapter 8 deals with the software entities that make up the BSC e3 cabinet and the TCU e3 cabinet. The services provided by each software entity are outlined in a strictly functional context. This chapter serves as a reference for personnel who install new software versions.

Chapter 9 gives the number of the NTP that describes the factors governing the dimensioning of the BSC e3 cabinet and the TCU e3 cabinet.
Regulatory information
Refer to Manual < 01 >.
HARDWARE DESCRIPTION
1.1 Physical characteristics
For the overall dimensions of the BSC e3 cabinet and the TCU e3 cabinet, refer to NTP <01>.
1.2  Electric power supply

1.3  Mechanical structure
The BSC e3 cabinet (see Figure 1-1) or the TCU e3 cabinet (see Figure 1-2) is composed of one frame assembly and one SAI (Service Area Interface) frame assembly. The BSC e3 frame and the TCU e3 frame are based on a PTE2000 architecture. Each SAI frame is based on an altered PTE2000 architecture. The basic mechanical elements of a BSC e3 frame or a TCU e3 frame consist of two dual-shelf assemblies, which are based on a SPECTRUM architecture. The Control Node can accommodate up to twenty-eight removable modules. The modules are electrically shielded metal boxes that have identical dimensions, except for the OMU and LSA-RC modules. Modules, cable connections, air-filter assemblies, and other maintenance items can be accessed from the front of the frame. Retractable doors and cable-trough covers protect the cable runs and cable connections. The frame can be used with existing earthquake anchors and existing overhead or underfloor cabling systems. The SAI is installed on the left-hand side of the frame. It is an auxiliary frame that allows you to connect the PCM E1/T1 cables between the BSC e3 frame and the:
BTSs
TCU e3 or PCUSN
The BSC e3 cabinet and the TCU e3 cabinet are designed for indoor applications and are EMC compliant (no rack enclosure is necessary). EMC compliance testing is performed on each dual-shelf assembly.
Figure 1-1  BSC e3 cabinet presentation (BSC e3 frame housing the Control Node and the Interface Node, with the SAI frame alongside)
Figure 1-2  TCU e3 cabinet presentation (TCU e3 frame housing two Transcoder Nodes, with the SAI frame alongside)
1.3.1  BSC e3 and TCU e3 frame overview

The frame of a BSC e3 cabinet (see Figure 1-3) or a TCU e3 cabinet (see Figure 1-4) houses the following:
two dual-shelf assemblies:
for the BSC e3:
one dual-shelf assembly is dedicated to the Control Node
the other dual-shelf assembly is dedicated to the Interface Node
The Control Node is located above the Interface Node.
for the TCU e3:
both dual-shelf assemblies are dedicated to the Transcoder Node
four retractable doors on each dual-shelf assembly
Each of them has a transparent part at the top to show both visual indicators (red and green LEDs) on each module.
one PCIU (Power Cabling Interface Unit)
The PCIU is mounted on the top of the frame of the BSC e3 or the TCU e3 cabinet. It accommodates the power cables from the operator boxes and the power and alarm cables to each dual-shelf assembly. Different covers protect each cable and each connector. A frame summary indicator and a fan failure lamp are located on the cover.
two air filter assemblies
The air filter assemblies filter the air supply for each dual-shelf. One filter assembly is located in the middle of the frame and the other at the bottom of the frame.
two grill assemblies
The upper grill assembly is located in the middle of the frame and the lower grill assembly at the bottom. They allow the air to circulate.
two cooling units
One is located at the top of the frame and the other in the middle. Each houses four fan units and provides mechanical ventilation for each dual-shelf.
Figure 1-3  BSC e3 cabinet: component layout (SAI frame; BSC e3 frame with, from top to bottom: PCIU, cooling unit with four fan units, Control Node (dual-shelf 01, shelves 01 and 00), air filter assembly, upper grill assembly, cooling unit with four fan units, Interface Node (dual-shelf 00, shelves 01 and 00), air filter assembly, lower grill assembly; retractable doors shown in the open and closed positions)
Figure 1-4  TCU e3 cabinet: component layout (TCU e3 frame with, from top to bottom: PCIU, cooling unit with four fan units, Transcoder Node (dual-shelf 01, shelves 01 and 00), air filter assembly, upper grill assembly, cooling unit with four fan units, Transcoder Node (dual-shelf 00, shelves 01 and 00), air filter assembly, lower grill assembly; modules in slots, 30 slots per shelf; retractable doors shown in the open position)
1.3.2  SAI frame overview

The SAI frame in a BSC e3 cabinet (see Figure 1-3) or a TCU e3 cabinet (see Figure 1-4) encloses the electronic equipment used to interface:
the frame of the BSC e3 with the external PCM (E1/T1) cables heading to the:
TCU e3 on the Ater interface
PCUSN on the Agprs interface
BTSs on the Abis interface
the frame of the TCU e3 cabinet with the external PCM (E1/T1) cables heading to the:
BSC e3 on the Ater interface
MSC on the A interface
The SAI frame houses the following:
for the BSC e3, up to six CTU (Cable Transition Unit) modules
They provide the physical interface for:
up to 21 (120 Ω or 75 Ω) PCM E1 links
up to 28 (100 Ω) PCM T1 links
for the TCU e3, up to eight CTU modules
They provide the physical interface for:
up to 21 PCM E1 links
up to 28 PCM T1 links
Each CTU module of the SAI frame houses:
one backplane: CTB (Cable Transition Board)
up to seven boards: CTMx (Cable Transition Module)
Each of them is either:
a CTMC board for 75 Ω PCM E1 coax
This board provides a connection for three PCM E1 links.
a CTMP board for 120 Ω PCM E1 twisted pair
This board provides a connection for three PCM E1 links.
a CTMD board for 100 Ω PCM T1 twisted pair
This board provides a connection for four PCM T1 links.
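The per-CTU capacities above follow directly from the board figures: seven CTMx boards at three E1 links each give 21 E1 links, and seven CTMD boards at four T1 links each give 28 T1 links. A minimal arithmetic sketch (the function name and structure are illustrative, not part of the product):

```python
# Link counts quoted in the text above.
E1_LINKS_PER_BOARD = 3   # CTMC (75 ohm coax) or CTMP (120 ohm twisted pair)
T1_LINKS_PER_BOARD = 4   # CTMD (100 ohm twisted pair)
BOARDS_PER_CTU = 7       # up to seven CTMx boards per CTU module

def ctu_capacity(link_type: str) -> int:
    """Return the maximum number of PCM links one fully equipped CTU provides."""
    per_board = E1_LINKS_PER_BOARD if link_type == "E1" else T1_LINKS_PER_BOARD
    return BOARDS_PER_CTU * per_board

print(ctu_capacity("E1"))  # 21 PCM E1 links, as stated above
print(ctu_capacity("T1"))  # 28 PCM T1 links
```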
1.3.3  HUBs overview

One or two HUBs are necessary for running the system. They provide a physical interface between the OMC-R and both OMU modules, and thereby allow the supervision of the CEMs via the OMUs. The CEM connections use 10BASE-T (Ethernet) and the OMU connections use 10BASE-T or 100BASE-T (Fast Ethernet). These HUBs are installed outside the SAI frame; their installation is the responsibility of the customer. Figure 1-5 gives an example of the OMU and CEM connections inside the BSC/TCU e3 cabinet with two optional 8-port HUBs.
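The LAN roles described above can be summarized as a small link table. This sketch is illustrative only: the endpoint names and Ethernet standards come from the text, while the tuple structure and helper function are assumptions.

```python
# Each tuple is (endpoint A, endpoint B, link type), per the HUBs overview.
LINKS = [
    ("HUB", "OMU", "10BASE-T or 100BASE-T"),  # OMU side supports Fast Ethernet
    ("HUB", "CEM", "10BASE-T"),               # CEMs supervised via the OMUs
    ("HUB", "OMC-R", "10BASE-T or 100BASE-T"),
    ("HUB", "HUB", "cascade"),                # the two optional HUBs cascaded
]

def links_at(endpoint: str) -> list:
    """Return every link touching the given endpoint."""
    return [link for link in LINKS if endpoint in link[:2]]

print(len(links_at("HUB")))  # 4
print(links_at("CEM"))       # [('HUB', 'CEM', '10BASE-T')]
```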
Figure 1-5 Example of OMU and CEM connections inside the BSC/TCU e3 cabinet with two optional 8-port HUBs (HUB1 to the TML e3, HUB2 to the OMC-R; twisted-pair cables with RJ45 connectors; one port is used to put the optional HUBs in cascade)
1.4
1.4.1
1.4.1.1 BSC e3
The BSC e3 frame houses the following dual-shelf assemblies (see Figure 1-6):
- the Control Node, which houses the following modules:
  - OMU: Operation and Maintenance Unit
  - TMU: Traffic Management Unit
  - MMS: Mass Memory Storage
  - ATM-SW: ATM SWitch, also named CC-1 (Communication Controller 1)
  - SIM: Shelf Interface Module
  - FILLER modules
- the Interface Node, which houses the following modules:
  - CEM: Common Equipment Module
  - ATM-RM: ATM Resource Module
  - 8K-RM: 8K Resource Module, or SRT-RM: SubRaTe Resource Module
  - LSA-RC: Low Speed Access Resource Complex. Each of them houses the following modules:
    - IEM: Interface Electronic Module
    - TIM: Termination Interface Module
  - SIM: Shelf Interface Module
  - FILLER modules
1.4.1.2 TCU e3
The TCU e3 frame houses the following dual-shelf assemblies (see Figure 1-7). Each of them corresponds to a Transcoder Node, which houses the following modules:
- CEM: Common Equipment Module
- TRM: Transcoder Resource Module
- LSA-RC: Low Speed Access Resource Complex. Each of them houses the following modules:
  - IEM: Interface Electronic Module
  - TIM: Termination Interface Module
- SIM: Shelf Interface Module
- FILLER modules
Figure 1-6 BSC e3 frame module layout: the Control Node (ATM-SW, OMU, TMU, MMS, SIM and FILLER modules) and the Interface Node (CEM, 8K-RM, ATM-RM, LSA-RC with IEM/TIM modules, SIM and FILLER modules)
Figure 1-7
(Both Transcoder Nodes, each with CEM, TRM, LSA-RC (IEM/TIM), SIM and FILLER modules)
1.4.1.3
Except for the SIM modules, each module contains a computer board or a SCSI disk plus an adapter board, enclosed in a metallic housing which provides (see Figure 1-8):
- a single level of EMC shielding
- noise protection and control of other environmental parameters
- communication redundancy
In addition, each module:
- regenerates the SCBUS clocks from the synchronizing data flow
- provides live insertion capability
- supplies, on the front panel, visual indicators and an Ethernet connector
BSC/TCU e3 Reference Manual
Figure 1-8 Generic module: front panel with LEDs, ITM block, device (computer board, disk or ATM switch), interface adapter board and backplane connection
1.4.1.4
Common module hardware architecture in the Interface Node and the Transcoder Node
Figure 1-9 shows how each 8K-RM, ATM-RM, TRM and IEM/LSA-RC module presents the same interface to the CEM module. A common S-Link interface is responsible for the physical interface. It:
- recovers data from both CEM modules
- monitors link health by means of a CRC check
- extracts messaging channels from both CEM modules
- selects PCM data from both CEM modules, based on the CEM activities. A small elastic store function is supplied to accommodate phase variations between the CEM modules
- formats the selected data stream into a parallel bus for access by the resources supplied by:
  - the 8K-RM, ATM-RM or IEM/LSA-RC modules in the Interface Node
  - the TRM and IEM/LSA-RC modules in the Transcoder Node
- broadcasts outgoing PCM data to both CEM modules
- inserts outgoing messaging TSs to each CEM module
- inserts link CRCs
- provides low-level links, control and status facilities, including test and ID storage
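The "insert link CRCs / monitor link health by a CRC check" pairing above is a standard sender/receiver pattern. A hedged sketch follows; the actual CRC polynomial and S-Link frame layout are not given in this manual, so zlib's CRC-32 stands in purely for illustration:

```python
# Illustrative CRC link-health check (hypothetical frame format: payload
# followed by a 4-byte CRC trailer).
import zlib

def append_crc(payload: bytes) -> bytes:
    """Sender side: append a 4-byte CRC to the outgoing frame."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def link_healthy(frame: bytes) -> bool:
    """Receiver side: recompute the CRC and compare with the trailer."""
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == trailer

frame = append_crc(b"PCM data from CEM 0")
print(link_healthy(frame))               # True on an intact link
print(link_healthy(b"X" + frame[1:]))    # False: single-byte corruption detected
```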
Each RM in the Interface Node or in the Transcoder Node has:
- a local processor, which provides maintenance and low-level processing related to the function
- a main device board, enclosed in a metallic housing that provides a single level of EMC shielding, noise protection and control of other environmental parameters
- a Test Bus Master functional block, which consists of:
  - an ITM (Intelligent Test Master) ASIC module
  - information memory
  - power-up reset logic
The Test Bus Master functional block provides an interface to a system-standard test and maintenance bus based on the proposed IEEE 1149.5 MTM (Module Test and Maintenance) bus standard. This provides consistent access to system test and maintenance resources such as:
- MTM bus module slot ID
- storage and retrieval of fault logs
- storage and retrieval of test and configuration data
- status LEDs
Access and control of the Test Bus Master functional block is performed through the CEM ITM block using the MTM bus.
Figure 1-9 RM generic architecture: front panel with LEDs, ITM block, local maintenance processor, serial link (S-Link) interface, MTM bus, RM-specific hardware and software, and backplane connection
Copyright © 2000-2004 Nortel Networks
1.4.1.5
Generic hardware architecture inside the BSC e3 and the TCU e3

Physical design description
Figure 1-10 shows a generic module. Each module provides the following features and benefits:
- a single level of EMC shielding
- EMI containment across boards within a shelf (RFI, radiated/conducted)
- a defined control volume for the noise environment
- ESD protection for circuit packs
- handling ruggedness
- minimized EMC retest with new designs
- a PCB stiffener function
- visual indicators on top of the front panel
Figure 1-10 Generic module with EMC shield
LED description
Each module inside each dual-shelf assembly houses two LEDs on the upper part of the front panel. This eases on-site maintenance and reduces the risk of human error. The colors of these LEDs are:
- red, with a triangular shape
- green, with a rectangular shape
The red and green LEDs indicate the module status. Figure 1-11 shows the position of each LED for each module.
Figure 1-11 LED positions on the module front panels
LED display
The table below gives the description, the combinations and the states of the red LED and the green LED for each module (except the MMS module) inside the BSC e3 cabinet and the TCU e3 cabinet. In addition, it gives some scenarios: examples of LED states according to the action performed on the module (insertion, removal, and so on).
Step | Red LED  | Green LED | Status
1    | unlit    | unlit     | The module is not powered, or the BIST terminated successfully
2    | lit      | lit       | The BIST is running or terminated unsuccessfully
3    | unlit    | blinking  | The module is passive
4    | unlit    | lit       | The module is active and unlocked (active OMU, both ATM-SW, all TMU)
5    | lit      | unlit     | Alarm state
6    | blinking | unlit     | Path finding (the module can be removed)
7    | blinking | blinking  | The ATM-SW module waits for OMU master activation (simultaneous blinking)
8    | blinking | blinking  | The ATM-SW module waits for software downloading (alternate blinking)

Table 1-1 Description of the visual indicators on the front panel of each module (except the MMS modules) in the BSC e3 and the TCU e3
Scenario 1:
When a TMU module is inserted, the LED behavior is: step 1, step 2, step 4. When an ATM-SW module is inserted, the LED behavior is: step 1, step 2, step 7, step 8, step 4.
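The LED table lends itself to a simple lookup. The dictionary encoding below is our own illustration of Table 1-1 (the states are from the manual, the code itself is hypothetical):

```python
# (red, green) LED pair -> module status, per Table 1-1.
LED_STATUS = {
    ("unlit", "unlit"):       "not powered, or BIST terminated successfully",
    ("lit", "lit"):           "BIST running or terminated unsuccessfully",
    ("unlit", "blinking"):    "module passive",
    ("unlit", "lit"):         "module active and unlocked",
    ("lit", "unlit"):         "alarm state",
    ("blinking", "unlit"):    "path finding (module can be removed)",
    ("blinking", "blinking"): "ATM-SW waiting (OMU master activation or "
                              "software download, depending on blink pattern)",
}

def module_status(red: str, green: str) -> str:
    """Map a front-panel LED combination to the status it indicates."""
    return LED_STATUS.get((red, green), "unknown LED combination")

print(module_status("unlit", "lit"))  # module active and unlocked
```

Note that steps 7 and 8 share the same (blinking, blinking) pair and are distinguished only by whether the two LEDs blink simultaneously or alternately, which a static lookup cannot capture.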
Scenario 2:
When a passive OMU module has to be removed, one must press the Removal request pushbutton (a TML command also exists); the LED behavior is: step 3, step 1, step 6. When an active OMU module has to be removed, the behavior is: step 4, step 3, step 1, step 6.
Scenario 3:
The table below gives the description, the combinations and the states of the red LED and the green LED for the MMS module in the BSC e3 cabinet. In addition, it gives some scenarios: examples of LED states according to the action performed on the module (insertion, removal, and so on).
Step | Red LED  | Green LED | Status
1    | unlit    | unlit     | The MMS module is not powered
2    | lit      | lit       | The MMS module is not managed or not created
3    | unlit    | blinking  | The MMS module is not operational (disk updating or stopping)
4    | unlit    | lit       | The MMS module is active and unlocked
5    | lit      | unlit     | Alarm state
6    | blinking | unlit     | Path finding (the MMS module can be removed)

Table 1-2 Description of the visual indicators on the front panel of each MMS module in the BSC e3
Scenario 2:
Scenario 3:
A FILLER module (see Figure 1-12) is an empty module container which can be used inside:
- the Control Node and the Interface Node of the BSC e3 cabinet
- both Transcoder Nodes of the TCU e3 cabinet
A FILLER module occupies any slot in each dual-shelf that does not contain a module or an RM. Each unused slot on a powered shelf must be equipped with a FILLER module. FILLER modules maintain electromagnetic interference (EMI) integrity and maintain shelf airflow patterns to ensure proper cooling. If one or more slots remain empty (that is, they do not house a FILLER module), then the BSC e3 or the TCU e3 frame assembly can be damaged.
Figure 1-12 FILLER module (EMC shield and front panel)
1.4.2
Power supply and alarm systems

The power supply and the alarm systems of the BSC e3 or the TCU e3 frame are composed of:
- a PCIU. This serves as a central distribution and gathering point for all power and alarm cabling used inside the BSC e3 or the TCU e3 frame. It transfers the -48 Vdc supply and the alarms to (from):
  - the SIM modules
  - the fan units housed in the cooling units
- two SIM modules for each dual-shelf

The PCIU is located in a frame power distribution tray and mounted:
- on the top of the BSC e3 (see Figure 1-13)
- on the top of the TCU e3 (see Figure 1-14)
The PCIU contains the following modules:
- ALM: ALarm Module
- FMU: Fan Management Unit

When the frame summary indicator (amber lamp) located on the front cover is:
- OFF: there is no active alarm in the BSC e3 or the TCU e3 frame
- ON: there is an active alarm in the BSC e3 or the TCU e3 frame
Figure 1-13 PCIU on top of the BSC e3 frame: front view with cover (ALM module, test jacks, ABS (-) connection, open cover screws, fan failure lamp)
Figure 1-14 PCIU on top of the TCU e3 frame: front view with cover (ALM module, test jacks, ABS (-) connection, open cover screws, fan failure lamp)
The PCIU provides:
- a connection for the -48 Vdc (A and B feeds) between the PCIU and the operator boxes
- a connection (via four cables) for the -48 Vdc (A and B feeds) and the alarms between the PCIU and the four SIM modules
- a connection (via two cables) for the -48 Vdc (A and B feeds) and the alarms between the PCIU and both cooling units
- an ABS connection in standalone mode for:
  - telephone and data jacks
  - the frame fail LED
- front access to all connections for the I&C and maintenance procedures
ALM module

Functions
The ALM module performs the following main functions:
- monitors the SIM modules, the cooling units and the fuse failures
- provides control for each LED on the fan units
- reports alarms on each dual-shelf
- reports the PCIU fail function
Functional blocks
The ALM module houses the following functional blocks (see Figure 1-15):
- PCIU fail function. It combines the PCIU fail signals with the fan unit status input. The composite signal is applied to the office aisle alarm block for delivery to the row-end alarm display and to the office alarm device
- fan unit and cooling function status and LED drive. Eight fan unit and two cooling unit status and alarm signals are combined by this function and forwarded to the shelf and to the PCIU fail function for further processing
- LED test. This test provides a signal to each dual-shelf to test all LEDs. The signal is activated from the push button named Test Lamp, located in the middle of the BSC e3 or the TCU e3 frame on the cooling unit front panel (see Figure 1-20)
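The "PCIU fail" aggregation described above can be sketched as a simple any-failure combination. The signal names and the any()/all() logic below are our illustration, not the actual ALM circuit:

```python
# Hypothetical composite-alarm logic: PCIU fail signals combined with
# fan-unit status inputs into one signal for the office alarm chain.
def composite_alarm(pciu_fail_signals: list[bool],
                    fan_unit_ok: list[bool]) -> bool:
    """True if the composite signal should raise the office alarm."""
    any_pciu_fail = any(pciu_fail_signals)
    any_fan_fail = not all(fan_unit_ok)
    return any_pciu_fail or any_fan_fail

# All eight fan units healthy, no PCIU failure: no alarm.
print(composite_alarm([False, False], [True] * 8))            # False
# One fan unit down: composite alarm raised.
print(composite_alarm([False, False], [True] * 7 + [False]))  # True
```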
Figure 1-15 ALM module functional blocks (front panel with LEDs, LED test, backplane)
FMU module

The FMU module provides over-current protection and conducted noise filtering. A soft-start circuit prevents high filter-capacitor inrush current.

Functions
The main functions of the FMU module are:
- soft start to limit capacitor inrush current
- capacitor fault alarm
- -48/-60 V at 30 A input capability
- input transient protection alarm
Functional blocks
The FMU module contains the following functional blocks (see Figure 1-16):
- inductors. They limit the noise conducted from the fan units to the main power supply
- soft start. When power is applied, the charging circuit allows the capacitors to charge slowly until they are fully charged. If the capacitor current is excessive, a fuse interrupts the flow of current
- alarm. The alarm circuitry is triggered by a failure of the capacitor filter or a loss of input power
- capacitors. The capacitors are part of the circuit that limits the noise conducted from the fan units to the main power bus
Figure 1-16 FMU module functional blocks (front panel with LEDs)
1.4.2.2
SIM module
The SIM modules (see Figure 1-17) are the dc power conditioners for each dual-shelf; each dual-shelf houses two SIM modules. The dc part of the SIM module houses:
- a switch
- a soft start circuit
- a -48 Vdc/alarms connector
- an electromagnetic interference (EMI) conditioning element
The SIM module also provides the alarm interface between the PCIU module and each dual-shelf.
Functions
The SIM module manages the following functions:
- current limiting during start-up
- the alarms: filter fail, loss alarm, switch on/off, and the alarm interface between the PCIU and:
  - the OMU modules for the Control Node
  - the CEM modules for the Interface Node and the Transcoder Node
- filtered -48/-60 Vdc at 30 A power conditioning
Functional Blocks
The switch houses a 30-amp filter which is used to connect the EMI filter to the -48 Vdc supply.
Soft start circuitry
The soft start circuitry protects the power conditioning circuit from high-inrush start-up conditions.

EMI filter
The EMI filter provides filtration of conducted interference to maintain CC/CSA-mandated standards.
Alarm block and LED drivers
They provide alarm logic and LED drivers for the following alarm functions:
- capacitor protection fuse or circuit protection fuse
- power loss
- power switch open
Figure 1-17 SIM module (alarms (+) connection)
Figure 1-18 SIM module: LEDs, alarm connections (+/-) and filter connection (+)
1.4.3
Cooling system

The air is forced through each part, and the frame is cooled by two cooling units (as shown in Figure 1-19), each housing four fan units (see Figure 1-20).
The filter assembly rests horizontally at the bottom of each dual-shelf. The foam air filter elements contained in the assembly are not reusable. Replace the filter periodically, depending on the local dust conditions.
The bottom dual-shelf assembly draws air through the lower grill assembly mounted at the bottom of the frame. The upper dual-shelf assembly draws cooling air through the upper grill assembly located between the dual-shelf assemblies. The grill assemblies protect the ambient air intakes that cool the dual-shelf assemblies.
The fan units (see Figure 1-21):
- draw air through the grills, into the air filter assemblies, and then into the shelves for cooling. The fans expel the air from the rear of the frame assembly
- are individually replaceable fans that include mounting slides and connectors. A fan unit is removed by turning the plastic screw (located on the extraction handle) clockwise and pulling the unit out of the cooling unit
Figure 1-19 Cooling air flow diagram inside the BSC e3 or TCU e3 frame assembly (air drawn through the filter assemblies into dual-shelf assemblies 00 and 01, and exhausted at the rear)
Figure 1-20 Front view of the cooling unit with PCIU (fan-unit assemblies with alarm LEDs, locking screws and extraction handles; test lamp; telephone and data jacks; upper grill and air filter assemblies)
Figure 1-21 Fan unit (alarm LED)
1.5
1.5.1
CTU module description

The CTU module (see Figure 1-23 and Figure 1-24) is a frame assembly which provides the physical interface (PCM E1/T1 links) between the TIM module (housed inside the LSA-RC module) and the other BSS products. It is split up as follows:
- one backplane: CTB (Cable Transition Board). The CTB is mounted at the back of the CTU module and provides connection with each CTMx (either CTMP, CTMC, or CTMD)
- up to seven boards: CTMx (Cable Transition Module). Each of them is either:
  - a CTMP board for PCM E1 twisted pair. It provides a PCM loopback capability and secondary surge protection for the 120 Ω impedance PCM E1 interface connection
  - a CTMC board for PCM E1 coax. It provides PCM loopback capability, secondary surge protection and impedance matching. Impedance matching allows the 75 Ω operator-premise coaxial cables to be connected to the TIM module, which has an internal impedance of 120 Ω
  - a CTMD board for PCM T1 twisted pair. It provides PCM loopback capability and secondary surge protection for the 100 Ω impedance PCM T1 interface connection
Each CTU module provides the following functions:
- terminates the cables that connect the TIM module to the CTB
- provides connectors for terminating the PCM links on the:
  - Abis interface and the Ater or Agprs interface for the BSC e3 cabinet
  - A interface and the Ater interface for the TCU e3 cabinet
Figure 1-22 SAI frame with CTU module and CTMP board
Note: This figure shows an SAI frame dedicated to a BSC e3 cabinet with the CTMP board (for PCM E1 120 Ω).
Figure 1-23 CTU module: left side view with a CTB and CTMPs (PCM E1 120 Ω); the 25-pin connectors carry the external E1 links to/from the other products (OMC-R, BTS, MSC, etc.)
Figure 1-24 CTU module: right side view with a CTB and CTMDs (PCM T1 100 Ω); the two 62-pin connectors carry the internal PCM Tx signals (from the TIM module via the TIM Tx cable) and Rx signals (to the TIM module via the TIM Rx cable)
1.5.1.1
CTB description
The CTB is a backplane that interfaces (see Figure 1-25):
- the individual CTMx boards, each of which is either:
  - a CTMP board for PCM E1 twisted pair
  - a CTMC board for PCM E1 coax
  - a CTMD board for PCM T1 twisted pair
- and the TIM module, which is housed inside the LSA-RC module

The CTB houses:
- up to seven CTMx boards to connect the PCM (E1/T1) external links between the SAI and the other BSS products
- two 62-pin connectors for the PCM (E1/T1) internal links between the SAI and the LSA-RCs

Figure 1-26 shows the CTB component layout. Each PCM (E1/T1) transmit signal is routed to one of the 62-pin connectors (see Figure 1-27) and each PCM (E1/T1) receive signal is routed to the other 62-pin connector. Each of them is connected to the corresponding TIM module inside the LSA-RC module.
In addition, the CTB houses seven 1SU connectors to connect each CTMx board. The CTB provides the following features:
- ease of installation for 21 PCM E1 links or 28 PCM T1 links
- ease of troubleshooting for 21 PCM E1 links or 28 PCM T1 links
- a controlled impedance design, with:
  - 120 Ω +/-10% between tip and ring signals for PCM E1 links
  - 75 Ω +/-10% between tip and ring signals for PCM E1 links
  - 100 Ω +/-10% between tip and ring signals for PCM T1 links
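The ±10% controlled-impedance specification above amounts to a simple tolerance band around each nominal value. A small numeric restatement (our own helper, not a test procedure from the manual):

```python
# Check whether a measured tip-to-ring impedance meets the CTB spec.
def within_tolerance(measured_ohms: float, nominal_ohms: float,
                     tolerance: float = 0.10) -> bool:
    """True if the measurement is within nominal +/- tolerance."""
    return abs(measured_ohms - nominal_ohms) <= tolerance * nominal_ohms

print(within_tolerance(126.0, 120.0))  # True: within 120 Ohm +/- 10%
print(within_tolerance(135.0, 120.0))  # False: outside the band
```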
Figure 1-25 CTB: 1SU Molex connectors for the CTMC, CTMD or CTMP boards, and 62-pin connectors for the internal PCM Tx signals (from the TIM module) and Rx signals (to the TIM module)
Figure 1-26 CTB component layout (CTMx board positions)
Figure 1-27 62-pin connector pinout. The number in brackets indicates the number of the PCM (E1/T1) pin; (P3): transmit PCM (E1/T1) links; (P4): receive PCM (E1/T1) links.
1.5.1.2
CTMP description
The CTMP contains two basic sections (see Figure 1-28):
- secondary protection for each PCM E1 twisted pair against over-current and over-voltage
- loopback push buttons (see Figure 1-29) that can loop the transmit and receive PCM E1 signals back towards the LSA-RC module and the customer equipment

The CTMP board houses two connectors:
- a 25-pin connector
- a 1SU Molex Omnigrid right-angle connector
The 25-pin connector (see Figure 1-30) pinout is arranged such that the transmit and receive PCM E1 signals are separated as much as possible. In addition, ground pins are distributed throughout the connector for further isolation between spans.

The CTMP provides the following functions:
- physical interface for three PCM E1 links to the LSA-RC module
- loopback capability for each of the three PCM E1 links, both on the local and remote sides
- secondary protection against over-voltage and over-current for each PCM E1 link, implemented with the Trisil SMP75-8 by SGS-Thomson. The Trisil is a low-voltage surge arrestor designed to protect T1/E1 trunks against lightning strikes and other transients
- easy installation and troubleshooting for many PCM E1 links
- three loopback switches to loop each PCM E1 link back towards the LSA-RC module and towards the customer equipment. These switches are mounted on the PCB and protrude through the faceplate of the CTMP. In addition, a right-angle faceplate has been designed to allow customer cable access
Figure 1-28
Figure 1-29 CTMP board loopback in the CTU module (120 Ω balanced)
Figure 1-30 CTMP 25-pin connector pinout (Tx/Rx tip and ring signals separated by GND pins; NC: not connected)
1.5.1.3
CTMC description
The CTMC board houses three basic sections (see Figure 1-31):
- secondary protection for each PCM E1 coax against over-current and over-voltage
- a balun for each PCM E1 coax to match the 75 Ω single-ended signal and convert it to a 120 Ω differential pair
- loopback push buttons (see Figure 1-32) that can loop the transmit and receive PCM E1 signals back towards the TIM module and the operator boxes

The CTMC board houses two connectors:
- an 8-coax connector
- a 1SU Molex OmniGrid right-angle connector
On the 8-coax connector (see Figure 1-33), there are three pins associated with each coax cable connection: one pin for the signal and two pins for the shield. There are two different pin counting schemes. The eight coax connections are labelled A1 through A8. Ground separates the pairs of signals (tip and ring). However, certain shield pins are intentionally left unconnected. This prevents a ground potential difference between the customer's equipment and the CTMC, because the cable is grounded at the customer transmit end.

The CTMC provides the following functions:
- physical interface for three PCM E1 links to the LSA-RC
- loopback capability for each of the three PCM E1 links on the local and remote sides
- secondary protection against over-voltage and over-current for each of the three PCM E1 links
- ease of installation and troubleshooting for many PCM E1 links
- balun interface to convert 75 Ω single-ended to 120 Ω differential pair
Figure 1-31
Figure 1-32 CTMC board in the CTU module: secondary protection and balun (75 Ω unbalanced to 120 Ω balanced)
Figure 1-33 CTMC 8-coax connector pinout (A1 through A8: Tx00/Rx00, Tx01/Rx01, Tx02/Rx02; NC: not connected)
1.5.1.4
CTMD description
The CTMD (see Figure 1-34) contains two basic sections:
- secondary protection for each PCM T1 twisted pair against over-current and over-voltage
- loopback push buttons (see Figure 1-35) that can loop the transmit and receive PCM T1 signals back towards the LSA-RC module and the customer equipment. These push buttons are mounted on the board and protrude through the faceplate of the CTMD

The CTMD houses two connectors:
- a 25-pin connector
- a 1SU Molex Omnigrid right-angle connector
The 25-pin connector (see Figure 1-36) pinout is arranged such that the transmit and receive PCM T1 signals are separated as much as possible. In addition, ground pins are distributed throughout the connector for further isolation between each PCM T1 link.

The CTMD provides the following functions:
- physical interface for four twisted-pair PCM T1 links to the TIM module
- loopback capability for each of the four PCM T1 links, on both the local side and the remote side: for each PCM T1 link, there is a 4-pole, double-throw switch to put the transmit and receive paths in loopback mode
- secondary protection against over-voltage and over-current for each of the four PCM T1 links
Figure 1-34
Figure 1-35 CTMD board loopback in the CTU module (100 Ω balanced)
Figure 1-36 CTMD 25-pin connector pinout (Tx/Rx tip and ring signals for the four PCM T1 links, separated by GND pins; NC: not connected)
1.6
1.6.1 BSC e3 cabinet
1.6.1.1
Internal cabling
The internal cabling of the BSC e3 cabinet is shown on the following figures:
- Figure 1-37 and Figure 1-38 show how to connect the OC-3 optical multimode fibers. They are used to connect the ATM backplane in the Control Node (via the ATM-SW module) to the S-Link backplane in the Interface Node (via the ATM-RM module)
- Figure 1-39 shows how to connect the OC-3 optical multimode fibers on the ATM-SW module
- Figure 1-40 shows how to connect the OC-3 optical multimode fibers on the ATM-RM module
- Figure 1-41 shows how to connect the internal PCM (E1/T1) cables between the TIM module in each LSA-RC module of the Interface Node and each CTU module of the SAI frame
- Figure 1-42 shows how to connect the internal -48 Vdc and alarm cables between the PCIU and the four SIM modules located on the Control Node and the Interface Node. The internal -48 Vdc and alarm links are distributed (see Figure 2-3):
  - for the Control Node: from the SIM modules to the OMU modules and the other modules of the Control Node
  - for the Interface Node: from the SIM modules to the CEM module and each RM via the S-Link backplane
- Figure 1-43 shows how to connect the OMU modules to the optional HUBs
1.6.1.2
External cabling
The external cabling of the BSC e3 cabinet is shown on the following figures:
- Figure 1-41 shows how to connect the external PCM (E1/T1) cables on each CTU module of the SAI frame. These cables are then connected to the TCU e3 cabinet, the PCUSN cabinets or the BTS cabinets
- Figure 1-42 shows how to connect the external -48 Vdc cables to the PCIU. These cables are then connected to the other BSS products
- Figure 1-43 shows how to connect the OMU modules externally to the OMC-R or to the TML
Note: Both optional HUBs can be installed outside the SAI frame.
Figure 1-37 OC-3 optical multimode fiber connections between the Control Node and the Interface Node
Figure 1-38 OC-3 optical fiber Tx0/Tx1 and Rx0/Rx1 connections
1-54
Hardware Description
Figure 1-39 OC-3 fiber connections on the ATM-SW module (Tx connector to the Rx connector on the ATM-RM, Rx connector from the Tx connector on the ATM-RM; guide slot)
Note: A connector extender is installed on all SC (Single Contact) connectors mating on the inside of the faceplate to facilitate connector removal. The notch key faces up.
Figure 1-40 OC-3 fiber connections on the ATM-RM module (with attenuator)
Note: A connector extender is installed on all SC (Single Contact) connectors mating on the inside of the faceplate to facilitate connector removal. The notch key faces up.
Figure 1-41 BSC e3: PCM internal and external cabling for maximal configuration (PCM (E1/T1) links to/from the TCU e3 on the Ater interface, the BTSs on the Abis interface, or the PCUSN on the Agprs interface)
Note: Rx (CTU) is plugged on Rx (TIM) and Tx (CTU) is plugged on Tx (TIM).
Figure 1-42 BSC e3: -48 Vdc (+/-) and alarm cabling on the PCIU (ABS (-) connection)
Figure 1-43 OMU connections to the OMC-R (TCP/IP over Ethernet via the optional HUBs)
Note: The optional HUBs can be installed outside the SAI.
1.6.2
TCU e3 cabinet

The TCU e3 cabinet is cabled inside the TCU e3 frame and the SAI frame via different internal cable paths, ensuring protection against electrical and electromagnetic interference. The cables are routed from the TCU e3 cabinet to the other BSS products by rails located in the upper part, or via a false floor.
1.6.2.1
Internal cabling
The internal cabling of the TCU e3 cabinet is shown on the following figures:
- Figure 1-44 shows how to connect the internal PCM (E1/T1) cables between the TIM module in each LSA-RC module of the Transcoder Node and each CTU in the SAI frame
- Figure 1-45 shows how to connect the internal -48 Vdc and alarm cables between the PCIU and the four SIM modules located on both Transcoder Nodes. The internal -48 Vdc and alarm cables are distributed on each Transcoder Node from the SIM modules to the CEM module and each RM via the S-Link backplane (see Figure 2-5)
1.6.2.2 External cabling
The external cabling of the TCU e3 cabinet is shown on the following figures:
- Figure 1-44 shows how to connect the external PCM (E1/T1) cables on each CTU module of the SAI frame. These cables are then connected to the MSC or to the BSC e3
- Figure 1-45 shows how to connect the external -48 Vdc cables to the PCIU. These cables are then connected to the other BSS products by the operator
Note: Rx (CTU) is plugged on Rx (TIM module) and Tx (CTU) is plugged on Tx (TIM module).
Figure 1-44
TCU e3: PCM internal and external cabling for maximal configuration
Figure 1-45
PHYSICAL ARCHITECTURE
2.1
Hardware structure
2.1.1
BSC e3
The BSC e3 cabinet is split up as follows (see Figure 2-1):
the SAI frame, which interfaces the BSC e3 frame with the BTSs (Abis interface) and the TCU e3 (Ater interface) or the PCUSNs (Agprs interface)
the Control Node, which houses:
OMU modules, provisioned in pairs to provide redundancy
TMU modules, provisioned in an N+P scheme: N to provide the targeted performance, P to provide the redundancy
ATM-SW modules, provisioned in pairs to provide redundancy
MMS modules, provisioned in pairs to provide redundancy
SIM modules (refer to paragraph 1.4.2.2), provisioned in pairs to provide redundancy
the Interface Node, which houses:
CEM modules, provisioned in pairs to provide redundancy
ATM-RM modules, provisioned in pairs to provide redundancy
8K-RM modules, provisioned in pairs to provide redundancy
LSA-RC modules, provisioned to reach the required number of PCMs; each of them houses:
IEM modules, provisioned in pairs to provide redundancy
a TIM module
SIM modules (refer to paragraph 1.4.2.2), provisioned in pairs to provide redundancy
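The provisioning rules above (duplicated pairs for most modules, N+P for the TMU modules) can be sketched as a small calculator. This is an illustrative helper, not part of the product software; the 12+2 TMU count is only an example value.

```python
def provisioned_count(scheme: str, n_for_load: int = 0, p_spare: int = 0) -> int:
    """Number of modules to install for a given redundancy scheme.

    'pair' -> duplicated modules (one active, one passive): always 2.
    'n+p'  -> N modules for the targeted performance plus P spares.
    """
    if scheme == "pair":
        return 2
    if scheme == "n+p":
        return n_for_load + p_spare
    raise ValueError(f"unknown scheme: {scheme}")

# OMU, ATM-SW, MMS, SIM, CEM, ATM-RM, 8K-RM and IEM modules are paired:
omu_count = provisioned_count("pair")
# TMU modules follow N+P; 12+2 is an example count, not a fixed rule:
tmu_count = provisioned_count("n+p", n_for_load=12, p_spare=2)
```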
[diagram not reproduced: BSC e3 hardware structure — Control Node with OMU modules (active/passive, SCSI-PA/SCSI-PB), ATM_SW modules and TMU modules linked by ATM links (4x25 Mb/s); Interface Node with ATM-RM, 8K-RM (active/passive) and CEM (active/passive) modules, LSA-RC 0 to 5 and CTU 0 to 5; ATM links (155 Mb/s) on optical fiber between the nodes; S-links 2x(24x256 DS0); Ethernet links to the TML]
Figure 2-1
2.1.2
TCU e3
The TCU e3 cabinet is split up as follows (see Figure 2-2):
the SAI frame, which interfaces the TCU e3 frame assembly with the BSC e3 (Ater interface) and the MSC (A interface)
two Transcoder Nodes, each of which houses:
CEM modules, provisioned in pairs to provide redundancy
TRM modules, provisioned in an N+P scheme: N to provide the targeted performance, P to provide the redundancy
LSA-RC modules, provisioned to reach the required quantity of PCM (E1/T1) links; each of them houses:
IEM modules, provisioned in pairs to provide redundancy
a TIM module
SIM modules
[diagram not reproduced: TCU e3 hardware structure — SAI frame with CTU 0 to 3; two Transcoder Nodes, each with up to 12 TRM modules, CEM modules (active/passive) linked by the IMC link, LSA-RC 0 to 3 and an Ethernet link to the TML]
Figure 2-2
2.2
Hardware modules
2.2.1
Control Node
2.2.1.1
OMU module
The OMU module is the front-end OAM for the BSC e3. It performs the following main operations:
manages each resource inside the Control Node, the Interface Node and the Transcoder Node
supervises the BSC e3 and the TCU e3 cabinets
manages the interface with the OMC-R
manages the SCSI disks
provides system maintenance (by using the TML or the OMC-R)
2.2.1.2 TMU module
The TMU module manages the GSM protocols. It performs the following main operations:
manages processing power for the GSM CallP
terminates the GSM protocols for the A, Abis, Ater and Agprs interfaces
terminates the low levels of the GSM protocols: LAPD and SS7
2.2.1.3 ATM-SW module
The ATM-SW module, also called CC-1, is mainly an ATM switch that implements the ATM network used as the Control Node backplane. In addition, it provides the OC-3 connectivity on optical fibers towards the Interface Node.
2.2.1.4 MMS module
The MMS module houses the data and software repositories. The RAID (Redundant Array of Inexpensive Disks) architecture, an industry standard, ensures that the data and the software are secured and still accessible in the event of a software or hardware failure.
2.2.1.5 SIM module
The SIM module provides the power and alarm interfaces for the Control Node. It provides shelf-originated alarm signals from the PCIU to the OMU modules.
2.2.2
Interface Node
2.2.2.1
CEM module
The CEM module is in charge of controlling each LSA-RC module, each 8K-RM module and each ATM-RM module of the Interface Node, and of the traffic switching functions. In addition, it provides:
clock synchronization and traffic switching
access to the system maintenance using the TML
2.2.2.2 ATM-RM module
The ATM-RM module provides OC-3 connectivity on optical fibers towards the Control Node. Each ATM-RM module terminates one ATM port for both bearer channels and signaling channels. In addition, it converts:
ATM/AAL-1 and ATM/AAL-5 cells into DS0 rate channels
ATM/AAL-5 packets into intra-node messaging
2.2.2.3 8K-RM module
The 8K- RM module is an application- specific circuit module which performs a timeswitch function on sub- DS0 rate channels, allowing for the efficient switching of 8 and 16 kbps channels.
2.2.2.4 LSA-RC module
The LSA-RC module provides the PCM (E1/T1) link interfaces (the LSA is named a Complex rather than a module since it is made up of several modules). Two versions of the LSA-RC module exist:
one for the international PCM E1 links: this version of the LSA-RC provides 21 PCM E1 connections
one for the North American PCM T1 links: this version of the LSA-RC provides 28 PCM T1 connections
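The two LSA-RC variants can be captured in a small lookup table. This is an illustrative helper, not product software; the per-module link counts come from the text above.

```python
# Per-module PCM link counts for the two LSA-RC variants described above.
LSA_RC_PCM_LINKS = {"E1": 21, "T1": 28}

def total_pcm_links(variant: str, lsa_rc_count: int) -> int:
    """Total PCM links offered by a number of LSA-RC modules of one variant."""
    return LSA_RC_PCM_LINKS[variant] * lsa_rc_count
```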
The LSA-RC module is a set of the following components:
a duplicated IEM module
The IEM module transmits and converts the PCM (E1/T1) line-coded signals to the CEM module across the S-link interface (within the backplane). It also handles various other functions such as clock and frame recovery, alarm detection, line coding, mapping the PCM information onto the S-link format and providing a diagnostic interface.
a single TIM module
Each function inside the TIM module is implemented with passive components, which allows the TIM to be non-redundant without impacting system reliability.
an RCM (Resource Complex Mini backplane)
The RCM performs the connection of the IEM module across the S-link interface (within the backplane) to the CEM module, and carries the PCM (E1/T1) line-coded signals between the IEM module and the TIM module.
2.2.3
Transcoder Node
2.2.3.1
CEM module
The active CEM is in charge of controlling each LSA-RC module and each TRM of each Transcoder Node, and of the traffic switching functions. In addition, it provides:
clock synchronization and traffic switching
access to the system maintenance using the TML
2.2.3.2 TRM module
The TRM manages the vocoding of speech channels. This task is accomplished by an array of general-purpose, programmable DSPs. The flexibility and computational power of the TRM allow it to run any of the GSM codecs (full, enhanced full and adaptive multi-rate) on multiple traffic channels.
2.2.3.3 LSA-RC module
For a functional description of the LSA-RC module, including the IEM and TIM modules and the RCM, refer to paragraph 2.2.2.4.
2.3
Physical interfaces
The following internal BSC e3 equipment interfaces connect the component parts:
within the Control Node:
ATM 25 links (within the backplane) between the CC-1 (ATM-SW) modules and the OMU and TMU modules
Ethernet links between the active and passive OMU modules and the TML and the OMC-R
SCSI interface bus between the MMS modules and the OMU modules
ATM 155 links on optical fibers between the ATM-SW modules in the Control Node and the ATM-RM modules in the Interface Node
The following internal TCU e3 equipment interfaces connect the component parts within the Transcoder Node:
S-links
IMC links between the active and passive CEM modules
Ethernet links to connect the TML
MTM interface bus (within the backplane)
alarm links via the SIM modules
2.3.1
2.3.1.1 ATM 25 links
The physical layer interface for the ATM 25 is described in the documentation with the following reference: ATM Forum AF-PHY-0040.000.
2.3.1.2 Ethernet links
The physical layer interface for Ethernet is described in the documentation with the following reference: Ethernet 10/100Base-T, IEEE Std 802.3u-1995.
2.3.1.3 SCSI interface buses
The physical layer interface for SCSI interface buses is described in the documentation with the following reference: ANSI SCSI SPI-3.
2.3.1.4 Alarm links
Figure 2-3 shows the internal and external alarm links for the frame assembly of the BSC e3 cabinet.
2.3.1.5 MTM bus
The MTM bus transfers information between each module via the backplane. The MTM bus is used to facilitate communication with the test and maintenance commands. Only the active OMU module is assigned mastership of the bus and controls the MTM bus transactions. The other modules within the system are slaves, but they can initiate communication with the OMU module through the MTM bus. Except for the SIM modules, each module is connected to the MTM bus. The MTM bus gives priority to the active OMU module over all other modules for:
reset control
LED control override
module configuration data read-out
For more information about the MTM bus, refer to the IEEE P1149.5 Standard Module Test and Maintenance (MTM) bus protocol, March 1992.
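The mastership rule described above (the active OMU as sole bus master, other modules as slaves that can still initiate exchanges, SIM modules not connected) can be sketched as a toy model. This is purely illustrative; the module names are placeholders.

```python
class MtmBus:
    """Toy model of the MTM bus mastership rule described above."""

    def __init__(self, active_omu: str):
        self.master = active_omu       # only the active OMU is bus master
        self.slaves: set[str] = set()

    def attach(self, module: str) -> None:
        """Connect a module to the bus; SIM modules are not connected."""
        if module != "SIM":
            self.slaves.add(module)

    def may_control(self, module: str) -> bool:
        """Reset control, LED control override and configuration data
        read-out are reserved to the master (the active OMU)."""
        return module == self.master

    def may_initiate(self, module: str) -> bool:
        """Slaves may still initiate communication with the OMU."""
        return module == self.master or module in self.slaves
```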
2.3.1.6 Ethernet link
The Ethernet link connects the TML to the OMU module of the BSC e3 or TCU e3. The physical layer interface for Ethernet is described in Ethernet 10/100Base-T, IEEE Std 802.3u-1995.
[diagram not reproduced: internal and external alarm links of the BSC e3 cabinet — cooling and fan units, Control Node (SIM, OMU, MMS, TMU and ATM_SW modules, MTM bus) and Interface Node (ATM-RM, LSA-RC and SIM modules, MTM bus)]
Notes: The bold lines show the alarm external routes. The regular lines show the alarm internal routes on the back panel.
Figure 2-3
2.3.2
2.3.2.1 ATM 155 links
The physical layer interface for ATM 155 over the OC-3 optical fiber interface is described in the ATM User-Network Interface Specification, af-uni-0010.002.
2.3.2.2 Ethernet links
2.3.3.1 IMC links
The IMC links perform the connection between both CEM modules. An IMC link has a bandwidth of 126 DS0. It is a specific interface dedicated to both CEM modules.
2.3.3.2 MTM bus
For a description of the MTM bus, refer to paragraph 2.3.1.5. Only the active CEM module is assigned mastership of the MTM bus.
2.3.3.3 Alarm links
The modules are equipped within the Interface Node to provide the functionality required for a particular application. Figure 2-1 shows how the modules are connected to the CEM modules by means of S-Links (Serial Links). This results in a point-to-point architecture which, when compared to bus architectures, provides superior fault containment and isolation properties. In addition to interfacing PCM (E1/T1) transport channels, the S-Link also interfaces transport messaging channels and overhead control and status bits between the CEM modules and the ATM-RM, the 8K-RM and the LSA-RC modules. Each S-Link provides 256 TS. Some module slots have access to three S-Link interfaces (an S-Link cluster), or 768 TS.
Two module slots (9 and 10 on shelf 0) are provided with six extra links (that is, two clusters) each. Therefore, these slots are capable of terminating the full payload bandwidth from an OC-3c. Each CEM module thus supplies a total of 96 S-Links (18x3 S-Links + 4x1 S-Links + 4x9 S-Links). Figure 2-4 shows the S-Link distribution on the module slots and the slot position numbers.
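The timeslot arithmetic above (256 TS per S-Link, a cluster of three S-Links giving 768 TS) can be checked with a small helper. This is an illustrative sketch, not product software.

```python
TS_PER_SLINK = 256          # each S-Link provides 256 timeslots (TS)
SLINKS_PER_CLUSTER = 3      # an S-Link cluster groups three S-Links

def slot_capacity_ts(slink_count: int) -> int:
    """Timeslot capacity of a module slot given its S-Link count."""
    return slink_count * TS_PER_SLINK
```

For example, a slot with one cluster terminates 768 TS, and a slot with nine S-Links (slots 9 and 10 on shelf 0) terminates 2304 TS.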
[diagram not reproduced: S-Link distribution per module slot — shelf 1 slots carry (1) or (3) S-Link interfaces; shelf 0 slots carry (9), (3) or (96) S-Link interfaces; SIM A and SIM B at the shelf ends; slots numbered up to 15]
Note: The number in brackets indicates the quantity of S-link interfaces per slot.
Figure 2-4
2.3.4
2.3.4.1 Ethernet links
The Ethernet links provide the connection of the TML to the CEM module. The physical layer interface for Ethernet is described in Ethernet 10/100Base-T, IEEE Std 802.3u-1995.
2.3.4.2 IMC links
2.3.4.3 MTM bus
For a description of the MTM bus, refer to paragraph 2.3.1.5. Only the active CEM module is assigned mastership of the MTM bus.
2.3.4.4 Alarm links
Figure 2-5 shows the internal and external alarm links for the frame assembly of the TCU e3 cabinet.
2.3.4.5 S-link interfaces
[diagram not reproduced: internal and external alarm links of the TCU e3 cabinet — cooling and fan units and, per Transcoder Node, SIM modules, CEM modules (active/passive) and TRM modules]
Notes: The bold lines show the alarm external routes. The regular lines show the alarm internal routes on the back panel.
Figure 2-5
PROTOCOL ARCHITECTURE
The purpose of the following is to give a high-level presentation of the protocol architecture in a BSS network with a BSC e3 cabinet and a TCU e3 cabinet.
3.1 Protocol used for communication between the OMU modules and the OMC-R
This paragraph describes the protocol used for communication between an OMU module inside the BSC e3 cabinet and the OMC-R (see Figure 3-1). The BSC e3 cabinet and the OMC-R communicate via an OSI protocol stack which covers the following needs:
physical connection capability
LAN/RFC1006 (TCP/IP): two Ethernet links from the BSC e3 side and one Ethernet link from the OMC-R side. The Ethernet links operate at up to 100 Mbps.
association management capability
The associations manager is interfaced upon the Transport layer API.
the FTAM (File Transfer, Access and Management)
The FTAM contains the following characteristics:
only the responder capability is needed
restart and recovery capabilities are used
content list types needed: NBS9, FTAM-1 and FTAM-3
ASN.1 compiler
This compiler is required to generate the data coding/decoding of the transactions exchanged upon the BSC e3/OMC-R interface.
The OSI protocol stack is compliant with the following recommendations:
General: ISO 7498 / ITU-T X.200
[diagram not reproduced: protocol path between the OMC-R and the BSS — OMC, CEM module, 8K-RM (Switch 8K), Switch 64K, LSA-RC towards the BTS, TRM]
Figure 3-1
Application Layer: ISO 8571
3.1.1
Protocol used for communication between each node inside the BSC e3 and the TCU e3 and between each BSS product
This paragraph briefly describes the main protocol communication between:
each node inside (see Figure 3-2):
the BSC e3 cabinet: Control Node and Interface Node
the TCU e3 cabinet: both Transcoder Nodes
each BSS product (see Figure 3-3): OMC-R, BSC e3, BTSs, TCU e3 and MSC
3.1.1.1 ATM interface distribution
The Control Node uses a duplex star connectivity, with the cell switching performed by both ATM-SW modules at the center of the star and the OMU and TMU modules at the leaves. This subsystem provides a reliable backplane module interconnection with live insertion capabilities. It contains the following main components:
an ATM switch located inside each ATM-SW module
an ATM Adapter located inside each OMU module and each TMU module
The connections between each module inside the Control Node use a redundant ATM25 point-to-point connection to the ATM switches. The ATM interface uses the ATM25 standard as defined by the ATM Forum. It carries all internal signalling information, using the AAL-1 and AAL-5 protocols.
3.1.1.2 ATM Adaptation Layer Protocols: AAL-1 and AAL-5
The communication exchanged between each module on the ATM subsystem is accomplished over Vcs (Virtual circuits) using the AAL-1 and AAL-5 (ATM Adaptation Layer) protocols. For example, a TMU module needs 82 Vcs:
64 Vcs for CallP signaling (LAPD), using the AAL-1 protocol
18 Vcs for internal messaging, using the AAL-5 protocol
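The per-TMU Vc budget quoted above can be expressed as a quick check. The constants are taken from the text; this is an illustrative sketch only.

```python
# Per-TMU virtual-circuit budget as stated above.
AAL1_VCS_CALLP = 64   # CallP signaling (LAPD) over AAL-1
AAL5_VCS_MSG = 18     # internal messaging over AAL-5

def tmu_vc_total() -> int:
    """Total Vcs needed by one TMU module."""
    return AAL1_VCS_CALLP + AAL5_VCS_MSG
```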
[diagram not reproduced: protocol architecture between each node within a BSC e3 and a TCU e3 — OMU module (ATM), ATM-RM (converts the AAL-5 protocol to SPM messaging; OAM, CallP, LAPD, ISDN-B, DS0), CEM module with Switch 64K and 8K-RM Switch 8K (final processing of CallP and OA&M, speech + data), LSA-RC towards the BTS and the MSC, Transcoder Node CEM module and TRM (OAM management)]
Figure 3-2
[diagram not reproduced: TMU module (LAPD), OMU module, ATM-RM (BSS MAP, DTAP, SCCP, MTP3/MTP2/MTP1, DS0; converts the AAL-1 protocol to DS0), CEM modules, 8K-RM, Switch 64K, SS7 and speech + data paths between the BTS, the MSC and the TRM]
Figure 3-3
Protocol architecture between each BSS product within a BSC e3 and a TCU e3
3.1.1.3
The communication inside the BSC e3 is performed by an internal messaging, which conveys the OAM and CallP data flows. To secure the transfer, the internal messaging uses the TCP/IP protocol. The IP packets are carried in ATM AAL-5 type cells. Translation of an IP address to an ATM (Vp, Vc) address is achieved via the address resolution protocol table built during system initialization. AAL-5 traffic is bursty because it contains internal messaging:
inside the Control Node:
from each OMU module to each TMU module and ATM-SW module
from each ATM-SW module to each other module (OMU, TMU and ATM-SW modules)
from each TMU module to each ATM-SW module and the other TMU modules
LAPD and SS7 links carried on PCM TS (DS0) are translated over ATM using the ATM AAL-1 protocol (Circuit mode Emulation). The AAL-1 Vcs are used to transport LAPD links and SS7 links between the Control Node and the Interface Node. The AAL-1 Vcs are converted into DS0 links by the ATM-RM module located inside the Interface Node. The DS0s are used to transport:
LAPD channels between:
the Interface Node and the Transcoder Node
the Interface Node and the BTSs
SS7 channels between the Interface Node and the MSC via the Transcoder Node
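The IP-to-ATM (Vp, Vc) address resolution described in this section can be sketched as a simple table lookup. The addresses and (Vp, Vc) pairs below are hypothetical; the real table is built by the BSC e3 at system initialization.

```python
# Toy IP -> ATM (Vp, Vc) resolution table; the real table is built by the
# BSC e3 during system initialization. All values below are hypothetical.
arp_table: dict[str, tuple[int, int]] = {}

def register(ip: str, vp: int, vc: int) -> None:
    """Record the ATM (Vp, Vc) pair carrying a module's AAL-5 cells."""
    arp_table[ip] = (vp, vc)

def resolve(ip: str) -> tuple[int, int]:
    """Map a module's IP address to its ATM (Vp, Vc) address."""
    return arp_table[ip]

register("10.0.0.1", 0, 33)   # e.g. an OMU module (hypothetical values)
register("10.0.0.2", 0, 34)   # e.g. a TMU module (hypothetical values)
```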
3.1.1.5
The ATM connections currently used in the BSC e3 are:
for AAL-5:
inside the Control Node:
one Vc from each OMU module to each ATM-SW module
one Vc from each TMU module to each ATM-SW module
one Vc between each module (OMU, TMU and ATM-SW modules)
between the Control Node and the Interface Node:
one Vc from each OMU module to each CEM module
one Vc from each TMU module to each CEM module
for AAL-1:
between the Control Node and the Interface Node:
up to 64 AAL-1 Vcs from each TMU module to the Interface Node
LAPD and SS7 messages inside the Interface Node are received inside the AAL-1 cells by both ATM-RM modules and distributed to both CEM modules via the S-link interfaces. A Y connection connects the two identical TSs to the required LSA-RC module:
in the ATM to LSA direction, only the TS of the active plane is switched
in the LSA to ATM direction, the TS is broadcast to both S-links
S-links used for signaling are called Primary S-links.
3.1.1.7 Communication between the Control Node and the Interface Node
The communication between the Control Node and the Interface Node uses the TCP/IP and UDP/IP protocol stacks over AAL-5. FTP is used to download the software. The OAM_IN and CallP_IN (for Interface Node) data flows are conveyed over the UDP/IP protocol stack.
3.1.2
Protocol used for communication between each node inside the BSC e3 and the PCUSN and between each BSS product
This paragraph briefly describes the main protocol communication between:
each node inside (see Figure 3-4):
the BSC e3 cabinet: Control Node and Interface Node
the TCU e3 cabinet: both Transcoder Nodes
each BSS product (see Figure 3-5): OMC-R, BSC e3, BTSs, TCU e3 and MSC
3.1.2.1 ATM interface distribution
The Control Node uses a duplex star connectivity, with the cell switching performed by both ATM-SW modules at the center of the star and the OMU and TMU modules at the leaves. This subsystem provides a reliable backplane module interconnection with live insertion capabilities. It contains the following main components:
an ATM switch located inside each ATM-SW module
an ATM Adapter located inside each OMU module and each TMU module
The connections between each module inside the Control Node use a redundant ATM25 point-to-point connection to the ATM switches. The ATM interface uses the ATM25 standard as defined by the ATM Forum. It carries all internal signalling information, using the AAL-1 and AAL-5 protocols.
3.1.2.2 ATM Adaptation Layer Protocols: AAL-1 and AAL-5
The communication exchanged between each module on the ATM subsystem is accomplished over Vcs (Virtual circuits) using the AAL-1 and AAL-5 (ATM Adaptation Layer) protocols. For example, a TMU module needs 82 Vcs:
64 Vcs for CallP signaling (LAPD), using the AAL-1 protocol
18 Vcs for internal messaging, using the AAL-5 protocol
[diagram not reproduced: OMU module (ATM), ATM-RM (converts the AAL-5 protocol to SPM messaging; OAM_IN, CallP_IN, SPM messaging), CEM module with Switch 64K and 8K-RM Switch 8K (final processing of CallP and OA&M, data), paths towards the BTS and the SGSN (OAM management)]
Figure 3-4
[diagram not reproduced: OMU module, ATM-RM (DTAP), CEM module (data, Switch 64K), 8K-RM, Frame Relay path between the BTS and the SGSN]
Figure 3-5
Protocol architecture between each BSS product within a BSC e3 and a PCUSN
3.1.2.3
The communication inside the BSC e3 is performed by an internal messaging, which conveys the OAM and CallP data flows. To secure the transfer, the internal messaging uses the TCP/IP protocol. The IP packets are carried in ATM AAL-5 type cells. Translation of an IP address to an ATM (Vp, Vc) address is achieved via the address resolution protocol table built during system initialization. AAL-5 traffic is bursty because it contains internal messaging:
inside the Control Node:
from each OMU module to each TMU module and ATM-SW module
from each ATM-SW module to each other module (OMU, TMU and ATM-SW modules)
from each TMU module to each ATM-SW module and the other TMU modules
LAPD links carried on PCM TS (DS0) are translated over ATM using the ATM AAL-1 protocol (Circuit mode Emulation). The AAL-1 Vcs are used to transport LAPD links between the Control Node and the Interface Node. The AAL-1 Vcs are converted into DS0 links by the ATM-RM located inside the Interface Node. The DS0s are used to transport LAPD channels between:
the Interface Node and the Transcoder Node
the Interface Node and the BTSs
3.1.2.5
The ATM connections currently used in the BSC e3 are:
for AAL-5:
inside the Control Node:
one Vc from each OMU module to each ATM-SW module
one Vc from each TMU module to each ATM-SW module
one Vc between each module (OMU, TMU and ATM-SW modules)
between the Control Node and the Interface Node:
one Vc from each OMU module to each CEM module
one Vc from each TMU module to each CEM module
for AAL-1:
between the Control Node and the Interface Node:
up to 64 AAL-1 Vcs from each TMU module to the Interface Node
3.1.2.6 Switching LAPD TS
LAPD messages inside the Interface Node are received inside the AAL-1 cells by both ATM-RM modules and distributed to both CEM modules via the S-link interfaces. A Y connection connects the two identical TSs to the required LSA-RC module:
in the ATM to LSA direction, only the TS of the active plane is switched
in the LSA to ATM direction, the TS is broadcast to both S-links
S-links used for signaling are called Primary S-links.
3.1.2.7 Communication between the Control Node and the Interface Node
The communication between the Control Node and the Interface Node uses the TCP/IP and UDP/IP protocol stacks over AAL-5. FTP is used to download the software. The OAM and CallP data flows are conveyed over the UDP/IP protocol stack.
3.1.3 Overview and conclusion
Figure 3-6 shows an overview of the protocol communication architecture in a BSS network with a BSC e3 cabinet and a TCU e3 cabinet. Figure 3-7 shows an overview of the protocol communication architecture in a BSS network with a BSC e3 cabinet and a PCUSN cabinet.
[diagram not reproduced: overall protocol stacks — TMU module (TCP/IP, LAPD), OMU module (TCP/IP, Ethernet), OMC (FTAM, Presentation, Session, APE, RFC 1006, UDP/IP, Ethernet; OAM, CallP, TCP/IP, AAL5, ATM), ATM-RM (BSS MAP, DTAP, SCCP, MTP3/MTP2/MTP1, DS0; converts AAL5 to SPM messaging), CEM module, 8K-RM, Switch 64K, SS7 and speech + data paths between the BTS, the MSC and the Transcoder Node (CEM module, TRM, OAM management)]
Figure 3-6
[diagram not reproduced: OMU module (TCP/IP, Ethernet), OMC (FTAM stack over RFC 1006), ATM-RM, CEM module, 8K-RM (Switch 8K), data processing path between the BTS and the SGSN (OAM management)]
Figure 3-7
FUNCTIONAL ARCHITECTURE
4.1
4.1.1
reliable and high-performance management: the disk subsystem is protected against:
a hardware or a software failure inside the OMU module
an extraction of the MMS module
the OMU subsystem is protected against a hardware or software failure inside the MMS module
plug and play modules:
easy hardware maintenance or extension by extracting or plugging the modules
hot module insertion or extraction without service interruption
the OMU module and private MMS module cannot be managed like other plug and play modules, due to their logical hierarchical dependency
a private MMS may be plugged in a shared MMS slot, but not the reverse
plug and play without snapshot:
hardware management without the snapshot management. The snapshot gives a picture of the detected hardware.
ATTENTION
Performing a hot extraction may interrupt service. Before performing any hot extraction, see the appropriate sections in this manual and in the BSC/TCU e3 Maintenance Manual (411-9001-132) to review the limitations and precautions associated with the component to be removed.
4.1.2
BSC e3 functional characteristics
The BSC e3 is fully redundant. It manages the main functions described below:
radio resource management:
to process radio accesses
to allocate radio channels (traffic and signaling)
to monitor radio channel operating states
to share radio channels between GPRS and GSM
call processing:
to set up and release terrestrial and radio links
to transfer messages between the mobile stations and the MSC (via the TCU e3)
to switch the channels between the BTSs and the MSC (via the TCU e3 or the TCU 2G)
functions include, for the BSC e3, the module switching and restart mechanisms
(OMC-R):
to process operations requested by the OMC-R
to store all BSS configuration data and software, and distribute them among
The BSC e3 software architecture is based on a network model of processors called a core system, which can be tailored to fit into different hardware structures. The core system is divided into logical process units. A set of modules which house boards and processors provide each logical unit with the processing power they need.
The main types of processing unit are split up as follows:
for the Control Node:
the OMU modules enable the following basic BSC e3 operating functions:
MMS module management
BSC e3 initialization sequences (loading the programs and data into the different processors)
monitoring correct processor operations
OMC-R access and related function management
Interface Node management
the TMU modules enable:
for the twelve TMU modules, which are dedicated to the CallP:
centralized call processing functions
communication with the BTS (traffic management, radio environment monitoring, message broadcasting, traffic overload control, etc.)
TCU e3 management
for the two TMU modules, which are dedicated to SS7 management:
communication with the MSC (SS7 signaling channels)
conversion of the ATM 25 to the SONET interfaces (ATM155)
conversion of the LAPD and SS7 channels on each TMU module via the VP-VC on AAL-1
conversion of the AAL-1 to S-link interfaces
conversion of the AAL-5 to Spectrum messaging interfaces
the CEM modules enable:
management of the ATM-RM, 8K-RM and LSA-RC modules
management of the mixing order for the 64K and the 8K switching parts
the 8K-RM modules enable:
4.2
4.2.1
PCM link management
transcoder management
transcoding and rate adaptation
synchronization of the time base on the clock taken from three of the PCM links connected to the MSC, or from an external reference clock
terminating the LAPD links from the BSC e3, which carry:
permanent links for the CallP and the OAM functions
temporary links for the software downloading
4.2.2
TCU e3 functional characteristics
The TCU e3 is fully redundant. It performs the following main functions:
call processing, which does the following:
switches the speech/data channels
switches channels between the BTS and the MSC
spreading
functions include switching and restart mechanisms for the modules inside the TCU e3
The main types of processing units inside each Transcoder Node are split up as follows:
the TRM module is used to transcode the A channels (64 kbit/s) from (to) the MSC into Ater channels (16 kbit/s or 8 kbit/s) from (to) the BSC e3
the CEM module is used:
to manage the TRMs and the LSA-RC modules
to manage the switching order for the 64K switching part
the LSA-RC module enables:
PCM/defect monitoring management
conversion of the LAPD to Spectrum messaging interfaces
conversion of the PCM to the S-link interfaces
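The transcoding ratio above (64 kbit/s A channels into 16 or 8 kbit/s Ater channels) implies how many Ater sub-channels share one timeslot. A minimal sketch of that arithmetic, for illustration only:

```python
A_RATE_KBPS = 64   # one A-interface channel occupies a full 64 kbit/s DS0

def ater_channels_per_ds0(ater_rate_kbps: int) -> int:
    """How many transcoded Ater sub-channels fit in one 64 kbit/s timeslot."""
    if A_RATE_KBPS % ater_rate_kbps != 0:
        raise ValueError("Ater rate must divide 64 kbit/s")
    return A_RATE_KBPS // ater_rate_kbps
```

So a 16 kbit/s Ater channel packs four sub-channels per DS0, and an 8 kbit/s channel packs eight.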
4.3
Architecture presentation
To facilitate presentation of the main functions of the BSC e3 cabinet and the TCU e3 cabinet, we have split up the functional architecture as follows:
an OAM (Operation And Maintenance) architecture
a CallP (Call Processing) architecture
4.3.1

OAM architecture

4.3.1.1

Overview
For the BSC e3 cabinet and the TCU e3 cabinet, Figure 4-1 shows the OAM architecture with the main functional groups and their main relationships.
Control Node
The Control Node houses the following main functional groups:
- BSCe3/OMC-Com
  This functional group is in charge of the relationship between the BSC e3 and the OMC-R in terms of protocol (association, transport, transaction, file transfer, etc.)
- OMC Services
  This functional group provides to any software present in the Control Node the ability to interact with:
  - the Fault Management function
  - the Performance Management function
  - the Configuration Management function
- Supervision
  This functional group gathers together all software entities to manage the nodes of the BSS network. It is composed of the following functional groups:
  - SUP_CN . . . . . to supervise the Control Node in the BSC e3
  - SUP_IN . . . . . to supervise the Interface Node in the BSC e3
  - SUP_TCU . . . . to supervise each TCU e3
  - SPT . . . . . . to supervise each TCU 2G
[Figure: OAM architecture of the BSC e3, showing the Control Node functional groups (BSCe3/OMC-Com with OSI/FTAM, Supervision with SUP_CN/SUP_IN/SUP_TCU/SPT/SPP/SPR, OMC Services with Fault/Performance/Configuration Management, C-Node_OAM and basic services) connected to the OMC-R via TCP/IP on Ethernet, and their links to the Interface Node, the TCU e3 (TCU_OAM), the TCU 2Gs, the PCUSN (PCUSN_OAM) and the BTSs (BTS_OAM)]

Figure 4-1
  - SPP . . . . . . to supervise the PCUSN
  - SPR . . . . . . to supervise the BTSs
C-Node_OAM

The C-Node_OAM functional group gathers together the different parts, which are briefly described below:
- a Software Management part
  This part is in charge of the start-up of all non fault tolerant software entities in the Control Node. In addition, it ensures their synchronization during the start-up phase and supervises their activities
- an Overload Management part
  This part is in charge of detecting overload conditions and generating internal signals toward the different software entities inside the Control Node so that they adapt their behavior to the congestion phase. These signals allow each software entity to react when different thresholds are crossed (activation of a flow control, handling of emergency calls, call restriction, etc.)
- an Upgrade Management part
  This part is in charge of managing the software upgrade of the Control Node. In addition, it ensures the relationship with the SUP_IN and the SUP_TCU to upgrade respectively the software of the Interface Node and the Transcoder Node
- a Test and Diagnostic Management part
  This part is in charge of the maintenance aspect of the Control Node. In addition, it ensures the communication with the TML (Terminal Local Maintenance):
  - to run the tests
  - to collect various information about the Control Node components
  - to provide advanced services for:
    - the I&C (Installation and Commissioning) procedures
    - the maintenance procedures
- a Fault Tolerance part
  This part is in charge of the distribution of the software entities over the available modules of the Control Node (this is the Load Balancing). To do this, the Load Balancing function uses the Fault Tolerance function, which interacts with the software entities to run, to switch or to delete their activities
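The Overload Management behavior described above — internal signals emitted as successive load thresholds are crossed — can be sketched as follows. The threshold values and signal names are invented for the example; only the mechanism (threshold crossing triggers a behavioral signal) comes from the text:

```python
# Illustrative sketch of threshold-driven overload signalling.
THRESHOLDS = [          # (load threshold in %, signal emitted when crossed)
    (70, "ACTIVATE_FLOW_CONTROL"),
    (85, "RESTRICT_NEW_CALLS"),
    (95, "EMERGENCY_CALLS_ONLY"),
]

def overload_signals(cpu_load_percent: float) -> list[str]:
    """Return the internal signals to broadcast for the given load level."""
    return [sig for limit, sig in THRESHOLDS if cpu_load_percent >= limit]

print(overload_signals(90))  # ['ACTIVATE_FLOW_CONTROL', 'RESTRICT_NEW_CALLS']
print(overload_signals(50))  # []
```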
- a Hardware Management part
  This part is in charge of providing the following functions:
  - to detect the plugged modules
  - to identify the plugged modules
  - etc.

Global services

Three functions manage the software entities of the Control Node:
- Administration function
- Data/File access system function
- Software bus function

Basic services

Various basic services are offered to all software entities of the Control Node.
Messaging service
This service provides each software entity with the capability to communicate without knowing the location of the destination entity, even if that entity, in case of module failure, migrates from one module to another
Interface Node
The Interface Node houses the I-Node_OAM functional group, which gathers together the different parts briefly described below:
- an Object Management part
  This part is in charge of the following operations in the Interface Node:
  - setting up each module via the ATM network (AAL-1/AAL-5) and the external interfaces via the PCM links on the Abis interface and the Ater interface
  - providing a local defense (i.e. the SWACT: SWitch of ACTivities)
  - sending to the OMC-R each software or hardware fault that appears inside the Interface Node
- a Critical Path Management part
  This part is in charge of the following operations at the start-up of the BSC e3 cabinet:
  - running some components inside the Interface Node
  - setting up some components inside the Interface Node
  - establishing the dialog with the Control Node
- an Upgrade Management part
  This part is in charge of handling the requests to upgrade the software of the Interface Node. These requests are sent by the Control Node via the SUP_IN
- a Test and Diagnostic Management part
  This part is in charge of the maintenance aspect of the Interface Node. In addition, it ensures the communication with the TML (Terminal Local Maintenance) in order to do the following:
  - to run the tests
  - to collect various information about the Interface Node components
  - to provide advanced services for:
    - the I&C (Installation and Commissioning) procedures
    - the maintenance procedures
- a Hardware Management part
  This part is in charge of supervising each procedure to test the modules via the MTM bus and the S-link interfaces located inside the backplane of the frame assembly
Transcoder Node
The Transcoder Node houses the T-Node_OAM functional group. It gathers together the following main functions:
- an Object Management part
  This part is in charge of the following operations in the Transcoder Node:
  - setting up each module via the S-link interfaces
  - providing a local defense (i.e. the SWACT: SWitch of ACTivity)
  - sending to the OMC-R each software or hardware fault that appears inside the Transcoder Node
- a Critical Path Management part
  This part is in charge of the following operations at the start-up of the TCU e3 cabinet:
  - running the components inside the Transcoder Node
  - setting up the components inside the Transcoder Node
  - establishing the dialog with the Control Node
- an Upgrade Management part
  This part is in charge of handling the requests to upgrade the software of the Transcoder Node. These requests are sent by the Control Node via the SUP_TCU
- a Test and Diagnostic Management part
  This part is in charge of the maintenance aspect of the Transcoder Node. In addition, it ensures the communication with the TML (Local Maintenance Terminal):
  - to run the tests
  - to collect various information items about the Transcoder Node components
  - to provide advanced services for:
    - the I&C procedures
    - the maintenance procedures
- a Hardware Management part
  This part is in charge of supervising each procedure to test the modules via the MTM bus and the S-link interfaces located inside the backplane of the frame assembly
4.3.1.2

BSCe3/OMC-Com

The BSCe3/OMC-Com functional group (see Figure 4-2) is located inside the OMU module. The BSCe3/OMC-Com functional group is responsible for the following:
- enabling the communication with the OMC-R
- handling the links with the OMC-R
- managing the different access protocols (FTAM and SEPE)
- ensuring:
  - the storage of the events when the OMC-R cannot be accessed
  - the relationships between the OMC-R and the OMC Services functional group

The OAMC functional group is divided into the following functions:
OAMC_APE . . . . . . . . . . . . . . . association protocol
OAMC_OSI . . . . . . . . . . . . . . . OSI layer management
OAMC_FTAM . . . . . . . . . . . . . file transfer management
[Figure: the BSCe3/OMC-Com functional group (OAMC with its OSI/FTAM and APE components) inside the OMU module of the BSC e3 Control Node, linked to the CM Configuration Management (Common Agent) of the OAM services]

Figure 4-2
OAMC_APE
The OAMC_APE provides the following services:
- application-oriented associations (logical data flows), which are split up as follows:
  - INIT . . . . . to convey an event dedicated to the MIB state
  - TRANSAC . . . to convey operation orders requested by the OMC-R operator and the associated answers sent back by the BSC
  - FAULT . . . . to convey fault and alarm events
  - UPDATE . . . . to convey orders relative to the MIB update
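The four application-oriented associations above behave as parallel logical flows over a single transport connection. A minimal sketch of routing message kinds onto them (the association names come from the text; the message classification keys are invented for the example):

```python
# Illustrative routing of OAM message kinds onto the SEPE associations.
ASSOCIATIONS = {
    "INIT":    "MIB state events",
    "TRANSAC": "operator orders and answers",
    "FAULT":   "fault and alarm events",
    "UPDATE":  "MIB update orders",
}

def association_for(message_kind: str) -> str:
    """Pick the association (logical data flow) carrying a message kind."""
    routing = {"mib_state": "INIT", "operator_order": "TRANSAC",
               "alarm": "FAULT", "mib_update": "UPDATE"}
    return routing[message_kind]

print(association_for("alarm"))  # FAULT
```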
The association protocol is called the SEPE protocol. It is a secure protocol.
- transport connection management
  This service is used to transport the application-oriented associations. Only one transport connection is established by the OMC-R and it conveys all the associations.
- circular file management
  This service is used by the SEPE protocol. It enables storing of all transactions (events, alarms, etc.) on the disk until the end of the downloading by the OMC-R

OAMC_OSI

The OAMC_OSI is a standard OSI protocol stack for the communication between the OMC-R and the BSC e3 over the IP network.

OAMC_FTAM

The OAMC_FTAM handles the file transfer protocol and is used to:
- upload the new software releases
- download the result files (measures, call path tracing, debug, etc.) which are not sent directly to the OMC-R
The OMC services functional group of the BSC e3 (see Figure 4-3) is:
- in charge of fault detection on the BSC e3
- based on a hierarchical structure

The hierarchical and physical architectures contain, for each function:
- a CA (Common Agent)
  This is the master part of the functional entities and is located on the BSC board in the OMU module
- an LA (Local Agent)
  This is the slave part of the functional entities and is located on each board inside each module (ATM-SW, OMU and TMU modules) of the Control Node

This architecture is used to hide, from all software entities that interact with the OAM services, the SWitching of the ACTivity between the passive and the active OMU modules. Only the software entities using the OMC services interact with the LA.
The OMC services functional group is divided into the following functions:
CM . . . . . . . . . . . . . . . Configuration Management
PM . . . . . . . . . . . . . . . Performance Management
FM . . . . . . . . . . . . . . . Fault Management
[Figure: the OMC services functional group (CM, FM and PM Common Agents) inside the OMU module of the BSC e3 Control Node, with the MIB on the shared MMS modules, the BSCe3/OMC-Com functional group (APE), the Supervision functions (SUP_IN, SUP_CN, SPT, SUP_TCU, SPR), LAPD, SS7 and other functions]

Figure 4-3
CM (Configuration Management)
The CM function is located in the OMU module of the Control Node. The CM function is in charge of the relationship with the OMC-R for the configuration aspects. It receives all TGEs (global operation transactions) sent by the OMC-R (whether or not initiated by the operator), translates them into a Control Node internal view (this mediation is performed by a dedicated module called ADM) and schedules the processing of these transactions by the software present on the Control Node.
The translation discussed above is a translation of view. A logical view is used between the OMC-R and the BSC e3, whereas inside the BSC e3 a hardware or software view is used. Each object is handled by the OMC-R via an OE (managed object) and internally to the BSC e3 via an OA (application object). The software architecture can use several OAs for the same OE (e.g. a cell object is split into several OAs: one used by the CallP for the signaling management and another for the OAM aspect when SPR manages the site where the cell is hosted).
The OMC-R is in charge of BSC e3 operation. The operator creates a logical BSC e3 object at the OMC-R. When the communication is established between the BSC e3 and the OMC-R, the OMC-R can start to send the requests or the transactions.
A set of commands is provided to the OMC-R user:
- to create a BSS network
- to update a BSS network by creating new elements
- to modify (lock/unlock) the existing elements
- to delete the existing elements
- to get all useful information about any elements in the BSS network managed by the OMC-R (equipment and links between them)
- to test a module on failure condition or for a diagnostic or defense action
- to reset a module on failure condition or for a diagnostic or defense action
The data related to application objects includes the following:
- static data
  This data cannot be modified on line by the operator. It is stored inside a MIB (Managed Information Base) located on the shared mirrored disks
- permanent data
  This data can be initialized and modified on OMC-R initiative. The change can be either a parameter change on an existing MIB or the generation of a new MIB; this generation can be done on-line or off-line. The data is archived on a SCSI disk, which is located in an MMS module
- dynamic data
  This data is directly managed by the software entities via set/get transactions coming from the OMC-R
The transactions between the OMC-R and the BSC e3 are conveyed by the SEPE protocol data units called TGEs (global operation transactions). A TGE contains a set of TEEs (elementary operation transactions). After being processed by the CM function, TEE messages are translated into TEAs (application elementary transactions) that apply to the different application objects.
The permanent and static data are stored inside a database called the MIB (Managed Information Base). This MIB is kept permanently synchronized with its OMC-R equivalent (BDE: Operation Data Base) via the SEPE protocol. The audit action allows the OMC-R to check this synchronization after a new connection. If the two are not synchronized, the OMC-R, via the SEPE protocol (build transactions), can build the MIB again (the build action is the initialization of the permanent data with the same image as at the OMC-R).
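The TGE/TEE/TEA decomposition described above can be sketched as follows. The class shapes and the example OE-to-OA mapping are illustrative (the cell split between CallP and OAM comes from the text; everything else is invented for the example):

```python
# Sketch of CM mediation: a TGE (global operation transaction) carries TEEs,
# each translated into one or more TEAs targeting application objects (OAs).
from dataclasses import dataclass

@dataclass
class TEE:            # elementary operation transaction (OMC-R view, on an OE)
    managed_object: str
    operation: str

@dataclass
class TEA:            # application elementary transaction (BSC view, on an OA)
    application_object: str
    operation: str

def mediate(tge: list[TEE]) -> list[TEA]:
    """Translate the logical (OE) view into the internal (OA) view.
    A single OE may map to several OAs, e.g. a cell object split between
    CallP and OAM handling."""
    oe_to_oas = {"cell": ["cell_callp", "cell_oam"]}   # illustrative mapping
    teas = []
    for tee in tge:
        for oa in oe_to_oas.get(tee.managed_object, [tee.managed_object]):
            teas.append(TEA(oa, tee.operation))
    return teas

tge = [TEE("cell", "unlock")]
print([t.application_object for t in mediate(tge)])  # ['cell_callp', 'cell_oam']
```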
PM (Performance Management)
The PM function is located in the OMU module of the Control Node. The GSM BSS observation counters (Recommendation GSM 12.04) are collected and reported to the OMC-R. The counters are split up into four permanent observation groups:
- Real Time Observations (ORT)
  This observation group corresponds to a small set of counters that can be displayed in real time (network supervision)
- Diagnostic Observations (ODIAG)
  This observation group allows observations of a small set of cells (two diagnostic observations per OMC-R, and four cells per BSC e3) with a large set of counters
- Fast Statistic Observations (OFS)
  This observation group allows statistics on a 15, 30 or 60 minute basis
- General Statistic Observations (OGS)
  This observation group allows statistics on a 1440 minute (24 hour) basis (for example, specific equipment counters such as processor load)
The PM function is distributed over the OMU modules and the TMU modules of the Control Node. It collects the measurement information provided by the software entities, processes it and forwards it to the OMC-R via the BSCe3/OMC-Com functional group. The local agent parts available on each card are in charge of collecting the measurement information; they are close to the source of information. The rate of collection is given by the central agent located on the OMU module. The period value is provided by the OMC-R inside the transaction that activates the collection. The consolidation of the data (summarization with a BSS view) is done at the central agent level. The operator can either start or stop collecting the counters.
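The local-agent/central-agent split described above can be sketched as follows: each local agent reports raw counters, and the central agent consolidates them into the BSS-wide view. The counter names and the sum-based consolidation are illustrative assumptions, not the product's actual counter set:

```python
# Illustrative central-agent consolidation of per-module PM counters.
def consolidate(per_module_counters: list[dict[str, int]]) -> dict[str, int]:
    """Central-agent summarization: sum each counter over all local agents."""
    total: dict[str, int] = {}
    for counters in per_module_counters:
        for name, value in counters.items():
            total[name] = total.get(name, 0) + value
    return total

local_reports = [{"tch_seizures": 120, "sdcch_drops": 3},
                 {"tch_seizures": 95,  "sdcch_drops": 1}]
print(consolidate(local_reports))  # {'tch_seizures': 215, 'sdcch_drops': 4}
```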
FM (Fault Management)
The FM function is located in the OMU module of the Control Node. The FM function provides the following services:
- reliable transport of the alarms or spontaneous events issued by the Control Node software entities
- defense action in the case of a critical Error Processing Alarm or an overflow of Error Processing Alarms
When the number of errors is over a threshold (a static parameter of the MIB), the FM function proceeds to reset the faulty module. As the Control Node is a fault tolerant platform, the activity is recovered either on the OMU modules, on the TMU modules or on the ATM-SW modules.
The fault and alarm events generated by the remote BSS products are forwarded to the FM function by the related function of the Supervision functional group:
- SUP_CN for the Control Node in the BSC e3
- SUP_IN for the Interface Node in the BSC e3
- SUP_TCU for each TCU e3
- SPT for each TCU 2G
- SPP for the PCUSN
- SPR for the BTSs
In addition, the FM function provides the log files on the shared mirrored disks. In normal operation (OMC-R connected to the BSC), all events and alarms are forwarded to the OMC-R to update the operator screen. Some log files are dedicated to a local access via a TML (local maintenance terminal) and are coded in HTML format; they can be read on the TML with any standard browser.
The fault and alarm events sent to the OMC-R contain all the information necessary for supervision and maintenance:
- type of fault
- severity
- service impact
- hardware impact
The hardware failure is notified directly on the related module, so that the OMC-R can display the faulty equipment precisely to the operator.
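The FM defense rule above — reset a module once its error count exceeds a MIB-configured threshold — can be sketched as follows. The threshold value and module names are invented for the example:

```python
# Illustrative FM defense action: modules whose error count exceeds the
# threshold (standing in for the static MIB parameter) are reset; their
# activity is then recovered on a redundant module by the FT platform.
ERROR_THRESHOLD = 5   # stands in for the static MIB parameter

def fm_defense_action(error_counts: dict[str, int]) -> list[str]:
    """Return the list of modules to reset."""
    return [m for m, errors in error_counts.items() if errors > ERROR_THRESHOLD]

counts = {"TMU-0": 2, "TMU-1": 7, "OMU-0": 0}
print(fm_defense_action(counts))  # ['TMU-1']
```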
Supervision
The Supervision functional group (see Figure 4-4) supplies the user functions (TMG, SS7, LAPD, Load Balancing, etc.) with resources (mainly physical). To perform this function, it must:
- initialize resources (set up control links, load software and data as needed)
- control operating resources (supervision, fault detection)
- manage resource availability
In addition, it also stores on the disk the reference of the software that is downloaded in the flash memory.
Resource availability may depend on the operating status of other equipment items. Therefore, the supervision entities have to manage equipment/resource alliances.
The Supervision functional group is divided into the following functions:
- SUP_CN
  This function supervises the Control Node resources in the OMU modules, ATM-SW modules, MMS modules and TMU modules
- SUP_IN
  This function supervises the Interface Node resources in the CEM, ATM-RM, 8K-RM and IEM/LSA-RC modules
- SUP_TCU
  This function supervises the Transcoder Node resources in the CEM, TRM and IEM/LSA-RC modules of the TCU e3
- SPT
  This function supervises the Transcoder Node resources in the TCU 2G
- SPP
  This function supervises the resources on the PCUSN
- SPR
  This function supervises the resources on the BTSs
[Figure: the Supervision functional group (SUP_CN, SUP_IN, SUP_TCU, SPT, SPP, SPR) together with the basic services and OMC services, and its links to the TCU 2G group, the PCUSN and the BTSs]

Figure 4-4
SUP_CN
This function is installed in the OMU module and does the following:
- manages and supervises the other modules in the Control Node
- generates event report messages when failures or state modifications occur in the BSC e3 (hardware fault detection, equipment status change, start/end of fault conditions). The SUP_CN sends these event reports to the OMC-R (via the BSCe3/OMC-Com functional group)
- provides the available resources (TMU modules) to the Load Balancing function
- consolidates (in the plug and play procedure) the hardware view and the logical view
SUP_IN
This function is installed inside the OMU module and handles the following:
- access to the modules in the Interface Node using the dedicated LAPD links
- downloading of the software of the Interface Node, in the case of an upgrade, and the forwarding of all events detected on the Interface Node
SUP_TCU
This function is installed inside the TMU module and handles the following:
- access to the modules in the TCU e3 using the dedicated LAPD links
- downloading of the software on the TCU e3, in the case of an upgrade, and the forwarding of all events detected on the Transcoder Node
SPT
This function is installed inside the TMU module and handles the following:
- access to the modules in the TCU 2G using the dedicated LAPD links
- downloading of the TCB software on the TCU 2G, in the case of an upgrade, and the forwarding of all events detected on the Transcoder Node
SPP
This function is installed inside the TMU module and manages the following main functions:
- establishes the dialog with each PCUSN element via the Agprs interface
- configures each PCUSN element via the Agprs interface
- distributes the GPRS cells to each PCUSN element via the Agprs interface
- configures a cell and its associated PCUSN element via the Agprs interface
- assigns the configured cells to the TMG
SPR
This function is installed in the TMU modules. It handles radio sites and individual component radio parts. Each radio site is managed independently. One cell group manages the cells associated with a collection of sites. The unit enables the same functions as SUP_BSC e3 (SUP_CN + SUP_IN) but focuses on the BTSs and BTS access interfaces:
- BTS start-up: loading BCF and TRX radio transceiver software, configuring data, initializing LAPD data links on the BTS and BSC e3
- TDMA frame priority management
- radio entity management (site, TRX, cell, TDMA)
- resource management (supplies the TMG with radio channels)
- reporting unexpected events: informing the OMC-R of radio site configuration changes or faults via the BSCe3/OMC-Com functional group (change of site/cell/TRX/TDMA status, faults detected by the BTSs and fed back to the BSC on the Abis interface, faults detected by SUP_RDS monitoring mechanisms)
- conducting defense actions: Abis interface defense by managing PCM link redundancy and reorganizing signaling links, radio traffic defense by managing TRX redundancy
Only two TSs are necessary on an Abis PCM link to carry eight TDMA channels. The SPR handles dynamic allocation of two TSs on an Abis PCM link for a BTS.
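The "two TSs for eight TDMA channels" figure above follows from 16 kbit/s Abis sub-multiplexing: eight radio channels at 16 kbit/s make 128 kbit/s, i.e. two 64 kbit/s PCM timeslots. A small check of that arithmetic (function and constant names are illustrative):

```python
# Abis dimensioning check: PCM timeslots needed for N 16 kbit/s sub-channels.
PCM_TS_KBITS = 64
ABIS_SUBCHANNEL_KBITS = 16

def abis_timeslots_needed(radio_channels: int) -> int:
    """PCM timeslots needed to carry the given number of 16 kbit/s channels."""
    bits = radio_channels * ABIS_SUBCHANNEL_KBITS
    return -(-bits // PCM_TS_KBITS)   # ceiling division

print(abis_timeslots_needed(8))  # 2  (one TDMA frame of 8 channels)
```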
Basic services
The basic services group houses the following functions (see Figure 4-5):
FT . . . . . . . . . . . . . . . Fault Tolerance
LB . . . . . . . . . . . . . . . Load Balancing
MESSAGING . . . . . . . . . Service to exchange messages
SM . . . . . . . . . . . . . . . Software Management
UM . . . . . . . . . . . . . . . Upgrade Management
T&D . . . . . . . . . . . . . . Test and Diagnostic Management
OV . . . . . . . . . . . . . . . OVerload
HM . . . . . . . . . . . . . . . Hardware Management
Base OS . . . . . . . . . . . . Base Operating System
[Figure: the basic services (HM, UM, T&D, SM Common Agent, MESSAGING, LB and Base OS) in the Control Node, and the TMU module with its local agents (Fault Tolerance, Overload, Software Management), MESSAGING and Base OS]

Figure 4-5
FT description
A fault tolerance application allows, in case of hardware failure:
- an immediate reconfiguration of the software activities
- a continuation of the service provided by the application

The GSM applications are classified as follows:
- Non FT application
  In this case, the context is directly associated with a specific module. When a fault appears, the application and the data are deleted. This application is instantiated on each module and deals with the local processes and the central agent of the OMU
- FT application
  In this case, the application is composed of an active image and a passive image. The passive image can recover from a hardware failure or a software failure that appears on the active image. For the Control Node, this is done by a single active instance on a given module (TMU, OMU or ATM-SW module), named the active module, with one (or more) replicable instances, named passive instances, located on a different module named the passive module. The active instance is used to perform the call processing procedures of the application. The passive instance(s) is used to keep up to date the current context of the active instance. In addition, the passive instance (in the case of a hardware failure on the module which houses the corresponding active instance) can take over and continue to run the application and to maintain the service provided by the application. This passive instance becomes the new active instance. This process is named Switch of Activity, or SWACT. Figure 4-6 shows an example of a SWACT operation with three TMU modules.
[Figure: SWACT example over three TMU modules; after the failure, the passive LWP2 and LWP3 instances become active on the remaining modules]
Note:
LWP = Light Weight Process. The LWPs shown in grey indicate their new states.
Figure 4-6
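The SWACT mechanism illustrated in Figure 4-6 can be sketched as follows: when the module hosting the active instance fails, a passive instance on another module is promoted. The data representation and module names are illustrative, not the product's internal API:

```python
# Minimal SWACT sketch: promote a passive instance when the active module fails.
def swact(instances: dict[str, str], failed_module: str) -> dict[str, str]:
    """instances maps module name -> 'active' | 'passive'.
    Remove the failed module; if it hosted the active instance, promote
    one passive instance to active."""
    result = {m: s for m, s in instances.items() if m != failed_module}
    if instances.get(failed_module) == "active":
        for module, state in result.items():
            if state == "passive":       # first passive instance takes over
                result[module] = "active"
                break
    return result

lwp2 = {"TMU-0": "active", "TMU-1": "passive"}
print(swact(lwp2, "TMU-0"))  # {'TMU-1': 'active'}
```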
an LWP (Light Weight Process)

This is the smallest entity (application instance) which can be SWACTed or migrated. It owns a memory context and a communication endpoint
an LWG (Light Weight Group)
This is a set of LWPs attached to the same cell group. The active LWP is located on a module and the passive LWP on another module
a CP (Core Process)
This is a set of dependent LWPs (for example, all the members of a CP can interact on a regular basis via messages). A GSM CP (inside a TMU module) contains the following four GSM applications (for their description, refer hereafter to the CallP paragraph) in relation with one cell group:
- TMG-CNX . . . to enable the main TMG function
- TMG-MES . . . to manage the SS7 connection needs
- TMG-RAD . . . to execute radio resource procedures
- SPR . . . . . to manage radio resource connections

a cell group

A cell group is composed of a collection of software entities (TMG-CNX, TMG-MES, TMG-RAD and SPR) which are in charge of a radio cell area, i.e. a set of sites (or radio cells). Each software instance is handled by an FT application instance as an LWP. The cell group is mapped to a CP, which means that all the LWPs inside a CP simultaneously follow the same fault tolerant state: active or passive. Figure 4-7 shows the cell group organization inside the TMU modules.
[Figure: cell group organization over three TMU modules (TMU 0, TMU 1, TMU 2); each cell group is mapped to a CP (Core Process) with an active CP on one module and a passive CP on another, each CP containing the TMG_CN, TMG_MES, TMG_RAD and SPR LWPs]

Figure 4-7
LB (Load Balancing)
The LB function corresponds to the ability of the Control Node to:
- evaluate the resource capacities of the Fault Tolerance applications (CPU load, memory, Abis interface, Ater interface, timers) in relation to the number of GSM objects (TRX, PCMA, LAPD links, etc.) that will be managed
- evaluate the capacity of each TMU module in relation to its hardware version
- optimize the partitioning of the Fault Tolerance application instances on the different TMU modules in relation to the constraints described below
The LB function is used to:
- minimize the overload problems by having the best resource distribution
- distribute the passive entities in order to keep the entities well balanced after a SWACT
The purpose of the LB function is to distribute the processing in an optimal way over the TMU modules and to use the resources inside the BSC e3 optimally. It distributes the processing related to the different cell groups (i.e. sets of cells belonging to the same process) equally over the TMU modules. The whole processing relative to a cell group is executed on a single TMU module; the corresponding passive (or redundant) process is executed on another TMU module.
The cell groups are determined at boot time according to data associated with the cells. When a BTS is added to the BSC e3, it is added to an existing or a new cell group. When a cell is added to a BTS, the corresponding cell group carries more load. The distribution of the cell groups and the redundant processes is also done automatically by the system at boot time. The LB function allows a redistribution of the cell groups on the TMU modules without disturbing the calls. The LB function is activated:
- when a failure occurs on a TMU module
- when a new unlocked TMU module is plugged in
- when cell groups are modified (to add a BTS)
- when the operator locks a TMU module
- when an imbalance of the TMU CPU loads is detected by the BSC; in this case, the load balancing can be done during non-busy hours
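The placement idea above — active instance of each cell group on the least-loaded TMU module, passive instance on a different module — can be sketched with a greedy placement. The load weights, the assumption that a passive instance costs half an active one, and all names are illustrative, not the product's actual algorithm:

```python
# Greedy sketch of cell-group placement over TMU modules.
def place_cell_groups(groups: dict[str, int],
                      tmus: list[str]) -> dict[str, tuple[str, str]]:
    """groups maps cell-group name -> load weight.
    Returns group -> (active TMU, passive TMU); active and passive always
    land on different modules, heaviest groups placed first."""
    load = {t: 0 for t in tmus}
    placement = {}
    for group, weight in sorted(groups.items(), key=lambda g: -g[1]):
        active = min(load, key=load.get)            # least-loaded module
        passive = min((t for t in tmus if t != active), key=load.get)
        load[active] += weight
        load[passive] += weight // 2                # assumed passive cost
        placement[group] = (active, passive)
    return placement

groups = {"CG1": 10, "CG2": 8, "CG3": 6}
placement = place_cell_groups(groups, ["TMU-0", "TMU-1", "TMU-2"])
print(all(a != p for a, p in placement.values()))  # True
```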
MESSAGING
The messaging group provides a generic service to exchange messages between the software entities. To take into account the migration of the software entities, the MESSAGING application is closely linked with the FT application. It performs the following functions:
- translates an FT address (Prefix, Occurrence, Status) into a non-FT address (Prefix, Occurrence, Rank), according to the routing table updated by the FT function
- routes the non-FT address to the right processor by associating the corresponding IP address
- delivers the message to its destination (using TCP/IP)
- if there are several passive LWPs, a message is delivered to a passive LWP only when this message has been received by all the destination messaging entities
- in the case of a delivery failure, the following situations are possible:
  - fault on the receiver module, in the case of a recipient mailbox overflow: if the reception buffer of the LWP is full, the message is buffered in the TCP/IP stack and the overload handling is started
  - fault on the transmitter module, in the case of originator overload activation (flow control): if the destination TCP/IP stack is full, the message is queued in the sending mailbox; emission towards the other modules remains possible
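The two-step address translation described above (FT address to non-FT address via the FT routing table, then rank to IP address) can be sketched as follows. The table contents, ranks and IP addresses are invented for the example; only the translation chain comes from the text:

```python
# Illustrative sketch of the MESSAGING address translation chain:
# (Prefix, Occurrence, Status) -> (Prefix, Occurrence, Rank) -> IP address.
ROUTING_TABLE = {                 # kept up to date by the FT function
    ("TMG", 1, "active"):  ("TMG", 1, 0),   # rank 0 = first hosting module
    ("TMG", 1, "passive"): ("TMG", 1, 1),
}
RANK_TO_IP = {0: "10.0.0.10", 1: "10.0.0.11"}   # rank -> processor IP

def resolve(prefix: str, occurrence: int, status: str) -> str:
    """Return the IP address hosting the addressed software entity."""
    _, _, rank = ROUTING_TABLE[(prefix, occurrence, status)]
    return RANK_TO_IP[rank]

print(resolve("TMG", 1, "active"))  # 10.0.0.10
```

After a SWACT, the FT function would update ROUTING_TABLE, so senders keep using the same FT address while delivery silently follows the migrated entity.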
SM (Software Management)
The SM function is in charge of:
- launching all software entities present on the Control Node
- ensuring the correct start-up sequence for the non Fault Tolerance software entities
- supervising all modules in the Control Node launched by the SM
- restarting a module in case of a failure
UM (Upgrade Management)
The UM group is responsible for upgrading the OMU, TMU and ATM-SW modules inside the Control Node. It interacts with the SUP_IN and the SUP_TCU to upgrade the modules inside the Interface Node or inside the Transcoder Node.
Note: For upgrading the software of each module inside the Interface Node, refer to paragraph 4.3.1.3, section UM.
Note: For upgrading the software of each module inside the Transcoder Node, refer to paragraph 4.3.1.4, section UM.
The various types of BSC e3 upgrade are introduced in Table 4-1.
Build On Line
- Modification: MIB content change; the BSC software release and the MIB prototype remain unchanged
- Operator actions: reset on line (running); build (automatic or commanded); if commanded: activate new BDA

Upgrade Type 3
- Modification: BSC software change with new MIB structure and/or with inter-board interface evolution
- Operator actions: download; set version; activate new version; build (automatic)

Upgrade Type 4
- Modification: BSC software change with the same MIB and no inter-board interface evolution
- Operator actions: download; set version; activate new version
Upgrade Type 5
- Modification: BSC software change with new MIB structure and inter-board interface evolution; MIB objects remain the same (was called build BDA N+1)
- Operator actions: download; reset on line; build (commanded); set version; activate new version; validate new version or cancel new version

Upgrade Type 6
- Modification: BSC software change with new MIB structure and inter-board interface evolution; MIB objects remain the same (was called build BDA N+1)
- Operator actions: download; reset on line; set version; build (commanded); activate new version

Upgrade Type 7
- Modification: BSC software change with the same MIB and with inter-board interface evolution
- Operator actions: download; set version; activate new version

Table 4-1 Types of BSC e3 upgrade
The impact of each type of BSC upgrade on the other pieces of equipment is provided in Table 4-2.
BSC type of upgrade, and impact (BSC e3, BTS, TCU e3, PCUSN):
- Build Off Line: loss of service
- Build On Line: loss of service
- Type 3: loss of service
- Type 4: no downtime*
- Type 5**: no downtime*
- Type 6: loss of service
- Type 7: loss of service
Note: "service" is meant from a communications point of view.
*: there is an impact only on the call setup of communications or on the handovers, depending on the time the BSC e3 needs to have the new active Cell Group ready to handle the communications on a new TMU module (a few seconds). There is no impact on the established communications not involved in a handover. From a supervision point of view, for each type of BSC upgrade, the OMC-R loses the communication with the BSC e3 for a few minutes (the minimum time corresponds to the OMU SWACT and the maximum to a reset of the Control Node), and is therefore not able to manage or supervise the BSC e3 during that time.
**: not available for BSC e3
Table 4-2 Impacts of the BSC e3 upgrades
WARNING: NO DSHELL COMMANDS CAN BE USED ON THE BSC E3 AND THE TCU E3. A POTENTIAL OUTAGE CAN OCCUR.
For the Control Node, all the binary files which compose a new software version are downloaded from the OMC-R to the MMS modules. The OMU modules are used in dual mode and upgraded as follows:
- the passive OMU module is reset and updated with the new software version
- when the passive OMU module is entirely recovered and correctly updated, a SWACT of the OMU modules is forced
- the new active OMU module runs with the new version
Note: this version can interact with the old or the new software version of the TMU module.
- the new passive OMU module is reset and updated with the new version
The TMU modules are upgraded in real time with the N+P redundancy mode. Each of them is upgraded one after the other as follows:
- the TMU module is relieved of all its processes, so that service (active processes) and redundancy (passive processes) are entirely supported by the other TMU modules
- when entirely isolated, the TMU module is reset and booted on the new software version (the flash inside the TMU module is updated at this time)
- once recovered, the TMU module joins the group to retrieve the applicative processes it hosted prior to the upgrade
The ATM-SW modules are used in dual mode and upgraded as follows:
- the active ATM-SW module is reset and booted on the new version; at this time, all ATM messaging is still routed by the passive ATM-SW module
- once recovered, the active ATM-SW module retrieves the AAL-1 dynamic configuration (from the passive ATM-SW module or the active OMU module)
- the passive ATM-SW module may be upgraded in its turn
At TMU module level, AAL-1 reception is done from only one of the two ATM planes.
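The N+P rolling upgrade of the TMU modules can be sketched as follows. This is illustrative Python only; the dictionary fields stand in for the real module state and are invented, not part of the BSC e3 software:

```python
def rolling_upgrade(tmus, new_version):
    """Sketch of the N+P rolling upgrade: at any instant at most one TMU
    is out of service, so the remaining modules carry both the service
    and the redundancy."""
    for tmu in tmus:
        # 1. Relieve the module of its processes: they are set aside here,
        #    standing in for the handover to the other TMU modules.
        drained = tmu["processes"]
        tmu["processes"] = []
        tmu["in_service"] = False
        # 2. Reset and boot on the new software version
        #    (this is when the on-board flash would be updated).
        tmu["version"] = new_version
        # 3. Rejoin the group and take back the previously hosted processes.
        tmu["processes"] = drained
        tmu["in_service"] = True
    return tmus
```

Because only one module is drained at a time, the upgrade trades a temporary reduction of the redundancy margin for zero interruption of the carried service.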
T&D (Test and Diagnostic) management
The T&D management function is used to test and to diagnose each software entity inside the Control Node. These operations are used by the various software entities of the Control Node to notify the operator of:
- the detection of failures in a module
- the faulty component in the module, with the best possible accuracy
OV (OVerload)
The BSC e3 robustness in overload conditions is ensured by a centralized overload control mechanism, based on the same principles as the overload control implemented for the BSC 2G. The OV function monitors:
- the processor loads
- the memory images
The TMU module is the main module subject to overload.
Note: the OMU module is not involved in the GSM traffic processing, so its load is not impacted by the traffic level variations.
For each monitored module, the critical resources are the CPU load, the system memory occupancy, etc. These parameters are used to compute a synthetic load of the module. Each module reports its synthetic load to the OMU, which globally controls the load state of the BSC e3 and triggers the appropriate actions according to the boards that are in overload (TMU module, ATM-SW module) and to the level of overload.
The TMU modules are largely independent of one another in terms of overload handling. Since a TMU module manages the whole traffic of a group of cells, a TMU module in overload partially filters the incoming new traffic requests related to the group of cells it manages.
Counters giving the processor synthetic loads and the number of filtered operations, by type, are provided. These counters give the operator a detailed view of the filtered traffic and of the processor loads during overload conditions, allowing him to plan the BSC e3 capacity evolution in his network.
The overload thresholds are part of the BSC e3 parameters. One nominal value is used for this parameter; to this value are associated the sets of overload thresholds for each monitored processing module. This nominal value ensures both the BSC e3 robustness and a nominal level of carried traffic.
The overload levels are defined in the BSC e3. Each level corresponds to a load level or a defense level of the BSC e3 processors.
According to the overload level, a certain amount of the new traffic requests is filtered:
overload level 1 (OV1):
It allows a traffic reduction by filtering 1 request out of 3 of the following messages:
- paging request
- channel request with a cause different from Emergency Call
- all First Layer 3 messages with a cause different from Emergency Call
- handover for traffic reason
- handover for OAM reason
- directed retry
overload level 2 (OV2):
No new traffic is accepted: all the previous messages and the following ones are filtered:
- all First Layer 3 messages
- all Channel Requests (including the Emergency Call cause)
- all Handover Indications
- all Handover Requests
overload level 4 (OV4) and overload level 5 (OV5): Not used
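The OV1/OV2 filtering policy can be sketched as follows. This is illustrative Python; the message labels and class names are invented stand-ins for the real protocol messages, not the BSC e3 implementation:

```python
# Message categories filtered at OV1 (labels are invented stand-ins).
FILTERABLE = {
    "paging request",
    "channel request",
    "first layer 3",
    "handover traffic",
    "handover oam",
    "directed retry",
}

class OverloadFilter:
    """Sketch of the overload filtering: OV1 drops 1 filterable request
    out of 3 (emergency causes pass), OV2 drops all new traffic."""

    def __init__(self):
        self.level = 0      # 0 = no overload, 1 = OV1, 2 = OV2
        self._count = 0

    def accept(self, msg_type, emergency=False):
        if self.level == 0:
            return True
        if self.level >= 2:
            return False            # OV2: no new traffic, even emergency
        if emergency:
            return True             # OV1 never filters emergency causes
        if msg_type in FILTERABLE:
            self._count += 1
            return self._count % 3 != 0   # filter 1 request out of 3
        return True
```

Established calls are untouched by this mechanism: only new traffic requests are filtered, which is why the load decreases gradually rather than dropping ongoing communications.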
HM (Hardware management)
The HM function is in charge of the interactions with the hardware components on the modules for the plug and play features. The Control Node offers a plug and play (or auto-discovery) capability for the BSC e3 cabinet start-up and for the module hot insertion. This means that the modules are automatically:
- detected
- started
- configured
Base OS
The base Operating System provides an abstract view of the OS for the upper layer software, and the basic software services that are necessary for running software on this system. The Control Node hosts the following Operating Systems:
- a UNIX Operating System (AIX), located on the OMU-SBC board, which is in charge of the operations and the maintenance; it contains high level standard communication services (TCP, UDP, IP, etc.)
- a real-time Operating System (VxWorks), located on all the other Control Node processors, which are in charge of the traffic management, the input functions and the output functions; it contains:
  - a disk storage management
  - some standard communication facilities
4.3.1.3 Interface Node
The Interface Node is the connectivity component of the BSC e3; it is fully driven by the Control Node. It provides the following main functions:
- manages the connections:
  - between each module in the Interface Node
  - between the Interface Node and:
    - the BTSs (Abis interface)
    - the TCU e3s (Ater interface)
    - the PCUSN (Agprs interface)
- manages each module inside the Interface Node
- provides the ATM links via the ATM-RM module to connect the Interface Node with the Control Node
It houses the following functional groups:
- inside each CEM module:
  - Interface NODE_ACCESS
  - I-Node_OAM
  - SAPI (Standalone API)
  - Base Maintenance
- inside each ATM-RM module:
  - RM_OAM_Generic
  - ATM_OAM_Specific
Figure 4-8 shows each of the functional groups and their main functions inside the Interface Node, without the redundancy modules.
Figure 4-8 Functional groups inside the Interface Node (CEM module, 8K-RM, ATM-RM and LSA-RC modules)
Interface NODE_ACCESS
The Interface NODE_ACCESS interface faces the Control Node. It ensures the transfer of the CallP and OAM information between the Control Node and the Interface Node via TCP/IP over the ATM networks (AAL-5). Two types of message are transferred:
- OAM messages
- CallP messages
It manages the following main functions:
- channel management
- IP address identification
- critical path resolution at the start-up of the Interface Node
I-Node_OAM
The I-Node_OAM group provides the Control Node with a logical view of each software entity in the Interface Node.
The GSM PCM function manages the PCM (E1/T1) links on the Abis interface side (between the BSC e3 and the BTSs) and on the Ater interface side (between the BSC e3 and the TCU e3). The PCM (E1/T1) links connecting the BSC e3 with the BTS and with the TCU e3 are considered as objects of the BSS and are designated as PCM objects. Using the configuration and operation data provided by the BSC e3, the PCM link management configures and monitors the PCM link transmission supports for all the associated external and internal PCM (E1/T1) links. The PCM management generates PCM operational status indications, whose changes are transmitted to the BSC e3. The external PCM links are operational as soon as the BSC e3 starts up. The GSM PCM management function performs the following main operations:
- create a PCM (E1/T1) link
- delete a PCM (E1/T1) link
- change the administrative status of a PCM (E1/T1) link
- notify the fault and alarm events
GSM object
The GSM object management function manages the following modules:
- CEM module
- 8K-RM module
- ATM-RM module
- IEM module housed inside an LSA-RC module
It handles:
- the object creation in accordance with the object hierarchy
- the communication between an external element and the following functional groups:
  - the SAPI (Standalone API)
  - the node access
It uses the services of the SAPI objects to:
- manage the mediation between the I-Node_OAM and the Spectrum platform
- provide a mediation function between the Control Node and the Interface Node
It manages the following two types of GSM object:
- physical objects: each of them corresponds to a physical module
- logical objects: each of them represents a group of physical objects (a group of CEM modules or a group of RMs), named a protection group. A protection group contains:
  - a working instance that corresponds to an active CEM module or an active RM
  - a spare instance that corresponds to a passive CEM module or a passive RM
Critical path
The critical path management function is only used at the start-up of the BSC e3. It acts as a substitute for the Control Node and handles the start-up of the CEM modules and the ATM-RM modules during the initialization step. It then allows the first dialog with the Control Node via the ATM-RM modules.
UM (Upgrade Management)
The UM function is responsible for upgrading the software of each module inside the Interface Node. It enables the transition from a first working state to a new one by changing the software version of the module(s).
Note: for upgrading the software of each module inside the Control Node, refer to paragraph 4.3.1.2, section UM, Table 4-1 and Table 4-2.
Note: for upgrading the software of each module inside the Transcoder Node, refer to paragraph 4.3.1.4, section UM.
The software upgrade of a module is requested:
- by the OMC-R, via the OMU module located in the Control Node
- or directly by the TML connected to:
  - the OMU module located on the Control Node
  - the optional HUB(s)
- or, if no failure is detected, by the CEM module located on the Interface Node
The first phase of the software upgrade can be carried out long before the upgrade of a module. It transfers the upgrade data to the MIB (Managed Information Base) located on the private disk of the Control Node. This operation is done while the BSC e3 is working, without any service disturbance (except for a bandwidth reduction). The Control Node then sends the upgrade orders to the CEM module, which manages the upgrade of the concerned module. For the CEM modules and the RMs, which have a 1+1 redundancy factor, the upgrade of a protection group is done as follows:
- the software packages are loaded on the passive RM or the passive CEM module
- a SWACT is run between:
  - the passive CEM module and the active CEM module
  - the active RM and the passive RM
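The 1+1 protection-group sequence above (load the passive instance, then switch activity) can be sketched as follows. This is illustrative Python; the dictionary fields are invented and are not the real software's data model:

```python
def upgrade_protection_group(group, new_version):
    """Sketch of the 1+1 upgrade: the software package is loaded on the
    passive instance while the active one keeps carrying the service,
    then a SWACT makes the upgraded instance active."""
    # 1. Load the new software packages on the passive instance only.
    group["passive"]["version"] = new_version
    # 2. SWACT: the freshly upgraded passive instance becomes active.
    group["active"], group["passive"] = group["passive"], group["active"]
    # 3. The former active instance, now passive, is upgraded in turn.
    group["passive"]["version"] = new_version
    return group
```

Because the service-carrying instance is never upgraded in place, the only observable event is the activity switch itself.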
The T&D management group is used to test and to diagnose each software entity in the Interface Node. These operations are used by:
- the various software entities of the Interface Node, to notify the operator of:
  - the detection of failures in a module
  - the faulty component in the module, with the best possible accuracy
- the I&C (Installation and Commissioning) procedures, to check:
  - the possible damage during the transportation of the Interface Node to the site
  - that the installation is running correctly
  - that the Interface Node is available for integration in the GSM network
- the maintenance procedures
SAPI
The main parts of the SAPI services are located inside the CEM modules. The SAPI provides the OMC services independently of the platform and of the application running on this platform. It also offers an interface to manage links and physical and logical devices, which are abstracted into objects. It supplies a consistent, stable interface for maintenance operations on the Interface Node.
SAPI PCM
The SAPI PCM manages and supervises PCM (E1/T1) links, which are located inside the LSA-RC module. It provides a logical view of the PCM (E1/T1) links to lock or unlock each of them. Each PCM (E1/T1) instance of an Interface Node is included inside a Pool_PCM object. This object corresponds to all PCM (E1/T1) links on all IEM modules.
SAPI object
The SAPI object provides the following services:
- gives the activity status (active or passive) of each CEM module
- gives the list of the modules inside the Interface Node
- gives the slot number of the active CEM module
- gives the slot number of the passive CEM module
- notifies the SWACT for the CEM modules
- handles the CEM, 8K-RM, ATM-RM and LSA-RC modules
- ensures the data synchronization between the CEM modules in the case of a SWACT
- gives a direct OAM interface with each RM
Base Maintenance
The base maintenance function is closely linked to the Spectrum hardware concept and is located in both CEM modules. It performs the following operations:
- provides all the mechanisms for each module to perform the following functions:
  - administration
  - provisioning
  - duplex mode
- communicates with the RM_OAM_Generic functional group, which is located inside each:
  - ATM-RM module
  - 8K-RM module
  - LSA-RC module
Common carrier maintenance
This function is closely linked to the Spectrum hardware concept. It is located inside both CEM modules and is in charge of provisioning, implementing and monitoring the PCM (E1/T1) links inside the Interface Node. It is used to:
- support the GSM-E1 and GSM-T1 carrier types
- configure and supervise each of the PCM (E1/T1) links which are located inside the LSA-RC module and dedicated to:
  - the BTS(s) via the Abis interface
  - the TCU e3 via the Ater interface
The carrier maintenance, exclusively via the SAPI interface:
- provides carrier-related information
- accepts the carrier requests
These interactions contain:
- carrier provisioning (addition and deletion)
- carrier state change (locked/unlocked, enabled/disabled)
- carrier state change notification
- carrier fault notification
- carrier performance monitoring reports
HM (Hardware management)
The modules inside the Interface Node are hot insertable/extractable. This means that a hardware module can be replaced (repair) or added (capacity extension) in the equipment without shutting down the Interface Node, even partially, and without any service impact. Furthermore, the Interface Node offers a plug and play (or auto-discovery) capability for the BSC e3 cabinet start-up and for the module hot insertion. This means that the modules are automatically detected.
Messaging
The messaging function is used to transmit various information items between each software entity of the different modules located inside the Interface Node via the S-link interfaces.
CM (Connection Management)
The CM function is used to connect the DS0 links via the 64K switch which is located inside the CEM module.
Base OS (Base Operating System)
The Base OS provides:
- an abstract view of the OS (Operating System) for the upper layer software
- the basic software services that are necessary to run the software on this system
The Interface Node hosts the VRTX Operating System, which is in charge of the OS resource management:
- tasks
- queues
- semaphores
- etc.
RM_OAM_Generic
The RM_OAM_Generic functional group is located inside each RM. It supervises each software entity inside each RM.
Note: an LSA logical object, which manages both physical objects, identifies the LSA-RC module. Each of the two physical objects corresponds to an IEM module.
It is used to manage:
- the reset
- the S-link redundancy
- the MTM bus
- the plug and play
- the BIST (Built In Self Test)
Upgrade downloading
The upgrade downloading function is a local agent. It is used to execute, in the RM, each upgrade command sent by the UM (Upgrade Management) function located inside the CEM module.
Test actor
The test actor is a local agent. It is used to run and to supervise the hardware tests inside each RM for the following components:
- CPU
- S-link redundancy
- ITM block
These tests are one-shot. They are carried out:
- at start-up
- every 30 minutes
- after an operator request from the TML or from the OMC-R
Fault actor
The fault actor is a local agent. It is used to detect and to supervise the software faults inside the RM for the following components:
- CPU
- S-link redundancy
- ITM block
These faults are handled while the module is providing its services.
Base OS description
The Base OS (Base Operating System) provides:
- an abstract view of the OS (Operating System) for the upper layer software
- the basic software services that are necessary to run the software on this system
The Interface Node hosts the VRTX Operating System, which is in charge of the OS resource management:
- tasks
- queues
- semaphores
- etc.
ATM_OAM_Specific
The ATM_OAM_Specific group is closely linked to the Spectrum hardware concept. It is used to manage the specific parts of the configuration messaging that are transmitted to the ATM-RM modules.
Test actor
The test actor is also named device dialog interface container. A contained actor is a subclass and is also a container for the device diagnostic interface container. Each contained actor is optional and is created during the initialization steps. The test actor for the ATM-RM module contains the following actors:
- PROC/Module test actor: used as an interface with the HAL (Hardware Abstraction Level) processor to invoke the tests on the RM host processor when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- S-link test actor: used as an interface with the HAL S-link to invoke the tests on the S-link interfaces when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- ITM block test actor: used as an interface with the integrated HAL test manager to invoke the tests on the ITM block in the ATM-RM module when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- ATM test actor: used as an interface with the ATM HAL to invoke the tests on the hardware responsible for the conversion from DS0 into ATM cells, and vice versa, when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
Fault actor
The fault actor is also named device test interface container. A contained actor is a subclass and is also a container for the device test interface container. Each contained actor is optional and is created during the initialization steps. It contains the following actors:
- ITM fault actor: used as an interface with the ITM HAL to receive and process any faults that may be reported by the ITM HAL. This component is a part of the Spectrum maintenance framework
- PROC/Module fault actor: used as an interface with the PROC HAL to receive and process any faults that may be reported by the PROC HAL. This component is a part of the Spectrum maintenance framework
- S-link fault actor: used as an interface with the S-link HAL to receive and process any faults that may be reported by the S-link HAL. This component is a part of the Spectrum maintenance framework
- ATM fault actor: used as an interface with the ATM HAL to receive and process any faults that may be reported by the ATM HAL. This actor needs to be created; the corresponding object model shows the data members and methods that need to be overwritten
8K_OAM_Specific
The 8K_OAM_Specific group controls the specific parts of the configuration messaging which are transmitted to the 8K-RM module.
Test actor
The test actor is also named device dialog interface container. A contained actor is a subclass and is also a container for the device diagnostic interface container. Each contained actor is optional and is created during the initialization steps. The test actor for the 8K-RM module contains the following actors:
- PROC/Module test actor: used as an interface with the HAL processor to invoke the tests on the RM host processor when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- S-link test actor: used as an interface with the HAL S-link to invoke the tests on the S-link interfaces when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- ITM test actor: used as an interface with the integrated HAL test manager to invoke the tests on the ITM block in the 8K-RM module when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- 8K-RM Switching Matrix test actor: used as an interface with the HAL Switching Matrix, through the diagnostic layer, to invoke the tests on the SRT hardware when the services are running or are out of order. This actor needs to be created; the corresponding object model shows the data members and methods that need to be overwritten
- 8K-RM Channel Sequencer test actor: used as an interface with the HAL Channel Sequencer, through the diagnostic layer, to invoke the tests on the channel sequencer hardware when the services are running or are out of order. This actor needs to be created; the corresponding object model shows the data members and methods that need to be overwritten
Fault actor
The fault actor is also named device test interface container. A contained actor is a subclass and is also a container for the device test interface container. All the contained actors are optional and are created during the initialization steps. It contains the following actors:
- ITM fault actor: used to interface with the ITM HAL to receive and process any faults that may be reported by the ITM HAL. This component is a part of the Spectrum maintenance framework
- PROC/Module fault actor: used to interface with the PROC HAL to receive and process any faults that may be reported by the PROC HAL. This component is a part of the Spectrum maintenance framework
- S-link fault actor: used to interface with the S-link HAL to receive and process any faults that may be reported by the S-link HAL. This component is a part of the Spectrum maintenance framework
- Switching Matrix fault actor: used to interface with the Switching Matrix HAL IF to receive and process any faults that may be reported. This actor needs to be created; the corresponding object model shows the data members and methods that need to be overwritten
- Channel Sequencer fault actor: used to interface with the Channel Sequencer HAL IF to receive and process any faults that may be reported. This actor needs to be created; the corresponding object model shows the data members and methods that need to be overwritten
LSA_OAM_Specific
The LSA_OAM_Specific group is closely linked to the Spectrum hardware concept. It is identified by an LSA object, which manages two objects. Each of them corresponds to an IEM module.
LSA Carrier Maintenance
The LSA Carrier Maintenance function is located inside each IEM module. It manages the provisioning data and provides interfaces with the following services, for the PCM E1 links or the PCM T1 links:
- configuration
- supervision
- reporting of the software or hardware faults to the OMC-R
It also serves the requests to:
- change the carrier states
- change the monitored defects
- support the SWACT (SWitch of ACTivity) of the CEM modules and the RM sparing
It must support software interfaces to several components. These interfaces comprise:
- SAPI: the SAPI acts as the HMI interface in stand-alone mode. Each PCM (E1/T1) configuration data item and carrier state change request is received via the SAPI interface. The detailed interface specifications are captured in the CD (Component Description)
- CARML: the LSA Carrier Maintenance reuses the base carrier maintenance framework in the CEM module and in the IEM module located inside the LSA-RC module
- HAL: the carrier maintenance is interfaced with the hardware, via the carrier device agent, to:
  - register the fault notifications
  - notify the state changes
It is used to manage and to record the PCM faults, which are described below.
The PCM fault priorities are listed below in descending order, for the PCM E1 links:
- LOS: Loss Of Signal
- AIS: Alarm Indication Signal
- LFA: Loss of Frame Alignment
- RAI: Remote Alarm Indication
The PCM faults (collections of contiguous faults) can result in a failure that has to be reported to the GSM applications when they reach a threshold, called the FBT (Fault Begin Time). On a fault notification, the number of faulty seconds, based on the fault type, needs to be reported. A second can be faulty for one fault only: if two faults occur in the same second, the fault priority determines which one to peg. The quantity of faults reported in a message is equal to the FBT. The provisional threshold that determines the termination of a failure is named FET (Fault End Time): if no defect is detected over a period of FET, a failure cleared notification is sent to the GSM applications.
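The FBT/FET mechanism can be sketched as follows. This is illustrative Python only: the class, method and parameter names are invented, and the default thresholds are arbitrary; only the priority order (LOS > AIS > LFA > RAI) and the peg/declare/clear logic come from the text above.

```python
PRIORITY = ["LOS", "AIS", "LFA", "RAI"]   # descending fault priority

class CarrierFaultMonitor:
    """Sketch of the FBT/FET logic: one pegged fault per second
    (highest priority wins), a failure is declared after FBT faulty
    seconds and cleared after FET defect-free seconds."""

    def __init__(self, fbt=3, fet=5):
        self.fbt, self.fet = fbt, fet
        self.faulty_seconds = 0
        self.clean_seconds = 0
        self.failure = False

    def tick(self, faults):
        """Process one second; 'faults' is the set of faults observed."""
        if faults:
            # A second can be faulty for one fault only:
            # peg the highest-priority fault seen in this second.
            pegged = min(faults, key=PRIORITY.index)
            self.clean_seconds = 0
            self.faulty_seconds += 1
            if not self.failure and self.faulty_seconds >= self.fbt:
                self.failure = True      # failure reported to GSM apps
            return pegged
        self.clean_seconds += 1
        if self.failure and self.clean_seconds >= self.fet:
            self.failure = False         # failure cleared notification
            self.faulty_seconds = 0
        return None
```

The two thresholds give the mechanism hysteresis: short fault bursts below FBT never raise a failure, and short quiet gaps below FET never clear one.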
Test actor
The test actor is also named device dialog interface container. A contained actor is a subclass and is also a container for the device diagnostic interface container. All the contained actors are optional and are created during the initialization steps. The test actor for the LSA-RC module contains the following actors:
- PROC/Module test actor: used as an interface with the HAL processor to invoke the tests on the RM host processor when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- S-link test actor: used as an interface with the HAL S-link to invoke the tests on the S-link interfaces when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- ITM test actor: used as an interface with the integrated HAL test manager to invoke the tests on the ITM block when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- CARM test actor: used as an interface with the HAL PCM, through the diagnostic layer, to invoke the tests on the PCM links when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
- HDLC test actor: used as an interface with the HAL HDLC, through the diagnostic layer, to invoke the tests on the HDLC level when the services are running or are out of order. This component is a part of the Spectrum maintenance framework
Fault actor
The fault actor is also named device test interface container. A contained actor is a subclass and is also a container for the device test interface container. All the contained actors are optional and are created during the initialization steps. It contains the following actors:
- ITM fault actor: used as an interface with the ITM HAL to receive and process any faults that may be reported by the ITM HAL. This component is a part of the Spectrum maintenance framework
- PROC/Module fault actor: used as an interface with the PROC HAL to receive and process any faults that may be reported by the PROC HAL. This component is a part of the Spectrum maintenance framework
- S-link fault actor: used as an interface with the S-link HAL to receive and process any faults that may be reported by the S-link HAL. This component is a part of the Spectrum maintenance framework
- CARM fault actor: used as an interface with the CARM HAL to receive and process any faults that may be reported by the PCM HAL. This actor needs to be created; the corresponding object model shows the data members and methods that need to be overwritten
4.3.1.4 Transcoder Node
The Transcoder Node is a connectivity component. It is fully driven by the Control Node. It provides the following main functions:
- manages the connections:
  - of each module in the Transcoder Node
  - between the Transcoder Node and:
    - the BSC e3 (Ater interface)
    - the MSC (A interface)
- manages each component inside the Transcoder Node
- supervises the physical links (S-link interfaces)
It houses the following functional groups:
- inside each CEM module:
  - Transcoder NODE_ACCESS
  - T-Node_OAM
  - SAPI (Standalone API)
  - Base maintenance
- inside the LSA-RC module:
  - RM_OAM_Generic
  - LSA_OAM_Specific
Figure 4-9 shows each functional group and its main components inside the Transcoder Node, without the redundancy modules.
[Figure: functional groups inside the TCU e3 Transcoder Node. The CEM module hosts the TN_OAM (GSM PCM management, GSM object management, critical path management, upgrade management, tests & diagnostics management), the Standalone API (SAPI PCM management, SAPI object management), the base maintenance (common carrier maintenance, hardware management, messaging, connection management) and the Base OS. The TRM hosts RM_OAM_GENERIC (hardware management for the S-link, CPU and TIM block, upgrade downloading, test actor, fault actor, Base OS) and TRM_OAM_SPECIFIC (test actor, fault actor). The LSA-RC hosts the same RM_OAM_GENERIC components plus LSA_OAM_SPECIFIC (test actor, fault actor, LSA carrier maintenance).]
Figure 4-9 Functional groups inside the Transcoder Node
Transcoder NODE_ACCESS
The Transcoder NODE_ACCESS interface is at the front of the Control Node. It ensures the transfer of the CallP and the OAM information between the Control Node and the Transcoder Node, via the Interface Node and the LSA-RC module located inside the Transcoder Node. Both types of messages are transferred:
- OAM messages
- CallP messages
The Transcoder NODE_ACCESS performs the following main functions:
- channel management
- IP address identification
- critical path resolution at the start-up of the Interface Node
T-Node_OAM functional group
The T-Node_OAM functional group is used to configure and supervise each:
- CEM module
- ATM-RM and TRM module
- IEM module housed inside each LSA-RC module
- PCM (E1/T1) link on the A interface and the Ater interface
It performs the following functions:
- reports each software or hardware fault which appears on each RM and CEM module
- manages the defense actions
- runs the tests
It provides the Control Node with a logical view of each software entity in the Transcoder Node.
GSM PCM
This function manages the PCMA (E1/T1) links on the A interface side (between the TCU e3 and the MSC), and the PCM (E1/T1) links on the Ater interface side (between the TCU e3 and the BSC e3). The PCM (E1/T1) link connecting the TCU e3 and the MSC is considered as an object of the BSS, called the PCMA object. The PCM (E1/T1) link connecting the TCU e3 and the BSC e3 is considered as an object of the BSS, called the PCM object. Using the configuration and operational data provided by the BSC, the PCM link management function configures and monitors the PCM link transmission. The PCMA management function generates PCMA operational status indications for those changes, which are transmitted to the BSC. The external PCM links are operational as soon as the TCU e3 starts up. The GSM PCM management function manages the following main operations:
- creates a PCM (E1/T1) link
- deletes a PCM (E1/T1) link
- changes the administrative status of a PCM (E1/T1) link
- notifies the fault and the alarm events
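The operations above can be sketched as a small object model. This is an illustrative sketch only: the names (`GsmPcmManager`, `PcmLink`, `set_admin_state`) are invented here and are not part of the product software.

```python
from enum import Enum

class AdminState(Enum):
    LOCKED = "locked"
    UNLOCKED = "unlocked"

class PcmLink:
    """Hypothetical model of a PCMA/PCM (E1/T1) link object of the BSS."""
    def __init__(self, link_id, interface, notify):
        self.link_id = link_id          # e.g. a PCMA object on the A interface
        self.interface = interface      # "A" (to the MSC) or "Ater" (to the BSC e3)
        self.admin_state = AdminState.LOCKED
        self.operational = False
        self._notify = notify           # callback reporting status changes to the BSC

    def set_admin_state(self, state):
        """Change the administrative status; the operational status follows."""
        self.admin_state = state
        self._set_operational(state is AdminState.UNLOCKED)

    def _set_operational(self, up):
        # operational status changes are notified to the BSC
        if up != self.operational:
            self.operational = up
            self._notify(self.link_id, "enabled" if up else "disabled")

class GsmPcmManager:
    """Creates and deletes PCM link objects and forwards their indications."""
    def __init__(self, notify):
        self.links = {}
        self._notify = notify

    def create_link(self, link_id, interface):
        self.links[link_id] = PcmLink(link_id, interface, self._notify)
        return self.links[link_id]

    def delete_link(self, link_id):
        del self.links[link_id]
```

The callback stands in for the operational status indications that the real function transmits to the BSC.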
GSM object
This GSM object management group ensures the supervision of each:
- CEM module
- TRM module
- IEM module housed inside an LSA-RC module
This group ensures the object creation with respect to the object hierarchy. This group ensures the communication between an external element and the following imported objects:
- the SAPI (Standalone API)
- the node access
This group uses the services of the SAPI objects to:
- manage the mediation between the T-Node_OAM and the Spectrum platform
- provide a mediation function between the Control Node and the Transcoder Node
This group manages the following two types of GSM objects:
- physical objects: each of them corresponds to a physical module
- logical objects: they represent a group of physical objects (a group of CEM modules or a group of RMs), which is named the protection group. A protection group contains a working instance, which corresponds to an active CEM module or an active RM.
For the TRM modules, each of them is active; also, all the RMs are inside the same protection group.
Critical path management
The critical path management function is only used at the start- up of the TCU e3. It acts as a substitute for the Control Node. It handles the startup of the CEM modules and the LSA- RC module during the initialization step. It runs the LAPD channels on the specific PCMs for each LSA. Then, it allows the first dialog with the Control Node (Ater interface) via the Interface Node and the LSA- RC module located inside the Transcoder Node.
UM (Upgrade Management)
The UM function is responsible for upgrading the software of each module inside the Transcoder Node. It allows the transition from a first working state to a new one by changing the software version of the modules. Note: For upgrading the software of each module inside the Control Node, refer to paragraph 4.3.1.2, section UM. Note: For upgrading the software of each module inside the Interface Node, refer to paragraph 4.3.1.3, section UM. The various types of TCU e3 upgrades are introduced in Table 4-3.
Upgrade TCU object | Modification of SW with I/F compatibility | Operator actions | Impacts
The TCU software release with interface compatibility | Yes | set version; activate new version | background downloading; no downtime*
The TCU software release with no interface compatibility | No | lock TCU; set version; activate new version; unlock TCU | background downloading; no downtime*
Note: * : There is no downtime with this kind of upgrade but, according to the soft blocking feature, established communications may be lost when the associated TRM module is reset.
Table 4-3 Type of TCU e3 upgrade
The software upgrade of a module is requested:
- by the OMC-R via the OMU module located inside the Control Node
- or directly by the TML connected to:
  - the OMU module located on the Control Node
  - the optional HUB(s)
  - the CEM module located on the Interface Node, if the failure is not detected
The first phase of the software upgrade can be made a long time before the upgrade of a module. It transfers the upgrading data to the MIB (Managed Information Base) located in the private disk of the Control Node. This operation is done while the BSC e3 is working, without any service disturbance (except for the bandwidth reduction). Then the Control Node sends the upgrade orders to the CEM module, which manages the upgrade of the concerned module without breakdown of the services which are running. For the CEM modules and the RMs with the following redundancy factor: 1+1, the upgrading of this protection group is done as follows:
- the load software packages are run in the passive RM or the passive CEM module
- a SWACT is run between:
  - the active RM and the passive RM
  - or the passive CEM module and the active CEM module
For the TRM modules with the following redundancy factor: N+P, the upgrading of the protection group is done as follows:
- a soft blocking is sent to the concerned TRM
- the new communications are distributed to another TRM
- when the communications in progress inside the concerned TRM are completed, the software upgrading is done
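The N+P soft-blocking upgrade described above can be sketched as follows. The class and function names are hypothetical, and the call drain is simulated synchronously instead of waiting for real communications to end.

```python
class Trm:
    """Hypothetical stand-in for one TRM module of an N+P protection group."""
    def __init__(self, name, active_calls=0):
        self.name = name
        self.active_calls = active_calls
        self.soft_blocked = False
        self.version = "old"

def route_new_call(pool):
    """New communications are distributed only to non-blocked TRMs."""
    trm = next(t for t in pool if not t.soft_blocked)
    trm.active_calls += 1
    return trm

def upgrade_protection_group(pool, new_version):
    """Upgrade the TRMs one at a time: soft-block, let the calls in
    progress end, then load the new software and unblock."""
    for trm in pool:
        trm.soft_blocked = True        # no new communications accepted
        while trm.active_calls:        # calls in progress end one by one
            trm.active_calls -= 1      # (simulated synchronous drain)
        trm.version = new_version      # software upgrade of the drained module
        trm.soft_blocked = False
```

The one-module-at-a-time loop is what keeps the rest of the pool available to carry the redistributed traffic.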
T&D (Test and Diagnostic) management
The T&D management function is used to test and diagnose each software entity in the Transcoder Node. These operations are used by:
- the various software entities of the Transcoder Node, to notify the operator of:
  - the detection of failures in a module
  - the faulty component in the module, with the best accuracy
- the site, to check:
  - that the installation is running correctly
  - that the Transcoder Node is available for integration in the network
SAPI description
All the SAPI services are located inside the CEM modules, except for some services, such as the software upgrade service, which are located on the CEM modules and also on the TRM module or the LSA-RC module. The SAPI provides an OMC service independently of the platform and of the application which is running on this platform. In addition, the SAPI offers an interface to manage links and physical and logical devices that are abstracted into objects.
SAPI PCM
The SAPI PCM manages and supervises each of the PCM (E1/T1) links which are located inside the LSA-RC module. It provides a logical view of the PCM (E1/T1) links to lock/unlock each of them. Each PCM (E1/T1) instance of a Transcoder Node is included inside the Pool_PCM object. This object corresponds to all the PCMs on all the IEM modules.
SAPI object
The SAPI object provides the following services:
- gives the activity status (active/passive)
- gives the list of the modules inside the Interface Node
- gives the slot number of the active CEM module
- gives the slot number of the passive CEM module
- notifies the SWACT
- handles the CEM, TRM and LSA-RC modules
- ensures the data synchronization between the CEM modules in case of a SWACT
- gives a direct OAM interface with each module
Base maintenance
The base maintenance group is closely linked to the Spectrum hardware concept and is located inside both CEM modules. This group provides all the mechanisms for each module to do the following:
- administration
- provisioning
- duplex features
This group communicates with the RM_OAM_Generic functional group, which is located inside each:
- CEM module
- TRM module
- LSA-RC module

Common carrier maintenance

The common carrier maintenance function is closely linked to the Spectrum hardware concept. It is located inside both CEM modules. It is in charge of provisioning, implementing and monitoring the PCM (E1/T1) links inside the Transcoder Node. It is used to:
- support the GSM-E1 and GSM-T1 carrier types
- configure and supervise each of the PCM (E1/T1) links which are located inside the LSA-RC module and are dedicated to:
  - the MSC via the A interface
  - the BSC e3 via the Ater interface
The carrier maintenance, via the SAPI interface, provides carrier-related information and accepts carriers. These interactions include:
- carrier provisioning (addition and deletion)
- carrier state change (locked/unlocked, enabled/disabled)
- carrier state change notification
- carrier fault notification
- carrier performance monitoring reports
HM (Hardware Management)
The modules inside the Transcoder Node are hot inserted/extracted. This means that a hardware module can be replaced (repaired) or added (capacity extension) in the equipment without shutting down, even partially, the Transcoder Node and without any service impact. Furthermore, the Transcoder Node offers a plug and play (or auto discovery) capability for the TCU e3 cabinet start-up and for the module hot insertion. This means that the modules are automatically detected.
Messaging
The messaging function is used to transmit information between the software entities of the different modules located inside the Transcoder Node, via the S-link interfaces.
Connection management
It is used to connect the DS0s via the Switch 64K which is located inside the CEM module.
Base OS (Operating System)
The Base OS provides:
- an abstract view of the OS (Operating System) for the upper-layer software
- the basic software services that are necessary to run the software on this system
The Interface Node hosts the VRTX Operating System, which is in charge of:
- the OS resources management (tasks, queues, semaphores, etc.)
- the memory management
- the debug shell management
- the logging management
RM_OAM_Generic
The RM_OAM_Generic group is located in each RM. It supervises each software entity inside each RM. It is used to manage:
- the reset
- the S-links redundancy
- the MTM bus
- the plug and play
- the BIST (Built-In Self-Test)
Note: The LSA-RC module is identified by an LSA logic object which manages both physical objects. Each of them corresponds to an IEM module.
Upgrade downloading
The upgrade downloading function is a local agent. It enables each upgrade command, sent by the UM function located inside the CEM module, to run inside the RM.
Test actor
The test actor is a local agent. It is used to run and to supervise the hardware tests inside each RM for the following components:
- CPU
- S-link redundancy
- ITM block
These tests are one-shot. They are carried out:
- at start-up
- every 30 minutes
- after an operator request from the TML or from the OMC-R
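The test schedule above (start-up, 30-minute period, operator request) can be sketched as a small decision helper. `TestActor` and its methods are invented names, not the product API, and the result values are placeholders.

```python
PERIOD_S = 30 * 60  # the hardware tests recur every 30 minutes

class TestActor:
    """Hypothetical scheduler for the RM hardware tests (CPU, S-link, ITM block)."""

    def __init__(self):
        self.last_run = None  # no test has run yet

    def should_run(self, now, operator_request=False):
        """Decide whether the one-shot tests are due at time `now` (seconds)."""
        if self.last_run is None:      # start-up: always test
            return True
        if operator_request:           # TML or OMC-R request: test on demand
            return True
        return now - self.last_run >= PERIOD_S   # 30-minute period elapsed

    def run_tests(self, now):
        """Run the tests and remember when they last ran."""
        self.last_run = now
        # placeholder results; the real tests exercise the actual hardware
        return {"CPU": "pass", "S-link": "pass", "ITM": "pass"}
```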
Fault actor
The fault actor is a local agent. It is used to detect and to supervise the software faults inside each RM for the following components:
- CPU
- S-link redundancy
- ITM block
These faults are monitored while the module provides its services.
Base OS (Operating System)
The Base OS manages the hardware resources of the operating system to provide the basic services that are necessary to run the software resources on this operating system. The Base OS is used to perform the following:
- animation and synchronization of the software in a real-time environment
- operation and maintenance actions
- communication between each node through standard interfaces
The Interface Node hosts the VRTX Operating System, which is in charge of:
- the OS resources management (tasks, queues, semaphores, etc.)
- the memory management
- the debug shell management
- the logging management
LSA_OAM_Specific
The LSA_OAM_Specific group is closely linked to the Spectrum hardware concept. It is identified by an LSA object, which manages two objects. Each of them corresponds to an IEM module.
LSA Carrier Maintenance
The LSA Carrier Maintenance function is located inside each IEM module. It manages the provisioning data and provides, for the PCM E1 or PCM T1 interfaces, the following services:
- configuration
- supervision
- reporting of the software or hardware faults to the OMC-R
It also serves the following requests:
- change the carrier states
- change the monitor defects
- provide support for the CEM SWACT and the RM sparing
It must support software interfaces to several components. These interfaces include:
- SAPI: the SAPI acts as the HMI interface in a stand-alone mode. Each PCM (E1/T1) configuration data and carrier state change request is received via the SAPI interface. The detailed interface specifications will be captured in the CD (Component Description).
- CARML: the LSA Carrier Maintenance reuses the base carrier maintenance framework in the CEM module and in the IEM module located in the LSA-RC module.
- HAL: the carrier maintenance interfaces with the hardware through the Carrier Device Agent to:
  - register the fault notifications
  - notify the state changes
It is used to manage and to record the PCM (E1/T1) faults, which are described below.
The PCM fault priorities are listed below in descending order, for the PCM E1 links:
- LOS: Loss Of Signal
- AIS: Alarm Indication Signal
- LFA: Loss of Frame Alignment
- RAI: Remote Alarm Indication
The PCM fault(s) (collection of contiguous faults) can result in a failure that has to be reported to the GSM applications if they reach a threshold. This threshold is called the FBT (Fault Begin Time). On a fault notification the number of faulty seconds based on the same fault type needs to be reported. A second can be faulty for one fault only. If two faults occur in one second, the fault priority will be used to determine which one to peg. The quantity of faults reported in a message is equal to the FBT. The provisional threshold that will determine the termination of a failure is named FET (Fault End Time). If there is no defect detected over a period of FET, then there needs to be a failure cleared notification sent to the GSM applications.
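The FBT/FET accounting described above can be sketched as follows, assuming invented names (`FaultCounter`, `tick`) and illustrative threshold values; the real implementation and its provisioned thresholds are not specified here.

```python
# E1 fault priorities in descending order (lower index = higher priority)
PRIORITY = ["LOS", "AIS", "LFA", "RAI"]

class FaultCounter:
    """Sketch of the per-link fault accounting.

    Each second is pegged with at most one fault, chosen by priority.
    When a fault type accumulates FBT faulty seconds, a failure is
    reported; FET consecutive fault-free seconds clear the failure."""

    def __init__(self, fbt=3, fet=5):
        self.fbt, self.fet = fbt, fet
        self.faulty_seconds = {f: 0 for f in PRIORITY}
        self.clean_run = 0           # consecutive seconds without any defect
        self.failed = False
        self.events = []             # notifications sent to the GSM applications

    def tick(self, faults_this_second):
        """Call once per second with the set of defects seen in that second."""
        if faults_this_second:
            self.clean_run = 0
            # a second can be faulty for one fault only: peg the highest priority
            pegged = min(faults_this_second, key=PRIORITY.index)
            self.faulty_seconds[pegged] += 1
            if not self.failed and self.faulty_seconds[pegged] >= self.fbt:
                self.failed = True
                self.events.append(("failure", pegged))
        else:
            self.clean_run += 1
            if self.failed and self.clean_run >= self.fet:
                self.failed = False
                self.faulty_seconds = {f: 0 for f in PRIORITY}
                self.events.append(("cleared", None))
```

For example, a second showing both RAI and LOS is pegged as LOS, because LOS has the higher priority.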
Test actor
The test actor is also named: device dialog interface container. A contained actor is a subclass and is also a container for the device diagnostic interface container. All the contained actors are optional and are created during the initialization steps. The test actor for the LSA-RC module contains the following actors:
PROC/Module test actor
This actor is used as an interface with the HAL processor to invoke the tests on the RM host processor when the services are running or are out of order. This component is a part of the Spectrum maintenance framework.
S-link test actor
This actor is used as an interface with the HAL S-link to invoke the tests on the S-link interfaces when the services are running or are out of order. This component is a part of the Spectrum maintenance framework.
ITM test actor
This actor is used as an interface with the integrated HAL test manager to invoke the tests on the ITM block when the services are running or are out of order. This component is a part of the Spectrum maintenance framework.
CARM test actor
This actor is used as an interface with the HAL PCM through the diagnostic layer to invoke the tests on the PCM links when the services are running or are out of order. This component is a part of the Spectrum maintenance framework.
HDLC test actor
This actor is used as an interface with the HAL HDLC through the diagnostic layer to invoke the tests on the HDLC level when the services are running or are out of order. This component is a part of the Spectrum maintenance framework.
Fault actor
The fault actor is also named: device test interface container. A contained actor is a subclass and is also a container for the device test interface container. All the contained actors are optional and are created during the initialization steps. It contains the following actors:
ITM fault actor
This actor is used as an interface with the ITM HAL to receive and process any faults that may be reported by the ITM HAL. This component is a part of the Spectrum maintenance framework.
PROC/Module fault actor
This actor is used as an interface with the PROC HAL to receive and process any faults that may be reported by the PROC HAL. This component is a part of the Spectrum maintenance framework.
S-link fault actor
This actor is used as an interface with the S-link HAL to receive and process any faults that may be reported by the S-link HAL. This component is a part of the Spectrum maintenance framework.
CARM fault actor
This actor is used as an interface with the CARM HAL to receive and process any faults that may be reported by the PCM HAL. This actor needs to be created; the associated object model shows the data members and methods that need to be overridden.
TRM_OAM_Specific
The TRM_OAM_Specific functional group is closely linked to the Spectrum hardware concept. It is used to manage the specific part of the configuration messages which are transmitted to the TRM module.
Test actor
The test actor is also named: device dialog interface container. A contained actor is a subclass and is also a container for the device diagnostic interface container. All the contained actors are optional and are created during the initialization steps. The test actor for the TRM module contains the following actors:
PROC/Module test actor
This actor is used as an interface with the HAL processor to invoke the tests on the RM host processor when the services are running or are out of order. This component is a part of the Spectrum maintenance framework.
S-link test actor
This actor is used as an interface with the HAL S-link to invoke the tests on the S-link interfaces when the services are running or are out of order. This component is a part of the Spectrum maintenance framework.
ITM test actor
This actor is used as an interface with the integrated HAL test manager to invoke the tests on the ITM block when the services are running or are out of order. This component is a part of the Spectrum maintenance framework.
DSP test actor
This actor is used as an interface with the DSP through the diagnostic layer to invoke the tests on the messaging when the services are running or are out of order. This component is a part of the Spectrum maintenance framework.
Fault actor
The fault actor is also named: device test interface container. A contained actor is a subclass and is also a container for the device test interface container. All the contained actors are optional and are created during the initialization steps. It contains the following actors:
ITM fault actor
This actor is used as an interface with the ITM HAL to receive and process any faults that may be reported by the ITM HAL. This component is a part of the RM maintenance framework.
PROC/Module fault actor
This actor is used as an interface with the PROC HAL to receive and process any faults that may be reported by the PROC HAL. This component is a part of the RM maintenance framework.
S-link fault actor
This actor is used as an interface with the S-link HAL to receive and process any faults that may be reported by the S-link HAL. This component is a part of the RM maintenance framework.
DSP fault actor
This actor is used as an interface with the vocoding to receive and process any faults that may be reported by the DSP. This actor needs to be created; the associated object model shows the data members and methods that need to be overridden.
4.3.1.5 OAM architecture overview
Figure 4-10 shows how the OAM is distributed inside a BSC e3 and a TCU e3. Each part of the OAM architecture has been described previously. The following paragraphs summarize the main functions of the OAM architecture. The OAM is not only an OMC-R agent for the BSC e3: it decides and invokes each action after orders or observations. The following spontaneous behaviors can be run:
- overload protection
- the SWitch of the ACTivity (SWACT) after a hardware or a software failure in an active module
- defense against applicative inconsistencies
- etc.
At any level, the OAM entity manages the following operations:
- ensures the coordination between each subtending entity
- relays the upper-layer orders to the lower layers
- synthesizes and informs the upper-layer entity about the lower-layer events
- runs a corrective action if it remains local to the resource
- controls each subtending entity (supervision role)
[Figure: OAM distribution across the BSC e3 (TN_OAM, BTS_OAM, TMU_OAM, SBC_OAM, PCU_OAM, SS7 group, TM, RM_OAM on the 8K-RM and the LSA-RM), the TCU e3 Transcoder Node (TN_OAM and CEM_OAM on the CEM module, RM_OAM on the TRM and the LSA-RC) and the PCUSN (PCUSN_OAM).]
Figure 4-10 OAM distribution inside the BSC e3 and the TCU e3
4.3.2 CallP architecture

Figure 4-11 shows the CallP (Call Processing) architecture for the BSC e3 and the TCU e3. Figure 4-12 shows the CallP architecture for the BSC e3 and the PCUSN. The CallP uses the TMG (Traffic ManaGement) installed inside the BSC 2G and the TCU 2G, with upgrades and adaptations for the BSC e3 and the TCU e3. The CallP corresponds to each job which is related to the management of the GSM communications. It manages the following main functions:
- the traffic, which corresponds to:
  - the management of the connections between an MS and the MSC
  - the transfer of user information between an MS and the MSC [DTAP]
  - the management of functions related to a whole cell (e.g. paging) or to the
- the AMR management (AMR vocoding; link, channel and frame [TRAU]
- handover
- radio measurements
- power control
The active and the passive applications share the time switch. Each connection request sent by the active application is directly seen by the passive application.
4.3.2.1 Control Node overview
The Control Node is mainly used to:
- set up a call connection
- delete a call connection
- modify a call connection
[Figure: CallP architecture across the Control Node (OMU ADM; TCU 2G group with TMG_COM, OBS_COM, SPT, OBS_OBC, TMG_S7A, TMG_DBA, OBR_CNX, MTP1/MTP2 on PMC board; TCU e3 group with TMG_COM, OBS_COM; Cell group with TMG_MES, OBS_CNX, TMG_RAD, SUP_TCU; LAPD on PMC board; IN_ACCESS), the Interface Node (ATM-RM, CEM, NODE_ACCESS, Pool 8K, CallP_IN, Pool PCM, CallP_SW64K, 8K-RM, CallP_SW8/16K, LSA-RC up to 6) and the TCU 2G / TCU e3 (LSA-RC up to 4, LAPD_ACCESS, CEM, DSP, CallP_TMA). For the TCU 2G, refer to NTP < 16 >.]
Figure 4-11 CallP architecture for the BSC e3 and the TCU e3
[Figure: CallP architecture across the Control Node (OMU ADM; TCU 2G group with TMG_COM, OBS_COM, SPT, OBS_OBC, TMG_S7A, TMG_DBA, OBR_CNX, MTP1/MTP2 on PMC board; TMU up to 12; TCU e3 group with TMG_COM, OBS_COM; Cell group with TMG_RPP, TMG_MES, OBS_CNX, TMG_RAD, TMG_L1M, SPR, TMG_CNX, OBS_RAD, ACCESS; LAPD on PMC board; IN_ACCESS) and the Interface Node (ATM-RM, CEM, NODE_ACCESS, Pool 8K, CallP_IN, Pool PCM, CallP_SW64K, 8K-RM, CallP_SW8/16K).]
Figure 4-12 CallP architecture for the BSC e3 and the PCUSN
4.3.2.2 Interface Node overview
The Interface Node does the following:
- provides the network connectivity for the Abis and Ater interfaces
- routes the Control Node connectivity for the LAPD and SS7 signalling links
- performs the following switching functions:
  - 16 kbps for the bearer voice/data between the BTS and the BSC e3 (Abis interface)
  - 64 kbps for the signaling links between the BSC e3 and the TCU e3 (Ater interface)
4.3.2.3 Transcoder Node overview
The Transcoder Node performs the following:
- manages the vocoding path between the MSC and the BSC
- manages the bearer channels
- manages the different types of vocoding algorithms
- provides the network connectivity between the RMs and:
  - the A interface
  - the Ater interface
- terminates the Ater interface
- routes the SS7 signaling links and the Control Node connectivity for the SS7 signaling links
- performs the following switching functions:
  - 64 kbps for the signaling links between the TCU e3 and the BSC e3 (Ater interface)
  - 64 kbps for the signaling links between the TCU e3 and the MSC (A interface)
4.3.2.4 TMG functional organization
Figure 4-13 shows the TMG functional organization inside a BSC e3 with TCU e3. Figure 4-14 shows the TMG functional organization inside a BSC e3 with PCUSN. The TCU e3 group and the Cell group notions are introduced with the TMG. The TCU 2G or TCU e3 group manages a group of PCM channels on the A interface and supervises the PCM channels on the A interface allocated by the TCU e3. The Cell group manages the cells, which are associated with a collection of sites. The creation of a collection of sites depends on the operator requests and on the BSC e3 system limits (maximum Erlang quantity to be processed by a cell group).
ATTENTION: The orders to begin the communication between the Control Node, the PCUSN and the Interface Node are sent by the TMG_RPP.
Note: On a platform, each group is used as a core process by the fault tolerance function. The software architecture inside the BSC e3 is fault tolerant on a hardware or software breakdown inside the OMU module or the TMU module (each instance contains an active image and a passive image; when the active image breaks down, the passive image is automatically run).
The main part of the TMG is located in each TMU module. It establishes, modifies, and releases the logical links between the MS (Mobile Subscribers) and the MSC. Each logical link supports:
- a Radio Resource connection
- an SS7 connection on the BSC-MSC interface
The TMG controls the physical connections between the BTS and the MSC via the TCU e3. The BSC e3 processing units used by the TMG contain a processor that enables the communication with the MSC and the TCU e3, and another processor that enables the communication with one or more BTS(s). To enable these functions, the TMG uses the services provided by a radio resource and some terrestrial circuit management units. When the logical links are established, the mobile subscribers can exchange transparent messages (DTAP messages) with the MSC.
[Figure: TMG functional organization showing the OMU ADM, the TCU e3 group (SUP_TCU, TMG_COM), the TCU 2G group (SPT, TMG_S7A, MTP3, MTP1/MTP2 on PMC board with 2 HDLC ports for SS7 (*)), and the Cell group (TMG_COM, OBS_COM, TMG_L1M, TMG_MES, SPR, OBS_RAD, RSL links).]
Note: (*) For the description of the SS7 and the LAPD protocols inside the BSC e3 and the TCU e3, refer to Figure 3-2 Protocol architecture.
Figure 4-13 TMG functional organization inside a BSC e3 with TCU e3
[Figure: TMG functional organization showing the OMU ADM, the TCU e3 group (SUP_TCU, TMG_COM), the TCU 2G group (SPT, OBS_COM, TMG_L1M), the SPP group (SPP), the Cell group (TMG_COM, OBS_CNX, OBS_RAD, TMG_RPP, SPR, RSL links), the ACCESS and LAPD blocks on PMC board (62 HDLC ports for SS7 (*)), and the PCUSN (PCUSN_ACCESS).]
Note: (*) For the description of the LAPD protocols inside the BSC e3 and the PCUSN, refer to Figure 3-4 Protocol architecture.
Figure 4-14 TMG functional organization inside a BSC e3 with PCUSN
A TCH channel allocation request queuing process is included in the radio resource allocator. If no TCH is available, the request is put into a queue according to its priority. When a radio resource becomes available, it is assigned to the request of the highest priority, according to the BTS object parameter tables. These parameters are described in the Operating Manual User Guide. They allow users to set the internal versus external priorities, the number of TCH channels, the maximum waiting time for the requests, the maximum number of requests, and the priority level according to the request cause.
Two lists of 32 frequencies are defined at the OMC-R for each cell. The BSC e3 considers the first BCCH broadcast list (frequencies of neighboring cells for reselection) as the data of the SYS INFO 2 and 2bis, and the second SACCH broadcast list (frequencies of neighboring cells for handover) as the data of the SYS INFO 5 and 5bis.
Note: The TMG has some dedicated mechanisms for managing the AMR channel, especially:
- the allocation of an HR (half rate) or an FR (full rate) radio TS at the call setup
- the allocation of an HR or an FR radio TS for a handover
- the handover from (to) an FR radio TS to (from) an HR radio TS, in order to increase the capacity or the voice quality
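The priority queuing described above can be sketched with a standard heap. `TchQueue` and its parameters are hypothetical stand-ins for the BTS object parameters mentioned in the text, and this sketch assumes that a lower number means a higher priority.

```python
import heapq
import itertools

class TchQueue:
    """Sketch of the TCH allocation request queue.

    Requests wait by priority when no TCH is free; a freed TCH goes to
    the highest-priority (then oldest) waiting request. max_requests
    stands in for the provisioned maximum request number."""

    def __init__(self, max_requests=8):
        self.max_requests = max_requests
        self._heap = []
        self._seq = itertools.count()   # FIFO tie-break within one priority

    def enqueue(self, request, priority):
        """Queue a request; returns False when the queue is full."""
        if len(self._heap) >= self.max_requests:
            return False
        heapq.heappush(self._heap, (priority, next(self._seq), request))
        return True

    def tch_released(self):
        """A radio resource became available: serve the best waiting request."""
        if self._heap:
            return heapq.heappop(self._heap)[2]
        return None
```

The real allocator also expires requests after a maximum waiting time; that timer is omitted here for brevity.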
TMG description
The TMG does the following:
- handles the Radio Resource connections
- handles the BSC e3 transactions
- handles the MSC (and TCU e3) connections
- allocates the resources
- manages the global procedures (connectionless service)
- transfers the transparent messages
The TMG must be divided to cover varying traffic load handling needs. The following operations can increase the traffic handling potential:
- duplicating the TMG instances that enable the communication between the mobile subscribers (MS) and the BTSs
- providing physical processors to back up other traffic
Note: In the BSC e3 architecture, due to the redundancy and in order to simplify the defense during the SWACT of a cell, the communications of both the TMG_CNX and the TMG_RAD migrate during the inter-CG (inter-cell group) handover. This architecture establishes, for the AMR calls, the handover between each zone.
TMG_RST
The TMG_RST performs the following main functions:
- manages the GSM reset downlink procedures
- manages the GSM reset uplink procedures
- updates the status of the MSC traffic
The reset downlink procedure is triggered when the TMG_RST receives a message. The reset uplink procedure is triggered when
The status of the MSC traffic, memorized by each TMG_RST, is:
- CLOSED:
  - at start-up
  - between a reset message and the following reset acknowledge message
- OPEN:
  - between a reset acknowledge and the following reset message
The active TMG_RST is memorized inside the operating system as the master, and the passive TMG_RST as the slave.
TMG_MES
The TMG_MES does the following:
- interfaces with the SS7 functional unit (MSC connection handling) and manages SS7 connections
- distributes BSSMAP messages to the appropriate TMG_CNX
- manages transparent and non-transparent messages between mobile subscribers (MS) and the MSC (message queuing during radio channel handover, etc.)
TMG_COM
The TMG_COM does the following:
- allocates terrestrial circuits (resources)
- performs terrestrial circuit soft blocking
- manages the connectionless BSSMAP procedures related to the terrestrial circuit in both directions (blocking and reset circuit)
TMG_CNX
The TMG_CNX enables the main TMG functions. It manages all the MS-MSC logical-link-dependent procedures and synchronizes the procedures handled by the other TMG functions to establish, update and release links, especially when a radio channel change occurs (from SDCCH to TCH, handover, mode modify). MS-MSC logical link contexts and some observation counters (handover monitoring) are also installed in the TMG_CNX. It interfaces with the following:
- TMG_MES for SS7 connection management needs
- TMG_COM to obtain a terrestrial circuit
- TMG_RAD to execute radio resource procedures
DTM (Download Token Manager)
The main principle of the DTM is to limit the maximum number of simultaneous BTS downloads per BSC e3, per TMU module and per cell group.
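The token-based limiting idea behind the DTM can be sketched as follows. The scope names and limit values are hypothetical; only the per-scope capping principle comes from the text.

```python
class DownloadTokenManager:
    """Sketch of the DTM idea: cap concurrent BTS downloads per scope
    (per BSC e3, per TMU module, per cell group). Limits are assumptions."""

    def __init__(self, limits):
        self.limits = limits                       # scope -> max concurrent downloads
        self.in_use = {k: 0 for k in limits}

    def acquire(self, *scopes):
        """Grant a download only if every scope still has a free token."""
        if any(self.in_use[s] >= self.limits[s] for s in scopes):
            return False
        for s in scopes:
            self.in_use[s] += 1
        return True

    def release(self, *scopes):
        """Return the tokens when a BTS download completes."""
        for s in scopes:
            self.in_use[s] -= 1

dtm = DownloadTokenManager({"bsc": 2, "tmu-1": 1, "cg-1": 1})
assert dtm.acquire("bsc", "tmu-1", "cg-1")        # first download granted
assert not dtm.acquire("bsc", "tmu-1", "cg-1")    # TMU token exhausted
dtm.release("bsc", "tmu-1", "cg-1")
assert dtm.acquire("bsc", "tmu-1", "cg-1")        # token available again
```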
STCH (STate CHange)
The main principles of the STCH are:
- send to the OMC-R, at regular time intervals, a global grouped notification indicating all state changes of BTS objects since the last OMC-R update
- secure the message sending end-to-end with an acknowledgement mechanism; the notification message sending is then under BSC e3 applicative control
- eliminate all transitory state changes between two OMC-R updates
- send state change notifications at the same speed as the OMC-R is processing them
- send grouped state change notifications to the MMI of the local manager
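The grouping and elimination of transitory state changes can be sketched in a few lines: only the latest state of each BTS object since the last OMC-R update survives into the grouped notification. The event shape (object id, state) is an assumption for illustration.

```python
def group_state_changes(events):
    """Sketch of the STCH grouping principle: collapse all state changes
    since the last OMC-R update into one latest-state-per-object notification,
    so transitory intermediate states are eliminated."""
    latest = {}
    for obj, state in events:   # events accumulated between two OMC-R updates
        latest[obj] = state     # a later change overwrites a transitory one
    return latest

events = [("bts-3", "disabled"),   # transitory: eliminated by the next change
          ("bts-3", "enabled"),
          ("bts-7", "degraded")]
notification = group_state_changes(events)
assert notification == {"bts-3": "enabled", "bts-7": "degraded"}
```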
TMG_RPP
The TMG_RPP on each cell group enables the functions needed for:
- SGSN-mobile subscriber communication
- GPRS resources management
- cell management

It contains each information item concerning the GPRS channel configuration, cell by cell, the counters associated with transaction monitoring, and the supervision of the GPRS channels (TCH/F). The TMG_RPP does the following:
- opens the GPRS services on the radio interface (between the MS and the BTS)
- closes the GPRS services on the radio interface (between the MS and the BTS)
- allocates the dedicated TCH/F radio resources to the PCUSN
- removes the dedicated TCH/F radio resources from the PCUSN
- establishes the connection between the Abis interface and the Agprs interface
- gives the radio parameters to the PCUSN

In addition, the TMG_RPP is used to:
- hide, as far as possible, the GPRS capability of the BSC e3 from the MS while the service is not available; the MS is informed of the GPRS service and can perform a connection only when the service is available in the cell
- attribute a pool of Agprs circuits from the SPP; the TMG is in charge of connecting these circuits with radio resources and of informing the PCUSN of the relationship between a radio resource and an Agprs circuit
TMG_RAD
The TMG_RAD on each cell group enables the functions needed for BTS-mobile subscriber communication, radio resource management, and cell management. It contains all the information concerning the radio channel configuration, cell by cell, and the counters associated with radio observations, transaction monitoring, and SDCCH and TCH/F channel supervision. It does the following:
- allocates SDCCH and TCH/F radio channels (resources)
- supervises the BTS-mobile subscriber communication protocol (RR connection)
- manages paging procedures
- manages counters on behalf of the OBR functional unit
- manages the Short Message Service
- detects and resolves overload
- periodically supervises TCH and SDCCH channels
TMG_DBA
The TMG_DBA centralizes the global data on PCMA (crossing table CIC/TCU 2G or TCU e3 group), cells managed by the BSC e3 (crossing table cell/cell group) and global SCCP data (OPC, DPC).
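The crossing tables centralized by the TMG_DBA can be pictured as simple lookup maps. The entries below are hypothetical placeholders; only the table kinds (CIC to TCU group, cell to cell group, global SCCP data) come from the text.

```python
class TmgDba:
    """Sketch of the TMG_DBA crossing tables (entries are hypothetical)."""

    def __init__(self):
        # PCMA crossing table: CIC -> TCU 2G or TCU e3 group
        self.cic_to_tcu = {101: "tcu-e3-grp-1", 102: "tcu-2g-grp-1"}
        # Cell crossing table: cell -> cell group managed by the BSC e3
        self.cell_to_cg = {"cell-17": "cg-2", "cell-18": "cg-2"}
        # Global SCCP data: originating / destination point codes
        self.sccp = {"OPC": 4201, "DPC": 4300}

    def tcu_group_for(self, cic):
        return self.cic_to_tcu[cic]

    def cell_group_for(self, cell):
        return self.cell_to_cg[cell]

dba = TmgDba()
assert dba.tcu_group_for(101) == "tcu-e3-grp-1"
assert dba.cell_group_for("cell-17") == "cg-2"
```

This centralization is what lets the per-TMU TMG_S7A (described next) stay free of permanent data.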
TMG_S7A
TMG_S7A is the interface in the BSC e3 that allows messages to be routed from the SS7_ADM_CA to the right TMG in connectionless mode. It is localized on each TMU module (one instance per TMU module). The TMG_S7A contains no permanent data but it can access the permanent data inside the TMG_DBA.
TMG_L1M
The TMG_L1M performs routing of the messages from the TMG part to the L1M part of the BTS.
OBR overview
The BSC e3 radio observation functional unit (OBR) is in the OMU processing unit. It builds a BSC e3 database containing radio measurements and call tracing, used to set algorithm parameters and check their accuracy. The call tracing results, collected through a new interface with the MSC that is compliant with GSM Phase 2 Rec. 08.08, include data such as handover attempts, radio resource unavailability, handover parameters, handover activity, supplementary services activity, and short message service. Depending on the priority level of the session, the data is either sent directly in event reports to the MSC (highest priority) or by way of files stored on the BSC e3 disk. This process is activated by the MSC.
OBR description
The OBR performs the following:
- interfaces with the CM-CA functional unit, which is used to activate and deactivate observation sessions
- interfaces with the TMG to start or stop radio channel observation
- creates a file on the hard disk that contains the measurements and events for the detection of handovers performed by the BTS L1M functional unit; the information items are stamped in the chronological order of reception
OBS
The purpose of the observation software is to gather information about the BSS and to send it to the OMC. The management of all observation requests for collection and transfer of the observation results is issued from PM_CA. For more information about the PM_CA function, refer to paragraph 4.3.1.2.
SPR
The SPR manages the Radio Resource connection. The main functions of the SPR inside the CallP architecture are described by the SPR function inside the OAM architecture. For more information about the SPR function refer to paragraph 4.3.1.2.
SPP
The SPP manages the PCUSN connection. The main functions of the SPP inside the CallP architecture are described by the SUP_PCUSN function inside the OAM architecture. For more information about the SUP_PCUSN function, refer to paragraph 4.3.1.2.
SS7 overview
The entire SS7 protocol, which is mainly dedicated to the TCU e3, is located in two TMU modules (one active + one passive). Only one SS7 instance manages communications with the active TMG. It does the following:
- supports the following SS7 layered protocols:
  - SCCP (Signaling Connection Control Part)
  - MTP (Message Transfer Part)
- sets up the initial SS7 configuration
- distributes the signaling load over the links of the combined link set that connects the BSC e3 and the MSC
- distributes TMG functional unit messages
SS7 description

SCCP
The SCCP is used to:
- provide a referencing mechanism to identify a particular transaction relating to an instance of a particular call
- enhance the message routing for (for instance) operations and maintenance information
- provide both connectionless and connection-oriented network services

Only 2 SCCP protocol classes are provided:
- Class 0: basic connectionless class
- Class 2: basic connection-oriented class
No segmentation/reassembly is provided. SCCP performs the following functions:
- provides an SCCP discrimination function by discarding any message with a wrong format or not intended for a registered SCCP user; any time a message is discarded, an SCCP Warning Indication message is sent directly to A ACCESS
- maintains a number of connections established at the request of SCCP users; SCCP manages active and stand-by connections at the same time
- receives and transmits User Data in connectionless mode
- re-routes incoming calls from one user instance to another one
- performs the SCMG functionality: controls the availability of all configured remote subsystems by performing a periodic Subsystem Status Test (SST) on all prohibited subsystems, and responds to the SST from all remote subsystems

SCCP can be divided into the following parts:
- SCCP Routing Control (common to all SCCP instances)
- SCCP Connection Control: connection-oriented services; this is the only part to be fault tolerant
- SCCP Connectionless Control (common to all SCCP instances)
- SCCP State Control: SCCP Management (SCMG)
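The SCCP discrimination step can be sketched as a filter on incoming messages. The message fields and subsystem number are illustrative assumptions, not the real SCCP encoding.

```python
def sccp_discriminate(message, registered_subsystems):
    """Sketch of SCCP discrimination: discard messages that are malformed or
    not addressed to a registered SCCP user, signaling a warning each time."""
    if "called_ssn" not in message:                       # wrong format
        return ("discard", "sccp-warning-indication")
    if message["called_ssn"] not in registered_subsystems:
        return ("discard", "sccp-warning-indication")     # no registered user
    return ("deliver", message["called_ssn"])

users = {254}                                             # e.g. a BSSAP user (assumed)
assert sccp_discriminate({"called_ssn": 254}, users) == ("deliver", 254)
assert sccp_discriminate({"called_ssn": 7}, users)[0] == "discard"
assert sccp_discriminate({}, users)[0] == "discard"
```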
MTP
MTP provides a mechanism for the reliable transfer of signaling messages. MTP3, MTP2 and MTP1 correspond to the normalized SS7 layers. MTP3 is centralized on the dedicated TMU board (TMU-SBC). MTP1 and MTP2 are located on the TMU-PMC boards and handle up to two SS7 links.
LAPD
The entire LAPD protocol management unit is located inside the TMU modules of the BSC e3. It does the following:
- handles the LAPD protocol (initialization, message transfer, error detection, fault recovery)
- distributes incoming messages to user entities
- enables the BTS and TCU e3 link access procedures for BSC e3 application message sending needs
- handles the LAPD configuration, which depends on the ADM and SUP functional services
Generic_Access
The Generic_Access is made up of three types of access interfaces:
- IN_Access
- Ater_Access
- LAPD_Access

IN_ACCESS

The management of the LAPD links between the BSC e3 and the TCU 2G is done by the IN_Access application. The IN_Access allows the SPT to have the same behavior in the 2G or e3 environment. The IN_Access is located on the OMU module and is an FT application. The accesses to the TCU for the O&M dialogs are made up of:
- only one LAPD link on the single PCM Ater in the TCU 2G case
- several LAPD links sharing the PCM Ater in the TCU e3 case

Whereas the Ater LAPD links in the BSC 2G are concentrated on the same LAPD port of a SICD board, inside the BSC e3 each LAPD link with a TCU 2G is connected to a different LAPD port of a TMU module chosen by the IN_Access. This difference induces different behaviors regarding the building of these links, the defense and the management of the dialogs.

Ater_ACCESS

The Ater_Access is the access functionality used by the SUP_TCU, which is responsible for the TCU e3 supervision. The SUP_TCU requests the access to open the LAPD links towards the TCU e3 and to build the LAPD dialogs between the BSC e3 and the TCU e3.
LAPD links: Building or deleting LAPD links towards the TCU e3 is requested by the SUP_TCU via the Ater_Access; while building a LAPD link, the SUP_TCU gives the Access information about the Ater PCM. The Access then chooses an unallocated LAPD port on a TMU module. All the links opened by the SUP_TCU are configured for three types of communications: OAM, RSL and FT.

LAPD dialogs: Once the SUP_TCU has obtained the LAPD link establishment from the Access, it asks it to configure the LAPD dialogs on this link. On the same LAPD link, the SUP_TCU can ask to configure OAM, RSL and FT dialogs. The OAM dialog is used by the SUP_TCU to communicate with the TCU e3 and the A_Access. The RSL dialog is used by the traffic management application TMG_COM for call processing. The FT dialog is used while upgrading the TCU e3.

The Ater_Access is also responsible for the routing of CallP messages from TMG_COM. The LAPD link on which a CallP message has to be routed is decided by the Ater_Access. The Ater_Access distributes the CallP messages received on the opened RSL LAPD dialogs, so that each message reaches the TCU e3 specified in the message.
LAPD_ACCESS
The entire LAPD_ACCESS protocol management unit is located inside the TMU modules of the BSC e3. It does the following:
- enables the routing procedure: it sends the LAPD message to the correct LAPD_DL functional unit, on a TRX-site basis on behalf of SUP or on a TDMA-cell basis on behalf of TMG
- interfaces with CM_CA inside the OA&M services group and with the Supervision group; this interface allows the LAPD configuration to be obtained, fixed by the permanent data managed by CM_CA and the dynamic data managed by the Supervision group
- handles the LAPD configuration
- recovers the observation data counters distributed on each LAPD
4.3.2.5
Interface Node
Figure 4-15 shows the CallP organization inside the Interface Node.
NODE_ACCESS
The NODE_ACCESS is used to manage the communication between the Control Node and the Interface Node. Then, it dispatches the messages on the channels located inside the Interface Node.
CallP_IN
The CallP_IN is located in the CEM module and supervised by the Control Node. The CallP_IN houses the switch manager function, which performs the time switch connections with the bearer channels or the signaling channels. When CallP sends a request to connect an Abis or an Ater circuit, the switch manager does the following:
- if the connection is not yet established, it switches a TS of the Abis interface (E1/T1) to a TS of an S-link via the CEM module
- then, it establishes the circuit connection via the 8K-RM module between both S-link TSs, which come respectively from a TS of the Abis interface and a TS of the Ater interface

In the case of handover, the Control Node sends a request to:
- modify the connections: in this case, the switch manager establishes the new links (through Y connections in the 8K-RM module to preserve the voice quality) and then releases the old ones
- release a connection: in this case, the switch manager only releases the time switch inside the 8K-RM module; the connection with the CEM module may be used by other circuits or could be used later

In addition, the commands coming from the CallP_IN are sent to the active and passive CallP_SW8K. Therefore:
- as said previously, it duplicates every switching and provisioning command to both CallP_SW8Ks
- at CallP_SW8K recovery time, it re-synchronizes them autonomously
- all the data it uses for its own business, on the CEM module, must be duplicated on the mate CEM module
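The switch manager's connect and make-before-break handover logic can be sketched as follows. The identifiers and method names are hypothetical; only the ordering (new link first, old link released afterwards) comes from the text.

```python
class SwitchManager:
    """Sketch of the CallP_IN switch manager connecting an Abis TS to an
    Ater TS through the 8K-RM module (identifiers are hypothetical)."""

    def __init__(self):
        self.connections = {}                 # Abis TS -> Ater TS

    def connect(self, abis_ts, ater_ts):
        if abis_ts in self.connections:       # already established: nothing to do
            return False
        # 1. switch the Abis TS to an S-link TS via the CEM module
        # 2. establish the circuit via the 8K-RM module between both S-link TSs
        self.connections[abis_ts] = ater_ts
        return True

    def handover(self, abis_ts, new_ater_ts):
        """Make-before-break: establish the new link, then release the old one
        (the Y connection in the 8K-RM module preserves voice quality)."""
        old = self.connections.get(abis_ts)
        self.connections[abis_ts] = new_ater_ts   # new link established first
        return old                                 # old link released afterwards

sw = SwitchManager()
assert sw.connect("abis-ts-5", "ater-ts-9")
assert not sw.connect("abis-ts-5", "ater-ts-9")    # connection already exists
assert sw.handover("abis-ts-5", "ater-ts-12") == "ater-ts-9"
```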
[Figure: CallP organization inside the Interface Node — NODE_ACCESS, CallP_IN, Pool 8K and Pool PCM on the CEM modules; LSA-RC modules (up to 6 to/from the MSC, TCU 2G and TCU e3; up to 4 to/from the BTSs, speech/data channels); Ater_ACCESS and LAPD_ACCESS interfaces; Switch 64K with CallP_SW64K, CallP_CONF and CallP_CNX; DSP with CallP_TMA]
Figure 4-15
CallP_SW64K
The CallP_SW64K manages each operation done by the CallP_64K located inside the CEM module. The CallP_SW64K uses the connection manager function to perform the 64K connections between:
- the TS S-link of the LSA, which carries the PCM (E1/T1) links on the Ater interface or the PCM (E1/T1) links on the Abis interface, and
- the TS S-link of the 8K-RM module, which performs the 8K/16K switch

These operations, requested by the CallP_SW8K, manage the active and the passive CEM modules and ensure a complete synchronization between them. The passive CEM module does not manage the links with the 8K-RM module; it only processes the communication.
Pool PCM
The Pool PCM is used for allocating TSs on the Ater interface and the Abis interface. For a given TS on a PCM (E1/T1) link, it provides a DS0 on the S-link to interface the active CEM module with an LSA-RC module. The Pool PCM uses the PCM (E1/T1) mapping subsystem. The CallP_SW64K receives a pathend from the Pool_PCM.
CallP_SW8K
The CallP_SW8K provides a non-blocking sub-DS0 rate time switching function on 8 kbps channels, i.e., both bit position and timeslot number are switched between the incoming DS0 and the outgoing DS0. The CallP_SW8K does not perform any maintenance actions on the 8K-RM module; this is the responsibility of the I-Node_OAM group.

Pool_SW8K
The Pool_SW8K is used to provide a free DS0 on the S-link which interfaces the active CEM module with the active 8K-RM module. Through a specific function, Pool_SW8K provides the pathend and the corresponding DS0 identifier to the 8K_proxy. The provider of the Pool_SW8K informs it when no more DS0s are available on the S-link interfaces. In this case, the garbage collector:
- checks all unused connections
- unblocks all unused connections
- reallocates all unused connections
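The pool-exhaustion garbage collection described above can be sketched as follows. The pool size, identifiers and method names are hypothetical; only the reclaim-unused-on-exhaustion behavior comes from the text.

```python
class PoolSw8k:
    """Sketch of the Pool_SW8K DS0 pool with a garbage-collection pass that
    reclaims unused connections when the pool is exhausted."""

    def __init__(self, ds0_ids):
        self.free = list(ds0_ids)
        self.allocated = {}                  # DS0 id -> in_use flag

    def provide(self):
        """Return a free DS0 id, running the garbage collector if needed."""
        if not self.free:
            self._collect_garbage()
        return self.free.pop() if self.free else None

    def _collect_garbage(self):
        """Check every allocated connection; unblock and free the unused ones."""
        for ds0, in_use in list(self.allocated.items()):
            if not in_use:
                del self.allocated[ds0]
                self.free.append(ds0)

    def mark(self, ds0, in_use):
        self.allocated[ds0] = in_use

pool = PoolSw8k(["ds0-1", "ds0-2"])
a = pool.provide(); pool.mark(a, in_use=True)
b = pool.provide(); pool.mark(b, in_use=False)   # held but no longer used
c = pool.provide()                               # exhausted: triggers collection
assert c == b                                    # the unused DS0 was reclaimed
```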
4.3.2.6
Transcoder Node
Figure 4-15 shows the CallP organization inside the Transcoder Node.
NODE_ACCESS
The NODE_ACCESS is used to manage the communication between the Control Node and the Transcoder Node. Then, it dispatches the messages on the channels located inside the Transcoder Node.
CallP_TN
This is a call processing function which is in charge of switching and connecting the following via the CallP_SW64K function:
- the PCM (E1/T1) links on the A interface
- the appropriate transcoding resource
- the PCM (E1/T1) links on the Ater interface

This function supports the management of each resource needed by CallP to establish a connection between the BSC e3 and the MSC. It manages the pool of transcoding resources. In addition, the resource allocation has to manage the pool of Ater interface circuits according to the requested channel coding. Some treatments, such as observations, are not associated with a specific call, so this entity is in charge of collecting the counters and transmitting them to the BSC e3 on request. This entity is also in charge of tracing information associated with the calls or with the pools for maintenance purposes.
CallP_SW64K
The CallP_SW64K function controls each operation performed inside the CEM module. This function is divided into the following main parts:
- switching matrix 64K management
- internal PCM (E1/T1) link monitoring

The CallP_SW64K uses the connection manager function to perform the 64K connections between the TS of the S-link of the LSA and the TRM module which carries the PCM (E1/T1) links. The PCM (E1/T1) links come from:
- the BSC e3 on the Ater interface
- the MSC on the A interface
Pool_PCM
The Pool_PCM function is used to allocate the TS to the:
- BSC e3 via the Ater interface
- MSC via the A interface

For a TS on a given PCM (E1/T1), it provides a DS0 on the S-link which interfaces the active CEM module with an LSA-RC module. The Pool_PCM function manages the PCMA (E1/T1) links on the A interface side, and the PCM (E1/T1) links on the Ater interface side. The PCM (E1/T1) link connecting the TCU e3 and the MSC is considered as an object of the BSS, and is designated PCMA. Using the configuration and operation data provided by the BSC e3, the PCM (E1/T1) link management configures and monitors the PCM link transmission supports for all the associated external and internal PCM links. The PCMA management function generates PCMA operational status indications for changes, which are transmitted to the BSC e3. The external PCM (E1/T1) links are operational as soon as the TCU e3 starts up.
CallP_CONF
The CallP_CONF is used to configure and to supervise, for a given vocoder, an array of communications named an archipelago. The parameters of the vocoder are given by the system:
- speech coding law (A-law or µ-law)
- VAD
- PCM type: E1/T1
- etc.
DSP
The DSP function is used to:
- process each speech/data channel
- adapt the full rate (FR), enhanced full rate (EFR) and adaptive multi-rate (AMR) FR or HR speech coding/decoding; in this case, switching from one vocoding type to another is controlled by information contained in the frames received on the BSC e3 interface
- handle calls in parallel
- roam between the networks of different operators
- transmit speech and text alternately (VCO/HCO)

The DSP function is located between the 120 channels (16 kbit/s or 8 kbit/s) of the BSC e3 and the 120 channels (64 kbit/s) of the MSC. Speech communication is full rate communication (containing 3 kbit/s signaling and 13 kbit/s coded speech) or half rate communication in the case of AMR. Should loss of speech frames occur, substitution procedures are applied. Data channels carry FR data at 9.6 kbit/s, 4.8 kbit/s, or 2.4 kbit/s. Should loss of data frames occur, filling frames are generated from the MSC or the BSC e3.

Note: When AMR is used, the channel rates are the following: 10.2 kbit/s, 6.7 kbit/s, 5.9 kbit/s, or 4.75 kbit/s for the FR rate and 6.7 kbit/s, 5.9 kbit/s, or 4.75 kbit/s for the HR rate.

Error messages are generated to the BSC e3 should loss of synchronization occur or should the transcoder be unable to process the transmitted frames. The DSPs are configured by the CEM module or self-synchronized by the traffic channels.
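The AMR channel rates listed in the note above can be captured in a small lookup; note that only the rates named in this document are shown, not the full set of AMR codec modes defined by the standard.

```python
# AMR codec rates per channel type, as stated in this document (kbit/s).
# The full AMR standard defines additional modes not listed here.
AMR_RATES_KBPS = {
    "FR": [10.2, 6.7, 5.9, 4.75],
    "HR": [6.7, 5.9, 4.75],
}

def allowed_rates(channel_type):
    """Return the AMR rates this document lists for 'FR' or 'HR'."""
    return AMR_RATES_KBPS[channel_type]

assert allowed_rates("HR") == [6.7, 5.9, 4.75]
assert 10.2 in allowed_rates("FR") and 10.2 not in allowed_rates("HR")
```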
CallP_TMA
The CallP_TMA (TMA: TRM Master Application) group interfaces the following with the DSP:
- the CallP_CONF group
- the Fault actor, which is described inside the OAM architecture
4.4
4.4.1
4.4.2
Standards compliancy and detailed requirements

The BSC/TCU e3 clock synchronization requirements are compliant with the recommendations in Telcordia GR-253, GR-1244 and ITU-T G.812/G.813.

Wander performance at the MSC copper timing outputs to the TCU must meet requirements R5-4 through R5-6 as defined in Telcordia GR-1244 (Issue 2, December 2000) and R5-119 as defined in Telcordia GR-253 (Issue 3, September 2000). See Figure 4-16, Wander: MTIE Specifications, and Figure 4-17, Wander: TDEV Specifications.

Jitter performance at the MSC copper timing outputs to the TCU must meet requirement R5-7 as defined in Telcordia GR-1244 (Issue 2, December 2000).
Note: Normal operation is considered to be locked or synced to a stratum 1 clock. Failed modes of operation are internal timing modes where the clock is self-timed rather than tracking a stratum clock; these failed modes of operation are known as free-running and holdover.

See Figure 4-18, Jitter: Maximum Interface Jitter Specifications.

Transient performance at the MSC copper timing outputs to the TCU must meet R5-9 through R5-14 as defined in Telcordia GR-1244 (Issue 2, December 2000). See Figure 4-19, Phase Transient Specifications.
The stability of the clock leaving the TCU via its copper interfaces (which are intended for use by the BSC-IN as timing carriers) is determined by its wander performance, jitter performance, and transient performance, as well as the quality (i.e. stability) of the clock received from the MSC (or intervening node). The output performance requirements are the same as those identified in the second large bullet above.
The stability of the clock between the BSC-CN and the BSC-IN over the optical connection (ATM over SONET) is determined by the BSC-IN wander performance, jitter performance, and transient performance, as well as the quality (i.e. stability) of the clock received from the TCU (or intervening node). The output performance requirements are the same as those identified in the second large bullet above. Copper interfaces carried as payload across optical (SONET/SDH) transport facilities are unsuitable for TCU or BSC-IN timing because of the excessive wander and jitter introduced as a result of pointer justifications at the optical transport layer. GR-1244 and GR-253 both strongly caution against the use of any such copper interfaces for network synchronization purposes.
Figure 4-16 Wander: MTIE Specifications
Figure 4-17 Wander: TDEV Specifications
Figure 4-18 Jitter: Maximum Interface Jitter Specifications
Figure 4-19 Phase Transient Specifications
[Figure: Control Node architecture — OMU modules (active/passive) and TMU modules connected through redundant ATM-SW modules by 4x25 Mb/s ATM links; SCSI-PA and SCSI-PB buses; 155 Mb/s ATM links on optical fiber towards the ATM-RM modules of the Interface Node]
Figure 5-1
5.1
OMU module
Each OMU module is used (see Figure 5-2):
- to manage each MMS module which houses a SCSI disk
- to run and control:
  - each ATM-SW module
  - each TMU module
- to supervise the Interface Node
- to supervise each Transcoder Node
- to supervise the BTSs
- to manage the debug access via the RS232 connector or the RJ45 connector located on the front panel
- to manage an external Ethernet access to the OMC-R or the TML via the RJ45 connector located on the front panel
A double slot is used to install each OMU module. An OMU module houses:
- the following VME boards:
  - OMU-SBC board
  - OMU-PMC board
- an OMU-TM assembly

Each OMU module can have access to:
- one private disk for its private data
- two shared disks managed in a mirroring way; they are used to save the data in the event of an OMU module failure or a disk failure
[Figure: OMU module front panel — visual indicators]
Figure 5-2
5.1.1
External interfaces

The external interfaces of each OMU module are described below:
- on the double front panel:
  - two visual indicators (LEDs)
  - one 9-pin connector for one asynchronous RS232 debug port
  - one RJ45 connector for one 10/100 Mbps Ethernet OMC port
  - one removal request push-button to indicate a request to remove the OMU module
- on the backplane:
  - the redundant ATM links
  - the redundant -48 Vdc links
  - one slot ID
  - the SCSI buses to connect:
    - one SCSI private disk located inside the MMS module
    - two SCSI shared mirrored disks located inside both MMS modules
  - the MTM bus connected to each module via the backplane
  - an Ethernet 10/100 Mbps link between both OMU modules via the backplane
  - an interface between the SIM modules and the OMU modules to connect and to control the -48 Vdc and the alarms to each of the other modules
5.1.2
Electrical characteristics

Each OMU module:
- is powered by the -48 Vdc which comes from the operator boxes via the PCIU frame assembly and the SIM modules
- houses:
  - a DC/DC converter which provides power to each component
  - a ground for each board
  - a fixed fuse to protect each component
5.1.3
Functional description

Figure 5-3 shows each of the main functional blocks housed inside an OMU module.
5.1.3.1
OMU--SBC board
The OMU-SBC board houses two regular VME boards with high processing capability, split up as follows:
- one processor memory board
- one base I/O board

The processor memory board houses the following main components:
- a CPU
- an Ethernet interface for the communication between both OMU modules
- a SCSI/B interface routed via the SCSI block located inside the OMU-TM assembly; it is used to manage one mirrored shared disk located inside an MMS module
- a synchronous interface
PMC board
The PMC board houses a SCSI/P interface routed to the SCSI transceivers, which are located inside the OMU-TM assembly. The PMC board is used to manage one private disk, which is located inside an MMS module.
[Figure: OMU module functional blocks — OMU-SBC assembly (processor memory board with CPU, Ethernet interface, synchronous interface and SCSI/B interface; base I/O board with Ethernet interface and VME64 bus) and OMU-TM assembly (ITM block on the MTM bus, ATM25 blocks A and B, CPU with 32-bit bus, asynchronous interfaces, SCSI transceivers to the SCSI/A, SCSI/B and SCSI/P buses, dc/dc converter on -48 Vdc); front panel with LEDs, reset and asynchronous RS232 on the 9-pin connector; backplane with debug bus, SCSI buses, ATM25 links and the Ethernet link to the other OMU module]
Figure 5-3
The base I/O board houses the following main components:
- a VME64 interface, used to convert a VME64 bus into a PCI bus
  Note: For more information about the VME64 bus, refer to the American National Standard for VME64 (ANSI/VITA 1-1994).
- an asynchronous interface, routed to the asynchronous block located on the OMU-TM assembly and then redirected either:
  - to the 9-pin connector located on the front panel
  - to the debug access bus located on the backplane
  - to the CPU located on the OMU-TM assembly
- a synchronous interface, routed to the 25-pin connector on the OMU module front panel via the SLS block located on the OMU-TM assembly
- an Ethernet interface, used to perform the communication between both OMU modules
- a SCSI/A interface, used to manage a mirrored shared disk located inside the MMS modules via the SCSI bus
5.1.3.2
OMU--TM assembly
The OMU-TM assembly houses an adapter board. It mainly provides:
- a point-to-point ATM25 interface with each ATM-SW module
- a VME interface with the OMU-SBC board
- the power supply to the module via a DC/DC converter
- live insertion capability for the module
- the SBC physical access to all interfaces
- reset control of the SBC
ITM Block
The ITM block is mainly composed of the ITM ASIC. The main functions of this block are to:
- provide a master access to the various resources of each module via the MTM bus
- read the backplane slot ID
- manage several types of information storage (board identity, configuration information, module test data and the fault log)
- select and control each LED located on the front panel
- interface the system with the removal request push-button
ATM25 Block
The AAL (ATM Adaptation Layer) and SAR (Segmentation and Reassembly) processing is handled by the ATM-SAR, which receives all ATM cells to/from both ATM interfaces. Reassembled AAL-1 flow is routed toward a synchronous interface of the OMU-SBC board. Reassembled AAL-5 flow is routed by the CPU toward the OMU-SBC board via the VME bus. The ATM25 block contains the following main components:
- an ATM25 interface: this interface converts each ATM25 link to a Utopia level one bus and vice versa
- an ATM-SAR interface: this interface carries the OAM information and the SS7 and LAPD protocols:
  - it receives, transmits and processes the IP protocol over AAL-5; the IP/AAL-5 cells carry the traffic between the modules located inside the Control Node and the Interface Node
  - it receives, transmits and processes the AAL-1 protocol; the AAL-1 cells carry OAM information for the entire BSS network, and the SS7 and LAPD protocols between the Control Node, the Interface Node and the Transcoder Node
CPU
The main functions of this block are:
- to transport the frames between the ATM-SAR interface and the VME32 block
- to select the ATM25 link from the ATM-SAR interface
- to provide a SWACT condition signal to the other OMU-TM functional blocks
VME32 Block
The VME32 block is used for transmitting and receiving AAL-5 traffic between the OMU-SBC board and the OMU-TM assembly. The VME64 interface transforms the VME64 bus of the base I/O board into the 32-bit bus of the CPU. It is used to transfer the AAL-5 traffic received by the CPU to the VME32 interface and vice versa.
Asynchronous interface
This interface is routed to the DB 9-pin connector on the OMU front panel via the asynchronous interface located on the OMU-TM assembly. The asynchronous communication contains some port selection logic. It is used to provide:
- the capability of tri-stating the backplane asynchronous port during reset and while the OMU module is a slave
- a switchable transparent connection between the OMU-SBC processor asynchronous debug port 1 and the backplane asynchronous port or the OMU-TM processor asynchronous debug port
- a switchable transparent connection between the backplane asynchronous debug port and the OMU-TM processor asynchronous debug port (for the passive OMU module)
- a transparent connection between the front panel asynchronous debug port and the OMU-SBC processor asynchronous debug port 2

SCSI transceivers
This block is used to connect:
- the SCSI/A interface located on the Base I/O board to the first shared mirrored disk
- the SCSI/B interface located on the processor memory board to the second shared mirrored disk
- the SCSI/P interface located on the PCM board to the private disk
Miscellaneous
In addition to the above, the OMU module contains support for test functions provided by the ITM block and routed via a debug bus located on the backplane.
PE/DCL/DD/0126 411--9001--126
Standard
14.10/EN
July 2004
5.1.4
5.2
TMU module
The TMU module (see Figure 5-4) is, from the traffic point of view, a mini BSC with up to 300 Erlangs of capacity. It is based on a standard VME CPU board offering the processing power, and a specific PMC (PCI Module Card) implementing the coupling function. From 2 to 12+2 TMU modules can be installed inside the Control Node, depending on the processing requirements. The TMU module manages a wide range of GSM protocols:
- provides processing power for GSM CallP
- terminates GSM protocols (A, Abis and Ater interfaces)
- terminates low level GSM protocols (LAPD and SS7)
The VME board gives the GSM processing capability, while the PMC board gives the I/O capability (LAPD and SS7). The VME board and the PMC board are SCSA compliant.
A single Spectrum slot is used to install each TMU module, which contains:
- a TMU-SBC assembly, which houses:
  - a regular VME board with high processing capability
  - a TMU-PMC board
  These components are connected to the TMU-TM assembly.
- a TMU-TM assembly, which houses an adapter board that provides:
  - a point-to-point ATM25 interface with each ATM-SW module
  - a VME interface with the TMU-SBC board
Figure 5-4: TMU module front panel, showing the two visual indicators.
5.2.1
External interfaces

The external interfaces of each TMU module are described below:
- on the single front panel:
  - two visual indicators (LEDs)
5.2.2
Electrical characteristics

Each TMU module:
- is powered by the -48 Vdc which comes from the operator boxes via the PCIU frame assembly and the SIM modules
- houses:
  - a dc/dc converter which provides power to each component
  - a ground for each component
  - a fixed fuse to protect each component
5.2.3
Functional description

Figure 5-5 shows each of the main functional blocks, which are housed in a TMU module.
5.2.3.1
TMU-SBC assembly
The TMU-SBC assembly houses:
- a regular VME board with high processing capability
- one PMC board
VME board
The VME board houses the following main components:
- a CPU
- a VME64 interface
  The VME64 interface provides the VME64 bus on the TMU-TM assembly. It is used to transfer the AAL-5 traffic received by the CPU to the TMU-TM assembly and vice versa.
  Note: For more information about the VME64 bus, refer to the American National Standard for VME64 (ANSI/VITA 1-1994).
- an asynchronous interface
  This interface is used, with some multiplexing logic, to select which processor the TMU-SBC board can talk with.
TMU-PMC board
The TMU-PMC board gives the I/O capability (LAPD, SS7) in compliance with the SC bus (SCSA standard). The TMU-PMC contains a master CPU that transfers the DS0 channels into ATM frames.
Figure 5-5: TMU module functional blocks (TMU-SBC assembly with CPU and PCI bus; TMU-TM assembly with ITM block, LEDs, MTM bus, debug access, dc/dc converter on -48 V dc, CPU, 32-bit buses, VME32 block and VME64 bus).
5.2.3.2
TMU-TM assembly
The TMU-TM assembly is mainly responsible for:
- adapting the VME64 bus to the ATM
- adapting the SC-BUS to constant bit rate ATM
- providing backplane communication redundancy
- providing to the SBC a VME64 slot 1 like electrical environment
- providing live insertion capability to the TMU module
- giving the SBC physical access to all interfaces
- controlling the reset of the SBC
ATM25 Block
The AAL (ATM Adaptation Layer) and SAR (Segmentation and Reassembly) processing is handled by the ATM-SAR, which receives all ATM cells to/from both ATM interfaces. The reassembled AAL-1 flow is routed toward the PMC by an SC-BUS. The reassembled AAL-5 flow is routed by the CPU toward the SBC board via the VME bus. The ATM25 block contains the following main components:
- an ATM25 interface
  This interface converts each ATM25 link to a Utopia level one bus and vice versa.
- an ATM-SAR interface
  This interface is used to carry the OAM information and the SS7 and LAPD protocols of the BSC e3. It processes:
  - the IP/AAL-5 protocol
    The IP/AAL-5 cells carry the traffic between the modules located inside the Control Node and the Interface Node.
  - the AAL-1 protocol
    The AAL-1 cells carry:
    - OAM information for the entire BSS
    - SS7 and LAPD protocols between the BSC and the MSC
CPU
The CPU block is used to:
- provide an interface to the VME interface for AAL-5 traffic between the TMU-TM assembly and the TMU-SBC board
- receive the AAL-5 cells from the ATM25 interface
- transmit the AAL-5 cells to the ATM25 interface
- select the ATM25 cells for AAL-1 reception
- provide a SWACT condition signal to the other TMU-TM functional blocks

ITM Block
The ITM block is mainly composed of the ITM ASIC. The main functions of this block are to:
- ensure access to these resources by the OMU Master of the MTM bus
- read the backplane Slot ID
- manage several types of information storage (board identity, configuration information, module test data and the fault log)
- select and control each LED
- check the connections, which are run on the debug access bus
VME Block
The VME block is used to transmit and receive AAL-5 traffic between the TMU-SBC board and the TMU-TM assembly.
Miscellaneous
In addition to the above, the TMU module contains support for test functions provided by the ITM block and routed via a debug bus located on the backplane.
5.3
ATM-SW module
From a hardware perspective, the ATM subsystem is a key factor in the platform's robustness and scalability. This subsystem provides a reliable interconnection of the backplane boards, with live insertion capabilities. The ATM-SW module (see Figure 5-6 and Figure 5-7) houses the following main components:
- an ATM switch
- an ATM25 interface
  It performs the adaptation of the ATM25 links, which are located on the Control Node backplane and arranged in a dual star architecture. These links are used to connect both ATM-SW modules to each other, and each of them to:
  - the TMU modules
  - the OMU modules
- an ATM155 interface
  The ATM155 links are used to connect each ATM-SW module to each ATM-RM module, which are located inside the Interface Node, via the SONET OC-3c optical multimode fibers.
A single slot is used to install each ATM-SW module, which houses the following boards:
- an ATM-SW-SBC board, which houses the ATM cell switch
- an adapter board, named ATM-SW-TM, which interfaces the ATM-SW-SBC with:
  - the ATM25 links
  - the ATM155 links
  - the LEDs
Figure 5-6: ATM-SW module, showing the guide slot and the fiber cable connectors (the Tx connector goes to the Rx connector on the ATM-RM; the Rx connector comes from the Tx connector on the ATM-RM). Note: A connector extender is installed on all SC (Single Contact) connectors mating on the inside of the faceplate to facilitate connector removal. The notch key faces up.
Figure 5-7: ATM-SW module front panel, showing the visual indicators and the TX and RX OC-3 connectors.
5.3.1
External interfaces

The external interfaces of each ATM-SW module are described below:
- on the front panel:
  - two visual indicators (LEDs)
  - one TX connector for the OC-3 optical multimode fiber
  - one RX connector for the OC-3 optical multimode fiber
- on the backplane:
  - redundant ATM links
  - redundant -48 Vdc links
  - one Slot ID
  - MTM bus
5.3.2
Electrical characteristics

Each ATM-SW module:
- is powered by the -48 Vdc which comes from the operator boxes via the PCIU assembly and the SIM modules
- houses:
  - a DC/DC converter which provides power to each component
  - a common ground for each board
  - a fixed fuse to protect each component
5.3.3
Functional description

Figure 5-8 shows each of the main functional blocks which are housed inside an ATM-SW module.
5.3.3.1
ATM-SW-SBC
The ATM-SW-SBC has the following main functions:
- manages the ATM25 switching
- manages the ATM25 adapter
- provides the ATM155 interface
- supervises the passive ATM-SW module

ATM Switch
ATM switching consists first of all in establishing, for each communication, a virtual circuit using a Vc (Virtual Channel) or Vp (Virtual Path). These virtual circuits are established statically according to engineering rules; they are PVCs (Permanent Virtual Circuits). The main function of an ATM switch is to receive cells on a port and switch those cells to the proper output port, based on the Vp and Vc values of the cell. This switching is controlled by a switching table that maps input ports to output ports based on the values of the Vp and Vc fields. While the cells are switched through the switching fabric, their header values are also translated from the incoming value to the outgoing value. Addressing tables converting between Vp, Vc and slot number are loaded from the ATM-SW module at start-up time and stored in the flash EPROM of the ATM part:
- AAL-1 routing tables are dynamic
- AAL-5 routing tables are static
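The table-driven switching and header translation described above can be sketched as follows. This is an illustrative model only, not the BSC e3 implementation; the table entries and port numbers are invented for the example.

```python
# Minimal sketch of PVC switching: a static table maps (input port, Vp, Vc)
# to (output port, Vp, Vc), and the cell header is rewritten on the way out.
# Table contents below are invented for illustration.

SWITCHING_TABLE = {
    # (in_port, vp, vc): (out_port, new_vp, new_vc)
    (1, 0, 32): (5, 2, 100),
    (5, 2, 100): (1, 0, 32),
}

def switch_cell(in_port, vp, vc, payload):
    """Forward one ATM cell through the fabric, translating its header."""
    out_port, new_vp, new_vc = SWITCHING_TABLE[(in_port, vp, vc)]
    return out_port, new_vp, new_vc, payload

result = switch_cell(1, 0, 32, b"\x00" * 48)
```

Because the circuits are permanent (PVCs), the table is fixed at provisioning time; only the cell headers change as cells traverse the fabric.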
ATM25 adapter
Each of the three ATM25 interfaces is used to convert six ATM25 links into a Utopia level one bus.
Figure 5-8: ATM-SW module functional blocks (ITM block with LEDs, MTM bus and debug bus; ATM switch to the backplane; ATM25 interface and ATM155 interface on the Utopia bus; dc/dc converter on -48 V dc; Rx OC-3 connector; ATM-SW-TM assembly).
5.3.3.2
ATM-SW-TM
The ATM-SW-TM manages the following functions:
- interfaces the CC1-SBC to the Control Node backplane
- provides the power supply to the CC1 module
- provides the OC-3c optical multimode fibers
ITM block
The ITM block is mainly composed of the ITM ASIC. The main functions of this block are to:
- ensure access to these resources by the OMU Master of the MTM bus
- read the backplane Slot ID
- manage several types of information storage (board identity, configuration information, module test data and the fault log)
- select and control each LED
- control the connections, which are run on the debug access bus
ATM155 block
This block is used to convert the ATM155 links into SUNI/SDH frames.
OC-3c optical multimode fiber
This block is used to:
- provide access to the TX OC-3c multimode optical fiber
- provide access to the RX OC-3c multimode optical fiber
- supervise the optical link (the errors, the clock sync, etc.)
Miscellaneous
In addition to the above, the ATM- SW module contains support for test functions provided by the ITM block and routed via a debug bus located on the backplane.
5.4
MMS modules
The Control Node houses four MMS modules (see Figure 5-9). Each of them contains a SCSI hard disk. They are split up as follows:
- two shared hard disks, managed in a mirroring way for both OMU modules
  Each of them is used to save the data in the event of a software or hardware failure inside the OMU module or the MMS module.
- one private disk for the OMU-A module
  It is used to save the private data of the OMU-A module.
- one private disk for the OMU-B module
  It is used to save the private data of the OMU-B module.
The MMS module provides circuitry and mechanical features. It is compliant with the Control Node hardware architecture.
5.4.1
External interfaces

The external interfaces of each MMS module are described below:
- on the front panel:
  - two visual indicators (LEDs)
  - one push button to request removal of the MMS module
- on the backplane:
  - SCSI bus
  - one SCSI slot ID
  - MTM bus
5.4.2
Electrical characteristics

Each MMS module:
- is powered by the -48 Vdc which comes from the operator boxes via the PCIU frame assembly and the SIM modules
- houses:
  - a DC/DC converter which provides power to each component
  - a ground for each metallic board
  - a fixed fuse to protect each component
Figure 5-9: MMS module front panel, showing the visual indicators.
5.4.3
Functional block description

Figure 5-10 shows each main functional block inside an MMS module.
ITM block
Its main functions are to:
- read the backplane Slot ID
- manage several types of information storage (board identity, configuration information, module test data and the fault log)
- select and control each LED
- interface the system with the removal request push button
Figure 5-10: MMS module functional blocks (ITM block with LEDs and MTM bus; dc/dc converter on -48 V dc; front panel; SCSI bus).
SCSI Bus
SCSI terminators (see Figure 5-11) are placed on all MMS modules to provide the ability to terminate both ends of the SCSI bus. Depending on the backplane-derived signals, the terminators are either enabled or disabled. The backplane signals are slot dependent, allowing each MMS module to configure itself automatically into its correct role.
Hard disk
The hard disk uses the standard 3.5 inch disk form factor and is mounted on the MMS module. All electrical connections are available through the 80-pin SCA-2 connector and are connected into the MMS module using an 80-way ribbon cable and insulation displacement connectors.
Figure 5-11: SCSI bus connections (SCSI/A and others) between the OMU modules, the MMS private disks and the MMS mirrored shared disks. Legend: some transactions are available only when OMU-A is active, some only when OMU-B is active, and some are always available.
5.4.4
5.5
SIM module
For a description of the SIM module refer to paragraph 1.4.1.5.
5.6
Filler module
For a description of the FILLER module refer to paragraph 1.4.2.2.
The Interface Node:
- communicates with the Control Node via the TCP/IP protocol over AAL-5
- routes the AAL-1 cells over the ATM network for the LAPD and SS7 channels
- converts the AAL-1 cells into DS0 links
- provides the 16 or 8 kbps circuits in the I-Node for bearer speech/data channels between the BTSs and the MSC via the TCU e3, or the SGSN via the PCUSN
Figure 6-1: Interface Node architecture, showing the ATM links (155 Mb/s) on optical fiber from the ATM_SW modules to the ATM-RM modules, the duplicated (active/passive) 8K-RM and CEM modules interconnected by IMC links, and the LSA-RC modules 1 to 5 with their Tx/Rx PCM links.
6-2
The Interface Node hardware architecture for the BSC e3 is based on the general Spectrum platform with the following features:
- high speed telecom interfaces
- twenty-eight general purpose slots for application and interface modules
- two slots reserved for both SIM modules, which provide the -48 Vdc and the alarm links to every other module
Note: The system does not support hot extraction of an active Interface Node CEM. The system does, however, support hot extraction when the CEM is passive. To ensure that the CEM is passive before performing a hot extraction, see Replacement of a CEM in the BSC/TCU e3 Maintenance Manual (411-9001-132).
The fault tolerant architecture is based on duplicated CEM modules. One CEM module is active (that is, actually performing the call processing procedure) while the other is inactive, ready to take over if the active module fails. Both CEM modules:
- receive identical PCM (E1/T1) links from each resource
- can communicate together via the IMC (Inter Module Communication) links, in order to synchronize:
  - call processing
  - maintenance states
Each of the other RMs (ATM-RM, 8K-RM and LSA-RC modules) is connected to the CEM modules via the S-Link interfaces. This results in a point-to-point architecture, which (when compared to a bus architecture) provides:
- superior fault containment and isolation properties
- fewer signal-integrity-related problems
- easier backplane signal routing
In addition to the speech data channel, the S-Link interfaces transport messaging channels and overhead control and status bits between the CEM modules and each RM. Each S-Link interface provides 256 TS. Each slot for:
- an ATM-RM module has access to three S-Link interfaces, or 768 DS0 channels, to each CEM module
- an IEM module located inside the LSA-RC module has access to three S-Link interfaces, or 768 DS0 channels, to each CEM module
- an 8K-RM module has access to nine S-Link interfaces, or 2304 DS0 channels, to each CEM module
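The DS0 counts quoted above follow directly from the 256 TS carried by each S-Link interface; a quick arithmetic check:

```python
# Each S-Link interface provides 256 timeslots (TS), per the Interface Node
# description above.
TS_PER_SLINK = 256

# ATM-RM and IEM slots: three S-Links to each CEM module
atm_rm_ds0 = 3 * TS_PER_SLINK   # 768 DS0 channels

# 8K-RM slots: nine S-Links to each CEM module
rm8k_ds0 = 9 * TS_PER_SLINK     # 2304 DS0 channels
```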
Note: The 8K-RM module can nevertheless be inserted in a generic slot. In the initial product release, however, the 8K-RM module must be installed in a slot with nine S-Link interfaces.
6.1
CEM module
The CEM module (see Figure 6-2) provides the centralized resources required to support the Interface Node applications. The CEM module manages the following functions:
- 64 K switching matrix
- message processing for the Interface Node: OAM and CallP over AAL-5
- control of each of the other resource modules (8K-RM, ATM-RM and LSA-RC modules)
- clock subsystem
- alarm processing
6.1.1
External interfaces

The external interfaces of each CEM module are described below:
- on the front panel:
  - two visual indicators (LEDs)
  - an Ethernet link on the RJ45 connector to connect the TML
  Note: To find a hardware fault or a software fault inside the Interface Node, you can connect the TML to the CEM module only after having plugged the TML on the OMU module or on the optional HUB(s).
- on the backplane:
  - S-link redundancy
  - -48 Vdc links redundancy
  - IMC link redundancy
  - MTM bus
  - one Slot ID
6.1.2
Electrical characteristics

Each CEM module:
- is powered by the -48 Vdc which comes from the operator boxes via the PCIU frame assembly and the SIM modules
- houses:
  - a DC/DC converter which provides power to each component
  - a common ground for each board
  - a fixed fuse to protect each component
Figure 6-2: CEM module front panel, showing the visual indicators (one position is unused).
6.1.3
Functional Blocks

Figure 6-3 shows each of the main functional blocks, which are housed inside a CEM module.
6.1.3.1
S-Link interfaces

The S-Link interfaces are split up as follows:
- a reception part
- a transmission part
This block converts between the parallel data format used on the CEM module and the 256 DS0 serial links that interface with each ATM-RM, 8K-RM and IEM module. In addition, it distributes the system clock to the RMs, and provides low level control and status by means of overhead bits embedded in the S-Link format. Physically, this block is composed of 96 S-link interfaces.

6.1.3.2 Switching matrix 64 K

Bandwidth allocator
The bandwidth allocator contains two parts:
- a selection part (BWA-S)
- a distribution part (BWA-D)
This block is used to:
- groom platform overhead DS0 from the S-Link interfaces to the timeswitch, and vice versa, allowing for variations in TS usage by each RM (ATM-RM, 8K-RM and IEM modules)
- selectively merge the DS0 stream with the S-Links
- apply digital padding (selectable on a per-DS0 basis) to DS0 from the timeswitch
- extract and insert messaging channels from the DS0 stream, and present them to the messaging block
- provide a mechanism to switch from an active RM to a passive RM for the sparing operation
Timeswitch
The timeswitch provides the single DS0 rearrangement functions. It has a 12K DS0 capacity and is a double-buffered (N x DS0 capability) design, based on the ENET components.
Figure 6-3: CEM module functional blocks (CPU with reset; ITM block with LEDs and MTM bus; IMC block; Ethernet interface on the front panel; dc/dc converter; S-link interfaces; messaging block; timeswitch; BWA selection).
6.1.3.3
ITM block
The ITM block provides and interfaces the MTM bus between the CEM module and the other RMs. These functions can also be controlled and accessed by the CPU.
6.1.3.4 Messaging block
This block provides the protocol messaging functionality. Up to 32 messaging ports can be provisioned. Each port allows a bandwidth of N x DS0. The messaging ports are used for:
- host messaging
- RM messaging
- IMC messaging
- one spare port for diagnostics
6.1.3.5 CPU
The CPU is responsible for local initialization, configuration, and maintenance of the RMs, as well as communication with the CEM module via the S-Link messaging facility. The CPU contains the main functions required to accommodate the various bus formats used on the RM. In addition, it provides serial access for messaging through the S-link interfaces.
6.1.3.6 Ethernet interface
The CEM module provides a 10BaseT Ethernet interface for the attachment of a TML, as well as debug tools.
6.1.3.7 Clock subsystem
This clock subsystem is used to generate the Interface Node system clock. It can generate a clock phase locked (8 kHz) to phase information acquired from an RM slot. In addition, this block determines the CEM module activity and coordinates the SWitch of ACTivity (SWACT) between an active RM and a passive RM. The clock is distributed to the RMs via the S-link interfaces.
6.1.3.8 IMC block
This block is used to connect both CEM modules via the IMC links. An IMC link has a bandwidth of 126 DS0. It is a specific interface dedicated to the messaging between both CEM modules.
6.2
ATM-RM module
The ATM-RM module provides the centralized resources required to support the Interface Node applications. The ATM-RM module provides (see Figure 6-4 and Figure 6-5):
- a SONET OC-3 physical interface that allows direct connection to the ATM network located on the Control Node
- interworking functionality between the cell based ATM network (ATM25 interfaces) located on the backplane of the Control Node and the switching network (S-link interfaces) located on the backplane of the Interface Node
The main functions of the ATM-RM module are:
- interface to an OC-3 optical multi-mode fiber
- termination of the ATM Forum specified SONET transport and path overhead
- termination of ATM OAM and CallP cells
- mapping of DS0 to ATM cells over AAL-1 for Nx64 connections and speech/data channels
6.2.1
External interfaces

The external interfaces of each ATM-RM module are described below:
- on the front panel:
  - two visual indicators (LEDs)
  - one TX connector for the OC-3 multi-mode optical fiber
  - one RX connector for the OC-3 multi-mode optical fiber
- on the backplane:
  - S-link redundancy
  - -48 Vdc links redundancy
  - one Slot ID
6.2.2
Electrical characteristics

Each ATM-RM module:
- is powered by the -48 Vdc which comes from the operator boxes via the PCIU frame assembly and the SIM modules
- houses:
  - a DC/DC converter which provides power to each component
  - a common ground for each board
  - a fixed fuse to protect each component
Figure 6-4: ATM-RM module front panel, showing the visual indicators and the TX and RX OC-3 connectors.
Figure 6-5: ATM-RM optical connections, showing the attenuator. Note: A connector extender is installed on all SC (Single Contact) connectors mating on the inside of the faceplate to facilitate connector removal. The notch key faces up.
6.2.3
Functional Blocks

Figure 6-6 shows each of the main functional blocks, which are housed inside an ATM-RM module.
6.2.3.1
ITM block
The ITM block provides and interfaces the MTM bus between the ATM- RM module and the other RMs. These functions can be also controlled and accessed by the CPU.
6.2.3.2 CPU
The CPU contains the main functions required to accommodate the various bus formats used on the RM. In addition, it provides serial access for messaging through the S-link interfaces.
6.2.3.3 S--link Interfaces
The S-Link interfaces interface the ATM-RM module to the CEM modules via the backplane of the Interface Node. The S-Link interfaces provide DS0 connectivity and the SPM messaging infrastructure. In total, the ATM-RM module can access nine S-Link interfaces, or 2304 DS0 channels. For the Interface Node, it has access to only three S-Link interfaces, or 768 TS, to each CEM module.
6.2.3.4 ATM/DS0 Adaptation
The mapping of the ATM cells to DS0 channels is performed by the Nortel Networks standard device designated the AAL-1 entity, or AAE. A pair of AAEs is used on the ATM-RM module to map up to 2048 ATM virtual circuits into a maximum of 2048 DS0s. This mapping has two general cases: Nx64 trunking, and single DS0 trunking (speech/single channel data traffic). Nx64 traffic aggregates multiple DS0s from a single frame into a single data path.
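The two trunking cases can be pictured as a table from virtual circuit to one or more DS0s. This is an illustration only: the VC and DS0 numbers are invented, and the AAE device itself is not modeled.

```python
# Each ATM virtual circuit maps either to a single DS0 (speech or
# single-channel data) or to an Nx64 group of DS0s aggregated into
# one data path. Entries below are invented for the example.
VC_TO_DS0 = {
    33: [7],             # single DS0 trunking: one 64 kbps speech channel
    34: [8, 9, 10, 11],  # Nx64 trunking: a 4 x 64 kbps = 256 kbps data path
}

def ds0s_for_vc(vc):
    """Return the DS0 channels carrying the given virtual circuit."""
    return VC_TO_DS0[vc]

nx64_bandwidth_kbps = len(ds0s_for_vc(34)) * 64
```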
6.2.3.5
OC-3 optical interface

The optical module is used to:
- provide access to the Control Node via the:
  - OC-3 multi-mode optical fiber for the transmission
  - OC-3 multi-mode optical fiber for the reception
- supervise the optical link (the errors, the clock sync, etc.)
The current design requires an external attenuator to limit the optical power for the multi-mode interfaces.
Figure 6-6: ATM-RM functional blocks (ITM block with LEDs, MTM bus and reset; front panel with Tx and Rx OC-3c connectors to/from the ATM_SW via the OC-3 optical multi-mode fiber; dc/dc converter on -48 V dc; ATM/DS0 adaptation).
6.3
8K-RM module
The 8K-RM module is also named SubRate Timeswitch (see Figure 6-7). It is an application-specific circuit module which performs a timeswitch function on sub-DS0 rate channels, allowing for the efficient switching of 8 and 16 kbps channels. The role of the 8K-RM module is to add subrate switching capability to the cabinet, as the CEM module is only capable of switching at the DS0 level (64 kbps channels). It provides a secondary stage of switching of individual bits within DS0s, supporting up to 16 kbps channels (contained in 2268 DS0s). It manages the following main functions:
- transmits and receives data to/from two (active and inactive) CEM modules via nine S-links
- provides non-blocking sub-DS0 rate timeswitching on 8 kbps channels
6.3.1
External interfaces

The external interfaces of each 8K-RM module are described below:
- on the front panel:
  - two visual indicators (LEDs)
- on the backplane:
  - S-link redundancy
  - -48 Vdc links redundancy
  - MTM bus
  - Slot ID
6.3.2
Electrical characteristics

Each 8K-RM module:
- is powered by the -48 Vdc which comes from the operator boxes via the PCIU frame assembly and the SIM modules
- houses:
  - a DC/DC converter which provides power to each component
  - a common ground for each board
  - a fixed fuse to protect each component
Figure 6-7: 8K-RM module front panel, showing the visual indicators (one position is unused).
6.3.3
Functional Blocks

Figure 6-8 shows each of the main functional blocks, which are housed in an 8K-RM module.
6.3.3.1
ITM block
The ITM block provides and interfaces the MTM bus between the 8K-RM module and the other RMs. These functions can also be controlled and accessed by the CPU.
6.3.3.2 S-link interfaces

The S-Link interfaces interface the 8K-RM module to the CEM modules via the backplane of the Interface Node. The S-Link interfaces provide DS0 connectivity and the SPM messaging infrastructure. In total, the 8K-RM module can access nine S-Link interfaces, or 2304 DS0 channels.
6.3.3.3 Switching Matrix
The 8K-RM module switches 32,768 1-bit channels at an 8 kHz frame rate and operates on 8 channels in parallel. Data coming into and going out of the module is likewise formatted as 8 channels per DS0. A separate connection memory is required for each speech memory. The complete matrix is composed of eight channels in parallel, one for each bit of the outgoing DS0. Incoming data samples are sequentially written into one half of the speech memory while they are simultaneously being read out of the other half.
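The bit-per-channel view of a DS0 described above can be sketched with a toy model. This is illustrative only: the real module switches 32,768 one-bit channels with double-buffered speech and connection memories, which are not represented here.

```python
# Subrate switching treats each DS0 octet as 8 one-bit channels (one per bit
# position), so an 8 kbps channel occupies a single bit of every DS0 octet.

def split_bits(octet):
    """Split one DS0 octet into its 8 one-bit subrate channels (MSB first)."""
    return [(octet >> (7 - i)) & 1 for i in range(8)]

def merge_bits(bits):
    """Rebuild a DS0 octet from its 8 one-bit subrate channels."""
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out

# Move one 8 kbps subrate channel (bit position 0) between two DS0s:
a, b = split_bits(0b10000000), split_bits(0b00000000)
a[0], b[0] = b[0], a[0]
moved_a, moved_b = merge_bits(a), merge_bits(b)
```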
6.3.3.4 CPU
The CPU contains the main functions required to accommodate the various bus formats used on the RM. In addition, it provides serial access for messaging through the S-link interfaces.
6.3.3.5 Messaging block
This block provides the protocol messaging functionality. Up to 32 messaging ports can be provisioned. Each port allows a bandwidth of N x DS0. The messaging ports are used for:
- host messaging
- RM messaging
- IMC messaging
- one spare port for diagnostics
Figure 6-8: 8K-RM module functional blocks (ITM block with LEDs, MTM bus and reset; dc/dc converter on -48 V dc; S-link Tx and Rx interfaces).
6.4
LSA-RC module
The Low Speed Access for the Interface Node is defined as PCM (E1/T1) interfaces and is achieved via:
- an LSA-RC module housed inside the Interface Node shelf
- a CTU module housed inside the SAI frame assembly
The LSA-RC module provides an electrical interface for the signals on the PCM (E1/T1) links. The CTU module provides:
- copper management
- manual loopback
- lightning protection
- impedance matching
Figure 6-9 shows the electrical architecture between an LSA-RC module in the BSC e3 frame and a CTU module in the SAI frame. The LSA-RC module provides access to 21 PCM E1 links or 28 PCM T1 links. This quantity of PCM (E1/T1) links allows full utilization of the S-Link bandwidth available on the backplane.
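The claim that 21 E1 or 28 T1 links fill the available S-Link bandwidth can be checked against the standard framing figures (32 timeslots per E1 frame, 24 per T1 frame) and the 768 DS0 available per three S-Links, as described in the Interface Node overview:

```python
# Standard framing figures for the check (these are general PCM facts,
# not values taken from this manual):
E1_TIMESLOTS = 32    # E1 frame, including the TS0 framing slot
T1_TIMESLOTS = 24    # T1 frame

# Three S-Link interfaces of 256 TS each, per IEM slot:
SLINK_DS0 = 3 * 256  # 768 DS0 channels

e1_ds0 = 21 * E1_TIMESLOTS   # 672 DS0 for the E1 configuration
t1_ds0 = 28 * T1_TIMESLOTS   # 672 DS0 for the T1 configuration
```

Both configurations come to 672 DS0, fitting within the 768 DS0 available on three S-Links.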
It occupies three slots on the Interface Node shelf and consists of the following modules (see Figure 6-10 and Figure 6-11):
- a duplicated pair of IEM modules
- a single TIM module
- an RCM (Resource Complex Mini backplane)
Due to the large quantity of PCM (E1/T1) links carried on a single LSA-RC module, the system has been designed to minimize the possibility that the failure of a single module will cause the failure of an entire LSA-RC module. This is accomplished by duplicating the IEM modules, which contain all of the electronic circuitry. The IEM module that is receiving the PCM signals is considered the active IEM module, while the other is considered the passive IEM module.
The IEM module is used to:
- provide PCM (E1/T1) interfaces with each BTS: Abis interface
- provide PCM (E1/T1) interfaces with the TCU e3: Ater interface
- convert the PCM (E1/T1) links into DS0 on the S-link interfaces. The DS0 transport:
  - SS7 channels
  - LAPD channels
  - speech/data channels
- format the data in the proper high speed serial format necessary for processing by the CEM module
Figure 6-9: Electrical architecture between the CTU/CTMx boards in the SAI frame and the LSA-RC module (IEM 0, IEM 1 and RCM) in the BSC e3 frame. Note: the bold lines show the PCM external links; the regular lines show the PCM internal links. CTMx corresponds to seven:
- CTMD for 28 PCM T1 links, TW pair
- CTMC for 21 PCM E1 links, TW pair
- CTMP for 21 PCM E1 links, coax
Figure 6-10: LSA-RC module, showing the RCM, the TIM module and the IEM modules.
Figure 6-11: LSA-RC front panels (IEM, TIM, IEM), showing the visual indicators.
6.4.1
IEM module

Two versions of the IEM module (see Figure 6-11) are available:
- one interfaces twenty-one 120 ohm four-wire PCM E1 links or twenty-one 75 ohm coax PCM E1 links
- the other interfaces twenty-eight 100 ohm four-wire PCM T1 links
The IEM module contains a PCM (E1/T1) adaptor to terminate each PCM link. The speech/data channels from all of the PCM (E1/T1) links are multiplexed onto the S-link interface via the PCM (E1/T1) adaptor, and vice versa. The LSA-RC module provides management for either 21 E1 or 28 T1 PCM links. Nevertheless, each PCM (E1/T1) is an individual network element that can connect to another piece of transmission equipment located a great distance away. During troubleshooting activities, it is desirable to quickly classify PCM (E1/T1) faults. To this end, the LSA-RC module provides loopback switches for each PCM (E1/T1) via the CTMx boards which are housed inside the SAI frame. It is very important, in the troubleshooting and maintenance of transmission equipment, that the loopback feature be operated only after the PCM (E1/T1) has been prepared for maintenance. Failure to do so could cause the unexpected termination of multiple network connections. After the BSC e3 cabinet is fully installed and all the cables are dressed within cable management facilities, it will not be obvious to the operator which loopback push button on the CTMx board is associated with the corresponding LSA-RC module. The solution to this small dilemma is to know that (see Figure 1-23 and Figure 1-24):
- the minimum port number of a PCM (E1/T1) is located on the bottom left of a CTU module
- the maximum port number of a PCM (E1/T1) is located on the top right of a CTU module
A PROM resident inside the IEM modules enables functions that are directly related to the type of module that hosts them. However, their common purpose is to detect any changes in their physical environment (external alarm loops, PCM (E1/T1) connections).
They do the following:
initialize board software and hardware by configuring the data loaded by the BSC e3 to match the host board operating needs
detect and confirm physical changes in the BSC e3 environment (PCM alarms, alarm loops, etc.)
report events to the BSC e3
The frame alignment circuit raises the following alarm condition signals:
LOS - Loss Of Signal
Fault Definition
The LOS defect is detected when the incoming signal has no transitions. The LOS defect is cleared when the incoming signal regains an average pulse density. The LOS fault is declared when the LOS defect has been present at any time during the previous one-second interval.
LED Requirement
The LOS LED is turned:
ON when the LOS GSM fault is the highest ranking GSM fault
OFF when the LOS GSM fault is no longer the highest ranking GSM fault
AIS - Alarm Indication Signal
Fault Definition
On the T1 line, AIS is represented by an (unframed) all-ones signal. Internally, AIS is represented by all-ones in all timeslots. An AIS defect is:
detected when the incoming signal is an unframed signal with ONEs density present for a time equal to or greater than T, where T is 3 ms to 75 ms
cleared within a time period T when the incoming signal no longer meets either the ONEs density or the unframed signal criteria, where T is 3 ms to 75 ms
The AIS fault is declared when the AIS defect has been present at any time during the previous one-second interval.
LED Requirement
The AIS LED is turned:
ON when the AIS GSM fault is the highest ranking GSM fault
OFF when the AIS GSM fault is no longer the highest ranking GSM fault
LFA (Loss of Frame Alignment) for E1, or LOF (Loss Of Frame) for T1
Fault Definition
Frame alignment is considered lost when three consecutive frame alignment signals are received in error. It is also considered lost when bit 2 of TS0 is received in error in three consecutive frames not containing the frame alignment signal. LFA or LOF should be detected within 12 ms. The loss must be confirmed over several frames to avoid unnecessarily initiating the frame alignment recovery procedure because of transmission bit errors. The frame alignment recovery procedure should begin immediately once an LFA or a LOF has been confirmed. The maximum average reframe time should not exceed 15 ms (for ESF) or 50 ms (for SF); this is the average time to reframe when the maximum number of bit positions must be examined to locate the frame alignment signal. The LOF fault is declared when the LOF defect has been present at any time during the previous one-second interval.
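The three-consecutive-errors confirmation rule above can be sketched as a small counter. This is an illustrative model only, not the actual framer firmware; the class and method names are invented:

```python
# Minimal sketch of LFA/LOF confirmation: alignment is declared lost only
# after three consecutive errored frame alignment signals, so isolated
# transmission bit errors do not trigger the recovery procedure.
class FrameAlignmentMonitor:
    def __init__(self):
        self.consecutive_errors = 0
        self.aligned = True

    def frame_received(self, alignment_ok):
        """Feed one frame's alignment check result; return alignment state."""
        if alignment_ok:
            self.consecutive_errors = 0   # a good frame resets the counter
        else:
            self.consecutive_errors += 1
            if self.consecutive_errors >= 3:
                self.aligned = False      # LFA/LOF confirmed; recovery starts
        return self.aligned
```

Two errored frames followed by a good one leave the monitor aligned; only a third consecutive error confirms the loss.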
LED Requirement
The LFA LEDs or LOF LEDs are turned:
ON when a frame alignment GSM fault is the highest ranking GSM fault
OFF when a frame alignment GSM fault is no longer the highest ranking GSM fault
RAI - Remote Alarm Indicator
Fault Definition
For SF framing, RAI is detected by bit 2 being set to 0 in every channel time slot. For ESF framing, RAI is detected by the frame alignment alarm sequence (FF00). The RAI fault is declared when the RAI defect has been present at any time during the previous one-second interval.
LED Requirement
The RAI LED is turned:
ON when the RAI GSM fault is the highest ranking GSM fault
OFF when the RAI GSM fault is no longer the highest ranking GSM fault
The application manages LFA for E1 or LOF for T1, AIS, and RAI. The corresponding alarm LED lights for at least 200 ms upon each alarm occurrence. The LEDs on the front panel of the board display these alarms.
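The "highest ranking GSM fault" behavior described above for the LOS, AIS, LFA/LOF and RAI LEDs can be sketched as follows. The priority order used here is an assumption for illustration (the manual does not state the ranking), and the function name is invented:

```python
# Sketch: only the LED of the highest-ranking active fault is turned ON,
# as described for the per-PCM alarm LEDs. Ranking below is ASSUMED.
FAULT_PRIORITY = ["LOS", "AIS", "LFA", "RAI"]  # highest first (assumption)

def led_states(active_faults):
    """Map a set of active fault names to LED on/off states."""
    leds = {name: False for name in FAULT_PRIORITY}
    for name in FAULT_PRIORITY:
        if name in active_faults:
            leds[name] = True   # highest ranking active fault wins
            break               # all lower-ranking LEDs stay OFF
    return leds
```

With both AIS and RAI active, only the AIS LED would be lit under this assumed ranking; when AIS clears, the RAI LED takes over.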
6.4.1.1
Figure 6-11 shows the front panel of both IEM modules. The T1 version of the faceplate is virtually a subset of the E1 version and is not shown in Figure 6-11; the only difference is that the acronym LFA is displayed as LOF. The behavior described here applies in either case.
Visual indicators
The visual indicators at the top of the front panel, which contain the red and green LEDs, are standard Spectrum LEDs. They indicate the IEM module or TIM module status. For more information about these LEDs, refer to their generic description in paragraph 1.4.1.5.
LSA Specific Requirements on the Interactive front panel
Primarily, the LEDs on the front panel display specific information about alarms on the PCM (E1/T1) links. Because the alarm display requirements differ between PCM E1 and PCM T1 links, slightly different sets of LEDs, and therefore different faceplates, are required. The philosophy regarding the display of alarms is to display NO information when there are NO problems to report. As an exception to this rule, OK is displayed on the alpha-numeric LED of the active IEM module when there are no alarms to report. This passively indicates to the operator which IEM module is active and which is passive. The interactive portion of the front panel consists of the following elements:
a multiple span failure indication LED
signal failure indication LEDs for a PCM E1 link: LOS, AIS, LFA, and RAI
signal failure indication LEDs for a PCM T1 link: LOS, AIS, LOF, and RAI
a PCM (E1/T1) number failure indicator
increment/decrement controls to show alarms for multiple failed spans
Requirements are presented here for each of these elements as they apply to a GSM LSA E1 or T1.
If there is a failure in zero or one PCM (E1/T1) link, the multiple PCM (E1/T1) LED must be off. If one or more additional alarms are detected on other PCM (E1/T1) links within the same IEM module, the multiple PCM (E1/T1) alarm LED begins to blink, signalling to the operator that more information is available by pressing one of the arrow keys. Pressing an arrow key increments (up arrow) or decrements (down arrow) the information displayed on the interactive front panel to the next fault alarm. As before, the number of the troubled PCM is displayed along with the fault condition LED. The LSA-RC module manages either 21 PCM E1 links or 28 PCM T1 links, but in the outside plant each PCM (E1/T1) is an individual network element that can connect to another piece of transmission equipment located a great distance away.
Signal failure indicator
The signal failure LEDs have a transparent text cover which indicates the type of signal failure detected in the receive signal of the IEM module. The following requirements describe how these LEDs must be used by an E1 or a T1 LSA:
no failure indicator LED is lit if the span failure indicator shows OK, XX or blank
if a failure exists in one or more PCM (E1/T1) links, the signal failure displayed on the faceplate must reflect the PCM (E1/T1) number shown in the PCM (E1/T1) indicator
only the highest severity signal failure is presented on the front panel for a given PCM (E1/T1)
if a failure on a PCM (E1/T1) is cleared, the failure must be removed from the faceplate at the same time the clear is reported by defect monitoring
The PCM (E1/T1) failure indicator is used in conjunction with the signal failure indicators to show the type of failures encountered on a given PCM (E1/T1). The following requirements describe how the PCM (E1/T1) failure indicator is used on the interactive front panel of the IEM module:
it is blank until the IEM module is brought into service
it contains the text OK on the active IEM module when all provisioned:
PCM E1 links have no signal failures (LOS, AIS, LFA, or RAI)
PCM T1 links have no signal failures (LOS, AIS, LOF, or RAI)
it is blank (no text or symbol) on the inactive IEM module
it contains the text XX on the active IEM module when there is a problem with the copper connection between the IEM module and the SAI frame. This indication is shown regardless of whether carriers are in service
if a single PCM (E1/T1) failure occurs, the PCM (E1/T1) failure indicator is updated to show the affected PCM (E1/T1) number. The PCM E1 links are numbered from 0 to 20 inclusive; the PCM T1 links are numbered from 0 to 27 inclusive
for multiple PCM (E1/T1) failures, the PCM (E1/T1) failure indicator shows the last viewed PCM (E1/T1). The displayed PCM (E1/T1) information is not changed as a consequence of a new PCM (E1/T1) failure
PCM (span) failures must be sorted by PCM (E1/T1) number. The PCM (E1/T1) links must not be listed in order of failure occurrence
if a failure is cleared on the PCM (E1/T1) link displayed on the front panel and multiple PCM (E1/T1) links are experiencing failures, the front panel is updated to show the failure on the next lowest numbered PCM (E1/T1). If the clearing event is associated with the lowest numbered PCM (E1/T1), the next PCM (E1/T1) is shown
if a failure is cleared on the PCM (E1/T1) displayed on the faceplate and this is the only PCM (E1/T1) failure present, the PCM (E1/T1) failure indicator is updated to show OK, provided no cabling problems exist
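The indicator text rules above (blank, OK, XX, or a span number) can be summarized in one small decision function. This is a sketch under the stated rules; the function and parameter names are invented:

```python
def span_indicator_text(in_service, cabling_ok, failed_spans, viewed=None):
    """Text shown on the PCM (E1/T1) failure indicator of the active IEM.
    failed_spans: set of failed PCM span numbers; viewed: last span shown."""
    if not in_service:
        return ""            # blank until the module is brought into service
    if not cabling_ok:
        return "XX"          # copper connection problem toward the SAI frame
    if not failed_spans:
        return "OK"          # all provisioned links alarm-free
    spans = sorted(failed_spans)       # failures sorted by span number
    if viewed in spans:
        return str(viewed)             # keep showing the last viewed span
    return str(spans[0])               # otherwise show the lowest failed span
```

For example, a single failure on span 7 displays "7", while a cabling fault overrides everything with "XX".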
The increment/decrement PCM (E1/T1) controls are used by the operator to check the status of multiple failed PCM (E1/T1) links. The increment/decrement controls are active only when the multiple PCM (E1/T1) links LED indicates multiple failures; they do nothing when there is a single PCM failure or the span indicator shows OK, XX, or blank. If the increment control (up arrow) is used when multiple PCM (E1/T1) link failures are present, the PCM (E1/T1) failure indicator is updated to show the next highest sorted PCM (E1/T1) link failure. If the last selected span had the highest value of any failed span, the next span shown is the one with the lowest failed span value. If the decrement control (down arrow) is used when multiple PCM (E1/T1) link failures are present, the PCM (E1/T1) failure indicator is updated to show the next lowest sorted PCM (E1/T1) link failure. If the last selected PCM (E1/T1) had the lowest value of any failed PCM (E1/T1), the next PCM (E1/T1) shown is the one with the highest failed PCM (E1/T1) value.
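The wraparound behavior of the up/down arrows can be modeled as index arithmetic over the sorted list of failed spans. An illustrative sketch (function name invented):

```python
def next_failed_span(failed_spans, current, direction):
    """Return the span number shown after one arrow press.
    Spans are sorted by number, with wraparound at either end."""
    spans = sorted(failed_spans)
    if len(spans) < 2:
        return current        # controls do nothing for 0 or 1 failure
    i = spans.index(current)
    step = 1 if direction == "up" else -1
    return spans[(i + step) % len(spans)]  # modulo gives the wraparound
```

With failed spans {2, 7, 19}, pressing up from 19 wraps to 2, and pressing down from 2 wraps to 19.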
IEM backplane
Each IEM module:
is powered by the -48 Vdc which comes from the operator boxes via the PCIU frame assembly and the SIM modules
houses:
a dc/dc converter which provides power to each component
a common ground for each board
a fixed fuse to protect each component
6.4.1.3 Functional Blocks
Figure 6-12 shows each of the main functional blocks, which are housed inside an IEM module.
ITM block
The ITM block interfaces the MTM bus between the ATM-RM modules and the other RMs. These functions can also be controlled and accessed by the CPU.
PCM adaptor
The PCM adaptor performs the following functions on the 21 E1 or 28 T1 PCM links:
line driving
framing
receive path elastic store
(TX) and (RX) mapper
It is used to transfer the PCM (E1/T1) channels between the channels on the S-Link interface and the respective channels of the PCM (E1/T1) adaptor. In addition, it creates the clock and synchronization signals needed by the PCM (E1/T1) adaptor. It also provides a capability to loop back all the channels received from the framers to their respective transmit data inputs. The Receive Mapper receives serial data from each PCM (E1/T1) adaptor at the line rate (2.048 Mbps for PCM E1 links or 1.544 Mbps for PCM T1 links). These data streams are converted to parallel bytes and multiplexed across the PCM (E1/T1) links. The Transmit Mapper does the opposite of the Receive Mapper.
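The Receive Mapper's serial-to-parallel conversion can be illustrated with a toy bit-to-byte demultiplexer. This is a simplified sketch (MSB-first, whole octets only), not the actual mapper logic:

```python
def serial_to_bytes(bits):
    """Convert a serial bit stream (list of 0/1) into parallel 8-bit
    values, MSB first, discarding any trailing partial octet."""
    out = []
    usable = len(bits) - len(bits) % 8   # whole octets only
    for i in range(0, usable, 8):
        octet = 0
        for b in bits[i:i + 8]:
            octet = (octet << 1) | b     # shift in one bit at a time
        out.append(octet)
    return out
```

The real mapper additionally multiplexes the resulting byte streams from all 21 (E1) or 28 (T1) links onto the S-Link channels.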
Clock Sync
Timing in the BSS must be synchronized with the BTS and the TCU e3; therefore, the IEM module must be able to provide phase and frequency error feedback to the BSC e3 system clock generator located in the CEM module. This is accomplished with a phase comparator located in the IEM module. One input to the phase comparator is derived from the S-Link clock. The other input is derived from one of the incoming PCM (E1/T1) links. The IEM module can select any of the incoming PCM (E1/T1) links as the synchronization reference. Note: The LSA-RC module has no dedicated input port for an external synchronization reference.
CPU
The CPU is used to:
configure the PCM (E1/T1) links (frame format, line coding, etc.)
control the front panel status indicators
[Figure: functional blocks of the IEM module - ITM block, PCM adaptor, clock sync block and CPU, with connections to the LEDs, reset, MTM bus, RCM, --48 V dc feed, PCM links and synchronization links]
Note: 1 indicates that the PCM links are directly connected to the TIM module. (*) The HDLC is not used in the Interface Node.
Figure 6-12
6.4.2 RCM
The RCM was initially designed to provide additional interface signals for both IEM modules and the TIM module. The RCM provides the following features:
interface for up to:
21 PCM E1 links
28 PCM T1 links
matched impedance of 120 Ω or 75 Ω ±10% between tip and ring, optimal for the PCM E1 links
matched impedance of 100 Ω ±10% between tip and ring, optimal for the PCM T1 links
interface for additional status/control signals among the LSA-RC module boards (IEM/TIM/IEM)
interface for existing signals with matched impedance of 60 Ω ±10%
connection to the synchronization slots on the backplane of the Interface Node
RCM slot identification for the IEM modules
6.4.2.1 Components layout
Figure 6-13 shows each main link inside the RCM. The RCM provides inter-connection for signals among both IEM modules, the TIM module and the backplane. Signal connections between both IEM modules and the backplane comprise:
S-link
redundancy
synchronization
MTM bus
Signals between both IEM modules and the TIM module comprise:
PCM (E1/T1) differential pairs
LED control
[Figure: links inside the RCM - S-links and PCM links interconnecting both IEM modules, the TIM module and the backplane]
Figure 6-13
6.4.3 TIM module
The TIM module (see Figure 6-10) provides access to the line signals from the IEM modules. Without rear access on the Interface Node, all external interfaces must be located on the front of the circuit packs. The TIM module is used for PCM T1 links with compensation for impedance mismatch located in the CTMx board housed inside the SAI frame. The TIM module provides the following functions:
interface for up to 28 PCM T1 links
interface for up to 21 PCM E1 links
matched impedance between tip and ring (optimal for each PCM (E1/T1) type):
for PCM E1: 120 Ω ±10%
for PCM E1: 75 Ω ±10%
for PCM T1: 100 Ω ±10%
version and presence information to the IEM modules
EMI filtering for all signals dedicated to/from the SAI frame
The TIM module does not receive the -48 Vdc and therefore generates no power on board.
6.4.3.1 External interfaces
The external interfaces of the TIM module are described below: on the front panel:
two visual indicators (LEDs) which are controlled by the IEM module PCM (E1/T1) RX signals to the SAI frame PCM (E1/T1) TX signals to the SAI frame
on the backplane:
PCM (E1/T1) links redundancy
6.4.3.2 Connector description
The TIM module is connected to the RCM backplane. In addition, it is connected from both 62-pin connectors located on the front panel of the TIM module to both 62-pin connectors located on the CTB inside the SAI assembly. Figure 6-15 shows one of these 62-pin connectors on the front panel of the TIM module.
[Figure: functional blocks of the TIM module - LEDs and LED control, with connections to the RCM and both IEM modules]
Note: 1 Indicates that PCM links are directly connected to the IEM module.
Figure 6-14
6.4.3.3 Functional Blocks
Figure 6-14 shows the main functional blocks housed inside a TIM module. The TIM module basically provides a common interface to the line signals for both IEM modules. It contains EMI filters for these line signals, provides version and presence information to the IEM modules, and houses two LEDs which are controlled by the IEM modules.
Cable detection circuitry
There are two signals in the cable detection circuit which are controlled by both IEM modules. The TIM module simply carries these signals from the RCM backplane to the cable interface on the front of the card. One signal (CTB_LOOPA) is connected to the transmit cable, while the other signal (CTB_LOOPB) is connected to the receive cable.
EMI Filtering
EMI filtering is provided for each PCM (E1/T1) link on the cable interfaces of the TIM module. This filtering is required since high-frequency noise can be coupled onto these signals on the IEM module and carried out of the IEM module through the cable interface. Two 62-pin connectors (see Figure 6-15) serve this purpose.
Board Stack-up
Solder mask is required for this board since the 62- pin connectors are soldered to the module (see Figure 6-15). Each signal layer is isolated from its neighboring signal layer with a ground layer for signal integrity. The receive line signals are tracked on separate layers from the transmit line signals in order to provide isolation for signal integrity.
[Figure: pinout of a 62-pin connector on the TIM module front panel - tip/ring pairs for PCM (E1/T1) links (00) to (25), loop A signal on P3, and GND]
Note: The number in brackets indicates the number of the PCM (E1/T1) pin. (P3): Transmit PCM (E1/T1) links. (P4): Receive PCM (E1/T1) links.
Figure 6-15
6.5
SIM module
For a description of the SIM module refer to paragraph 1.4.2.2.
6.6
Filler module
For a description of the FILLER module refer to paragraph 1.4.1.5.
converts the LAPD channels into DS0 links
transports the SS7 signalling links via the DS0 links
allows communication between the Transcoder Node and the Control Node via the LAPD channels over DS0 links and via the Interface Node
manages the GSM vocoding of the speech/data channels
[Figure: Transcoder Node architecture - TRM modules, ICM, duplicated CEM modules (active/passive), and LSA-RC modules 1 to 3 with their Tx/Rx PCM interfaces]
Figure 7-1
The Transcoder Node hardware architecture for the TCU e3 is based on the general Spectrum platform, with the following features:
high speed telecom interfaces
twenty-eight general purpose slots for application and interface modules
two slots reserved for the two SIM modules, which provide the -48 Vdc and the alarm links to every other module
Note: The system does not support hot extraction of an active Transcoder Node CEM. The system does, however, support hot extraction when the CEM is passive. To ensure that the CEM is passive before performing a hot extraction, see Replacement of a CEM in the BSC/TCU e3 Maintenance Manual (411-9001-132).
The fault tolerant architecture is based on duplicated CEM modules. One CEM module is active (i.e. actually performing call processing functions) while the other is inactive, ready to take over if the active unit fails. Both CEM modules:
receive identical PCM (E1/T1) traffic from each resource
can communicate with each other via the IMC (Inter Module Communication) links, in order to synchronize:
call processing
maintenance states
Each RM (TRM and LSA-RC modules) is connected to the CEM modules via the S-Link interfaces. This results in a point-to-point architecture, which (when compared to bus architectures) provides:
superior fault containment and isolation properties
fewer signal integrity related problems
easier backplane signal routing
In addition to the speech/data channels, the S-Link interfaces transport messaging channels, overhead control and status bits between the CEM modules and each RM. Each S-Link provides 256 DS0 channels. Each slot for a:
TRM module has access to 3 S-Links, or 768 DS0 channels, to each CEM module
IEM module located inside the LSA-RC module has access to 3 S-Links, or 768 DS0 channels, to each CEM module
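The DS0 capacity figures quoted above follow directly from per-link arithmetic. A worked check (constant names are illustrative; the values come from the text):

```python
# Each S-Link carries 256 DS0 channels, and each TRM/IEM slot has
# 3 S-Links toward each CEM module, giving 768 DS0 channels per slot.
DS0_PER_SLINK = 256
SLINKS_PER_MODULE = 3

def ds0_channels_to_cem(slinks=SLINKS_PER_MODULE):
    """DS0 channels a module slot can reach on one CEM module."""
    return slinks * DS0_PER_SLINK
```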
7.1
CEM module
For a description of the CEM module, refer to paragraph 6.1.
7.2
TRM module
The TRM module (see Figure 7-2) manages the GSM vocoding of the speech/data channels.
CAUTION: HOT EXTRACTION OF A TRM BOARD IN THE TCU CAUSES THE CALLS PASSING THROUGH IT TO BE DROPPED.
The speech channels can process the CTM (cellular text telephone modem) for transmission of a text telephone conversation. These tasks are accomplished by an array of DSPs (Digital Signal Processors). The flexibility and computational power of the TRM module allow it to run any of the GSM coding/decoding processes (full rate (FR), enhanced full rate (EFR), and adaptive multi-rate (AMR) FR or HR) on multiple traffic channels. The number of TRM modules required depends on the operator capacity requirements. The Transcoder Node houses up to twelve TRM modules.
7.2.1 External interfaces
The external interfaces of each TRM module are described below:
on the front panel:
two visual indicators (LEDs)
on the backplane:
S- links redundant - 48 Vdc links redundant MTM bus
7.2.2 Electrical characteristics
Each TRM module:
is powered by the -48 Vdc which comes from the operator boxes via the PCIU frame assembly and the SIM modules
houses:
a dc/dc converter which provides power to each component
a DC/DC converter which provides power to each component
a common ground for each board
a fixed fuse to protect each component
[Figure: front panel of the TRM module with its two visual indicators (LEDs)]
Figure 7-2
7.2.3 Functional Blocks
Figure 7-3 shows each of the main functional blocks, which are housed inside a TRM module.
7.2.3.1
ITM block
The ITM block interfaces the MTM bus between the ATM-RM module and the other RMs. These functions can also be controlled and accessed by the CPU.
7.2.3.2 CPU
The CPU block is responsible for terminating SPM system messaging coming from the CEM module, and for configuring, loading and detecting faults on the DSP Archipelagoes.
7.2.3.3 S--Link block
The S-link functional block interfaces to the three serial links (S-links) coming from the CEM module. These links carry speech/data, messaging and control functions between the TRM module and both CEM modules. Each speech/data channel is converted to 8-bit parallel format and carried on the internal PCM links to/from the DSP archipelagoes.
7.2.3.4 DSP Archipelago blocks
The DSP resources are grouped into three similar Archipelagoes. Each Archipelago interfaces with the S-link block on the PCM parallel link using the DIA (DSP Interface ASIC). The DIA can read and write up to 256 TSs on the S-link; the list of TSs to process is specified by the CPU. The MLB (Mailbox DSP) is also connected as a slave to the CPU block. It manages both the parallel voice transfers between the DIA and its three PPUs (Pre-Processing Units), and the parallel messaging transfers between the CPU block and its PPUs. Each PPU is connected as a slave to the MLB, and as the master to four identical SPUs (Signal Processing Units). The PPU conveys messaging between the MLB and the four SPUs, and typically manages pre- and post-processing on the voice transfers (frame synchronization, parameter formatting, handover handling, etc.). The SPU performs pure vocoding, on request from the PPUs. The DSPs have the same design, running at least 150 Mips with a 128k-word (24-bit) internal memory that makes external RAM unnecessary. They are connected glueless together using their parallel CPU port and their serial busses configured in multiprocessor mode.
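The archipelago hierarchy just described (one MLB, three PPUs, and four SPUs per PPU, times three archipelagoes) implies fixed DSP counts per TRM module. The totals below are derived from those counts, not stated in the manual:

```python
# Illustrative model of the TRM DSP hierarchy described above.
ARCHIPELAGOES_PER_TRM = 3
PPUS_PER_ARCHIPELAGO = 3
SPUS_PER_PPU = 4

def dsps_per_archipelago():
    """One MLB plus each PPU with its four SPUs (DIA is an ASIC, not a DSP)."""
    return 1 + PPUS_PER_ARCHIPELAGO * (1 + SPUS_PER_PPU)

def dsps_per_trm():
    """Total DSPs across the three archipelagoes of one TRM module."""
    return ARCHIPELAGOES_PER_TRM * dsps_per_archipelago()
```

Under these counts, each archipelago holds 16 DSPs and each TRM module holds 48.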
[Figure: functional blocks of the TRM module - ITM block, CPU, S-link interface (Rx/Tx), and Archipelagoes 1 to 3 (DIA, MLB, PPU-DSP and SPU-DSP blocks, CTM), with the dc/dc converter, --48 V dc feed, LEDs, reset and MTM bus]
Figure 7-3
CAUTION: HOT EXTRACTION OF A TRM BOARD IN THE TCU CAUSES THE CALLS PASSING THROUGH IT TO BE DROPPED.
The CTM allows reliable transmission of a text telephone conversation, alternating with a speech conversation, through the existing speech communication paths. If the PCM type is T1, the CTM process is initialized on all the TRM modules of the TCU e3, and the CTM is systematically activated on all communications, whatever the channel type requested by the A-interface messages.
7.3
LSA-RC module
For a functional description of the LSA-RC module, refer to paragraph 6.4. The LSA-RC module occupies three slots on the Transcoder Node shelf and consists of the following modules:
a duplicated pair of IEM modules
a single TIM module
an RCM
For a description of the IEM and the TIM modules, and the RCM, refer respectively to paragraph 6.4.1, paragraph 6.4.3 and paragraph 6.4.2.
7.4
SIM module
For a description of the SIM module refer to paragraph 1.4.2.2.
7.5
Filler module
For a description of the FILLER module refer to paragraph 1.4.1.5.
SOFTWARE DESCRIPTION
8.1 Software architecture
The software part (see Figure 8-1, Figure 8-2 and Figure 8-3) of the BSC e3 cabinet and the TCU e3 cabinet, as described in the functional architecture (refer to Chapter 4), is subdivided into the following areas:
the OAM application
This application manages the platform in accordance with requests provided by the OAM center and the updated platform
the CallP application
This application manages the network elements and signaling. It can perform call processing itself and supervision of network elements, but it can also be a transactional application
8.2
[Figure: application and services layer of the Control Node - TMG, Abis Access, Ater Access, PCU Access, SPR, OBS/OBR, SUP--TCU, SPP, OMC services, C--Node_OAM, SS7, SUP--IN]
Figure 8-1
Position of the core system in the layered Control Node software architecture
[Figure: application and services layer of the Interface Node - Node Access, OAM, System tests, Pool PCM, CallP_IN, Pool 8K - above the core system: Integrated Link Maintenance (ILM), Connection Manager (CM), Message Transfer System (MTS), Resource Manager (RMAN)]
Figure 8-2
Position of the core system in the layered Interface Node software architecture
[Figure: application and services layer of the Transcoder Node - Node Access, OAM, System tests, Pool PCM, CallP_TN, Pool HDLC - above the core system: Integrated Link Maintenance (ILM), Connection Manager (CM), Message Transfer System (MTS), Resource Manager (RMAN)]
Figure 8-3
Position of the core system in the layered Transcoder Node software architecture
8.3 Software packages
8.3.1 Control Node
Pre-installed packages (OMU):
-- (P) cn.aix.blvm: Shared mirrored disk management
-- (P) cn.aix.custom: Dynamic OS configuration
-- (P) cn.aix.pckg: Package management
-- cn.containers.omu: Containers
-- cn.installTml.omu: Install the TML package on the OMU module
ATM packages (OMU):
-- (P) cn.atmOmuSel.omu (OMU): ATM selection for switch activities
-- (S) cn.cc1.sbc.load (CC1): Load the nominal image of the CC1 board
-- cn.omu.tm.flash.alt (OMU): Flash the alternate image of the OMU--TM board
-- cn.omu.tm.flash.fact (OMU): Flash the factory image of the OMU--TM board
-- cn.omu.tm.flash.norm (OMU): Flash the nominal image of the OMU--TM board
-- cn.omu.tm.load (OMU): Load the nominal image of the OMU--TM board
-- cn.tmu.tm.flash.alt (TMU): Flash the alternate image of the TMU--TM board
-- cn.tmu.tm.flash.fact (TMU): Flash the factory image of the TMU--TM board
-- cn.tmu.tm.flash.norm (TMU): Flash the nominal image of the TMU--TM board
-- cn.tmu.tm.load (TMU): Load the nominal image of the TMU--TM board
Constants and definition for Control Node architecture Constants and definition for platform architecture
Base OS for the OMU module on the Control Node Management labo services on the Base OS for the OAM Management tools on the Base OS for the OMU module (initialization process, start process, etc.) Observation on the Base OS for the OMU module
Messaging management
Fault management Fault tolerance local agent Tolerance central agent and Load Balancing management Fault tolerance management Fault tolerance tools
Overload management
Software management Software configuration files for the GSM Software management tools
OMU access and supervision of the other nodes:
-- (P) cn.inac.omu
-- (P) cn.supin.omu
-- cn.version
Data base access (OMU):
-- (P) cn.das.omu: Data base access process, located on the private disk, to give access to the data base, the libraries and the associated UNIX commands
-- (S) cn.das.omu.share: Data base access process, located on the shared disk, to give access to the data base, the libraries and the associated UNIX commands
CM CA process, located on the private disk, to manage the MIB with ADM for the object model mediation CM CA hybrid process, located on the shared disk, to centralize the observation data which are sent to the OMC--R CM LA process to give access to the data configuration
PM_CA process, located on the private disk, to centralize the observation data which are sent to the OMC--R
Service libraries (OMU):
-- cn.adm.omu: Shared libraries
-- cn.apptemplateLib.omu: Application template library
-- cn.communicationLib.omu: Communication library
-- cn.debugLib.omu: Debug library
-- cn.errorLib.omu: Error library
-- cn.parserLib.omu: Parser library
-- cn.prngLib.omu: PRNG library
-- cn.sharedmemLib.omu: Shared memory library
-- (P) cn.sst.omu: Single stream software to interface the software and the platform
-- cn.synchronizationLib.omu: Synchronization library
-- cn.threadLib.omu: Thread library
-- cn.timeLib.omu: Time library
-- cn.timerLib.omu: Timer library
OAM (OMU):
-- (P) cn.actmngt.omu: Activity Management
-- cn.oamCa.omu: OAM CA process
-- (P) cn.oamcn.omu: SUP--CN management
-- cn.oamHm.omu: OAM hardware management
-- cn.oamLib.omu: OMC services
-- (S) cn.oam.omu.share: OAM libraries and basic processes, located on the shared disk
-- (P) cn.upgradeCn.omu: Upgrade package
-- cn.upgradeServices.omu: Upgrade services management
-- cn.upgradelmt.omu: Upgrade LMT package
Basic platform software (TMU-PMC and TMU-SBC):
-- cn.tmu.pmc.flash.alt.H01: Alternate software of the flash memory on the TMU--PMC
-- cn.tmu.pmc.flash.fact.H01: Factory software of the flash memory on the TMU--PMC
Software of the flash memory on the TMU--PMC
Download software of the TMU--PMC
Alternate software of the flash memory on the TMU--SBC
Factory software of the flash memory on the TMU--SBC
Software of the flash memory on the TMU--SBC
STC management
Monolithic load for the GSM (TMU-SBC):
-- (P) gsm.tmu.sbc.load.H1A
Database (MIB)
OMU process
Note: (P) indicates a software package on the private disk; (S) indicates a software package on the shared disk.
Table 8-1 Presentation and description of the software packages inside the Control Node
8.3.2 Interface Node
For the Interface Node, the customer software consists of a monolithic load in a software package which is split up as follows:
Load_CEM_IN
Load_ATM-RM_IN
Load_IEM_IN
Load_SRT_IN (for the 8K-RM, also named SRT-RM)
Note: A software package for the Interface Node corresponds to the software delivery which is supplied to the customer on a delivery medium.
8.3.3 Transcoder Node
For the Transcoder Node, the customer software consists of a monolithic load in a software package which is split up as follows:
Load_CEM_TCUe3
Load_IEM_TCUe3
Load_TRM_TCUe3
Note: A software package for the Transcoder Node corresponds to the software delivery which is supplied to the customer on a delivery medium.
DIMENSIONING
For information on the dimensioning of the BSC e3 cabinet and the TCU e3 cabinet, refer to NTP < 138 >.
For more information, please contact:
For all countries, except USA:
Documentation Department
Parc d'activité de Magny-Châteaufort
CHATEAUFORT
78928 YVELINES CEDEX 9
FRANCE
Email: umts-gsm.ntp@nortelnetworks.com
Fax: (33) (1) 39-50-44-29
In the USA:
2221 Lakeside Boulevard
Richardson, TX 75082
USA
Tel: 1-800-4 NORTEL (1-800-466-7838) or (972) 684-5935