
IMS Hardware and Software


Contents
IMS Hardware and Software
1 LAN Environment of the CSCF
1.1 ATCA based network Topology
1.2 SUN Netra based network Topology (IMS 8.2 EPx)
2 NSN IMS on ATCA Hardware
2.1 IMS on ATCA (Advanced Telecommunication Computing Architecture)
2.2 IMS on ATCA (V9.1 onwards)
2.3 Components released from IMS V9.1 onwards
2.4 Components released from IMS V10.0 on
2.5 Base concepts and components of NSN IMS on ATCA HW
2.6 Reference Configuration of IMS on ATCA
2.7 Cabling of CFX-5000 (ATCA)
2.8 TIAMS (TSP Installation Administration and Maintenance Server)
2.9 CFX Load Balancer (V9.1 onwards)
2.10 Loosely Coupled Cluster - Single Node Pair SNP
3 IMS on HP Hardware (IMS V9.2 MP1)
3.1 General Architecture
3.2 Base Rack Assembly in IMS 9.2 MP1 (with HP)
3.3 Reference Configuration of IMS V9.2 MP1
3.4 Cabling of IMS components (HP blades)
4 IP Management on ATCA
4.1 IP Config
4.2 IP Services
4.3 IP-Protocol Handler
5 Graceful Shutdown
5.1 General
5.2 Routing Principle
5.3 Shutdown procedure
5.4 Shutdown states and actions
5.5 Administration
6 Attachment: Sun Architecture
6.1 SUN Netra T5220 HW
6.2 Cluster (based on SUN HW)
6.3 SW Structure (based on SUN HW)
6.4 IMS on SUN Netra
7 Exercise
8 Solution


1 LAN Environment of the CSCF


1.1 ATCA based network Topology


The CSCF can be configured as a cluster or as a single node (the single-node solution is not used in commercial configurations).

 Administration LAN or so called “TSP ADMIN LAN”:


There are two use cases for the Administration LAN: the installation via the TIAMS and the kick-start. As mentioned before, this kind of traffic is carried over the Base Interface. The network elements which belong together (in the same rack or spread over several racks) are connected via BI cables, but no cable leads to an external component (e.g. a router). This kind of traffic is not routed at all.

 VLANs
From IMS V9.0 onwards all external traffic is carried over tagged VLANs.

 IMS Traffic LAN or so called “IMS LAN1 (and IMS LAN2)”


This Local Area Network is used for the Gm/Gq/Mw/ISC/Cx/Sh interfaces, in other words between CSCFs, between CSCF and HSS-FE, between HSS-FE and Application Servers, between CSCF and Application Servers, etc. This LAN therefore carries the signaling traffic of the IMS entities. All entities (HSS-FEs/CSCFs) are connected to their partners via the two redundant Hub blades. If the Mw/Gm traffic is to be separated, this is realized via another VLAN on the same physical interface (FI). The charging interfaces Rf, Ro and Bi, based on Diameter and ftp/sftp, can be separated from other traffic by using VLANs.

 The B&R LAN or so called “TSP B&R LAN”


For a backup and restore operation a huge amount of data must be transmitted. To guarantee a good transmission quality, a separate VLAN is implemented. All nodes (HSS-FEs/CSCFs) are connected to this B&R network via the two HUB blades.

 OAM LAN or so called “TSP default LAN”


The operation, administration and maintenance LAN is used for the management of the network nodes via NetAct. The administration of the network elements can also be performed via the LEMAF interface or an OAM Agent User Interface.


[Figure: the Admin LAN (TSP Admin LAN) is realized on the BI interfaces via the backplane; for shelf interconnectivity the BI interfaces of the Hubs are physically connected, and the TIAMSs are reached over this LAN. The IMS Traffic LAN(s) (e.g. LAN 1, LAN 2), the OAM LAN (TSP Default LAN, towards NetAct) and the B&R LAN (towards the B&R server) are realized as VLANs on the FI interfaces. Each CSCF cluster and HSS-FE has an active and a passive FI/BI towards the active HUB blade (HUB blade 1) and the passive/standby HUB blade (HUB blade 2), which are interconnected; the IMS Traffic LAN connects to CSCFs, HSS-FEs, ASs, MMEs, GGSNs, MGCFs, etc.]

Fig. 1 ATCA based network Topology (1)

[Figure: physical path of a Diameter message when the S-CSCF blade and the HSS-FE blade are located in the same shelf; the message travels from the S-CSCF cluster over the backplane to the active HUB blade and back over the backplane to the HSS-FE (IMS Traffic LAN, e.g. LAN 1).]

Fig. 2 ATCA based network Topology (2)


[Figure: physical path of a Diameter message when the S-CSCF blade and the HSS-FE blade are located in different shelves; the message travels over the backplane to the active HUB blade of the first shelf, via the HUB interconnect to the active HUB blade of the second shelf, and over that backplane to the HSS-FE (IMS Traffic LAN, e.g. LAN 1).]

Fig. 3 ATCA based network Topology (3)


1.2 SUN Netra based network Topology (IMS 8.2 EPx)


The CSCF can be configured as a cluster or as a single node (the single-node solution is not used in commercial configurations). The HSS-FE is always a single node. The RMS architecture is shown in Fig. 4 below.
There are four different LANs:
 Administration LAN or so called “TSP ADMIN LAN”:

There are two applications using this Administration LAN: the installation via an Install Server and the access to the serial interface of the HSS/CSCF servers.
Install Server connection: The Install Server, a Solaris based server, is used for the installation of the HSS/CSCF Solaris platform, the TSP and the applications. The Install Server is connected to this Administration LAN and can access both HSS/CSCF servers from here via the redundant LAN switches (e.g. realized by a Cisco Catalyst 4948).
From a PC (administrative console) a Telnet connection can be set up via the Administration LAN and one of the LAN switches to a Terminal Concentrator, where the Telnet connection terminates. From there a serial connection goes to both HSS/CSCF servers, to be used e.g. during startup.
 IMS Traffic LAN or so called “IMS LAN1 (and IMS LAN2)”
This Local Area Network is used for the Gm/Gq/Mw/ISC/Cx/Sh interfaces, in other words between CSCFs, between CSCF and HSS, between HSS and Application Servers, between CSCF and Application Servers, etc. This LAN therefore carries the signaling traffic of the IMS entities, except for the classic CCS7 traffic between the HSS and the HLR. All nodes (HSS/CSCFs) are connected to their partners via the two redundant LAN switches. If the Mw/Gm traffic is to be separated, a second IMS LAN (LAN2) is connected to the CSCF; this IMS LAN2 then carries the Gm traffic.
 The B&R LAN or so called “TSP B&R LAN”
For a backup and restore operation a huge amount of data must be transmitted.
To guarantee a good transmission quality, a separate LAN is implemented. All
nodes (HSS/CSCFs) and @vantage commanders are connected to this B&R
network via the two redundant LAN switches.
 OAM LAN or so called “TSP default LAN”
The operation, administration and maintenance LAN is used for the management of the network nodes. From the @vantage Commander both HSS/CSCF servers (cluster) can be accessed via both of the redundant LAN switches.
 VLANs
Virtual Local Area Networks are necessary on the one hand for security reasons (traffic separation); on the other hand they are used instead of physical separation to save hardware.
Several access (e.g. enterprise) VLANs must be supported on the Gm interface in conjunction with the integrated P-CSCF/C-BCF configuration. The access LANs may have overlapping address spaces; this must be supported by using VLANs.


The separation of traffic towards application servers is to be realized by tagged VLANs.
The charging interfaces Rf, Ro and Bi, based on Diameter and ftp/sftp, shall be separable from other traffic by using VLANs.
Internal and external DNS traffic shall be separable. This is mostly a security issue. Generally, DNS traffic shall be separable from all other traffic by VLANs.
There are the following general restrictions regarding VLAN use due to TSP7000, Solaris and/or Sun Cluster:
- The cluster interconnect must use physical interfaces.
- The Admin LAN must not be tagged.
- The Default LAN must be untagged.

[Figure: embedding in the LAN environment on SUN Netra hardware. HSS/CSCF server 1 and server 2 are connected to two L2 switches with VLANs. The Administration LAN (TSP ADMIN LAN) connects the Install Server and, via a Terminal Concentrator, a PC with Telnet (administrator console) to the serial interfaces of the servers. The OAM LAN (TSP Default LAN) connects the @vantage Commander, the B&R LAN (TSP B&R LAN) connects the Backup and Restore server, and the IMS Traffic LANs (IMS LAN1, IMS LAN2) connect e.g. CSCF/HSS/AS.]

Fig. 4 Embedding in the LAN environment (SUN Netra HW)


2 NSN IMS on ATCA Hardware


2.1 IMS on ATCA (Advanced Telecommunication Computing Architecture)
From IMS 9.0 onwards the ATCA technology is introduced for CFX-5000 and CMS-8200. The explanation of IMS on the SUN architecture can be found in the attachment of this chapter (IMS 8.2 EPx; x=3 corresponds to IMS 10.0).

 CFX-5000 and CMS-8200 (HSS-FE, DRA)


V9.x/V10.x: ACPI4-B ATCA blade (Kontron AT8050), 1 * 2.0GHz/6C/2HT Intel
Nehalem (Westmere) with RTM CPRT4-A single-disk (Kontron)
 from V10.0 onwards:
ACPI5-A ATCA blade (Emerson ATCA 7370), dual CPU Sandy Bridge 8-core
blade, 2*2.1 GHz/8C/2HT with RTM CPRT5-A 2-disks (Emerson).
V9.2 MP1: the HP c7000 blade system with HH ProLiant BL460c Gen7 server blades is released only for a few customers; this architecture will be released for the WOM later.

 As LAN switches two HUB Blades per shelf are in use:


V9.x/V10.x: AHUB3-A ATCA blade, 1*Broadcom BCM56304 1GE + 1*BCM56800
10GE switch.
 from V9.1 onwards another HUB Blade is released: AHUB3-B – (Dasan
A1100), 1x Broadcom BCM56334 and BCM56842 with RTM HBRT3-B (Dasan)

 One-NDS Data Repository V9.0, HP c7000 blade system with HH ProLiant Bl460c
Gen7 server blades
 CMS-8200 (SLF) V9.x/V10.x: SUN Netra T5220 with Turgo CPU (8 cores), 1.2 GHz, 4 internal disks, 64 GB RAM
 PCS-5000 (cluster) for IMS V9.0: V5.0: SUN Netra T5220 with Turgo CPU (8 cores), 1.2 GHz, 64 GB RAM, 4 internal disks and ST2540 with 12 disks each
 PCS-5000 (cluster) from IMS V9.1 onwards: V6.2/V6.3: ACPI4-B ATCA blade (Kontron AT8050), 1 * 2.0 GHz/6C/2HT Intel Nehalem (Westmere)


2.2 IMS on ATCA (V9.1 onwards)

EC208-A: Flextronics Equipment Cabinet, containing 3 shelves
ADPDU-A: Astec ATCA Power Distribution Unit
ACH16-A: 16-slot Schroff shelf (12 RU)
Redundant ATCA Hub blades: AHUB3-B (Dasan) or AHUB3-A (Radisys), in the 2 Hub blade slots
Redundant ASMGR-A ATCA 16-slot Shelf Manager with Pigeon Point ShMM-500R shelf management module
14 processing blade slots: ACPI4-B (Kontron), 1 CPU x 6-core Westmere, with 6x 8 GB (Samsung) DDR3 DIMM RAM modules

Fig. 5 V9.1 configuration


2.3 Components released from IMS V9.1 onwards


With IMS V9.1 new components were introduced which are also supported in IMS
V10.0.

CAB216SET-A, 598 mm Rack


The reason for introducing this rack:
 598 mm Rack will help to meet the requirements regarding footprint
 Compatible Cabinet/Rack for 19” equipment available

ASH16-A ATCA Rack Shelf


The reason for introducing this shelf:
 Basis for AC support
 Future proof with 40Gb/s FI backplane
 Higher Performance of the ATCA Shelf needed to support Sandy Bridge CPUs
 Lifecycle issue

AHUB3-B – Dasan HUB blade & HBRT3-B (RTM for the HUB blade)
The reasons for using the AHUB3-B – Dasan HUB blade are:
 it offers an improved feature set with enhanced external connectivity
 it offers a higher throughput and a higher transaction rate

2.4 Components released from IMS V10.0 on


With IMS 10.0 the ACPI5-A ATCA blade (original vendor Emerson, ATCA 7370), a dual-CPU Sandy Bridge 8-core blade (2 * 2.1 GHz/8C/2HT), is released. It comes along with the rear transition module (RTM) CPRT5-A with 2 disks (original vendor Emerson).
For rack and shelf the already released equipment is also available in V10.0. Additionally, in IMS V10.0 the CAB216-C rack cabinet and the ASA16-A shelf (AC powered, 16 slots) are released.


ATCA Shelf: ASH16-A (Schroff)
ATCA Rack Cabinet: CAB216-A (Schroff)
HUB Blade: AHUB3-B (Dasan)
RTM for the HUB Blade: HBRT3-B (Dasan)

Fig. 6 Configuration released from IMS V9.1 onwards

[Figure: ACPI5-A CPU blade and CPRT5-A RTM]

Fig. 7 IMS 10.0 blades


2.5 Base concepts and components of NSN IMS on ATCA HW
ATCA stands for Advanced Telecommunication Computing Architecture and
represents a series of industrial specifications of the PICMG (PCI Industrial Computer
Manufacturers Group). PCI = Peripheral Component Interconnect.
It allows the telecommunications industry to develop efficient, scalable, high-performance and highly reliable telecommunications system components.

Compared to the previous IMS rack-mount server (RMS) systems, IMS 9 follows an entirely new hardware architecture approach which requires major adjustments in hardware and software.
In particular:

 the basic hardware architecture changes from rack-mount server systems to ATCA blade server configurations.
 the operating system changes from Solaris 10 to Red Hat Enterprise Linux 6.x
(RHEL 6.3 for IMS V9.1/2, RHEL 6.4 for IMS10.0).
 the database system changes from Oracle RAC to SolidDB (IMS V10.0 uses
SolidDB v7.0). This implies a major conceptual change from a "shared all" to a
"shared nothing" database architecture where external storage systems are no
longer used.


The HW architecture of an ATCA blade can basically be compared to an RMS server (or better: to its motherboard). The ATCA blades are mounted in a shelf. The shelf also contains a pair of Hub blades (i.e. switches), which means that a shelf can basically be considered the equivalent of an IMS rack of the past. Shelves are then mounted in an ATCA rack, which is a bit wider than a standard 19" RMS rack (breadth ca. 600 mm).
The ATCA system consists of following main components:
 Rack Cabinet: provides enclosure to multiple shelves (in our case up to three
shelves can be mounted on a rack cabinet).
For IMS the EC208-A cabinet is used (original supplier: Flextronics); from IMS
V9.1 onwards the CAB216-A cabinet, from V10.0 onwards the CAB216-C can be
used (supplier: Schroff).
 PEMs/PDUs: Power Distribution Unit from site power feed to shelf-level power
modules (PEMs). For the details please refer to the corresponding IMS release
notes.
 Shelves: provides enclosure, cooling fans, power entry, backplane, HW
management and slots to mount blades and RTMs.
For IMS the ACH16-A shelf is used (original supplier: Schroff). This shelf has a height of 13U and provides slots for 16 blades. From IMS V9.1 onwards the ASH16-A (DC) and from V10.0 onwards the ASA16-A (AC) can be used (both Schroff).

Fig. 8 Main components and overall physical assembly of an ATCA system


 Blades (front boards) can be processing/CPU blades or Hub blades. For IMS the ACPI4-B CPU blade (single CPU, 6 cores/12 threads Intel Westmere, 48 GB RAM) is used together with a single-disk RTM card (original supplier: Kontron). From IMS V10.0 onwards the ACPI5-A CPU blade (dual CPU Sandy Bridge 8-core blade, 2 * 2.1 GHz/8C/2HT) is available together with the 2-disk RTM card CPRT5-A (original supplier: Emerson). The AHUB3-B Hub blade from Dasan is in use; the AHUB3-A Hub blade (Radisys) is the only usable Hub blade in IMS 9.0.
 AMC: Advanced Mezzanine Cards plug into the AMC bay of a blade extending the
features/capabilities of a blade by providing additional disk capacity, additional or
specific network interfaces, encryption/DSP processors and so on. AMCs are
accessible from the front side of a blade. For IMS V9/V10 AMCs are not planned
to be used.
RTMs (rear boards): Rear Transition Modules plug into the backside of the shelf.
RTMs are extension modules for front blades and are assigned/connected 1:1 to
them. Without front blades RTMs do not work (e.g. they do not have power).
RTMs, too, provide additional features to the blade by adding more disks, more or
specific interfaces, additional CPUs, switching functions, etc.
ACPI4-B: the CPRT4-A single-disk RTM is used (original supplier: Kontron).
ACPI5-A: the CPRT5-A 2 disks RTM is used (original supplier: Emerson)
AHUB3-B: the HBRT3-B is in use (original supplier: Dasan)
 There is also a Shelf Manager which is the management entity of a shelf. The
Shelf Manager usually consists of two redundant separately pluggable modules
which have their own slots in the shelf (i.e. they do not consume standard blade
slots). For IMS the ASMGR-A Shelf Manager is used (original supplier:
PigeonPoint).


[Figure: CPU blade with RTM, shown in shelf side view and blade front view; the front blade and the Rear Transition Module meet at the Zone 3 connector, while Zone 1 and Zone 2 connect the front blade to the backplane.]

Fig. 9 CPU-Blade with RTM


2.6 Reference Configuration of IMS on ATCA


In the following chapter the Reference Configuration of IMS 10 is described,
comprising the IMS Core system (CSCF including Load Balancer, HSS including
One-NDS), local admin and install servers as well as a number of associated
elements (e.g. OAM and LAN-equipment) as needed for the basic operation of the
system.
The TIAMS (TSP Install, Administration and Management Server) is an element implemented on ATCA blades and consisting of the following (SW) components: Install Server (IS), Admin Console (AdmC), Quorum Server (QS) and System Manager (TASM – TSP ATCA System Manager).
Dedicated ATCA based HUB blades realize the network connectivity.
The tables below give an overview of the hardware of the IMS 10 network elements. The performance figures are approximations, for orientation only, and not intended for system dimensioning.
In IMS on ATCA the CSCF network element consists of 2 redundant blades forming a single node pair (SNP). The HSS-FE network element is realized by one blade, but in the minimum configuration at least a 2-server system is deployed. This guarantees a high degree of availability, reliability and fault tolerance together with proper load balancing, capacity and performance.

The Reference Configuration described below is a commercial one and is made up of:
 Two mandatory (four for the 1M Reference Configuration) CFX-5000 CSCF 2-node clusters with all CSCF roles co-located (P-, I-, S-, E-CSCF, TRCF) as well as the IBCF and the BGCF. From IMS 9.1 onwards this configuration is extended by a CFX-5000 Load Balancer (2-node cluster) which acts as the entry point to the CSCF cluster farm.
 One mandatory TIAMS Server for (local) admin/install tasks of the core systems
(HSS, CSCF); the TIAMS consists of two ATCA blades.
 Two mandatory switch (Hub) blades per shelf.
 Two mandatory CMS-8200 HSS-FE single nodes, together with the respective
One-NDS distributed database and optional Utimaco HSM Box or a respective
software AuC solution; the HSS can be completed by an SLF (Subscriber Locator
Function). The HSS-FE is implemented on an ATCA blade. The One-NDS
distributed database is supported on RMS or HP architecture.
 Two optional Acme Packet border gateways or two optional OpenBGWs; the
border gateway is part of an extra rack.
 One mandatory NetAct 2-node including backup and restore facilities. NetAct is
part of an additional rack; and at least one mandatory OAM client for the NetAct.
 An optional PCS-5000 PCRF/SPDF system


Reference Configuration for about 1,000,000 active subscribers

Network Element / # of nodes / Base Hardware Platform:
- CSCF (incl. Load Balancer): 5*2 nodes on ACPI4-B ATCA blades (Kontron AT8050), 1 * 2.0 GHz/6C/2HT Intel Nehalem (Westmere), incl. RTM, or 3*2 nodes on ACPI5-A ATCA blades (Emerson ATCA 7370), 2 * 2.1 GHz/8C/2HT Intel Sandy Bridge, incl. RTM
- HSS-FE: 2 nodes; ACPI4-B ATCA blade (Kontron AT8050), 1 * 2.0 GHz/6C/2HT Intel Nehalem (Westmere), incl. RTM, or ACPI5-A ATCA blade (Emerson ATCA 7370), 2 * 2.1 GHz/8C/2HT Intel Sandy Bridge, incl. RTM
- LAN switch (HUB blade): 2; AHUB3-B ATCA Hub blade (Dasan A1100), 1x Broadcom BCM56334 and BCM56842, incl. RTM, or AHUB3-A ATCA Hub blade (Radisys), incl. RTM
- TIAMS: 2 nodes; ACPI4-B ATCA blade (Kontron AT8050), 1 * 2.0 GHz/6C/2HT Intel Nehalem (Westmere), incl. RTM, or ACPI5-A ATCA blade (Emerson ATCA 7370), 2 * 2.1 GHz/8C/2HT Intel Sandy Bridge, incl. RTM
- One-NDS DB (not part of the IMS rack): 3*4 + 2*3 nodes; HP c7000 blade system with HP ProLiant BL460c Gen7 server blades, or as recommended
- ACME BG (not part of the IMS rack): 2; 1 * Acme Packet BG 4500 each
- PCS: 2; ACPI4-B ATCA blade (Kontron AT8050), 1x 2.0 GHz/6C/2HT Intel Nehalem (Westmere)
- NetAct (not part of the IMS rack): 2+2 servers; HP ProLiant DL360 (2 application servers and 2 database servers), or 4+2 HP ProLiant BL460c with c7000 enclosure (4 AS + 2 DS)
- iNUM v9 (not part of the IMS rack): 3; ACPI4-B ATCA blade (Kontron AT8050), 1x 2.0 GHz/6C/2HT Intel Nehalem (Westmere)

Fig. 10 Reference Configuration, HW related


Network Element / Base Architecture / Base HW Platform / Basic NE/CPU Configuration [#machines x #CPUs per NE] / Capacity and Performance [approximately]:

- HSS-FE (CMS-8200 v10), single node (front end): ACPI4-B ATCA blade, 1*1 2.0 GHz/6C/2HT Intel Westmere, 6,000,000 active subscribers; or ACPI5-A ATCA blade, 1*2 2.1 GHz/8C/2HT Sandy Bridge, 13,500,000 active subscribers
- One-NDS (v9.0), distributed: HP c7000, 2*2.1 GHz Intel Xeon Sandy Bridge, see One-NDS performance data
- SLF (CMS-8200 v9.x), single node: SUN Netra T5220, 1x1 1.2 GHz/8C UltraSparc T2, 9,000,000 active subscribers
- DRA (CMS-8200 v10), 2-server HA: ACPI4-B ATCA blade, 2*1 2.0 GHz/6C/2HT Intel Westmere, 14k Diameter req/resp; or ACPI5-A ATCA blade, 1*2 2.1 GHz/8C/2HT Sandy Bridge, 28k Diameter req/resp
- CSCF (CFX-5000 v10), 2-node cluster / single node pair: ACPI4-B ATCA blade, 2x1 2.0 GHz/6C/2HT Intel Westmere, 225,000 active subscribers 1); or ACPI5-A ATCA blade, 2x2 2.1 GHz/8C/2HT Sandy Bridge, 610,000 active subscribers 1)
- CSCF-LB (CFX-5000 v10), 2-node cluster: ACPI4-B ATCA blade, 2x1 2.0 GHz/6C/2HT Intel Westmere, or ACPI5-A ATCA blade, 2x2 2.1 GHz/8C/2HT Sandy Bridge; up to 19 CSCFs (a whole rack can be served)
- ACME BGF Media Proxy, single node or 2-node HA: Acme Packet BG 4500 v6.7, 1x1 2.4 GHz Intel Core 2 Duo, 32,000 sessions (bi-directional pinhole translations) 2)
- TIAMS (Admin/Install), 2-node HA: ACPI4-B ATCA blade, 2x1 2.0 GHz/6C/2HT Intel Westmere, or ACPI5-A ATCA blade, 2x2 2.1 GHz/8C/2HT Sandy Bridge; two servers per 40 blades (typically 2 servers per rack)
- LAN switch (Hub blade), fix-configured L2/L3 switch: AHUB3-A, 1x Broadcom BCM56304 1GE + 1x BCM56800 10GE switch modules, 4+2 1GE SFP BI ports and 3+2 1/10GE XSFP FI ports; or AHUB3-B ATCA blade, 1x Broadcom BCM56334 1GE + 1x BCM56842 10GE switch modules, 2 1GE + 4 10GE SFP[+] BI ports and 4+4 1/10GE SFP+ FI ports; each with 2x14 1GE/10GE ports internally
- NetAct for Core V7 SP2, 2 or 2+2 servers: HP ProLiant DL360 G6/G7, 2x1 2.53 GHz/4C/2HT Intel Xeon for 70 NEs / 5 users 5), or 4x1 2.53 GHz/4C/2HT Intel Xeon for 200 NEs / 10 users 5)
- NetAct Client: Intel PC, HP DC7800, 1x1 3.0 GHz/2C Intel Core2 Duo, CMT (2 GB RAM), n/a
- ENUM iNUM v10 4), multiple single nodes: ACPI4-B ATCA blade, 1x1 2.0 GHz/6C/2HT Intel Westmere, or SN X4270, 3*2 2.13 GHz/4C Intel Xeon; 25,000,000 subs, 50,000,000 NAPTRs, 15,000 qps, > 65,000 prov reqs/h

1) all CSCF roles physically co-located
2) max. practical capacity is about 19k sessions for the 4500 BG
3) DNS services can be co-located project-specifically

Fig. 11 Reference Configuration, capacity related


Base Rack Assembly in IMS

Minimal Commercial Configuration:
– 2 blades for a CSCF cluster, all roles co-located
– 2 blades for the TIAMS cluster (TSP Installation Administration and Maintenance Server) with AdmC, QS and TASM co-located
– 2 blades for HSS-FE
– 2 Hub blades

Fig. 12 IMS Minimal Commercial Configuration

3 Shelves Configuration (42 blades):
– 16 CFX clusters, all roles co-located
– 1 CFX Load Balancer cluster
– 1 IMS TIAMS cluster (TSP Installation Administration and Maintenance Server, 2 blades) with AdmC, QS and TASM co-located
– 2 HSS-FE blades
– 1 PCS cluster (optional)
– 2 DRA (Diameter Routing Agent) blades (optional)
– 6 Hub blades

Fig. 13 IMS Commercial Configuration, full rack with all functions


Blade Characteristics
With IMS 9 the CFX-5000 CSCF is based on the ACPI4-B (Kontron AT8050) blade, as mentioned before. With IMS 10 the CFX-5000 CSCF is based on the ACPI5-A (Emerson 7370) blade. Single node and cluster configurations are possible. The CFX-5000 hardware is used for all CSCF roles (i.e. S-, P-, I-, E-CSCF, TRCF), the FEE, the BGCF, the DTF (part of the MCF) and the [A/I]BCF. The storage (disks) is directly attached to the cluster elements via the RTM cards.

CSCF, CSCF-LB 2-node cluster (2 * ACPI4-B, each with a CPRT4-A RTM)

CPU: 1 * 2.0 GHz Intel Nehalem (Westmere), 6C/2HT
Main Memory: 48 GB (6x 8 GB DIMMs, DDR-3, 1066 MHz, ECC)
Cache: 12 MB L2
Drives: -- (1x 300 GB HDD via the CPRT4-A RTM card)
Ethernet Ports (ext.): 2 * 1GE on board; not usable if RTM ports are used
Internal Interfaces: 2 * 10GE Fabric Interface (PICMG 3.1 Option 1/9), 2 * 1GE Base Interface
Serial Ports: 1 * RS-232; RJ-45
Other Interfaces: --
AMC bay: 1 * PCIe x4; single wide; not used
USB Ports: 2 * USB 2.0
Mounting: single-wide ATCA 3.0 compliant blade
Power Consumption: 115 W typ. w/o RTM and AMC; 135 W max.
Weight: 3.0 kg w/o RTM and AMC

Fig. 14 ATCA ACPI4-B CPU Blade


CSCF, CSCF-LB 2-node cluster RTM blade (CPRT4-A)

CPU: --
Main Memory: --
Cache: --
Drives: 1 * 300 GB HDD SAS; 2.5", 10k RPM, 4.5 ms
Ethernet Ports (ext.): 2x 1GE SFP
Internal Interfaces: --
Serial Ports: 1 * RS-232; RJ-45
Other Interfaces: 1 * SAS
AMC bay: --
USB Ports: 2 * USB 2.0
Mounting: single-wide ATCA 3.0 compliant RTM
Power Consumption: 20 W typ.; 25 W max. (10 W w/o HDD)
Weight: 760 g w/ HDD; 540 g w/o HDD

Fig. 15 ATCA CPRT4-A RTM Blade


CSCF, CSCF-LB 2-node cluster (2 * ACPI5-A, each with a CPRT5-A RTM)

CPU: 1 * 1.8/2.1 GHz Intel Xeon (Sandy Bridge EP), 8C/2HT (E5-2658)
Main Memory: 128 GB (8x 16 GB DIMMs, DDR-3, 1600 MHz, ECC; max. 128 GB)
Cache: 20 MB L3
Drives: -- (2 * 600 GB HDD via the CPRT5-A RTM card)
Ethernet Ports (ext.): 2 * SFP 1GE, alternatively 2 * RJ-45 1000BT
Internal Interfaces: 2 * 10GE Fabric Interface (PICMG 3.1 Option 1/9), 2 * 1GE Base Interface (PICMG 3.0)
Serial Ports: 1 * RS-232; RJ-45
Other Interfaces: --
AMC bay: 1 * PCIe x4; single wide; not used
USB Ports: 2 * USB 2.0
Mounting: single-wide ATCA 3.0 compliant blade
Power Consumption: 260 W with RTM (determined by simulation)
Weight: 3.95 kg w/o DIMMs, AMC and any RTM

Fig. 16 ATCA ACPI5-A CPU Blade

CSCF, CSCF-LB 2-node cluster RTM blade (CPRT5-A)

CPU: --
Main Memory: --
Cache: --
Drives: 1 * 600 GB HDD SAS; 2.5", 10k RPM, 4.5 ms
Ethernet Ports (ext.): 2x 1GE SFP
Internal Interfaces: --
Serial Ports: --
Other Interfaces: 1 * SAS SFF-8470 for disk cross sharing
AMC bay: --
USB Ports: 2 * USB 2.0
Mounting: single-wide ATCA 3.0 compliant RTM (70 x 322 mm)
Power Consumption: 21 W typ.; 22 W max. (determined by simulation)
Weight: 760 g w/ HDD; 540 g w/o HDD

Fig. 17 ATCA CPRT5-A RTM Blade


2.7 Cabling of CFX-5000 (ATCA)


Compared to the Rack Mounted Systems, the number of physical network interfaces
is generally limited on ATCA. It offers a basic set of interfaces and predefined
communication channels to provide blade and site inter-connection.

Traffic separation is done by VLANs. This is also true for the TSP standard LANs such as the CoreLAN (default) and the B&R LAN. The Hub RTM cards provide physical traffic separation; local interfaces of the CPU blade or its RTM card shall not be used for this purpose.
VLAN tagging according to IEEE 802.1q is done by the CPU blade (i.e. it is not port-based). Mixing tagged and untagged LANs on the same physical wire is not standard compliant (though supported by several switches) and should therefore be avoided.
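
To illustrate what the 802.1q tagging performed by the CPU blade means on the wire, the following Python sketch builds a tagged Ethernet frame by inserting the 4-byte tag (TPID 0x8100 plus priority and VLAN ID) between the source MAC and the EtherType. This is only a conceptual illustration; the MAC addresses are dummies, and VLAN ID 2001 is merely the example OAM VLAN value shown later in Fig. 19, not a fixed product value.

import struct

TPID_8021Q = 0x8100  # Tag Protocol Identifier defined by IEEE 802.1q

def tag_frame(dst_mac: bytes, src_mac: bytes, ethertype: int,
              payload: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert a 4-byte 802.1q tag between the source MAC and the EtherType."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be in the range 1..4094")
    tci = ((priority & 0x7) << 13) | (vlan_id & 0xFFF)  # PCP(3) | DEI(1)=0 | VID(12)
    return dst_mac + src_mac + struct.pack("!HHH", TPID_8021Q, tci, ethertype) + payload

# Example: an IPv4 payload tagged with VLAN 2001 (illustrative OAM VLAN of Fig. 19)
frame = tag_frame(bytes(6), bytes(6), 0x0800, b"...IP packet...", vlan_id=2001)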


2.7.1 Basic Network Configuration


 There is a Base Interface (BI) intended to provide IP based transport in the ATCA
shelf. The BI has a dual star architecture (each CPU-blade is connected with each
Hub-blade) and supports 10/100/1000 BASE-T Ethernet speeds. It is intended for
basic NW services such as network booting, remote monitoring and high level
systems management. The BI interconnects are part of the ATCA backplane,
though, they may be disabled.
In the IMS ATCA solution, CPU boards with two Base Interfaces of 1 GE speed each are used. These interfaces are connected to two physically distinct Hub blades (switches) which are mounted in dedicated slots of the ATCA shelf. The Hub blades also serve the Fabric Interface, but have dedicated Ethernet switches for both the Base Interface and the Fabric Interface (each at line speed). The entire shelf-internal network architecture is redundant to provide high availability and failover.

 There is the Fabric Interface (FI). The FI is the main data transport in the ATCA
shelf. It is intended to handle all user plane and other external traffic. While the
ATCA specification defines several network topologies for the FI (for example: fully
meshed) the FI of the used shelf is based on a dual star architecture. Thus, each
CPU blade is redundantly connected with each Hub blade.
The FI provides two 10GE interfaces per blade. The Hub blades (two per shelf) are
mounted on dedicated slots. They are not interconnected per se but in the IMS
and MSS solution there are dedicated lines to inter-connect hub blades among
each other with (redundant) 10GE links.
The current IMS approach is to provide interface redundancy based on L2 features
(i.e. "Bonding" in case of Linux as was the case with IPMP on Solaris/RMS). The
Bonding group consists of one active and one passive interface. To work properly
there is a need to have some kind of interconnection between the redundant
switches the IMS blades are connected to. The cluster interconnect is carried over
the Hub blades, too.
A system can consist of several shelves distributed over more than a single rack in
which the shelves (Hubs) are interconnected beyond rack boundaries.
The FI is connected to the operator site using 10GE links. Therefore, a respective
10GE infrastructure (SX ports) to be provided by the operator network is needed.

 There may be additional physical interfaces on blades or on rear transition modules (RTMs are dedicatedly assigned 1:1 to blades). These interfaces are not directly associated with the ATCA backplane. They are not interconnected with the Hub blades of the shelf but provide an additional way of connectivity.


2.7.2 IMS Network Architecture regarding the use of ATCA:


 The two ATCA Base Interfaces (BI) (1Gb) are physically separated, i.e. they are
not interconnected among each other and they are not interconnected to some
external network. For IMS this means that the Admin/Install Server (TIAMS) can
perform local maintenance tasks only for the rack it is actually mounted on. From
the point of view of performance a single Admin/Install server pair (two blades)
would be able to cope with an entire rack full of IMS blades (e.g. 40 blades).
 The Admin LAN, which is used by the IMS for local admin tasks (e.g. installations, emergency handling), is carried over the BI. In addition, the Admin LAN needs to be untagged, which is required for the Linux kickstart feature. Because the BIs are physically separated and interface redundancy is not provided, failover would need to be done at the application layer.
 The two ATCA Fabric Interface (FI) networks (10Gb) are physically interconnected; they handle all other traffic (application, B&R, OAM, SigTran, …). The shelves of a system are interconnected to create a single system. At the moment, this interconnection is a kind of chaining of shelves and can comprise more than 3 shelves (e.g. it can span two racks). From the IMS point of view the (interconnected) FIs then provide a single, common broadcast domain (LAN). The Fabric Interface is also connected with the external site network. Basically, only a single pair of physical ports (10GE) is envisioned for IMS, which means there is no physical traffic separation by default. While the shelves are interconnected, only one dedicated Hub blade pair is envisioned to provide site connectivity; the rest of the Hub blades are used for shelf interconnections only.
 With the exception of the Admin LAN, all IMS traffic is carried over tagged VLANs. These VLANs are carried over and provided at the Fabric Interface. There is no untagged IMS traffic carried over the FI; even the TSP Default/Core LAN (OAM) and the Cluster Interconnect (CI) are tagged. This also implies that the FIs are used for all external traffic.
 Independent of BI and FI, the L2 LAN is envisioned to be terminated at rack boundaries. Site connectivity is achieved through L3 links. The BI is terminated at system boundaries and is not externally visible.


2.7.3 Connectivity
Initial State:
As mentioned above, the communication within the shelf is physically handled via the backplane. The startup of the blades in a shelf results in the active state of the “left-hand-side” Hub blade (the one in slot 8) and the passive state of the “right-hand-side” Hub blade (the one in slot 9). HUB blade 8 now handles the complete VLAN traffic of all CPU blades of that shelf. In the initial state the CPU blades always send out the VLAN traffic via the link which is connected to the HUB blade in slot 8 (this is now the active link).

The two Hub blades share the same IP addresses for each VLAN (e. g. Mw IF, Gm
IF, OAM LAN, …). The CPU blade (e. g. P-CSCF) sends out an ARP-broadcast for
the configured IP-address (has to be configured in the P-CSCF routing tables). The
active HUB (that one in slot 8) answers the ARP with a virtual MAC address (which is
also shared by the two HUBs). For the CPU-blade (e. g. P-CSCF), this is the
indicator to send all those VLAN messages to that MAC-address. The detection of
the active HUB blade from outside (e. g. Default Gateway) follows the same
procedure, i.e. traffic from the outside is also handled by the active HUB only.

Fault scenarios:
 HUB blade in slot 8 does not have a link to “the rest of the world” (external link is
down):
HUB blade 8 gets a lower priority and HUB blade 9 turns to the active state and now owns the virtual MAC address. The VLAN traffic is sent from the CPU blade via the active link to the Hub blade in slot 8; via the cross link the traffic is forwarded to the Hub blade in slot 9 and sent out there. The answer is transmitted the same way back to the CPU blade.
 HUB blade in slot 8 is completely down: the virtual MAC address is now held by the HUB blade in slot 9. Via the cross link the HUB blade in slot 9 recognizes that the partner is down and that it has to handle the complete VLAN traffic (it becomes active).
The CPU blade does not get any answer or acknowledgement for its messages and sends the ARP via its second (formerly passive) FI; the destination IP address remains the same. The HUB blade in slot 9 answers the ARP with the virtual MAC address, and the VLAN traffic is now sent via the second FI to the second HUB blade. From the HUB blade in slot 9 the traffic is sent to the corresponding destination. The answer is transmitted the same way back to the CPU blade.
 The initially active FI at the CPU blade is down:
The CPU blade switches to the formerly passive FI and sends the ARP to the HUB blade in slot 9, which now answers with the virtual MAC address. The complete VLAN traffic is sent from the CPU blade to the HUB blade in slot 9; via the cross link it is forwarded to the HUB blade in slot 8 and from there it is sent to the corresponding destination. The answer is transmitted the same way back to the CPU blade.
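
The following Python sketch models the behaviour described in the fault scenarios above: both HUB blades share one virtual MAC address, only the currently active HUB answers ARP requests for the shared IP addresses, and losing the external link lowers a HUB's priority so that the partner takes over. The class, the priority values and the MAC address are illustrative assumptions, not taken from the product software.

VIRTUAL_MAC = "02:00:5e:00:01:01"   # shared virtual MAC address (hypothetical value)

class HubBlade:
    def __init__(self, slot, priority=100):
        self.slot = slot
        self.priority = priority     # illustrative base priority
        self.uplink_ok = True        # link to "the rest of the world"
        self.alive = True

    def effective_priority(self):
        if not self.alive:
            return -1
        # losing the external link lowers the priority (first scenario above)
        return self.priority if self.uplink_ok else self.priority - 50

def active_hub(hub8, hub9):
    """The HUB with the higher effective priority owns the virtual MAC."""
    return hub8 if hub8.effective_priority() >= hub9.effective_priority() else hub9

def answer_arp(hub, hub8, hub9):
    """Only the active HUB replies to an ARP request for the shared IP."""
    return VIRTUAL_MAC if hub is active_hub(hub8, hub9) else None

hub8, hub9 = HubBlade(slot=8), HubBlade(slot=9)
assert active_hub(hub8, hub9) is hub8       # initial state: slot 8 is active
hub8.uplink_ok = False                      # external link of HUB 8 goes down
assert active_hub(hub8, hub9) is hub9       # HUB 9 takes over the virtual MAC
hub8.uplink_ok, hub8.alive = True, False    # HUB 8 completely down
assert answer_arp(hub9, hub8, hub9) == VIRTUAL_MAC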


[Figure: a CPU blade with its RTM connected to HUB blade 1 (active) and HUB blade 2 (passive) via the backplane zones. The 2 FI interfaces (10 Gbit) of the blade realize a bonding group (active/passive, switched in case of failover); the 2 BI interfaces (1 Gbit) also realize a bonding group (active/passive). The BI interface is used for the Admin LAN; the FI interface carries all other traffic, realized via tagged VLANs. On the HUB blades the SFP+ FI ports are used for IMS site traffic (1/1), shelf interconnect (1/2) and the cross link (1/3, 1/4); the BI SFP ports 1/1 to 1/4 are used for shelf interconnect. In the projects the use of the SFP+ ports can differ from the assignment in this picture.]

Fig. 18 Connectivity (1)


Bonding groups:
As we have seen, the CSCF is released as a cluster configuration, from V9.2 onwards as a loosely coupled cluster (single node pair). The HSS-FE is released in a single node configuration, but 2 HSS-FEs are recommended in a commercial configuration. One CFX-5000 NE consists of two ATCA blades for redundancy reasons, not to increase the number of network entities.
For LAN redundancy each blade provides two redundant Ethernet ports (Fabric Interfaces) which form a bonding group (named bond0). All LANs (except the TSP Admin LAN) are realized as virtual LANs via this bonding group. One of these ETH ports is active, the other one is standby. An IP address is allocated to each of the two ETH ports; these port-oriented IP addresses can be used, for example, for the supervision of the physical path (e.g. with pings).
In addition, a physical address is assigned to the redundant pair of ETH ports; this is called the bonding IP address. This address can be used to address a physical CPU blade, i.e. the blade can be reached via either of the two redundant ETH ports.

To guarantee the service in case the active FI or the corresponding LAN fails, both ports are in a so-called bonding group with at least one common so-called virtual IP address. This virtual IP address is allocated to the active ETH port of the bonding group. In case this port fails, the IP address floats to the other port and the cluster element sends out a gratuitous ARP to inform HUB blade 2 about the new, changed MAC address.

In case one CPU blade fails completely, a switchover, i.e. a floating of the IP addresses, starts. Also in this case a gratuitous ARP is sent out to inform the partner network elements about the MAC address change.
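
A minimal Python model of the bonding behaviour just described, assuming hypothetical interface names and IP addresses: the virtual IP addresses are hosted on the active port of bond0, and when that port fails the addresses float to the standby port and a gratuitous ARP announces the change to the HUB blades. This is a conceptual sketch, not the Linux bonding driver.

class BondingGroup:
    """Illustrative model of bond0 on one cluster element (not product code)."""
    def __init__(self, port_ips, bonding_ip, virtual_ips):
        self.port_ips = port_ips        # per-port IPs, e.g. for path supervision (pings)
        self.bonding_ip = bonding_ip    # addresses the physical CPU blade via either port
        self.virtual_ips = virtual_ips  # service IPs, hosted on the active port only
        self.active_port = "eth0"       # FI0 active, FI1 standby in the initial state

    def port_failed(self, port):
        """Fail over to the other port and announce the change."""
        if port == self.active_port:
            self.active_port = "eth1" if port == "eth0" else "eth0"
            for ip in self.virtual_ips:
                self.send_gratuitous_arp(ip)

    def send_gratuitous_arp(self, ip):
        # In the real system the gratuitous ARP makes the HUB blades relearn the
        # MAC address behind the (unchanged) virtual IP address.
        print(f"gratuitous ARP: {ip} now reachable via {self.active_port}")

# Hypothetical addresses, loosely following the example values of Fig. 19
bond0 = BondingGroup(port_ips={"eth0": "10.12.50.162", "eth1": "10.12.50.166"},
                     bonding_ip="10.12.50.163", virtual_ips=["10.12.50.170"])
bond0.port_failed("eth0")   # the active FI fails -> the virtual IP floats to eth1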


[Figure: IP redundancy with ATCA. On each cluster element the ports eth0 (FI0, active) and eth1 (FI1, standby) form bond0 (bonding group) towards HUB blade 1 and HUB blade 2. Per-port IP addresses are assigned (e.g. 10.12.50.162/163 on cluster element 1 and 10.12.50.166/167 on cluster element 2), in addition to a binding IP address (also called physical IP) per blade and a virtual IP address per service. The VLANs, e.g. OAM VLAN 2001 and IMS1 VLAN 2050, are carried over the bonding group.]

Fig. 19 Connectivity (2)

WARNING
V9.1: In case both redundant ports of a bonding group fail, no switchover to the redundant CPU blade is foreseen. In such an unlikely failure situation the Quorum Server has to decide which blade is the “Solid Master“ for that kind of traffic; this will be the blade with the faulty FIs. As the Quorum Server reaches this blade via the BI, it decides that the second blade gets a shutdown. The CFX-5000 is then out of service.
V9.2 onwards (LCC): In case both redundant ports of a bonding group fail, the Quorum Server assigns complete control to the second blade.


IP Config: FI Cabling (IMS Stand-alone V9.1 onwards)

[Figure: proposed FI cabling for one, two or three shelves with AHUB3-B Hub blades (front SFP+ ports 1/1 to 1/4, rear RTM ports 3/1 to 3/4). The Hubs are internally interconnected via a 10GE connection; a second FI interconnect (at port 1/4) is required to prevent a single point of failure. IMS first uses the front ports for the site connection; if this is not sufficient, RTM ports are used as well. The first shelf is always equipped with an RTM card, and only the first shelf is used for site connectivity (i.e. external ports of the 2nd and 3rd shelf are not used). Depending on the configuration, a minimum of 2 to 3 redundant external links can be provided without RTM cards and a maximum of 6 to 7 redundant external links with additional RTM cards. Port 1/2 is reserved for attaching additional shelves later on without any re-cabling. The links are L3 (10GE) towards the site and L2/ISL (10GE) between the shelves.]

Fig. 20 Proposed FI cabling


IP Config: BI Cabling (IMS Stand-alone V9.1 onwards)

[Figure: proposed BI cabling for one, two or three shelves (front SFP+ ports 1/1 to 1/4, rear SFP ports 3/1 and 3/2). No BI cabling is necessary in case of a single shelf. The BI hubs are not interconnected among each other, and the internal Zone-2 1GE interconnect is not used either. No external BI site interconnections are envisioned and no RTM BI ports are planned to be used. Two physical ports per hub are used to interconnect the shelves redundantly (L2 links).]

Fig. 21 Proposed BI cabling


2.8 TIAMS (TSP Installation Administration and Maintenance Server)
The IMS system requires at least one TSP Installation and Administration Server
(TIAMS) that is used for the purpose of local administration (LEMAF), first time
installation and in cases of emergency. The TIAMS is commonly used for HSS-
FE/DRA and CSCF (all roles). It is connected with the TSP AdminLAN, CoreLAN and
BackupLAN. The CoreLAN and BackupLAN connections are needed to cope with the
Software Upgrade Framework (SUF) and the Fast System Recovery (FSR) feature of
the TSP. CoreLAN and BackupLAN are tagged VLANs and carried over FI while the
AdminLAN is carried over BI (and untagged).

The AdminLAN is not connected to the operator's site. Therefore, it is not possible to
share a TIAMS over sites. Also, the Admin LAN does not belong to the operator but
to the IMS product / NSN Service. Therefore it is not designated to route its traffic
through routers controlled by the operator. To be able to share Admin/Install servers
among NEs that are placed at different subnets (Integration Areas) additional
measures are to be taken.

In commercial configurations the TIAMS consists of two ATCA blades. For test
systems a single blade version is also possible.
A TIAMS of two blades can cope with up to about 40 blades (e.g. a rack full of IMS
computing blades). The Admin/Install-Server is equipped with one disk of 300GB size
only.

The RTM cards of the TIAMS containing the disks are interconnected with each other using dedicated RTM SFP SAS ports. This is to assist disk mirroring (shared-all configuration). The TIAMS works in active / cold standby mode with the cold standby switched on; the cold standby, however, runs in InitRAMdisk mode, in which the disks are not accessed.


2.9 CFX Load Balancer (V9.1 onwards)


Before IMS V9.1 all CFX-5000 systems which were integrated into a customer network were visible as separate network elements. The load balancing concept was based on the DNS server (priority and weight based) and on internal mechanisms provided by the CFX-5000 (dispatcher) to achieve an equal load distribution. With the introduction of ATCA hardware the performance per blade decreased while the density per rack increased.
The other issue customers want to have addressed is the number of IP addresses visible in their network and the number of IP addresses visible to the clients. Most of the IP addresses today have to be routable addresses. For network extensions, e.g. adding a new CFX-5000 system, the operator had to plan and organize the IP infrastructure for this new system, which added internal effort and costs. There are also customers which do not want to allow the clients to select the CFX-5000 server via DNS.


Realization
Now with IMS V9.1 a Load Balancer for the Gm Interface is available. It runs on a
pair of ATCA blades in active/standby-configuration (CFX-5000 blades). One single
Load Balancer pair serves the whole rack (19 CFX-5000 clusters).

The Gm interface is now available at a virtual P-CSCF, which consists of the physical Load Balancer and one or several real P-CSCF clusters. This means the Load Balancer's Gm interface and the real P-CSCF servers' Gm interfaces share the same IP address. However, the Gm IP address at the real P-CSCF servers is “hidden” to avoid address resolution conflicts in the LAN (no ARP answer for this address). This is realized via MAC address translation and by configuring an ARP-inactive dummy interface.
The Load Balancer acts as a MAC address rewriting Ethernet switch which forwards the IP packets fully transparently to the real P-CSCF server.

The UE sees the virtual P-CSCF as a monolith, with Gm_IP_ext being carried by the Load Balancer. The Load Balancer sees monolithic real servers, each with an HA_IP address, whose MAC address can change.
The quorum server and the TSP cluster see single nodes, each one with its own Nd_IP address. In each single node an ARP-inactive alias interface (called “dummy:0”) is configured with Gm_IP_ext. There, an instance of the IP DP is listening. On the active node the traffic comes into the blade; on the backup node no traffic is present.

The UE sends <src: UE_IP, dest: Gm_IP_ext, payload> to the Load Balancer. From its configuration the LB knows the real servers’ HA_IP addresses, and from ARP the corresponding current MAC addresses. The LB selects a real P-CSCF server by some criterion and forwards the packet to it, unchanged on IP level. The IP stack delivers the IP packet to the IP_DP which listens on “dummy:0”, configured with Gm_IP_ext. Beyond this point, the real P-CSCF server behaves in the same way as in the other CSCF roles.
All P-CSCF initiated messages (requests and responses) go directly from the node to
the UE (Direct Server Return mode). The node uses Gm_IP_ext in messages to the
UE.
Note: The LB is “dual legged”, i.e. there is one physical interface for the Gm-
reference-point (for UE traffic) and another physical interface for the real-server-
traffic.
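
The forwarding step described above can be sketched as follows in Python: the LB leaves the IP packet untouched and only rewrites the destination MAC address to the current MAC of the selected real P-CSCF (learned via ARP for its HA_IP). The frame structure is a simplified placeholder with hypothetical field names, not the real packet format or LB implementation.

from dataclasses import dataclass, replace

@dataclass
class Frame:
    dst_mac: str
    src_mac: str
    ip_src: str     # e.g. UE_IP
    ip_dst: str     # Gm_IP_ext, shared by the LB and the real servers
    payload: bytes

def forward_to_real_server(frame: Frame, lb_mac: str, real_server_mac: str) -> Frame:
    """Direct Server Return style forwarding: only the L2 addresses change."""
    return replace(frame, dst_mac=real_server_mac, src_mac=lb_mac)

# The real P-CSCF answers the UE directly (DSR), using Gm_IP_ext as source,
# so the response does not have to pass through the Load Balancer again.
incoming = Frame("lb-mac", "router-mac", "UE_IP", "Gm_IP_ext", b"REGISTER ...")
outgoing = forward_to_real_server(incoming, lb_mac="lb-mac", real_server_mac="ha-ip-1-mac")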


[Figure: CFX Load Balancer. Signalling traffic from the UE (UE_IP) reaches the virtual P-CSCF at Gm IP ext on the active Load Balancer (a standby LB exists). The LB forwards the traffic to a real P-CSCF (HA_IP_1, HA_IP_2, ...). Real P-CSCF 1 consists of the nodes 1a and 1b (Nd_1a_IP, Nd_1b_IP), each with bond0 and an IP DP instance listening on the ARP-inactive dummy:0 interface configured with Gm IP ext; one node is active, the other one is backup, and a redundant IPSec SA protects the Gm traffic. The Quorum Server (QS_IP) determines the active node.]

Fig. 22 CFX Load Balancer


Dispatching / Delivering the messages to the real P-CSCFs


The Load Balancer can in principle send free messages (initial register requests) to any real P-CSCF server; any other message (“constrained message”) must go to the server which holds the existing registration data. This is called registration-state-full dispatching.
There is a freely configurable and persistent dispatching table (called “session table”) containing key-value pairs. In our setting a suitable key is the source IP address, i.e. the unique and unchangeable UE address. The value indicates the address of the P-CSCF holding the registration info.
Whenever a packet arrives, the LB looks up its table. If the packet’s source IP address matches an existing table entry, the LB forwards the packet to the corresponding real server. Otherwise the packet is (the first packet of) a free request, and the LB selects a real server according to some dispatching policy, generates a new entry in its table and then forwards the packet to the selected server.
An LB table entry has a configurable lifetime. The LB deletes an entry if no matching packet arrives during the entry’s lifetime; otherwise a matching packet renews the lifetime. By configuring the lifetime to just a little more than the IMS re-registration timer it is possible to implement registration-state-full dispatching.

Load Balancing Method / Dispatching policy:


A constraint for dispatching free messages is to keep the P-CSCFs in balance. Each real P-CSCF has to provide its load info to the LB. On each P-CSCF a load score is calculated (an integer between 0 and 100). The LB periodically (default value 5 sec) sends a protocol request to the P-CSCF, which sends back its current load score. For free requests the LB selects the real P-CSCF with the lowest load level. For the LB, a free request is a request coming from an IP address for which there is not yet an entry in the LB's session table.
This method is called rndagent. The rndagent method is the recommended and default load balancing method. It is assigned during installation of the LB.
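
The following Python sketch combines the two mechanisms just described: constrained messages follow the session table entry keyed by the UE source IP address, free requests go to the real P-CSCF with the lowest load score, and every matching packet renews the entry lifetime. The data model, the server names and the lifetime value are simplified assumptions for illustration, not the real LB implementation.

import time

class LbDispatcher:
    def __init__(self, real_servers, entry_lifetime_s):
        # real_servers: {"pcscf1": load_score, ...}; scores are polled periodically
        self.real_servers = real_servers
        # lifetime is configured a little above the IMS re-registration timer
        self.entry_lifetime_s = entry_lifetime_s
        self.session_table = {}           # source IP -> (real server, expiry time)

    def update_load_score(self, server, score):
        """Store the 0..100 load score reported by a real P-CSCF (rndagent)."""
        self.real_servers[server] = score

    def dispatch(self, src_ip):
        now = time.monotonic()
        entry = self.session_table.get(src_ip)
        if entry and entry[1] > now:
            server = entry[0]             # constrained message: keep the same server
        else:
            # free request: pick the real P-CSCF with the lowest load level
            server = min(self.real_servers, key=self.real_servers.get)
        self.session_table[src_ip] = (server, now + self.entry_lifetime_s)
        return server

lb = LbDispatcher({"pcscf1": 40, "pcscf2": 10}, entry_lifetime_s=650)
assert lb.dispatch("198.51.100.7") == "pcscf2"   # free request -> lowest load
lb.update_load_score("pcscf2", 90)
assert lb.dispatch("198.51.100.7") == "pcscf2"   # constrained -> same server again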


Fault scenarios:

 Interface failure on the active Fabric Interface (Linux Bonding) on the P-CSCF:
The driver provides a “virtual MAC address”. A bonding failover from one interface
to another one stays invisible for the LB.
 Real server internal failover:
In case of a TCP-Cluster or SNP failover or failback (planned or unplanned) the HA_IP address gets activated on the other node. The LB learns the MAC address change through a gratuitous ARP message and henceforth sends messages to the new MAC address. Here again the IP stack delivers the IP packet to the IP_DP, which listens on “dummy:0”, configured with Gm_IP_ext. The normal P-CSCF handling takes place; this is possible because the registration data has been replicated.


2.10 Loosely Coupled Cluster - Single Node Pair SNP


From IMS 9.2 onwards the loosely coupled cluster is introduced. This concept allows combining several CPU blades into a cluster. As long as nothing else is announced, the number of members in an LCC is set to 2; this is called a Single Node Pair (SNP).

This architecture:
 minimizes the strong dependencies between the nodes of a cluster. The number of software layers which have to communicate among each other is reduced to two: a software component called the Inter-node Communication Manager, which exchanges data between the application SW layers (e.g. CSCF), and another software component called CLM/CIPA, which communicates between all other underlying SW layers (CAF, TSP, Database).
 does not require an external storage which results in a reduction of HW, power
consumption and footprint.
 reduces the failover and failback time.


[Figure: CFX Loosely Coupled Cluster (LCC). Node 1 and Node 2 each run the same software stack: Application SW (CSCF), Inter-node Communication Manager, CAF, CLM/CIPA, TSP, Database and Operating System. The Inter-node Communication Managers exchange data between the application SW layers of the nodes, while CLM/CIPA handles the communication between the underlying SW layers.
ICM: Inter-node Communication Manager; CLM: Cluster Membership; CIPA: Cluster-less IP Alias Mechanism]

Fig. 23 CFX Loosely Coupled Cluster (LCC)


Architectural Concept
The Inter-node Communications Manager (ICM) SW can be configured in three
modes:
 LB_Primary: primary owner of the external IP addresses. Runs the dispatcher processes and the role processes.
 LB_Backup: backup owner of the IP addresses. Runs the dispatcher processes and the role processes. From the ICM point of view Primary and Backup are equal (whichever comes up first takes over the ICM controller functionality and hosts the IP addresses); Primary and Backup are mandatory.
 LB_None: the node does not host any aggregated IPs and only runs the role processes. In a 2-node solution LB_None is not required.
The mode of the ICM in the different nodes is configured by the following
assignments in the icm.cfg file: IcmLbMode.1=p (=primary), IcmLbMode.2=b
(=backup).
The Primary and the Backup node share the same external IP addresses for the
different types of traffic (Gm IF, Cx IF, Mw IF, …). In the normal situation the external
IP addresses are visible and active only on Node 1 (that one, which came up first).
The Dispatcher Processes are active (listen) also on Node 1 and distribute the
incoming requests among all Role Processes (handling processes) on all existing
nodes. The Role Processes are identified by their Pseudo Process ID (PPI), visible in
the lskpmc value (e.g. PGW-WebGui).
The Dispatcher Processes use a Service Routing Table to assign or select the
handling Role Process (local or remote) for the actual request. If the assigned Role
Process is executed on a remote node (e. g. Node 2) the messages are routed
through the Inter-node Communication Manager to the destination node. Only the
Node 1 is allowed to update the Service Routing Table, all SRT info is replicated to
the other members of the LCC. The consistency of the Service Routing Table (SRT)
is permanently observed across all nodes in case of inconsistencies the actual data
is sent to the inconsistent node. The created or modified context data of the sessions
is replicated from the handling Role Process (via ICM) to all other nodes.
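As an illustration only (not the productive ICM code; the PPIs, node names and processes are those of the simplified Service Routing Table shown in Fig. 24), the dispatching logic can be sketched in a few lines of Python:

    # Minimal sketch of SRT-based dispatching (invented example, not the real ICM code).
    SRT = {
        # PPI: (active node, backup node, handling role process)
        "01": ("N1", "N2", "PCSCF01"),
        "03": ("N2", "N1", "ICSCF09"),
        "n":  ("N2", "N1", "SCSCF0D"),
    }

    LOCAL_NODE = "N1"

    def dispatch(ppi, request):
        active, backup, role_process = SRT[ppi]
        if active == LOCAL_NODE:
            return f"handle {request} locally in {role_process}"
        # otherwise the message is routed through the Inter-node Communication Manager
        return f"forward {request} via ICM to {role_process} on {active}"

    print(dispatch("01", "REGISTER"))   # handled locally on N1
    print(dispatch("03", "INVITE"))     # forwarded via ICM to node N2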

Context Replication:
The session-related messages with the session contexts are exchanged between all nodes of the LCC and the Quorum Server (part of the TIAMS). The data is stored in a shared memory area present in each node. When the LCC realizes a P-CSCF (offering the Gm IF) and the UE uses IMS AKA/IPSec authentication, the active node keeps the Security Association data and sequence numbers in its own Security Association Database, which is located in the kernel. To keep such a UE registered even after a switchover, the Security Association data and the sequence numbers are also replicated to the other node(s) of the LCC; of course, this replication is done into the kernel of the additional node(s).


Fig. 24 LCC Architecture: the external IP addresses (VLANs, e.g. Cx, Mw, ISC) are in use on Node 1 (LB_Primary, LB-status active because it came up first) and not in use on Node 2 (LB_Backup, passive) and Node 3 (LB_None, passive; not yet released). The dispatcher processes and the role processes P01 ... Pn run on every node and are connected via the Inter-node Communication Manager.

Service Routing Table (simplified):
  Register   PPI 01   active node N1   backup node N2   handling process PCSCF01
  Invite     PPI 03   active node N2   backup node N1   handling process ICSCF09
  Subscribe  PPI n    active node N2   backup node N1   handling process SCSCF0D

Failover Scenarios:
Outage of Node 1 (Primary):
Node 1, as Primary node, is executing the dispatchers in the active state. When Node 1 goes down, Node 2 recognizes this (missing heartbeats). Node 2 takes over the responsibility to update the Service Routing Table and initiates the plumbing of the external IP addresses on its own node. It activates its own dispatcher processes by changing its state to active. The ICM in Node 2 now modifies the Service Routing Table: for all entries where Node 1 is backup, the backup marking is removed (in a 2-node SNP no backup is present for the duration of the outage); for all entries where Node 1 is active, the backup entry is shifted to the active column and the backup entry is removed.

Fallback of Node 1 (former Primary):

When Node 1 is up again, the ICM SW on Node 2 recognizes it via the heartbeats. The ICM on Node 2 modifies the Service Routing Table and enters Node 1 as backup node for all PPIs. Each change in the routing table is pushed to all other nodes as well, and the session context data is synchronized. Once the synchronization is done, more than 50% of the PPIs are assigned to Node 1 as active node in order to reach a proper load-sharing situation. The ownership of the external IP addresses is not given back, which means that even after a failback Node 2 remains the LB_Primary node.

Outage of Node 2 (Backup):

The ICM SW in Node 1 recognizes the failure of Node 2 because of missing heartbeats. The ICM in Node 1 now modifies the Service Routing Table: for all entries where Node 2 is backup, the backup marking is removed (in a 2-node SNP no backup is present for the duration of the outage); for all entries where Node 2 is active, the backup entry is shifted to the active column and the backup entry is removed.

Fallback of Node 2:
When Node 2 is up again, the ICM SW on Node 1 recognizes it via the heartbeats. The ICM on Node 1 modifies the Service Routing Table and enters Node 2 as backup node for all PPIs. Each change in the routing table is pushed to all other nodes as well, and the session context data is synchronized. Once the synchronization is done, more than 50% of the PPIs are assigned to Node 2 as active node in order to reach a proper load-sharing situation.
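The SRT handling during outage and fallback described above can be condensed into a small sketch (Python, invented data; the re-assignment of more than 50% of the PPIs to the returning node is not modelled):

    # Sketch of the SRT update on node outage / fallback (invented data structure).
    SRT = {
        "01": {"active": "N1", "backup": "N2"},
        "03": {"active": "N2", "backup": "N1"},
    }

    def node_outage(srt, failed_node):
        for entry in srt.values():
            if entry["active"] == failed_node:
                entry["active"] = entry["backup"]   # backup becomes active
                entry["backup"] = None              # no backup during the outage
            elif entry["backup"] == failed_node:
                entry["backup"] = None              # remove the backup marking

    def node_fallback(srt, returned_node):
        for entry in srt.values():
            if entry["backup"] is None and entry["active"] != returned_node:
                entry["backup"] = returned_node     # returning node becomes backup again

    node_outage(SRT, "N1")
    print(SRT)   # N2 is now active everywhere, no backup entries
    node_fallback(SRT, "N1")
    print(SRT)   # N1 re-entered as backup (load re-balancing not shown)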


OAM concept
The NetAct sees the LCC nodes as one common network element. The Fault Management and Performance Management server applications of the NetAct are connected directly to the FM and PM agents of each node; therefore the physical IP addresses of the nodes have to be known by the NetAct.

The Configuration Management server is connected directly to the CM agent on Node 1 (this is the node which has the new Class B installation parameter Common.Cmsync.NetAct.Provision set to YES). The configuration data is replicated to all nodes using the Inter-node Communication Manager (same method as described for the IMS traffic). In case of a failure on Node 1, the CM interface is not switched over to Node 2; CM is not possible while Node 1 is down.

Fig. 25 OAM connectivity of the LCC: the NetAct CM server is connected to the CM agent on Node 1 (LB_Primary) only, while the FM and PM servers are connected to the FM/PM agents on both Node 1 and Node 2 (LB_Backup) via the OAM IP addresses of the individual nodes; the CM data is replicated to Node 2 via the Inter-node Communication Manager.


Fig. 26 Provision Parameter visible in LEMAF (Class B parameter)


3 IMS on HP Hardware (IMS V9.2 MP1)


From IMS V9.2 MP1 onwards the HP c7000 Blade System is introduced. It can be considered as an evolution of rack-mount systems towards an IT blade architecture and has characteristics of both RMS systems and ATCA blade systems.

The c7000 system comes in two flavors: carrier-grade (with -48 V DC power) and IT (with AC power). The principal assembly of the c7000 system is shown in the figure below.
The CFX-5000 (CSCF, CSCF-LB and TIAMS) V9.2 MP1 and the CMS-8200 (HSS-FE) are realized on HP ProLiant BL460c Gen8 server blades with 2 CPUs (Intel Xeon Sandy Bridge E5-2658, half height). The operating system in use is RHEL 6.3.
For the interconnection towards the network, the HP Flex 10/10D interconnect modules are used.
In the default configuration the external (rack) switch HP 5820AF-24XG is deployed.


Fig. 27 HP deployment:
 Rack: first a Wrightline rack 24, later an HP 10000 series G3 rack (HP 642), with DC fuse panel, L2/L3 switches and AuC (HSM box).
 Enclosure (front side): HP Blade System c7000 with up to 16 HP ProLiant BL460c Gen8 HH (half height) server blades (the IMS components).
 Switch: HP 5820AF-24XG.
 Enclosure (rear side): HP Blade System c7000 with active cool fans, HP Flex 10/10D interconnect modules (single wide) and the OA (onboard administrator) with KVM.


3.1 General Architecture

The c-Class enclosure is a single physical enclosure that hosts one or more real physical servers, called server blades. In our IMS we use the HP ProLiant BL460c Gen8. This blade is equipped with two internal disks running SolidDB for platform data and configuration data. The enclosure provides external network connectivity for the internal server blades using a hardware implementation, the so-called Virtual Connect modules (VC modules), which act as mediator between the real servers and the network. In our IMS we use the HP VC Flex 10/10D Interconnect Module. To the network, a set of server blades appears to be a single large host. Based on this view, server blades behind VC (Virtual Connect) modules can be changed (e.g. added or removed) without impacting the network. This has advantages with respect to maintenance activities such as upgrades/updates. It is also possible to move entire applications from one physical location (e.g. an enclosure) to another; this is a kind of application virtualization.

The interconnection of VC modules to the (next-hop) switches located in the same rack is redundant.


3.2 Base Rack Assembly in IMS9.2 MP1 (with HP)

The picture below shows the HP shelf assembly in IMS V9.2 MP1.
The base rack assembly in IMS 9.2 MP1 has the TIAMS located in slots #1 and #2, the CFX-LB located in slots #3 and #4, the CSCF blades located from slot #5 onwards in increasing order and the HSS-FEs located in slots #16 and #15 (decreasing order). This is equivalent to ATCA and suitable for configurations with isolated shelves. The CFX-LB always works within shelf boundaries.
At the moment, IMS 9.2 MP1 does not include iNUM, PCS, NT-HLR or other (non-IMS "core") NEs. By concept, however, there is no restriction to also cover those elements.

Fig. 28 Enclosure configuration: TIAMS blades at bays 1 and 2, CFX-LB blades at bays 3 and 4, CFX-5000 blades at bays 5 to 8 (left to right), CMS-8200 (HSS-FE) blades at bays 16 and 15 (right to left), blades of any type at bays 9 to 14 (left to right).
Minimal commercial configuration: 2 TIAMS blades, 2 CFX-5000 blades and 2 CMS-8200 (HSS-FE) blades, or 2 TIAMS blades, 2 CFX-LB blades, 2 CFX-5000 blades and 2 CMS-8200 (HSS-FE) blades.


3.3 Reference Configuration of IMS V9.2 MP1


Reference Configuration for about 3,000,000 active subscribers (active in IMS-core)

Network Element / # Devices / Base HW Platform:
 CSCF: 5*2 / BL460c HP ProLiant server blade, 2 * 2.1 GHz/8C/2HT Intel Xeon Sandy Bridge EP
 CFX-LB: 2 / BL460c HP ProLiant server blade, 2 * 2.1 GHz/8C/2HT Intel Xeon Sandy Bridge EP
 HSS-FE: 2 / BL460c HP ProLiant server blade, 2 * 2.1 GHz/8C/2HT Intel Xeon Sandy Bridge EP
 TIAMS: 2 / BL460c HP ProLiant server blade, 2 * 2.1 GHz/8C/2HT Intel Xeon Sandy Bridge EP
 Interconnect Modules: 2 / HP Virtual Connect Flex-10/10D Module
 Rack Switch: 2 / HP 5820AF-24XG 24 SFP+ port switch
 One-NDS DB: 3*x / HP c7000 Blade System with HP ProLiant BL460c Gen8
 Acme Packet BG: 2 / Acme Packet BG 4500
 NetAct (NAC): 2+4 / 1 * HP ProLiant DL360 (2 application servers and 2 database servers), or 4+2 / 1 * HP ProLiant BL460c with c7000 enclosure (4 AS + 2 DS)
 iNUM v9: 3 / Sun Netra X4270, 2 * 2.13 GHz/4C/2HT Intel Xeon

Fig. 29 Reference Configuration HW related


Network Element / Base Architecture / Base HW Platform / Basic NE/CPU Configuration / Capacity / Performance (approximately):
 HSS-FE (CMS-8200 v9.x): single node (front end) / HP ProLiant BL460c / 2 * 2 2.1 GHz/8C/2HT/16T E5-2658 Intel Xeon Sandy Bridge EP / 6,000,000 active subscribers (just the HSS subscribers)
 One-NDS (v8.0 EP6 / v9.0): distributed / SN X4270 or HP ProLiant BL460c (see One-NDS HW description) / 3,600,000 active subscribers 1) or >10,000,000 active subscribers
 CSCF (CFX-5000 v9.2 MP1): single node LCC / HP ProLiant BL460c / 2 * 2 2.1 GHz/8C/2HT/16T E5-2658 Intel Xeon Sandy Bridge EP / 365,000 active subscribers 2)
 CFX-5000 v9.2 MP1 LB: 2-node cluster / HP ProLiant BL460c / 2 * 2 2.1 GHz/8C/2HT/16T E5-2658 Intel Xeon Sandy Bridge EP / up to 6 CSCFs can be served 3)
 DRA (CMS-8200 v9.x): 2-server HA / ACPI4-B ATCA blade / 2 x 1 2.0 GHz/6C/2HT Intel Westmere
 ACME BGF Media Proxy: single node or 2-node HA / Acme Packet BG 4500 v6.2 / 1 x 1 2.4 GHz Intel Core 2 Duo / 32,000 sessions (bi-directional pinhole translations) 4)
 TIAMS v9.2 MP1 (Admin/Install): 2-node HA / HP ProLiant BL460c / 2 * 2 2.1 GHz/8C/2HT/16T E5-2658 Intel Xeon Sandy Bridge EP / at least 14 blades can be served (2 enclosures per rack)
 HP Flex-10/10D Interconnect Module: fix-configured bridging device / 2 * Flex-10/10D modules (HP proprietary) / 16 * 10GBASE-KR (internal), 10 * 10 GE SFP+ (uplink ports), 4 * 10 GE (internal interconnect)
 Rack Switch: fix-configured L2/L3 switch / 2 * HP 5820AF-24XG switches (HP proprietary) / 24 * 10 GE SFP+ ports, 2 * 1 GE RJ-45 ports
 NetAct for Core v7: 2 servers or 2+2 servers / HP ProLiant DL360 G6/G7 / 2 x 1 2.53 GHz/4C/2HT Intel Xeon or 4 x 1 2.53 GHz/4C/2HT Intel Xeon / 70 NEs / 5 users, or 200 NEs / 10 users
 NetAct Client: Intel PC HP DC7800 CMT / 1 x 1 3.0 GHz/2C Intel Core2 Duo (2 GB RAM) / n/a
 iNUM v9.1 (ENUM, DNS; multiple services can be co-located project-specifically): single nodes / ACPI4-B ATCA blade / 1 x 1 2.0 GHz/6C/2HT Intel Westmere / 25,000,000 subs, 50,000,000 NAPTRs, 15,000 qps, >65,000 prov reqs/h

1) for the Routing-DS; for the Routing DSA > 10,000,000 subs.
2) all CSCF roles physically co-located
3) this is sufficient to support a single enclosure, which is the requirement; technically, more CSCF clusters may be served
4) max. practical capacity is about 19k sessions for the 4500 BG

Fig. 30 Reference Configuration Capacity related

IT Blade Characteristics

CFX-5000 or CFX-LB (2 * HP ProLiant BL460c Gen8):
 CPU: 2 * 2.1 GHz Intel Xeon E5-2658 (Sandy Bridge EP, 8C/2HT/16T)
 Main Memory: 128 GB (2*4*16 GB DIMMs, DDR-3, ECC, 1333 MHz, max. 512 GB)
 Cache: 20 MB L3
 Drives: 2 * 600 GB (6G SAS, 2.5'', 10k RPM, max. internal is 2 * 1 TB)
 Ethernet Ports: 2 * 10 GE (1 * internal HP FlexFabric 10Gb 2-port 554FLB FlexibleLOM; there are no Ethernet interfaces additional to the internal ones)
 Additional Interfaces: 1 x internal Micro SDHC card slot; 1 x internal USB 2.0 connector for USB flash media drives
 Expansion Slots: 2 (not used); x16 PCIe 3.0 for Type A MEZZ card (slot 1), Type B MEZZ card (slot 2)
 USB ports: 1, available only via c-Class Blade SUV connector and cable
 VGA: 1, integrated Matrox G200 standard video card (1280x1024/32bpp and 1920x1200/16bpp)
 Serial: 1, available only via c-Class Blade SUV connector and cable
 Management: iLO 4
 Power Consumption: 270 W (1.26 A at 230 V AC); TDP of the CPU is 95 W
 Weight: 6.33 kg max. (min. is 4.75 kg with one CPU, 2 DIMMs)

Fig. 31 Reference Configuration HW related


3.4 Cabling of IMS components (HP blades)


Basically, up to 16 HP server blades can be deployed per enclosure. The interconnection towards the external LAN is realized by 2 redundant VC Flex-10/10D modules. The server blades (e.g. CMS-8200, CFX-5000) are equipped with two physical Network Interface Controllers (pNIC), each offering 4 flexNICs (physical connectors pf1 - pf4). The pf1 flexNIC is assigned to carry the untagged AdminLAN traffic (equivalent to the ATCA Base Interface). The pf4 flexNIC is assigned to carry all other traffic as tagged VLANs (external traffic, equivalent to the ATCA Fabric Interface). Via the backplane the traffic is transmitted to the Flex VC 10/10D modules, which offer the connectivity to the "rest of the world". This concept allows server blades behind the Flex VC to be changed, added or removed.

The Flex VC 10/10D is deployed with 10 physical ports; each port can be configured as a 1 GE or 10 GE uplink. Each enclosure defines its own Virtual Connect (VC) domain and each domain defines in our solution two vNets (max. 4 vNets, one per flexNIC). One vNet is used for (rack-)internal, untagged AdminLAN traffic (ATCA BI equivalent). The other vNet is used for all external, tagged VLAN traffic (ATCA FI equivalent).

In our solution 2 physical ports are used for uplinks (tagged VLAN traffic); the cross-connection is realized with internal cross-connect ports (quasi backplane). In IMS V9.2 MP1 no enclosure interconnect is released. That means vNets are not interconnected beyond enclosure borders and the rack consists of two different externally visible VC domains. Uplinks are used in active/standby mode. This is the result of having the vNet distributed over two interconnect modules and connected to two different external switches.

Traffic separation is done by the use of VLANs, i.e. logically and in the same way as it is done for ATCA. The Flex VC 10/10D module is not capable of VLAN routing. To provide physical traffic separation towards an operator's site, additional rack switches are in use. These LAN devices are then able to split the traffic and distribute it to different physical ports.

The interconnection of enclosures located in the same rack is not released with IMS V9.2 MP1 (in contrast to IMS 9.2 with ATCA).


Fig. 32 IMS enclosure: each server blade (e.g. a CSCF) provides two pNICs (physical Network Interface Cards) with four flexNICs each (physical functions pf1 - pf4); the pf1 flexNICs connect to the vN-Adm vNet (AdminLAN), the pf4 flexNICs connect to the vN-Ext vNet (external traffic); both vNets belong to the VC domain of the enclosure.

Fig. 33 Connectivity of CMS-8200 (external traffic only): the two pNICs of a blade are connected to the two Flex VC 10/10D modules (active/standby).
 flexNIC assignment: pf1 = Admin LAN (untagged internal traffic, equivalent to the ATCA BI, no Linux bonding); pf2 and pf3 = reserved; pf4 = all tagged external traffic (equivalent to the ATCA FI, supports Linux bonding in active/standby mode).
 Flex VC 10/10D ports: X1 free/not used (could be used as monitoring port); X2 intended for the AdminLAN; X3/X4 free/not used; X5/X6 intended for external traffic (uplinks); X7...X10 free/not used.


Fig. 34 Connectivity via enclosures: a Cx message between an HSS-FE in enclosure 1 and an S-CSCF in enclosure 2 travels over the vN-Ext vNets and the external switches; redundant paths are used to withstand the outage of a switch while one interconnect module is in maintenance mode.


4 IP Management on ATCA


IP Management provides a GUI for the presentation of a local network element. Any
operator with HTTP access to a local network element and equipped with the
appropriate function rights is able to display a specific set of network-related
parameters.

TIP
The IP Management offers an alternative way to view some system parameters. As many of these parameters are "project-specific" (class B), they cannot be changed via the IP Management GUI.


Fig. 35 IP Management on ATCA


4.1 IP Config
IP Management can be used for the following network administration tasks:
 displaying the etc/hosts configuration file, the etc/inet/ipnodes configuration file and the routing table of a specific node
 displaying the available IP interfaces and their states.

Fig. 36 IP Config


Fig. 37 IP Routing

Fig. 38 IP Cluster Hostnames


4.2 IP Services
There are two basic kinds of IP services: HIP services and non-HIP services. A HIP
service is not directly bound to a LAN. Instead, it is implicitly assigned via its
assigned IP address. This is necessary because an IP service can have just one IP
assignment per LAN, but a LAN can have more than one assigned IP address.

The window elements of the IP Services are:

Service Name
This column displays the name of the logical IP service. The logical IP service name is required as a search key to correlate HIP addresses and the services that should use these HIP addresses, because these data are defined in different database tables.

Component
This column displays the name of the component to which the IP service belongs.

Type
This column displays the type of the IP service. The cluster software distributes the IP traffic, which arrives on the public interface of the node holding the Global Interface (GI), via the cluster interconnects to the cluster nodes on which a server application is running. The outgoing traffic of the server applications is handled locally by the public interfaces of the nodes and is not rerouted to the GI node.

HIP Address / LAN
This column displays the address to which the IP service is assigned (SUN architecture).

used Security Property
This column displays the preselected Security Property. This assignment can be changed.

LAN
This column displays the name of the LAN to which the IP service is assigned.


Fig. 39 IP Services


4.3 IP-Protocol Handler


IP Management allows the configuration of IP-based protocol handlers available on a
network element. A protocol handler provides communications services that let a
device send data to other devices by transmitting and receiving data as specified by
the communications protocol.
Three types of IP-based protocol handlers can be configured with IP Management:
 server protocol handlers
 client protocol handlers
 combined protocol handlers

For each type of the available protocol handlers, the following properties can be configured:
 Services: WEBSEC or WEBGUI
 Service Port: 9880 / 9881
 Userprocessgroupname: which process group is allowed to use the service
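As a purely illustrative sketch of what a server-type protocol handler provides (this is not the TSP implementation; only the port number 9880 is taken from the list above, everything else is invented), a minimal TCP listener in Python could look like this:

    # Minimal sketch of a server-type protocol handler (illustration only).
    import socketserver

    SERVICE_PORT = 9880   # example port taken from the service list above

    class EchoHandler(socketserver.StreamRequestHandler):
        def handle(self):
            data = self.rfile.readline()            # receive data from the peer device
            self.wfile.write(b"received: " + data)  # send a response back

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", SERVICE_PORT), EchoHandler) as server:
            server.serve_forever()   # a client-type handler would connect outwards instead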


Fig. 40 IP-Protocol Handler


5 Graceful Shutdown


5.1 General
Graceful Shutdown enables taking a CFX-5000 ("CSCF") smoothly out of operation in an IMS control network without service interruption. The IMS control network keeps on running.

Graceful shutdown is divided into two major phases: die-out and tear-down.

The die-out phase is the "graceful" part, where the currently registered users are still provided with full service, but new registrations are not accepted. Requests for new registrations are directed towards other active CFX-5000 ("CSCF") network elements. The die-out phase can be scheduled according to the operator's needs; the operator has full control over the initiation and duration of the die-out phase.
The initiation is performed "manually" by the operator according to the indications in the respective guideline. The duration of the die-out phase might be adjusted depending on several factors, e.g. the number of still registered users and/or still active SIP sessions, which can be monitored via OAM.

The OAM personnel at the operator's site is able to recall relevant data (e.g. number of registered users, active SIP dialogs) via specific OAM counters. Based on this data (or any other optional criteria) the OAM operator is able to control the progress of the individual phases.

When the operator considers the die-out phase to be completed, he can initiate the tear-down phase. This phase is also started "manually" by the operator (following the respective guideline).
The tear-down phase is the more "severe" part: the objective of this phase is to conclude all activities in a determined way. Ongoing sessions are terminated, users are forced to deregister (network initiated) and relevant charging data is safely transferred.
Finally, when all activities are completed, the CFX-5000 ("CSCF") can be taken out of service.


Fig. 41 Graceful shutdown (comparison of hard shutdown and graceful shutdown)


5.2 Routing Principle

The UE has stored the address of the proxy CSCF either temporarily (retrieved e.g. during PDP context activation) or fixed (e.g. by user client setup). This address is stored in FQDN (fully qualified domain name) notation.
Before the P-CSCF is addressed, the P-CSCF FQDN must be resolved by a DNS to retrieve the IP address of one of the possible P-CSCF CFX-5000 NEs. So it depends on the DNS which one of them is delivered back, and it depends on the time to live (TTL) information stored in the DNS how often this information is renewed in the upstream equipment.
In the P-CSCF the request line (example: Registration) with the domain is evaluated and the domain is sent to a DNS for resolving. The DNS delivers in the query response, similar to the mechanism mentioned above, the IP address of one of the redundant I-CSCFs.
The I-CSCF retrieves the capabilities from the HSS and translates them into an S-CSCF by consulting the "Table of S-CSCF SIP URIs and Capabilities". This SIP URI of the S-CSCF can now be used as input for a DNS query to retrieve the IP address of one of the redundant S-CSCFs.
In the S-CSCF the SIP URI of the BGCF (external or internal) is stored. This SIP URI is used in a DNS NAPTR query and finally resolved into the IP address of the BGCF (in case the internal one is used, the IP address of the S-CSCF is returned).

As we can see, the DNS responses influence the routing through the IMS: if the TTL is selected as a very short timer, the information is refreshed frequently; otherwise the information retrieved from the DNS is kept for a longer time in the NE cache. So to influence the routing in the network (maybe requested because of maintenance work), the TTL just has to be shortened considerably and the entries in the DNS have to be modified (maybe one entry is cancelled).
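The role of the DNS answer and its TTL can be reproduced with a short query sketch (assuming the dnspython package and the example FQDN used in the shutdown procedure below; a real NE of course uses its configured resolver):

    # Sketch: resolve a P-CSCF FQDN and inspect the TTL (assumes the dnspython package).
    import dns.resolver

    answer = dns.resolver.resolve("name.ims1.com", "A")

    print("TTL:", answer.rrset.ttl)                 # how long the answer may be cached
    for record in answer:
        print("P-CSCF address:", record.address)    # one of the redundant instances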


Fig. 42 Routing principle in the IMS: redundant P-CSCFs (a.b.c.d / a.b.c.e), I-CSCFs (b.b.c.d / b.b.c.e), S-CSCFs (c.b.c.d / c.b.c.e) and BGCFs (d.b.c.d / d.b.c.e); at each hop a DNS query selects one of the redundant instances.


5.3 Shutdown procedure

Start configuration
There exists one x-CSCF instance, known as "name.ims1.com" to the user equipment or other upstream NEs. This x-CSCF instance is hosted on machine 1. There exists a second x-CSCF instance, also known as "name.ims1.com" to the user equipment or other upstream NEs. This second x-CSCF instance is hosted on machine 2. Machine 1 uses the address a.b.c.d for the hosted applications. Machine 2 uses the address a.b.c.e for the hosted applications.
In the DNS the following records exist:
name.ims1.com A a.b.c.d
name.ims1.com A a.b.c.e

Target configuration
The 2nd x-CSCF instance is to be removed. This x-CSCF instance is hosted on machine 2. Machine 2 uses the address a.b.c.e for the hosted applications.

Actions
1. Determine the maximum caching time to live (TTL) of the DNS resource record name.ims1.com. The TTL is specified in the resource record itself (following the name). Write the value down. In the DNS, change the TTL to a considerably shorter value to be more flexible.
2. Wait for the time which was stored in the record originally (time is in seconds).
3. Remove the following resource record: name.ims1.com A a.b.c.e (a sketch of such a record removal is shown after this list).
4. From now on all further requests should go to the x-CSCF on machine 1.
5. Via the @vantage Commander, modify for the x-CSCF instance hosted on machine 2 the configuration parameter System.x-CSCF [overall] Operating Mode in order to deregister unregistered users, fully registered users and active users, or to perform a CDR FTP push or pull. The parameter values depend on the role of the CFX-5000 to be shut down (P-CSCF, I-CSCF, S-CSCF or BGCF). More about the role-specific states can be found on the next but one page.
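The record removal of step 3 can be sketched as a DNS dynamic update (illustrative only; it assumes the dnspython package, an authoritative DNS server that accepts dynamic updates at the invented address 192.0.2.53, and the example records listed above — in a real network the records are normally changed via the DNS provisioning system):

    # Sketch of step 3: remove the A record of the x-CSCF instance on machine 2.
    import dns.query
    import dns.update

    DNS_SERVER = "192.0.2.53"               # invented address of the authoritative DNS server

    update = dns.update.Update("ims1.com")
    update.delete("name", "A", "a.b.c.e")   # removes: name.ims1.com  A  a.b.c.e

    response = dns.query.tcp(update, DNS_SERVER)
    print(response.rcode())                 # 0 (NOERROR) if the update was accepted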

WARNING
This document does not replace any official upgrade procedure containing graceful shutdown actions!


Fig. 43 Graceful shutdown procedure (DNS-based redirection): the UE resolves the FQDN name.ims1.net via DNS; (1) the TTL is shortened (e.g. from 86400 s to 300 s), (3) the record name.ims1.net A a.b.c.e is removed, (4) all new requests are then resolved to and directed towards NE 1 (a.b.c.d), and (5) NE 2 (a.b.c.e) is shut down step by step (deregister semi-registered users, deregister passive users, deregister active users, collect CDRs).


5.4 Shutdown states and actions

P-CSCF
The P-CSCF can have the configurable state "Reject_Users", where any initial registration and re-registration is rejected with a configurable response value and all registrations which expire are removed. Additionally the state "Remove_Dialogs" is possible, where all existing dialogs are released and removed.

I-CSCF
The I-CSCF can have the configurable state "Refuse new registration", where any initial registration and re-registration is rejected with a configurable response value.

S-CSCF
The S-CSCF can have 4 configurable states.
"Deregister semi registered users": in this state, semi-registered users are removed when a re-registration is received. A semi-registered user is a user using a default S-CSCF.
"Deregister passive users": in this state, fully registered users without an active session are removed when a re-registration is received.
"Deregister active users": in this state, fully registered users having an active session are removed when a re-registration is received.
"Deregister and collect charging info": in this state, fully registered users having an active session are removed when a re-registration is received, after the CDRs have been copied to the billing center by an FTP push or pull. For the push, the corresponding data management setup is necessary; for the pull, the corresponding FTP actions have to be taken in the billing center.
The above-mentioned states or actions are executed when a re-registration is received from the user or when the user has subscribed to the registration event notification. In the latter case a NOTIFY is sent to the user to inform the user about the network-initiated deregistration. This principle was implemented because not all commercial clients use the subscription to the registration event; the user should somehow find out about the deregistration to make a new registration via the remaining network element possible.
Instead of waiting for a re-registration, a timer-related forced deregistration can take place, i.e. when the timer expires, the remaining users are compulsorily deregistered. In the S-CSCF the following timers exist: "Max time to wait for semi users to deregister", "Max time to wait for passive users to deregister" and "Max time to wait for active users to deregister".
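The staged behaviour can be pictured with a simplified state sketch (Python; the stage names follow the S-CSCF stages above, while the guard-timer values and the user-counting callback are invented placeholders, not product defaults):

    # Simplified sketch of the S-CSCF shutdown stages and their "max time to wait" timers.
    import time

    STAGES = [
        # (stage name, placeholder guard timer in seconds; 0 = wait until the stage is empty)
        ("DEREG_SEMI_USERS",    30),
        ("DEREG_PASSIVE_USERS", 30),
        ("DEREG_ACTIVE_USERS",  30),
        ("COLLECT_CDRS",         0),
    ]

    def run_graceful_shutdown(users_left_in_stage):
        for stage, max_wait in STAGES:
            started = time.time()
            while users_left_in_stage(stage) > 0:
                if max_wait and time.time() - started > max_wait:
                    break          # timer expired: remaining users are forcibly deregistered
                time.sleep(1)      # in reality: driven by re-registrations / NOTIFY handling
            print(stage, "finished")

    # Demo with invented user counts that shrink over time.
    remaining = {"DEREG_SEMI_USERS": 3, "DEREG_PASSIVE_USERS": 2,
                 "DEREG_ACTIVE_USERS": 1, "COLLECT_CDRS": 0}

    def users_left(stage):
        remaining[stage] = max(0, remaining[stage] - 1)
        return remaining[stage]

    run_graceful_shutdown(users_left)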


Fig. 44 Shutdown states in the different roles: P-CSCF (reject new dialogs, then deregister active users), I-CSCF (refuse new registration), S-CSCF (deregister semi-registered users, then deregister passive users, then deregister active users, then collect CDRs), BGCF (refuse new registration).


5.5 Administration
The following configuration parameters control the graceful shutdown:

P-CSCF Operating Mode
Parameter path: ims/cscf/cscfb/System/ (PLMN-PLMN/CSCF-x/CSCFB-1/CSSYSTEM-1)
Parameter name: P-CSCF Operating Mode
Values: This parameter specifies the operating mode of the P-CSCF role. The operating modes are used to accept or refuse new initial registrations and sessions for graceful shutdown. Class: E. Possible values: ACTIVE, REJECT_USERS, REMOVE_DIALOGS.

SIP Response code for states in P-CSCF different from active
Parameter path: ims/cscf/pcscf/System/ (PLMN-PLMN/CSCF-x/PCSCF-1/PCSYSTEM-1)
Parameter name: SIP response code
Values: The response code used for graceful shutdown stages that are different from 'ACTIVE' to reject requests on Gm without an external session border control (SBC). Class: E. Range: 400 to 599; Default: 480.

SIP Response code to reject requests on Gm where an external SBC is used
Parameter path: ims/cscf/pcscf/System (PLMN-PLMN/CSCF-x/PCSCF-1/PCSYSTEM-1)
Parameter name: SIP response code for SBC port
Values: The response code used for graceful shutdown stages that are different from 'ACTIVE' to reject requests on Gm where an external session border control (SBC) is used. Class: E. Range: 400 to 599; Default: 480.

P-CSCF graceful shutdown start time
Parameter path: ims/cscf/cscfb/System (PLMN-PLMN/CSCF-x/CSCFB-1/CSYSTEM-1)
Parameter name: P-CSCF graceful shutdown start time
Values: Absolute date/time when graceful shutdown (GS) starts on the P-CSCF. It has to be set as GMT time. GS starts immediately when a past date is specified. Time/date format: dd.mm.yyyy HH:MM. Class: E.

Graceful shutdown start time on IBCF
Parameter path: ims/cscf/cscfb/System (PLMN-PLMN/CSCF-x/CSCFB-1/CSYSTEM-1)
Parameter name: Graceful shutdown start time on IBCF
Values: Time when graceful shutdown (GS) starts on the IBCF: the time parameter is either the empty string or has the format 'day.month.year hour:minute'. GS starts immediately when a past date is specified. It has to be set as GMT time. Class: E.

I-CSCF Operating Mode
Parameter path: ims/cscf/icscf/System/ (PLMN-PLMN/CSCF-x/ICSCF-1/ICSYSTEM-1)
Parameter name: I-CSCF Operating Mode
Values: This parameter specifies the operating mode of the I-CSCF role. The operating modes are used to accept or refuse new initial registrations for load balancing or graceful shutdown. Class: E. Possible values: 0 (normal operation), 1 (refuse new registrations).

S-CSCF Operating Mode
Parameter path: ims/cscf/cscfb/System/ (PLMN-PLMN/CSCF-x/CSCFB-1/CSSYSTEM-1)
Parameter name: S-CSCF Overall Operating Mode
Values: This parameter specifies the operating mode for the S-CSCF role. The operating modes are used to start network-initiated de-registrations of all users registered at this S-CSCF and they are also used to accept or refuse new initial registrations. Possible values: 0 (normal operation), 1 (deregister semi-registered users), 2 (deregister passive users, i.e. without session), 3 (release sessions + deregister all users), 4 (as 3 + collect charging data records (CDR)). New registrations are rejected if this parameter is not set to 0. Class: E.

BGCF Operating Mode
Parameter path: ims/cscf/bgcf/System/ (PLMN-PLMN/CSCF-x/BGCF-1/BGSUSTEM-1)
Parameter name: BGCF Operating Mode
Values: This parameter specifies the operating mode for the breakout gateway control function (BGCF) role. The operating modes are used to accept or refuse new initial registrations for load balancing or graceful shutdown. Possible values: 0 (normal operation), 1 (refuse new registrations). Class: E.

IBCF Operating Mode
Parameter path: ims/cscf/cscfb/System/ (PLMN-PLMN/CSCF-x/BGCF-1/BGSUSTEM-1)
Parameter name: IBCF Operating Mode
Values: The IBCF's graceful shutdown operation state. Possible values: 0 (normal operation), 1 (reject dialog-initiating and standalone requests), 2 (release existing INVITE and SUBSCRIBE dialogs). Class: E.

Max time to wait for semi users to de-register
Parameter path: ims/cscf/Scscf/Timer/ (PLMN-PLMN/CSCF-x/SCSCF-1/SCTIMER-1)
Parameter name: Max time to wait for semi users to de-register
Values: This time in seconds tells how long the DEREG_SEMI_USERS stage may last before the next stage begins. If the value is 0, this stage ends when all semi-registered users are deregistered. Class: D.

Max time to wait for passive users to de-register
Parameter path: ims/cscf/Scscf/Timer/ (PLMN-PLMN/CSCF-x/SCSCF-1/SCTIMER-1)
Parameter name: Max time to wait for passive users to de-register
Values: This time in seconds tells how long the DEREG_PASSIVE_USERS stage may last before the next stage begins. If the value is 0, this stage ends when all passive users are deregistered. Class: D.

Max time to wait for active users to de-register
Parameter path: ims/cscf/Scscf/Timer/ (PLMN-PLMN/CSCF-x/SCSCF-1/SCTIMER-1)
Parameter name: Max time to wait for active users to de-register
Values: This time in seconds tells how long the DEREG_ACTIVE_USERS stage may last before the next stage begins. If the value is 0, this stage ends when all users are deregistered. Class: D.


Fig. 45 Parameter: S-CSCF Overall Operating Mode in NetAct


Fig. 46 Parameter: P-CSCF Overall Operating Mode LEMAF


6 Attachment: Sun Architecture


6.1 SUN Netra T5220 HW


6.1.1 Main features
Hot-Pluggable Disk Drives
Sun Netra T5220 system hardware is designed to support “hot-plugging” of internal
disk drives. With the proper software support, a qualified service technician can
install or remove these components while the system is running.

Power Supply Redundancy

The system includes two hot-swap power supplies, which guarantee normal operation of the system even if one of the power supplies fails.

Redundant Hot-Swappable Fan

The system configuration includes two drive-specific fans to provide system cooling. In case one fan fails, the system continues normal operation with the remaining fan. Three additional fans are implemented for the motherboard.

Environmental Monitoring and Control

The Sun Netra T5220 system features an environmental monitoring subsystem designed to protect against:
 Extreme temperatures
 Lack of adequate airflow through the system
 Power supply problems

Error Correction and Parity Checking

The UltraSPARC T2 multicore processor offers parity protection in the internal cache. An extended ECC function corrects errors of up to 4 bits, as long as they are located in the same DRAM.

Predictive self healing

In the SUN Netra T5220, new maintenance methods are used. The "Self-Healing Technology" offers the possibility to predict the breakdown of system components and thus to prevent critical problems before they actually appear.

Hardware assisted Cryptography


The UltraSPARC T2 multicore processor provides hardware-assisted acceleration of
RSA and DSA cryptographic operations. The Solaris 10 OS provides the
multithreaded device driver that supports the hardware-assisted cryptography.


• Hot-pluggable disk drives


• Redundant hot-swappable Power supplies
• Redundant, hot-swappable fan
• Environmental monitoring and fault protection
• Error correction and parity checking for improved data integrity
• Predictive self healing
• Hardware assisted Cryptography
• RAID 0 and RAID 1 support
Fig. 47 Main Features


6.1.2 Main Components

Disk drives
The Sun Netra T5220 mass storage subsystem accommodates 4 SFF SAS drives with 300 Gigabyte each. The disk devices are hot-pluggable.

Control display and switches

 System status indicators (left to right): Locator LED button, Service Required LED, System Activity LED, Power button
 three alarm-level LEDs for the summary alarms minor, major and critical

DVD Drive
The Sun Netra T5220 server provides front-panel access to an IDE DVD-ROM drive. In the IMS application there is no DVD-ROM in the standard configuration.

Fig. 48 SUN Netra T5220 front view: DVD drive (not in the IMS standard configuration), locator button, fault LED, activity LED, power button, and alarm LEDs (critical (red), major (red), minor (amber), user (amber)).


Power Supply
The SUN Netra contains two hot-swappable 660 W AC/DC power supply units (PSUs) providing N+1 redundancy.

PCI Cards
In the upper part, up to 2 PCI-X 133 MHz (Peripheral Component Interconnect) cards and one PCI-E (PCI Express) card can be plugged in. In the lower part, 3 PCI-E (PCI Express) cards can be plugged in.

Giga Ethernet
Beneath the middle PCI-E slot, 4 Gigabit Ethernet ports (RJ-45 connectors) are provided. They are used for cluster cross-connections and connections to the different LANs (B&R, IMS LAN, Default).

USB, TPE Fast Ethernet, Serial

The Fast Ethernet port is used for the administrative LAN, and the serial interface is used for access via a terminal adapter.

Fig. 49 SUN Netra T5220 rear view: 2 x power supplies, PCI-X slots (3, 4), PCI-E slots (0, 1, 2, 5), GE interfaces, USB ports, serial management port, net management port, alarm port and TTYA serial interface.

IMS Hardware and Software

6.1.3 Detailed Components

On the opposite page an example configuration of an HSS single node is shown:

CPU
Sun's T5220 system is based on the UltraSPARC T2 (UST2, original code name Turgo), which provides 8 HW cores per CPU chip, each able to handle 8 parallel threads (i.e. 64 threads in total). It has a clock rate of 1.2 GHz.

Main Memory
16 slots that can be populated with one of the following types of fully buffered (FB) DIMMs:
 1 GB (16 GB maximum)
 2 GB (32 GB maximum)
 4 GB (64 GB maximum) => used in our IMS configuration

Disk drive
The T5220 is equipped with four hot-pluggable 300 GB SAS drives and no DVD-RW drive. The integrated hard drive controller supports RAID 0 and RAID 1 (IMS configuration).

PCI cards and Ethernet Ports

The Sun Netra T5220 provides four onboard 10/100/1000BT Ethernet interfaces and has two PCI-X and four PCI-E interfaces. The PCI-X interfaces can be equipped with PCI-X cards, one full-length/full-height and one half-length/full-height. The PCI-E interfaces can be equipped with one full-length/full-height card and three half-length/half-height Ethernet cards.

DVD ROM
In the standard server configuration the Sun Netra provides a DVD-ROM drive. In the IMS configuration it is not used.

USB Ports
There are two USB ports that can be used for keyboard and mouse of a local terminal.


Power
There are two power supplies, each with an output of up to 660 W and an input of 100-240 V, operated in an N+1 redundancy mode.

Single-Node CSCF (1x Sun Netra T5220)

HW Unit / Quantity / Quality / Comment:
 CPU: 1 x 1.2 GHz; 1 x UltraSPARC-T2 processor with 8 cores / 64 threads
 Main Memory: 64 GB; 8 GB per core; 16 DIMM slots with 4 GB each (FB)
 Cache: 4 MB L2 (integrated); 16 KB instruction / 8 KB data cache
 Disk Drive: 4 x 300 GB; 10k RPM, SAS (2½ inch)
 Ethernet Ports: 4 x 10/100/1000BT onboard, on 2 controllers (RJ-45); 1 x 10/100BT dedicated management port (RJ-45, NET MGT)
 Serial Ports: 1 x ttya DB-9; 1 x SC RJ-45 (SER MGT)
 PCI: 2 PCI-X 64 bit, 133 MHz (one of half length (slot 3), full height otherwise; PCI-X is converted to PCIe 4-lane); 1 PCIe x8 (full length, full height, on x16 connector (slot 5)); 1 PCIe x8 (low profile, half length, on x8 connector (slot 2)); 2 PCIe x4 (low profile, half length, on x8 connector (slots 0 and 1))
 DVD-ROM: none; T5220 systems with 4 disks cannot have a DVD drive
 USB Ports: 2 x USB 2.0 (on the rear)
 Power Supply: 100...240 V AC, max. 3 A at 240 V AC, max. 6 A at 120 V AC;  660 W max. total consumption; 2 power supplies (N+1); -40...-75 V DC nominal;  19 A total at -48 V DC

Fig. 50 Example Configuration of a Single CSCF with one SUN Netra T5220

IMS Hardware and Software

6.1.4 Cabling CSCF (SUN Netra)


Standard Configuration
The CSCF can be configured as a single node (for low-cost solutions or for trials) or in a cluster configuration. On the opposite page the cabling is shown without the cross connections necessary for the cluster configuration. For the storage of the active subscriber profiles the SUN StorageTek ST 2540 with two fiber channel interfaces was chosen. It is also possible to configure the Mw and Gm interface to a different LAN "IMS LAN2".
 PCI card slots:
PCI-E 2) Dual Giga Ethernet
"nxge4" to the partner Netra T5220, same port (just in cluster solution, see cluster configuration)
"nxge5" to the partner switch, B&R LAN
"nxge6" to the own IMS LAN2
PCI-X 3) PCI dual fiber channel card with two 2 Gbit/s ports.
SN T5220(1):
"c2" to ST 2540 (own side)
"c3" to ST 2540 (partner side)
SN T5220(2):
"c2" to ST 2540 (partner side)
"c3" to ST 2540 (own side)
PCI-X 4) PCI dual fiber channel card with two 2 Gbit/s ports.
SN T5220(1):
"c0" to ST 2540 (own side)
"c1" to ST 2540 (partner side)
SN T5220(2):
"c0" to ST 2540 (partner side)
"c1" to ST 2540 (own side)
PCI-E 5) PCI quad Giga Ethernet card with the following GE ports in use:
"nxge0" to the partner LAN switch "TSP default LAN"
"nxge1" to the partner LAN switch "IMS LAN1"
"nxge2" to the partner LAN switch "IMS LAN2"
"nxge3" to the partner LAN switch "TSP admin LAN"


 Internal GE card:
GE 0..3) Integrated Giga Ethernet ports
"e1000g0" to the own LAN switch "TSP default LAN"
"e1000g1" to the own LAN switch "IMS LAN1"
"e1000g2" to the own LAN switch "TSP B&R LAN"
"e1000g3" to the partner Netra T5220, same port (just in cluster solution, see cluster configuration)

 Other Ports
serial: Serial connector port that goes to the terminal concentrator, to be accessed via the "TSP Admin LAN" from a PC with a Telnet application.
LAN: To the own LAN switch "TSP Admin LAN"

Fig. 51 Cabling of CSCF: the SUN Netra T5220 connects to the LAN switches (IMS LAN2, TSP Admin LAN, TSP B&R LAN, IMS LAN1, TSP Default LAN), to the ST2540 storage via the dual fiber channel cards in PCI-X slots 3 and 4 (own and partner side) and to the terminal concentrator via the serial console port. The integrated GE ports e1000g0...g3 serve the own TSP default LAN, the own IMS LAN1, the own B&R LAN and the cluster interconnect; the PCI-E quad GE card (slot 5, nxge0...3) serves the partner TSP default LAN, IMS LAN1, IMS LAN2 and TSP admin LAN; the PCI-E GE card (slot 2) provides nxge4 (cluster interconnection), nxge5 (B&R partner) and nxge6 (IMS LAN2 own).

NOTE
The CSCF in single node configuration as a low cost solution uses the 4 internal
HDDs of a T5220, each with 146 GB capacity.


Cluster Configuration (Cross Connections)

In case the CSCF is configured as a cluster, the following cross connections are necessary:
 PCI card slots:
PCI-E 2) Giga Ethernet
"nxge4" to the partner Netra T5220, same port
PCI-X 3) PCI dual fiber channel card with two 2 Gbit/s ports. Port "c3" (cluster element 1) and port "c2" (cluster element 2) go to the external storage device (e.g. ST2540) of the partner SUN Netra T5220.
PCI-X 4) PCI dual fiber channel card with two 2 Gbit/s ports. Port "c1" (cluster element 1) and port "c0" (cluster element 2) go to the external storage device (e.g. ST2540) of the partner SUN Netra T5220.
 Internal GE card:
Integrated Giga Ethernet ports
"e1000g3" to the partner Netra T5220, same port


Fig. 52 CSCF Cluster Configuration, cross connections: SN T5220 (1) connects via ports C3 and C1 (PCI-X slots 3 and 4) to the partner's storage, SN T5220 (2) via ports C2 and C0; in addition, the PCI-E slot 2 GE port and the integrated GE port 3 are cross-connected between the two nodes.


6.2 Cluster (based on SUN HW)

IPMP groups
As we have seen, the CSCF as well as the HSS can be provided in a single-node and in a cluster configuration.
In case they are clustered, the two cluster elements are interconnected with a redundant Giga Ethernet connection. One node element (NE) contains two cluster elements (CE), but this NE or cluster is regarded as one HSS or one CSCF. In other words, the node elements are clustered for redundancy and not to increase the number of network entities; a 1+1 redundancy solution was chosen.
The LAN environment comprises on one side a duplicated, redundant LAN via two redundant switches and on the other side a separated LAN, i.e. the LAN is separated into different LAN networks with different functions (TSP Admin LAN, IMS LAN, B&R LAN, …).
Because of the LAN redundancy, each cluster element provides two redundant Ethernet ports per separated LAN, so 2 ports for the TSP Admin LAN, 2 ports for the IMS LAN etc. are available. One of these ETH ports is active, the other one is standby.
An IP address is allocated to each of the two ETH ports. These IP addresses are called IPMP (IP multipath) IP addresses. These port-oriented IP addresses can be used, for example, for the supervision of the physical path, e.g. with "pings".
In addition to these IPMP IP addresses, a physical IP address is assigned to a redundant pair of ETH ports. This address can be used to address a physical cluster element, i.e. it can be accessed via either port of the two redundant ETH ports.
To guarantee the service in case the active port or the corresponding LAN fails, both ports are in a so-called IPMP (IP multipath) group with at least one common so-called virtual IP address. This virtual IP address is allocated to the active ETH port of the IPMP group. In case this port fails, the IP address floats to the other port and the cluster element sends out a gratuitous ARP to inform the partner (L2 switch) about the changed MAC address. This IP address is supported by the NSN TSP software.

In case one cluster element fails, a switchover or floating of the IP addresses starts, similar to the mechanism inside the IPMP group. Also in this case a gratuitous ARP is sent out to inform the partner network elements about the MAC address change.
Finally, a so-called HIP (high-availability) IP address, supported by the SUN Cluster SW, is available. This address can be used for incoming service access, e.g. "Default LAN": web GUI (http) or CORBA, or "IMS LAN": Cx interface and Sh interface in the HSS. In contrast to the virtual IP address, this HIP IP address exists just once in a cluster, which means one cluster element is accessible from outside via this IP address (the other cluster element may be accessed internally via the cross connection). In case this cluster element fails, the HIP IP address switches over to the second cluster element.


WARNING
In case both redundant ports of an IPMP group fail, no switchover to the redundant cluster element is foreseen, i.e. in such an unlikely failure situation the whole LAN fails.
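A small model may help to keep the address types apart (Python sketch; the addresses are borrowed from the example in Fig. 53, but the exact mapping of addresses to types is illustrative only and not a statement about a real configuration — the real behaviour is provided by Solaris IPMP, the TSP software and the SUN Cluster software):

    # Sketch of the address types on one separated LAN of a cluster element (illustrative).
    ipmp_group = {
        "ports": {                          # two redundant ETH ports of the same LAN
            "e1000g1": {"ipmp_addr": "10.12.50.161", "state": "active"},
            "nxge1":   {"ipmp_addr": "10.12.50.162", "state": "standby"},
        },
        "physical_addr": "10.12.50.160",    # addresses the physical cluster element
        "virtual_addr":  "10.12.50.163",    # floats between the two ports (TSP SW)
    }
    hip_addr = "10.12.50.169"               # exists once per cluster (SUN Cluster SW)

    def port_failover(group):
        # The virtual IP floats to the standby port; a gratuitous ARP announces the
        # new MAC. The IPMP addresses stay bound to their physical ports.
        for port in group["ports"].values():
            port["state"] = "standby" if port["state"] == "active" else "active"
        active = [n for n, p in group["ports"].items() if p["state"] == "active"][0]
        print("virtual IP", group["virtual_addr"], "now served via port", active)

    port_failover(ipmp_group)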

Fig. 53 Virtual IP Addresses: example of the address types on the redundant LANs of cluster element 1 and cluster element 2 (IPMP IP addresses per ETH port, a physical IP address and a virtual IP address per redundant port pair, and one HIP IP address per cluster; example addresses 10.12.50.160 ... 10.12.50.169 and 10.12.51.160 ... 10.12.51.163).

Single Node Configuration


The CSCF in single node configuration is released as a low cost solution. This
realization uses the 4 internal HDDs of a T5220, each with 146 GB capacity.

IMS Hardware and Software

6.3 SW Structure (based on SUN HW)

The @vantage node elements are based on the TSP7000 platform. This platform is also used for IN @vantage, IMS, HLRi etc. The TSP7000 platform allows the installation of the @vantage applications (CSCF, HSS) on a cluster configuration. The TSP7000 platform supports several different internal functions, interfaces to the OEM software and the application software.

The @vantage Commander can be split into three main parts:

 HW, OS and OEM components
 TSP7000 layer
 @vantage application software

Fig. 54 Main Layers: the @vantage application SW on top of the TSP7000 layer, which in turn runs on the hardware, operating system and OEM components.


6.3.1 HW, OS and OEM components

Hardware
The hardware can be from SUN or Fujitsu Siemens (on special customer request). Details are described in the previous chapter.

Operating system and cluster software

The operating system for both hardware variants is Solaris 10. In case the server is to be installed as a cluster, a cluster software package is necessary. For SUN hardware the SUN Cluster software 3.2 must be used, and for Fujitsu Siemens hardware the PrimeCluster software.

OEM components
OEM components are Oracle (database), Sun Volume Manager 1.0 (disk and data management), NetWorker (backup and restore software), Netscape (Internet browser), Apache Tomcat (environment software to execute Java code) and the Apache web server.
The Volume Manager is a software RAID system and is used to manage the hard disks. Disk space and fault tolerance can be configured with this software.
Oracle is the database software which is used for the TSP7000 platform.
The NetWorker is an online backup software and is used to back up the whole @vantage Commander and the connected @vantage NEs.

Fig. 55 HW, OS and OEM components: Oracle, Volume Manager, NetWorker, Netscape and Apache Tomcat run on top of the SUN Cluster software, with Solaris 10 on the SUN HW of node 1 and node 2.

IMS Hardware and Software

6.3.2 Telco Service Platform (TSP)


The TSP7000 is the Nokia Solutions and Networks carrier-grade telecommunication
platform. The platform provides highest availability and scalable solutions. In the
cluster configuration, the TSP offers applications a single image view.
The TSP7000 is used for different HW configurations: SUN Hardware and FSC
Hardware. In IMS solutions, TSP7000 is present not only in the CSCF and HSS, but
in all products, such as: Home Location Register (HLRi), Call Session Control
Function (CSCF), Proxy Call Session Control Function (P-CSCF), Home Subscriber
Server (HSS), @vantage Commander etc.
In the current IMS version TSP 9.0 is used.

The following basic functions of TSP are used from PCS applications:
 Process Management
The Process Management starts processes during the startup and monitors them
during operation. It generates alarms in case of process failures.
 Security Management
The security function is used for user identification and authorization, i.e. the creation, modification and deletion of users, password handling, handling of privileges, etc.
 Performance management
Performance management comprises the statistics counter management. The statistics counters are handled by the statistics manager.
 Fault management
The fault management is responsible for:
- event management
- alarm management
The TSP7000 platform supports central alarm surveillance. It collects alarm notifications from the hardware and the operating system via the "syslog" listener and from Oracle via the "alert log" listener; all alarms are reported to this central alarm surveillance function.
Via the SNMP interface, all alarms are sent to the @vantage Commander and reported to the operator.
- trace management
Trace management can be started via the @vantage Commander: trace points can be set for specific processes or subsystems, and the output is written to log files. Error messages are always traced automatically, without starting a trace session.
- audit and recovery management
All critical resources are monitored and audited by the audit and recovery mechanism. The system contains threshold values that are compared periodically with specific system values. If a checked value is outside its allowed range, an audit is generated (a conceptual sketch of this principle is given after this list).


 Web services
The web services provide more convenient access to internal TSP7000 functions such as Trace Management and Alarm Surveillance.
 Backup and Restore
The Backup and Restore function is implemented in the TSP7000 platform (in older versions it was implemented in the @vantage Commander software). It acts as a client that interconnects the backup server with the Oracle database and with the @vantage Commander configuration.

For the TSP platform Oracle 11 Database is used.
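As referenced in the audit and recovery item above, the following minimal Python sketch illustrates the threshold-comparison principle: monitored values are read periodically, compared with configured thresholds, and an audit event is generated whenever a value leaves its allowed range. The resource names, threshold values and the check interval are illustrative assumptions, not TSP7000 parameters.

# Minimal sketch of the audit/recovery principle: monitored values are
# compared periodically against configured thresholds and an audit event
# is generated when a value is out of range. All names, values and the
# interval are illustrative assumptions, not TSP7000 parameters.
import time

THRESHOLDS = {                      # allowed range per monitored resource
    "disk_usage_percent": (0, 90),
    "open_file_handles":  (0, 8000),
    "db_sessions":        (0, 500),
}

def read_current_values():
    # Placeholder: a real implementation would query the OS or database.
    return {"disk_usage_percent": 93, "open_file_handles": 120, "db_sessions": 42}

def run_audit_cycle():
    values = read_current_values()
    for resource, (low, high) in THRESHOLDS.items():
        value = values.get(resource)
        if value is None or not (low <= value <= high):
            # Out of range: generate an audit event (a real system could
            # also trigger a recovery action here).
            print("AUDIT: %s=%s outside allowed range [%s, %s]"
                  % (resource, value, low, high))

if __name__ == "__main__":
    for _ in range(3):              # periodic check, shortened for the sketch
        run_audit_cycle()
        time.sleep(1)
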

6.3.3 Applications
On top of the TSP7000 SW the application SW is located. This is the application software for the HSS (CMS 8200) or for the CSCF with its different variants I-CSCF, P-CSCF and S-CSCF (CFX-5000).

Fig. 56 Software Parts TSP

Fig. 57 Applications


6.4 IMS on SUN Netra


6.4.1 Commercial installations supported
 CFX-5000 (cluster): SUN Netra T5220 with Turgo CPU (8 cores), 1.2 GHz, 64 GB
RAM, 4 internal disks and ST2540 with 12 disks each.
 CMS-8200 (HSSc cluster): SUN Netra T5220 with Turgo CPU (8 cores), 1.2 GHz, 64 GB RAM, 4 internal disks and ST2540 with 12 disks each.
 CMS-8200 (HSSd-FE or SLF): SUN Netra T5220 with Turgo CPU (8 cores), 1.2 GHz, 64 GB RAM, 4 internal disks.
 As LAN switch, the Cisco Catalyst C4948 should be used.
 PCS-5000 (cluster): SUN Netra T5220 with Turgo CPU (8 cores), 1.2 GHz, 64 GB RAM, 4 internal disks and ST2540 with 12 disks each, or a PCS-5000 cluster on ATCA.

TIP
The CFX-5000 (CSCF) in single-node configuration for small customers is released on Sun Netra T5220 without external storage, without PCS collocation and without IBCF functionality.


Fig. 58 Used HW Sun Netra T5220 and the LAN switch C4948


6.4.2 Cabling example with SUN Netra

[Cabling diagram: two SN T5220 nodes of a CSCF cluster, each connected via its nxge (PCIe) and e1000g (GE) ports to the two LAN switches (with VLANs for IMS LAN 1/2/4, TSP Default LAN, TSP B&R LAN, TSP Admin LAN and the ISL), via the FC (PCI-X) ports to the ST 2540 RAID arrays, and via the SC LAN/serial ports to the administrative terminal and the console/server concentrator.]

Fig. 59 SCSCF Cluster T5220


[Cabling diagram: a single SN T5220 node connected via its nxge (PCIe), e1000g (GE) and SC LAN/serial ports to the two LAN switches (with VLANs for IMS LAN 1/2/4/5, TSP Default LAN, TSP B&R LAN, TSP Admin LAN and the ISL) and to the administrative terminal and the console/server concentrator; no external ST 2540 storage is attached.]

Fig. 60 SCSCF Single T5220



7 Exercise


Exercise 1
Title: LAN Networks

Objectives: The participant can explain the purpose of the different networks

Pre-requisite: none

Task
Please answer the following questions
Query
Which of the following statements are correct?

The Administration LAN (TSP Admin LAN) is used to interconnect the TIAMS with the terminal concentrator. (Yes / No)
The Administration LAN (TSP Admin LAN) is used to interconnect the TIAMS with the NetAct. (Yes / No)
The Administration LAN (TSP Admin LAN) is used to interconnect the TIAMS with the HSS-FE or CSCF to perform SW loading. (Yes / No)
The OAM LAN (TSP default LAN) is used to interconnect the TIAMS with the NetAct. (Yes / No)
The OAM LAN (TSP default LAN) is used to interconnect the NetAct with the node elements. (Yes / No)
The OAM LAN (TSP default LAN) is used to interconnect the @vantage Commander with the NetAct. (Yes / No)
The B&R LAN (TSP B&R LAN) is used to interconnect the TIAMS with the HSS or CSCF to restore the data in case of a SW crash. (Yes / No)
The IMS Traffic LAN (IMS LAN1 or 2) is used for performance and traffic measurements. (Yes / No)
The IMS Traffic LAN (IMS LAN1 or 2) is used for the traffic on the Gm/Gq/Mw/ISC/Cx/Sh interfaces. (Yes / No)


Exercise 2
Title: IMS Hardware/SW

Objectives: The participants can describe the HW of the IMS

Pre-requisite: none

Task
Please answer the following questions
Query
Which of the following statements are correct and which are incorrect?
In the cluster solution the two CPU blades are interconnected via the HUB blades. (Yes / No)
Each LAN is realized via a separate LAN cable connected to each CPU blade. (Yes / No)
The whole shelf-internal traffic is realized via the backplane. (Yes / No)
With the ATCA HW the HSS-FE is also realized as a cluster. (Yes / No)
With the ATCA HW the installation parameters can now be modified via the TSP-WebGUI. (Yes / No)
With the NetAct the configuration parameters can be modified directly. (Yes / No)
With the LEMAF the configuration parameters can be modified directly. (Yes / No)



8 Solution


Solution 1
Title: LAN Networks

Objectives: The participant can explain the purpose of the different networks

Pre-requisite: none

Task
Please answer the following questions
Query
Which of the following statements are correct?
The Administration LAN (TSP Admin LAN) is used to interconnect the TIAMS with the terminal concentrator. (Yes / No)
The Administration LAN (TSP Admin LAN) is used to interconnect the TIAMS with the NetAct. (Yes / No)
The Administration LAN (TSP Admin LAN) is used to interconnect the TIAMS with the HSS-FE or CSCF to perform SW loading. (Yes / No)
The OAM LAN (TSP default LAN) is used to interconnect the TIAMS with the NetAct. (Yes / No)
The OAM LAN (TSP default LAN) is used to interconnect the NetAct with the node elements. (Yes / No)
The OAM LAN (TSP default LAN) is used to interconnect the @vantage Commander with the NetAct. (Yes / No)
The B&R LAN (TSP B&R LAN) is used to interconnect the TIAMS with the HSS or CSCF to restore the data in case of a SW crash. (Yes / No)
The IMS Traffic LAN (IMS LAN1 or 2) is used for performance and traffic measurements. (Yes / No)
The IMS Traffic LAN (IMS LAN1 or 2) is used for the traffic on the Gm/Gq/Mw/ISC/Cx/Sh interfaces. (Yes / No)


Solution 2
Title: IMS Hardware/SW

Objectives: The participants can describe the HW of the IMS

Pre-requisite: none

Task
Please answer the following questions
Query
Which of the following statements are correct and which are incorrect?
In the cluster solution the two CPU blades are interconnected via the two HUB blades. (Yes / No)
Each LAN is realized via a separate LAN cable connected to each CPU blade. (Yes / No)
The whole shelf-internal traffic is realized via the backplane. (Yes / No)
With the ATCA HW the HSS-FE is also realized as a cluster. (Yes / No)
With the ATCA HW the installation parameters can now be modified via the TSP-WebGUI. (Yes / No)
With the NetAct the configuration parameters can be modified directly. (Yes / No)
With the LEMAF the configuration parameters can be modified directly. (Yes / No)

