Contents
IMS Hardware and Software
1 LAN Environment of the CSCF
1.1 ATCA based network Topology
1.2 SUN Netra based network Topology (IMS 8.2 EPx)
2 NSN IMS on ATCA Hardware
2.1 IMS on ATCA (Advanced Telecommunication Computing Architecture)
2.2 IMS on ATCA (V9.1 onwards)
2.3 Components released from IMS V9.1 onwards
2.4 Components released from IMS V10.0 on
2.5 Base concepts and components of NSN IMS on ATCA HW
2.6 Reference Configuration of IMS on ATCA
2.7 Cabling of CFX-5000 (ATCA)
2.8 TIAMS (TSP Installation Administration and Maintenance Server)
2.9 CFX Load Balancer (V9.1 onwards)
2.10 Loosely Coupled Cluster - Single Node Pair SNP
3 IMS on HP Hardware (IMS V9.2 MP1)
3.1 General Architecture
3.2 Base Rack Assembly in IMS 9.2 MP1 (with HP)
3.3 Reference Configuration of IMS V9.2 MP1
3.4 Cabling of IMS components (HP blades)
4 IP Management on ATCA
4.1 IP Config
4.2 IP Services
4.3 IP-Protocol Handler
5 Graceful Shutdown
5.1 General
5.2 Routing Principle
5.3 Shutdown procedure
CN37533EN10GLA0
Copyright ©2014 Nokia Solutions and Networks. All rights reserved.
IMS Hardware and Software
VLANs
From IMS V9.0 onwards, all external traffic is carried via tagged VLANs.
[Fig. 3 ATCA based network Topology(3): The Admin LAN (TSP Admin LAN) is realized on the BI interfaces via the backplane; for shelf interconnectivity the BI interfaces of the Hubs are physically connected. The OAM LAN (TSP Default LAN), the B&R LAN and the IMS Traffic LANs (e.g. LAN 1, LAN 2) are realized as VLANs on the FI interfaces of the CSCFs and HSS-FEs. Per shelf, one Hub blade (with its FI and BI) is active and the other is passive/standby; the two Hub blades are linked via the Hub interconnect. The TIAMSs, NetAct, the B&R server and the S-CSCF clusters are attached to these LANs.]
There are two different applications using this Administration LAN: the installation via an Install Server, and the access to the serial interface of the HSS/CSCF servers.
Install Server connection: The Install Server, a Solaris based server, is used for the installation of the HSS/CSCF Solaris platform, the TSP and the applications. The Install Server is connected to this Administration LAN and can access both HSS/CSCF servers from here via the redundant LAN switches (e.g. realized by a CISCO Catalyst 4948).
From a PC (Administrative Console) a Telnet connection can be set up via the Administration LAN and one of the LAN switches to a Terminal Concentrator, where the Telnet connection terminates. From there a serial connection goes to both HSS/CSCF servers, to be used e.g. during startup.
IMS Traffic LAN, also called “IMS LAN1” (and “IMS LAN2”)
This Local Area Network is used for the Gm/Gq/Mw/ISC/Cx/Sh interfaces, in other words between CSCFs, between CSCF and HSS, between HSS and application servers, between CSCF and application servers, etc. This LAN thus carries the signaling traffic of the IMS entities, except for the classic CCS7 between the HSS and the HLR. All nodes (HSSs/CSCFs) are connected to their partners via the two redundant LAN switches. If the Mw and Gm traffic is to be separated, a second IMS LAN (LAN2) is connected to the CSCF; this IMS LAN2 then carries the Gm traffic.
The B&R LAN, also called “TSP B&R LAN”
For a backup and restore operation a huge amount of data must be transmitted. To guarantee good transmission quality, a separate LAN is implemented. All nodes (HSSs/CSCFs) and @vantage commanders are connected to this B&R network via the two redundant LAN switches.
OAM LAN, also called “TSP default LAN”
The operation, administration and maintenance LAN is used for the management of the network nodes. From the @vantage commander both HSS/CSCF servers (cluster) can be accessed via both of the redundant LAN switches.
VLANs
Virtual Local Area Networks are necessary on the one hand for security reasons (traffic separation); on the other hand they are used instead of physical separation to save hardware.
The support of several access (e.g. enterprise) VLANs on the Gm interface in conjunction with the integrated P-CSCF/C-BCF configuration is necessary. The access LANs may have overlapping address spaces; supporting this by using VLANs is a must.
[Figure: SUN Netra based topology. The Install Server, the Terminal Concentrator (serial interface to HSS/CSCF server 1 and server 2), the Backup and Restore server and the IMS nodes (e.g. CSCF/HSS/AS) are connected via the B&R LAN (TSP B&R LAN) and the IMS Traffic LANs (IMS LAN1, IMS LAN2).]
One-NDS Data Repository V9.0: HP c7000 blade system with HH ProLiant BL460c Gen7 server blades
CMS-8200 (SLF) V9.x/V10.x: SUN Netra T5220 with Turgo CPU (8 cores), 1.2 GHz, 4 internal disks, 64 GB RAM
PCS-5000 (cluster) for IMS V9.0: V5.0: SUN Netra T5220 with Turgo CPU (8 cores), 1.2 GHz, 64 GB RAM, 4 internal disks and ST2540 with 12 disks each
PCS-5000 (cluster) from IMS V9.1 onwards: V6.2/V6.3: ACPI4-B ATCA blade (Kontron AT8050), 1 x 2.0 GHz/6C/2HT Intel Nehalem (Westmere)
14 processing blade slots
ACPI4-B – Kontron, 1 CPU x 6-core Westmere, with 6 x 8 GB Samsung DDR3 DIMM RAM modules
AHUB3-B – Dasan HUB blade & HBRT3-B (RTM for the HUB blade)
The reasons for using the AHUB3-B Dasan HUB blade are:
it offers an improved feature set related to enhanced external connectivity
it offers a higher throughput related to a higher transaction rate
ACPI5-A CPU blade and CPRT5-A RTM
Compared to the previous IMS rack mount server (RMS) systems, IMS 9 follows an entirely new hardware architecture approach, which requires major adjustments in hardware and software.
In particular:
Blades (front boards) can be processing/CPU blades or Hub blades. For IMS, the ACPI4-B CPU blade (single CPU, 6-core/12-thread Intel Westmere, 48 GB RAM) is used together with a single-disk RTM card (original supplier: Kontron), or the ACPI5-A CPU blade (dual-CPU Sandy Bridge 8-core blade, 2 x 2.1 GHz/8C/2HT) together with the 2-disk RTM card CPRT5-A (original supplier: Emerson).
The AHUB3-B Hub blade from Dasan is in use; the AHUB3-A Hub blade (Radisys) is the only usable Hub blade in IMS 9.0.
AMC: Advanced Mezzanine Cards plug into the AMC bay of a blade extending the
features/capabilities of a blade by providing additional disk capacity, additional or
specific network interfaces, encryption/DSP processors and so on. AMCs are
accessible from the front side of a blade. For IMS V9/V10 AMCs are not planned
to be used.
RTMs (rear boards): Rear Transition Modules plug into the backside of the shelf.
RTMs are extension modules for front blades and are assigned/connected 1:1 to
them. Without front blades RTMs do not work (e.g. they do not have power).
RTMs, too, provide additional features to the blade by adding more disks, more or
specific interfaces, additional CPUs, switching functions, etc.
ACPI4-B: the CPRT4-A single-disk RTM is used (original supplier: Kontron).
ACPI5-A: the CPRT5-A 2-disk RTM is used (original supplier: Emerson).
AHUB3-B: the HBRT3-B is in use (original supplier: Dasan).
There is also a Shelf Manager which is the management entity of a shelf. The
Shelf Manager usually consists of two redundant separately pluggable modules
which have their own slots in the shelf (i.e. they do not consume standard blade
slots). For IMS the ASMGR-A Shelf Manager is used (original supplier:
PigeonPoint).
[Figure: ATCA backplane with Zone 1 and Zone 2.]
NE                  Configuration            HW                   CPU                                   Capacity
HSS-FE              single node (front end)  ACPI4-B ATCA blade   1*1 2.0 GHz/6C/2HT Intel Westmere     6,000,000 active subscribers
(CMS-8200 v10)                               ACPI5-A ATCA blade   1*2 2.1 GHz/8C/2HT Sandy Bridge       13,500,000 active subscribers
One-NDS (v9.0)      distributed              HP c7000             2*2.1 GHz Intel Xeon Sandy Bridge     see One-NDS performance data
SLF                 single node              SUN Netra T5220      1x1 1.2 GHz/8C UltraSPARC T2          9,000,000 active subscribers
(CMS-8200 v9.x)
DRA                 2 server HA              ACPI4-B ATCA blade   2*1 2.0 GHz/6C/2HT Intel Westmere     14k Diameter req/resp
(CMS-8200 v10)                               ACPI5-A ATCA blade   1*2 2.1 GHz/8C/2HT Sandy Bridge       28k Diameter req/resp
CSCF                2-node cluster /         ACPI4-B ATCA blade   2x1 2.0 GHz/6C/2HT Intel Westmere     225,000 active subscribers 1)
(CFX-5000 v10)      single node pair         ACPI5-A ATCA blade   2x2 2.1 GHz/8C/2HT Sandy Bridge       610,000 active subscribers 1)
CSCF-LB             2-node cluster           ACPI4-B ATCA blade   2x1 2.0 GHz/6C/2HT Intel Westmere     up to 19 CSCFs (a whole rack
(CFX-5000 v10)                               ACPI5-A ATCA blade   2x2 2.1 GHz/8C/2HT Sandy Bridge       can be served)
[Figure: 3-shelf reference configuration, 42 blades: 16 CFX clusters (all roles co-located), 1 CFX Load Balancer cluster, 1 IMS TIAMS cluster (TSP Installation Administration and Maintenance Server, with AdmC, QS and TASM co-located), 2 IMS blades for HSS-FE, 1 PCS cluster (optional), 2 DRA (Diameter Routing Agent, optional), and 6 Hub blades.]
Blade Characteristics
With IMS 9 the CFX-5000 CSCF is based on the ACPI4-B (Kontron AT8050) blade, as mentioned before. With IMS 10 the CFX-5000 CSCF is based on the ACPI5-A (Emerson 7370) blade, as mentioned before. Single node and cluster configurations are possible. The CFX-5000 hardware is used for all CSCF roles (i.e. S-, P-, I-, E-CSCF, TRCF), the FEE, the BGCF, the DTF (part of the MCF) and the [A/I]BCF. The storage arrays are directly attached to the cluster elements via RTM cards.
Other Interfaces: --
AMC bay: 1 x PCIe x4; single wide; not used
USB ports: 2 x USB 2.0
Mounting: single-wide ATCA 3.0 compliant blade
Power consumption: 115 W typ. w/o RTM and AMC; 135 W max.
Internal Interfaces --
Other Interfaces: --
AMC bay: 1 x PCIe x4; single wide; not used
USB ports: 2 x USB 2.0
Mounting: single-wide ATCA 3.0 compliant blade
Power consumption: 260 W with RTM (determined by simulation)
Internal Interfaces: --
Serial ports: -
The traffic separation is done by VLANs. This is also true for TSP standard LANs such as the CoreLAN (default) and B&R. The Hub RTM cards provide physical traffic separation. Local interfaces of the CPU blade or its RTM card shall not be used for this purpose.
VLAN tagging according to IEEE 802.1Q is envisioned to be done by the CPU blade (i.e. not port-based). Mixing tagged and untagged LANs on the same physical wire is not standard compliant (though supported by several switches) and should therefore be avoided.
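To illustrate what tagging by the CPU blade means on the wire, the following minimal Python sketch builds an IEEE 802.1Q tagged Ethernet frame. The MAC addresses are invented placeholders; VLAN ID 2050 echoes the IMS1 VLAN example used later in this chapter.

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def tag_frame(dst_mac: bytes, src_mac: bytes, vlan_id: int,
              ethertype: int, payload: bytes, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag between the source MAC and the EtherType."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)  # PCP(3) | DEI(1)=0 | VID(12)
    return (dst_mac + src_mac
            + struct.pack("!HH", TPID, tci)
            + struct.pack("!H", ethertype) + payload)

# Example: tag a minimal IPv4 frame for VLAN 2050 (addresses are invented)
frame = tag_frame(b"\xaa" * 6, b"\xbb" * 6, 2050, 0x0800, b"\x00" * 46)
assert frame[12:14] == b"\x81\x00"                       # TPID marks the frame as tagged
assert struct.unpack("!H", frame[14:16])[0] & 0x0FFF == 2050
```

An untagged receiver would misread the TPID bytes as an (unknown) EtherType, which is why mixing tagged and untagged traffic on one wire is best avoided.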
There is the Fabric Interface (FI). The FI is the main data transport in the ATCA shelf. It is intended to handle all user plane and other external traffic. While the ATCA specification defines several network topologies for the FI (for example, fully meshed), the FI of the used shelf is based on a dual-star architecture. Thus, each CPU blade is redundantly connected with each Hub blade.
The FI provides two 10GE interfaces per blade. The Hub blades (two per shelf) are mounted in dedicated slots. They are not interconnected per se, but in the IMS and MSS solution there are dedicated lines to interconnect the Hub blades with (redundant) 10GE links.
The current IMS approach is to provide interface redundancy based on L2 features (i.e. “bonding” in the case of Linux, as was the case with IPMP on Solaris/RMS). The bonding group consists of one active and one passive interface. To work properly, some kind of interconnection is needed between the redundant switches the IMS blades are connected to. The cluster interconnect is carried over the Hub blades, too.
A system can consist of several shelves distributed over more than a single rack in
which the shelves (Hubs) are interconnected beyond rack boundaries.
The FI is connected to the operator site using 10GE links. Therefore, a respective
10GE infrastructure (SX ports) to be provided by the operator network is needed.
2.7.3 Connectivity
Initial State:
As mentioned above, the communication within the shelf is physically handled via the backplane. The startup of the blades in a shelf results in the active state of the “left-hand-side” Hub blade (the one in slot 8) and the passive state of the “right-hand-side” Hub blade (the one in slot 9). Hub blade 8 now handles the complete VLAN traffic of all CPU blades of that shelf. In the initial state the CPU blades always send out the VLAN traffic via the link connected to the Hub blade in slot 8 (that is now the active link).
The two Hub blades share the same IP addresses for each VLAN (e.g. Mw IF, Gm IF, OAM LAN, …). The CPU blade (e.g. P-CSCF) sends out an ARP broadcast for the configured IP address (which has to be configured in the P-CSCF routing tables). The active Hub (the one in slot 8) answers the ARP with a virtual MAC address (which is also shared by the two Hubs). For the CPU blade (e.g. P-CSCF), this is the indicator to send all those VLAN messages to that MAC address. The detection of the active Hub blade from outside (e.g. by the default gateway) follows the same procedure, i.e. traffic from the outside is also handled by the active Hub only.
Fault scenarios:
The Hub blade in slot 8 does not have a link to “the rest of the world” (external link is down):
Hub blade 8 gets a lower priority and Hub blade 9 now turns to the active state and owns the virtual MAC address. The VLAN traffic is sent from the CPU blade via the active link to the Hub blade in slot 8; via the cross link the traffic is forwarded to the Hub blade in slot 9 and sent out there. The answer is transmitted the same way back to the CPU blade.
The Hub blade in slot 8 is completely down; the virtual MAC address is now held by the Hub blade in slot 9. Via the cross link the Hub blade in slot 9 recognizes that the partner is down and that it has to handle the complete VLAN traffic (it becomes active):
The CPU blade does not get any answer or acknowledgement for its messages and sends the ARP via its second (formerly passive) FI; the destination IP address remains the same. The Hub blade in slot 9 answers the ARP with the virtual MAC address, and the VLAN traffic is now sent via the second FI to the second Hub blade. From the Hub blade in slot 9 the traffic is sent to the corresponding destination. The answer is transmitted the same way back to the CPU blade.
The initially active FI at the CPU blade is down:
The CPU blade switches to the formerly passive FI and sends the ARP to the Hub blade in slot 9, which now answers with the virtual MAC address. The complete VLAN traffic is sent from the CPU blade to the Hub blade in slot 9; via the cross link it is forwarded to the Hub blade in slot 8 and from there it is sent to the corresponding destination. The answer is transmitted the same way back to the CPU blade.
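The active/passive ownership of the virtual MAC described above can be condensed into a toy model. This is a sketch only, not the real Hub firmware logic; the class, the slot arithmetic and the MAC value are invented.

```python
class HubPair:
    """Toy model of the two Hub blades sharing one virtual MAC address."""
    VIRTUAL_MAC = "02:00:00:00:20:50"  # hypothetical shared virtual MAC

    def __init__(self):
        self.state = {8: "active", 9: "passive"}  # slot -> state at startup

    def owner(self):
        """Slot of the Hub blade currently owning the virtual MAC."""
        return next(s for s, st in self.state.items() if st == "active")

    def external_link_down(self, slot):
        # Link loss lowers the priority: the peer becomes active and takes
        # over the virtual MAC (traffic may still cross the Hub interconnect).
        peer = 17 - slot  # maps 8 <-> 9
        self.state[slot], self.state[peer] = "passive", "active"

    def arp_reply(self, slot):
        # Both Hubs know the virtual MAC, but only the active one replies.
        return self.VIRTUAL_MAC if self.state[slot] == "active" else None

hubs = HubPair()
assert hubs.owner() == 8 and hubs.arp_reply(9) is None
hubs.external_link_down(8)               # first fault scenario above
assert hubs.owner() == 9
assert hubs.arp_reply(9) == HubPair.VIRTUAL_MAC
```

The key point the model captures is that the CPU blades never need to relearn an address on a Hub failover: the virtual MAC simply changes owner.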
[Fig. 18 Connectivity(1): the two BI interfaces (1 Gbit, Zone 2) realize a bonding group (active/passive; switchover in case of failover). Zone 1 is also shown.]
Bonding groups:
As we have seen, the CSCF is released as a cluster configuration, from V9.2 onwards as a loosely coupled cluster (single node pair). The HSS-FE is released in a single node configuration, but 2 HSS-FEs are recommended in a commercial configuration. One NE CFX-5000 consists of two ATCA blades for redundancy, not to increase the number of network entities.
Because of the LAN redundancy, each blade provides two redundant Ethernet ports (Fabric Interfaces), which together are called a bonding group (named Bond0). All LANs (except the TSP Admin LAN) are realized as virtual LANs via this bonding group. One of these ETH ports is active, the other one is standby. An IP address is allocated to each of the two ETH ports. These port-oriented IP addresses can be used, for example, for the supervision of the physical path, e.g. with pings.
In addition to this Bond0, a physical address is assigned to the redundant pair of ETH ports; this is called the bonding IP address. This address can be used to address a physical CPU blade, i.e. the blade can be accessed via either of the two redundant ETH ports.
To guarantee the service in case the active FI or the corresponding LAN fails, both ports are in a so-called bonding group with at least one common so-called virtual IP address. This virtual IP address is allocated to the active ETH port of the bonding group. In case this port fails, the IP address floats to the other port and the cluster element sends out a gratuitous ARP to inform the Hub blade about the new, changed MAC address.
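A minimal sketch of the floating virtual IP address follows. The class and method names are invented and the mechanics are simplified; the addresses loosely follow the connectivity example figure.

```python
class BondingGroup:
    """Toy model of bond0: two FI ports, one virtual IP on the active port."""

    def __init__(self, port_ips, virtual_ip):
        self.port_ips = port_ips       # per-port addresses (physical path supervision)
        self.active = 0                # index of the currently active ETH port
        self.virtual_ip = virtual_ip   # floats with the active port
        self.sent_garp = False         # gratuitous ARP announces the new MAC

    def port_failed(self, idx):
        if idx == self.active:
            self.active = 1 - self.active  # virtual IP floats to the peer port
            self.sent_garp = True

# Example addresses in the style of Fig. 19 (illustrative only)
bond = BondingGroup(["10.12.50.162", "10.12.50.166"], "10.12.50.163")
bond.port_failed(0)
assert bond.active == 1 and bond.sent_garp
assert bond.virtual_ip == "10.12.50.163"  # the virtual IP itself never changes
```

Peers keep using the unchanged virtual IP; only the MAC behind it moves, which is exactly what the gratuitous ARP advertises.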
[Fig. 19 Connectivity(2): IP redundancy with ATCA. On each cluster element, eth0 (FI0, active) and eth1 (FI1, standby) form bond0 (bonding group), e.g. for IMS1 VLAN 2050, connected to Hub blade 1 and Hub blade 2. Shown are port IP addresses (e.g. 10.12.50.162 and 10.12.50.166), a binding IP address per blade (also called physical IP) and a floating virtual IP address (e.g. 10.12.50.163) for Cluster Element 1 and Cluster Element 2.]
WARNING
V9.1: In case both redundant ports of a bonding group fail, no switchover to the redundant CPU blade is foreseen. In such an unlikely failure situation the Quorum Server has to decide which blade is the “Solid Master” for that kind of traffic. This will be the blade with the faulty FIs: as the Quorum Server reaches this blade via the BI, it decides that the second blade gets a shutdown. The CFX-5000 is out of service.
V9.2 onwards (LCC): In case both redundant ports of a bonding group fail, the Quorum Server assigns the complete control to the second blade.
The Admin LAN is not connected to the operator's site. Therefore, it is not possible to share a TIAMS over sites. Also, the Admin LAN does not belong to the operator but to the IMS product / NSN Service; therefore its traffic is not intended to be routed through routers controlled by the operator. To be able to share Admin/Install servers among NEs that are placed in different subnets (Integration Areas), additional measures have to be taken.
In commercial configurations the TIAMS consists of two ATCA blades. For test systems a single-blade version is also possible.
A TIAMS of two blades can cope with up to about 40 blades (e.g. a rack full of IMS computing blades). The Admin/Install server is equipped with one disk of 300 GB size only.
The RTM cards of the TIAMS containing the disks are interconnected with each other using dedicated RTM SFP SAS ports. This assists disk mirroring (shared-all configuration). The TIAMS blades work in active / cold standby mode with the cold standby switched on. The cold standby, however, works in InitRAMdisk mode, in which the disks are not accessed.
Realization
With IMS V9.1 a Load Balancer for the Gm interface is available. It runs on a pair of ATCA blades in active/standby configuration (CFX-5000 blades). One single Load Balancer pair serves the whole rack (19 CFX-5000 clusters).
The Gm interface is now available at a virtual P-CSCF, which consists of the physical Load Balancer and one or several real P-CSCF clusters. This means the Load Balancer's Gm interface and the real P-CSCF servers' Gm interfaces share the same IP address. But the Gm IP address at the real P-CSCF servers is “hidden” to avoid address resolution conflicts in the LAN (no ARP answer for this address). This is realized via MAC address translation and by configuring an ARP-inactive dummy interface.
The Load Balancer acts as a MAC address rewriting Ethernet switch which forwards the IP packets fully transparently to the real P-CSCF server.
The UE sees the virtual P-CSCF as a monolith, with Gm_IP_ext being carried by the Load Balancer. The Load Balancer sees monolithic real servers, each with an HA_IP address, but their MAC addresses can change.
The quorum server and the TSP cluster see single nodes, each one with its own Nd_IP address. In each single node an ARP-inactive alias interface (called “dummy:0”) is configured with Gm_IP_ext. There, an instance of the IP DP is listening. On the active node the traffic comes into the blade; on the backup node no traffic is present.
The UE sends <src: UE_IP, dest: Gm_IP_ext, payload> to the Load Balancer. From its configuration the LB knows the real servers' HA_IP addresses and, from ARP, the corresponding current MAC addresses. The LB selects a real P-CSCF server by some criterion and forwards the packet to it, unchanged on IP level. The IP stack delivers the IP packet to the IP_DP, which listens on “dummy:0”, configured with Gm_IP_ext. Beyond this point, the real P-CSCF server behaves in the same way as in the other CSCF roles.
All P-CSCF initiated messages (requests and responses) go directly from the node to the UE (Direct Server Return mode). The node uses Gm_IP_ext in messages to the UE.
Note: The LB is “dual legged”, i.e. there is one physical interface for the Gm reference point (for UE traffic) and another physical interface for the real-server traffic.
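The forwarding step can be sketched as a toy model. This is not the LB implementation: the MAC strings are invented placeholders and round robin stands in for the real, unspecified selection criterion.

```python
class GmLoadBalancer:
    """Toy model of the MAC-rewriting Gm Load Balancer (Direct Server Return).

    The IP packet is forwarded unchanged on IP level; only the destination
    MAC is rewritten to the chosen real P-CSCF server.
    """

    def __init__(self, real_servers):
        self.arp_table = dict(real_servers)  # HA_IP -> current MAC (from ARP)
        self.ring = list(self.arp_table)

    def forward(self, packet, turn):
        ha_ip = self.ring[turn % len(self.ring)]  # placeholder selection
        return {**packet, "dst_mac": self.arp_table[ha_ip]}

lb = GmLoadBalancer({"HA_IP_1": "02:aa", "HA_IP_2": "02:bb"})
pkt = {"src": "UE_IP", "dst": "Gm_IP_ext", "dst_mac": "02:1b"}
out = lb.forward(pkt, 0)
assert out["dst"] == "Gm_IP_ext"   # unchanged on IP level
assert out["dst_mac"] == "02:aa"   # only the Ethernet destination changes
```

Because the IP header is untouched, the real server's replies can bypass the LB entirely, which is what makes Direct Server Return possible.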
[Figure: Virtual P-CSCF. The UE (UE_IP) sends signalling traffic to Gm_IP_ext at the active Load Balancer (a standby LB exists). Real P-CSCF 1 consists of Node 1a (Nd_1a_IP, active, bond0 with the IP DP listening on dummy:0 configured with Gm IP ext) and Node 1b (Nd_1b_IP, backup, same setup); a redundant IPSec SA and HA_IP_1 (on bond0:1) belong to the pair. Real P-CSCF 2 carries HA_IP_2. The Quorum Server (QS_IP) determines the active node.]
This method is called rndagent. The rndagent method is the recommended and
default Load Balancing Method. It is assigned during installation of the LB.
Fault scenarios:
Interface failure on the active Fabric Interface (Linux bonding) on the P-CSCF:
The driver provides a “virtual MAC address”. A bonding failover from one interface to the other stays invisible to the LB.
Real server internal failover:
In case of a TCP cluster or SNP failover or failback (planned or unplanned), the HA_IP address gets activated on the other node. The LB learns the MAC address change through a gratuitous ARP message and henceforth sends messages to the new MAC address. Here again the IP stack delivers the IP packet to the IP_DP, which listens on “dummy:0”, configured with Gm_IP_ext. The normal P-CSCF handling takes place; this is possible because the registration data had been replicated.
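From the LB's point of view the real-server failover boils down to relearning one ARP entry, sketched here with invented values:

```python
def handle_gratuitous_arp(arp_table, ha_ip, new_mac):
    """On a real-server failover the surviving node announces its HA_IP with
    a gratuitous ARP; the LB simply relearns the MAC for that HA_IP."""
    arp_table[ha_ip] = new_mac
    return arp_table

table = {"HA_IP_1": "02:aa"}            # MAC before the node failover
handle_gratuitous_arp(table, "HA_IP_1", "02:cc")
assert table["HA_IP_1"] == "02:cc"      # traffic now reaches the new node
```

No session state lives in the LB, so this single table update is sufficient; the replicated registration data on the surviving node does the rest.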
This architecture:
minimizes the strong dependencies between the nodes of a cluster. The number of software layers which have to communicate with each other is reduced to two: a software called Inter-node Communication Manager, which exchanges data between the application SW layers (e.g. CSCF), and another software called CLM/CIPA, which communicates between all other underlying SW layers (CAF, TSP, Database).
does not require external storage, which results in a reduction of HW, power consumption and footprint.
reduces the failover and failback time.
[Fig. 23: Software layers of the two LCC nodes: CAF, CLM/CIPA, TSP and Database on each node.]
Architectural Concept
The Inter-node Communication Manager (ICM) SW can be configured in three modes:
LB_Primary: Primary owner of the external IP addresses. Runs the dispatcher processes and the role processes.
LB_Backup: Backup owner of the IP addresses. Runs the dispatcher processes and the role processes. From the ICM point of view, Primary and Backup are equal (whichever comes up first takes over the ICM controller functionality and hosts the IP addresses). Primary and Backup are mandatory.
LB_None: The node does not host any aggregated IPs and only runs the role processes. In a 2-node solution, LB_None is not required.
The mode of the ICM in the different nodes is configured by the following assignments in the icm.cfg file: IcmLbMode.1=p (=primary), IcmLbMode.2=b (=backup).
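A minimal sketch of how such assignments could be read follows. Only the two assignments above are given in the text; the rest of the icm.cfg format, and the abbreviation "n" for LB_None, are assumptions made for illustration.

```python
def parse_icm_modes(cfg_text):
    """Extract the per-node ICM LB mode from icm.cfg-style assignments."""
    modes = {"p": "LB_Primary", "b": "LB_Backup", "n": "LB_None"}  # "n" assumed
    out = {}
    for line in cfg_text.splitlines():
        line = line.strip()
        if line.startswith("IcmLbMode."):
            key, _, value = line.partition("=")
            node = int(key.split(".")[1])   # node index from "IcmLbMode.<n>"
            out[node] = modes[value.strip()]
    return out

cfg = """IcmLbMode.1=p
IcmLbMode.2=b"""
assert parse_icm_modes(cfg) == {1: "LB_Primary", 2: "LB_Backup"}
```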
The Primary and the Backup node share the same external IP addresses for the different types of traffic (Gm IF, Cx IF, Mw IF, …). In the normal situation the external IP addresses are visible and active only on Node 1 (the one which came up first). The dispatcher processes are also active (listening) on Node 1 and distribute the incoming requests among all role processes (handling processes) on all existing nodes. The role processes are identified by their Pseudo Process ID (PPI), visible in the lskpmc value (e.g. PGW-WebGui).
The dispatcher processes use a Service Routing Table (SRT) to assign or select the handling role process (local or remote) for the actual request. If the assigned role process is executed on a remote node (e.g. Node 2), the messages are routed through the Inter-node Communication Manager to the destination node. Only Node 1 is allowed to update the Service Routing Table; all SRT info is replicated to the other members of the LCC. The consistency of the SRT is permanently observed across all nodes; in case of inconsistencies the actual data is sent to the inconsistent node. The created or modified context data of the sessions is replicated from the handling role process (via ICM) to all other nodes.
Context Replication:
The session related messages with the session contexts are exchanged between all nodes of the LCC and the Quorum Server (part of the TIAMS). The data is stored in a shared memory area present in each node. In case the LCC is realizing a P-CSCF (offering the Gm IF) and the UE uses IMS AKA/IPSec authentication, the active node has Security Association data and sequence numbers stored inside its own Security Association Database, which is located in the kernel. To keep such a UE registered even after a switchover of the node, the Security Association data and the sequence numbers are also replicated towards the other nodes of the LCC; of course this replication is done into the kernel of the additional node(s).
[Figure: LCC with two nodes. Each node hosts the external IP addresses (Cx, Mw, ISC, …), the Inter-node Communication Manager and the role processes P01, P02, P03, … Pn, identified by their Pseudo Process Identifier (PPI). Simplified Service Routing Table (message / PPI / active node / backup node / handling process): Register, 01, N1, N2, PCSCF01; Invite, 03, N2, N1, ICSCF09; Subscribe, n, N2, N1, SCSCF0D.]
Failover Scenarios:
Outage of Node 1 (primary):
Node 1, as Primary node, is executing the dispatchers in the active state. When Node 1 goes down, Node 2 recognizes it (missing heartbeats). Node 2 takes over the responsibility to update the Service Routing Table and initiates the plumbing of the external IP addresses on its own node. It activates its own dispatcher processes by changing their state to active. The ICM in Node 2 now modifies the Service Routing Table: for all entries where Node 1 is backup, the backup marking is removed (in a 2-node SNP no backup is present for the outage time); for all entries where Node 1 is active, the backup entry is shifted to the active column and the backup entry is removed.
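The SRT update described above can be sketched as follows. This is a toy model: the dictionary layout is invented, while the example entries follow the simplified Service Routing Table shown in the figure.

```python
def fail_node(srt, failed):
    """Apply the SRT update performed by the surviving node's ICM."""
    for entry in srt.values():
        if entry["backup"] == failed:
            entry["backup"] = None      # no backup during the outage time
        elif entry["active"] == failed:
            # promote the backup to active, then clear the backup column
            entry["active"], entry["backup"] = entry["backup"], None

# Simplified SRT (PPI -> active node, backup node, handling process)
srt = {
    "01": {"active": "N1", "backup": "N2", "process": "PCSCF01"},
    "03": {"active": "N2", "backup": "N1", "process": "ICSCF09"},
}
fail_node(srt, "N1")
assert srt["01"] == {"active": "N2", "backup": None, "process": "PCSCF01"}
assert srt["03"] == {"active": "N2", "backup": None, "process": "ICSCF09"}
```

After the update every PPI is served by Node 2, with no backup column filled until the failed node returns.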
The ownership of the external IP addresses is not given back, which means that even after a failback Node 2 remains the LB-Primary node.
Fallback of Node 2:
When Node 2 is up again, the ICM SW on Node 1 recognizes it via the heartbeats. The ICM on Node 1 modifies the Service Routing Table and enters Node 2 as backup node for all PPIs. Each change in the routing table is pushed to all other nodes as well. The session context data is synchronized as well. Once the synchronization is done, more than 50% of the PPIs are assigned to Node 2 as active node, in order to reach a proper load sharing situation.
OAM concept
The NetAct sees the LCC nodes as one common network element. The Fault Management and Performance Management server applications of the NetAct are connected directly to the FM and PM agents of each node. Therefore, the physical IP addresses of the nodes have to be known by the NetAct.
Fig. 25 OAM concept: the NetAct CM server connects via the OAM IP to the CM agent, while the FM and PM servers connect directly to the FM/PM agents of node 1 (LB_Primary, IP0/IP1) and node 2 (LB_Backup, IP2).
From IMS V9.2 MP1 onwards the HP c7000 BladeSystem is introduced. It can be considered an evolution of rack-mount systems towards an IT blade architecture and has characteristics of both RMS systems and ATCA blade systems.
The c7000 system comes in two flavors: carrier-grade (with -48 V DC power) and IT (with AC power). The principal assembly of the c7000 system is shown in the figure below.
The CFX-5000 (CSCF, CSCF-LB and TIAMS) V9.2 MP1 and the CMS-8200 (HSS-FE) are realized on HP ProLiant BL460c Gen8 server blades with 2 CPUs (Intel Xeon Sandy Bridge E5-2658, half height). The operating system in use is RHEL 6.3.
For the interconnection towards the network the HP Flex 10/10D interconnect modules are in use.
In the default configuration the external (rack) switch HP 5820AF-24XG is deployed.
Fig. 27 HP deployment:
Rack: first a Wrightline rack 24, later an HP 10000 series G3 (HP 642) rack, with DC fuse panel
Enclosure (front side): HP BladeSystem c7000 with up to 16 HP ProLiant BL460c Gen8 HH (half height) server blades carrying the IMS components
Switch: HP 5820AF-24XG L2/L3 switches
The interconnection of VC modules to (next hop) switches (located in the same rack)
is redundant.
Fig.: blade bay assignment — CFX-LB at bays 1 and 2, TIAMS blades at bays 3 and 4, CFX-5000 blades at bays 4 to 8 (left to right), CMS-8200 (HSS-FE) blades at bays 16 to 15 (right to left).
Minimal Commercial Configuration:
2 TIAMS blades, 2 CFX-5000 blades, 2 CMS-8200 (HSS-FE) blades
or
2 TIAMS blades, 2 CFX-LB blades, 2 CFX-5000 blades, 2 CMS-8200 (HSS-FE) blades
NetAct (NAC): either 2+4 (1 × HP ProLiant DL360: 2 application servers and 2 database servers) or 4+2 (1 × HP ProLiant BL460c with c7000 enclosure: 4 AS + 2 DS)
iNum v9: 3 × Sun Netra X4270, each with 2 × 2.13 GHz/4C/2HT Intel Xeon
DRA (CMS-8200 v9.x): 2-server HA, ACPI4-B ATCA blade, 2 × 2.0 GHz/6C/2HT Intel Westmere
1) for the Routing-DS; for the Routing DSA > 10,000,000 subs.
2) all CSCF roles physically co-located
3) this is sufficient to support a single enclosure, which is the requirement; technically, more CSCF clusters may be served
4) max. practical capacity is about 19k sessions for the 4500 BG
IT Blade Characteristics
CPU: 2 × 2.1 GHz Intel Xeon E5-2658 (Sandy Bridge EP, 8C/2HT = 16 threads)
Main Memory: 128 GB (2 × 4 × 16 GB DIMMs, DDR3, ECC, 1333 MHz; max. 512 GB)
Cache: 20 MB L3
Ethernet Ports: 2 × 10 GE via one internal HP FlexFabric 10Gb 2-port 554FLB FlexibleLOM; there are no Ethernet interfaces in addition to the internal ones
Additional Interfaces: 1 internal microSDHC card slot; 1 internal USB 2.0 connector for USB flash media drives
Expansion Slots: 2 (not used) x16 PCIe 3.0, for a Type A mezzanine card (slot 1) and a Type B mezzanine card (slot 2)
USB Ports: 1, available only via the c-Class Blade SUV connector and cable
Management: iLO 4
The Flex VC 10/10D is deployed with 10 physical ports; each port can be configured as a 1 GE or 10 GE uplink. Each enclosure defines its own Virtual Connect (VC) domain, and in our solution each domain defines two vNets (max. 4 vNets, one per FlexNIC). One vNet is used for (rack-)internal, untagged Admin LAN traffic (the ATCA BI equivalent). The other vNet is used for all external, tagged VLAN traffic (the ATCA FI equivalent).
In our solution 2 physical ports are used for uplinks (tagged VLAN traffic); the cross-connection is realized with internal cross-connect ports (quasi backplane). In IMS V9.2 MP1 no enclosure interconnect is released. That means vNets are not interconnected beyond enclosure borders, and the rack consists of two different externally visible VC domains. Uplinks are used in active/standby mode. This is the result of having the vNet distributed over two interconnect modules and connected to two different external switches.
Traffic separation is done by use of VLANs, i.e. logically and in the same way as it is done for ATCA. The Flex VC 10/10D module is not capable of VLAN routing. To provide physical traffic separation towards an operator's site, additional rack switches are in use. These LAN devices are able to split the traffic and distribute it to different physical ports.
The interconnection of enclosures located in the same rack is not released with IMS V9.2 MP1 (in contrast to IMS 9.2 with ATCA).
CN37533EN10GLA0
54 Copyright ©2014 Nokia Solutions and Networks. All rights reserved.
IMS Hardware and Software
Fig.: CSCF blade connectivity in the enclosure (shelf) — each blade's two pNICs are split into FlexNICs pf1..pf4, mapped to the vN-Adm and vN-Ext vNets and to the uplink ports X1..X10:
pf1: Admin LAN (untagged internal traffic), equivalent to ATCA's BI; no Linux bonding
pf2: reserved
pf3: reserved
pf4: all tagged external traffic; supports Linux bonding (active/standby), equivalent to ATCA's FI
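The active/standby Linux bonding mentioned for pf4 could look like the following RHEL-style configuration sketch; the interface names (eth3, eth7), file paths and VLAN ID 100 are assumptions for illustration, not the released configuration.

```shell
## Sketch only (assumed names): active-backup bonding for pf4.

## /etc/sysconfig/network-scripts/ifcfg-bond0
# DEVICE=bond0
# ONBOOT=yes
# BOOTPROTO=none
# BONDING_OPTS="mode=active-backup miimon=100"

## /etc/sysconfig/network-scripts/ifcfg-eth3  (pf4 on pNIC1;
## same content with DEVICE=eth7 for pf4 on pNIC2)
# DEVICE=eth3
# MASTER=bond0
# SLAVE=yes
# ONBOOT=yes

## Tagged VLANs are then stacked on top of the bond:
## /etc/sysconfig/network-scripts/ifcfg-bond0.100
# DEVICE=bond0.100
# VLAN=yes
# ONBOOT=yes
```

With `mode=active-backup` only one slave carries traffic at a time, matching the active/standby uplink behavior described above.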
Fig.: Cx message path between the HSS-FE (enclosure 1) and the S-CSCF (enclosure 2) — each blade's FlexNICs pf1..pf4 are mapped to the vN-Adm and vN-Ext vNets; redundant paths are used to withstand the outage of a switch and the case where one interconnect module is in maintenance mode.
4 IP Management on ATCA
IP Management provides a GUI for the presentation of a local network element. Any operator with HTTP access to a local network element and equipped with the appropriate function rights is able to display a specific set of network-related parameters.
TIP
IP Management offers an alternative way to view some system parameters. As many of these parameters are "project specific" (class B), they cannot be changed via the IP Management GUI.
4.1 IP Config
IP Management can be used for the following network administration tasks:
Fig. 36 IP Config
Fig. 37 IP Routing
4.2 IP Services
There are two basic kinds of IP services: HIP services and non-HIP services. A HIP service is not directly bound to a LAN; instead, the LAN is implicitly assigned via the service's assigned IP address. This is necessary because an IP service can have just one IP assignment per LAN, but a LAN can have more than one assigned IP address.
Service Name
This column displays the name of the logical IP service. The logical IP service name is required as a search key to correlate HIP addresses with the services that should use these HIP addresses, because these data are defined in different database tables.
Component
This column displays the name of the component to which the IP service belongs.
Type
This column displays the type of the IP service. The cluster software distributes the IP traffic, which arrives on the public interface of the node holding the Global Interface (GI), via the cluster interconnects to the cluster nodes on which a server application is running. The outgoing traffic of the server applications is handled locally by the public interfaces of the nodes and is not rerouted to the GI node.
LAN
This column displays the name of the LAN to which the IP service is assigned.
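The correlation via the service name as search key can be illustrated with a small sketch; the table contents and helper below are hypothetical, not the real database schema.

```python
# Illustrative model of the two database tables mentioned above,
# joined on the logical IP service name (all values are made up).

hip_addresses = [                      # table 1: HIP address assignments
    {"service": "SIP_SRV", "lan": "IMS LAN1", "ip": "10.12.50.201"},
    {"service": "DIA_SRV", "lan": "IMS LAN2", "ip": "10.12.51.202"},
]
services = [                           # table 2: logical IP services
    {"service": "SIP_SRV", "component": "CSCF",   "type": "HIP"},
    {"service": "DIA_SRV", "component": "HSS-FE", "type": "HIP"},
]

def addresses_for(service_name):
    """Look up all HIP addresses a service should use, keyed by name."""
    return [a["ip"] for a in hip_addresses if a["service"] == service_name]

print(addresses_for("SIP_SRV"))        # one IP assignment per LAN
```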
Fig. 39 IP Services
For each type of the available protocol handlers, the following properties can be configured:
Services: WEBSEC or WEBGUI
Service Port: 9880 / 9881
UserProcessGroupName: which process group is allowed to use the service
5 Graceful Shutdown
5.1 General
Graceful Shutdown enables taking a CFX-5000 ("CSCF") out of operation smoothly, without service interruption; the IMS control network keeps on running.
The die-out phase is the "graceful" part: currently registered users are still provided with full service, but new registrations are no longer accepted. Requests for new registrations are directed towards other active CFX-5000 ("CSCF") network elements. The die-out phase can be scheduled according to the operator's needs; the operator has full control over its initiation and duration.
The initiation is performed manually by the operator according to the indications in the respective guideline. The duration of the die-out phase may be adjusted depending on several factors, e.g. the number of still-registered users and/or still-active SIP sessions, which can be monitored via OAM.
The OAM personnel at the operator's site can recall relevant data (e.g. the number of registered users and active SIP dialogs) via specific OAM counters. Based on this data (or any other optional criteria) the OAM operator is able to control the progress of the individual phases.
When the operator considers the die-out phase to be completed, the tear-down phase can be initiated. This phase is also started manually by the operator (following the respective guideline).
The tear-down phase is the more "severe" part: the objective of this phase is to conclude all activities in a determined way. Ongoing sessions are terminated, users are forced to deregister (network initiated), and the relevant charging data is safely transferred.
Finally, when all activities are completed, the CFX-5000 ("CSCF") can be taken out of service.
As we can see, the DNS responses influence the routing through the IMS if the TTL is set to a very short value; otherwise the information retrieved from the DNS is kept for a longer time in the NE cache. So, to influence the routing in the network (perhaps requested because of maintenance work), the TTL just has to be shortened considerably and the entries in the DNS have to be modified (e.g. one entry is removed).
Fig.: DNS resolving to P-CSCF1 (a.b.c.d) and P-CSCF2 (a.b.c.e).
Target configuration
The second x-CSCF instance is to be removed. This x-CSCF instance is hosted on machine 2, which uses the address a.b.c.e for the hosted applications.
Actions
1. Determine the maximum caching time to live (TTL) of the DNS resource record name.ims1.com. The TTL is specified in the resource record itself (following the name). Write the value down. In the DNS, change the TTL to a much shorter value to be more flexible.
2. Wait for the time which was stored in the record originally (the time is in seconds).
3. Remove the following resource record: name.ims1.com A a.b.c.e
4. From now on all further requests should go to x-CSCF machine 1.
5. Via the @commander, modify for the x-CSCF instance hosted on machine 2 the configuration parameter System.x-CSCF [overall] Operating Mode to deregister unregistered users, fully registered users, or active users, or to perform a CDR FTP push or pull. The parameter values depend on the role of the CFX-5000 to be shut down (P-CSCF, I-CSCF, S-CSCF or BGCF). More about the role-specific states can be found two pages ahead.
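The DNS part of these actions (steps 1 to 4) can be modeled in a few lines; the zone data below mirrors the example, but the DNS itself is simulated rather than queried.

```python
# Toy model of steps 1-4 above. The zone content mirrors the example
# (name.ims1.com); the structure is illustrative, not a resolver API.

zone = {"name.ims1.com": {"ttl": 86400, "a": ["a.b.c.d", "a.b.c.e"]}}

def shorten_ttl(fqdn, new_ttl):
    """Step 1: note the original TTL, then make the record short-lived."""
    old = zone[fqdn]["ttl"]
    zone[fqdn]["ttl"] = new_ttl
    return old

def remove_record(fqdn, addr):
    """Step 3: remove the A record pointing at the machine to shut down."""
    zone[fqdn]["a"].remove(addr)

old_ttl = shorten_ttl("name.ims1.com", 300)
# step 2: wait old_ttl seconds so that cached long-lived answers expire
remove_record("name.ims1.com", "a.b.c.e")
# step 4: all new queries now resolve to machine 1 only
assert zone["name.ims1.com"]["a"] == ["a.b.c.d"]
```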
WARNING
This document does not replace any official upgrade procedure, containing
graceful shutdown actions!
Fig.: graceful shutdown routing example — (1) the DNS holds the records name.ims1.net A a.b.c.d and name.ims1.net A a.b.c.e for NE 1 and NE 2; (2) the TTL is reduced from 86400 to 300; (3) the record name.ims1.net A a.b.c.e is removed; (4) new SIP requests (sip:x-cscf@ims.net) querying name.ims1.net now receive a.b.c.d and are routed to NE 1 only; (5) NE 2 (a.b.c.e) is shut down and its CDRs are collected.
I-CSCF
The I-CSCF can have the configurable state "Refuse new registration", in which any initial registration and re-registration is rejected with a configurable response value.
S-CSCF
The S-CSCF can have 4 configurable states.
"Deregister semi registered users": in this state, semi-registered users are removed when a re-registration is received. A semi-registered user is a user using a default S-CSCF.
"Deregister passive users": in this state, fully registered users without an active session are removed when a re-registration is received.
"Deregister active users": in this state, fully registered users having an active session are removed when a re-registration is received.
"Deregister and collect charging info": in this state, fully registered users having an active session are removed when a re-registration is received, after the CDRs have been copied to the billing center by an FTP push or pull. For the push, the corresponding data management setup is necessary; for the pull, the corresponding FTP actions have to be taken in the billing center.
The above-mentioned states or actions are executed when a re-registration is received from the user or when the user has subscribed to the event notification. In the latter case a NOTIFY is sent to inform the user about the network-initiated deregistration. This principle was implemented because not all commercial clients use the subscription to the registration event; the user should somehow find out about the deregistration so that a new registration via the remaining network element becomes possible.
Instead of waiting for a re-registration, a timer-based forced deregistration can take place, i.e. when the timer expires, the remaining users are forcibly deregistered. In the S-CSCF the following timers exist: "Max time to wait for semi users to deregister", "Max time to wait for passive users to deregister" and "Max time to wait for active users to deregister".
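The state handling described above can be sketched as a small decision function; the state names follow the text, while the user structure and timer helper are illustrative assumptions.

```python
# Hedged sketch of the S-CSCF shutdown states; not the real CFX-5000 code.

def on_reregistration(state, user):
    """Decide whether a re-registering user is removed in this state."""
    if state == "DEREGISTER_SEMI_REGISTERED":
        return user["semi_registered"]
    if state == "DEREGISTER_PASSIVE":
        return user["registered"] and not user["active_session"]
    if state in ("DEREGISTER_ACTIVE", "DEREGISTER_AND_COLLECT_CHARGING"):
        # in the last state the CDRs are transferred before removal
        return user["registered"] and user["active_session"]
    return False

def on_timer_expiry(users, should_remove):
    """Forced deregistration: remaining matching users are removed;
    the surviving users are returned."""
    return [u for u in users if not should_remove(u)]

user = {"semi_registered": False, "registered": True, "active_session": True}
print(on_reregistration("DEREGISTER_ACTIVE", user))   # True: user removed
```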
Fig.: role-specific shutdown states — reject new dialogs, refuse new registration, deregister semi registered users, refuse new registration; followed by shutdown and collection of the CDRs.
5.5 Administration

Configuration Parameter: P-CSCF Operating Mode
Path: ims/cscf/cscfb/System/ (PLMN-PLMN/CSCF-x/CSCFB-1/CSSYSTEM-1)
Parameter Name: P-CSCF Operating Mode
Values: This parameter specifies the operating mode of the P-CSCF role. The operating modes are used to accept or refuse new initial registrations and sessions for graceful shutdown. Class: E. Possible values: ACTIVE, REJECT_USERS, REMOVE_DIALOGS.

Configuration Parameter: SIP Response code for P-CSCF states different from active
Path: ims/cscf/pcscf/System/ (PLMN-PLMN/CSCF-x/PCSCF-1/PCSYSTEM-1)
Parameter Name: SIP response code
Values: The response code used for graceful shutdown stages that are different from 'ACTIVE' to reject requests on Gm without an external session border control (SBC). Class: E. Range: 400 to 599; Default: 480.

Configuration Parameter: SIP Response code to reject requests on Gm where an external SBC is used
Path: ims/cscf/pcscf/System (PLMN-PLMN/CSCF-x/PCSCF-1/PCSYSTEM-1)
Parameter Name: SIP response code for SBC port
Values: The response code used for graceful shutdown stages that are different from 'ACTIVE' to reject requests on Gm where an external session border control (SBC) is used. Class: E. Range: 400 to 599; Default: 480.
DVD Drive
The Sun Netra T5220 server provides front-panel access to an IDE DVD-ROM drive. In the IMS application there is no DVD-ROM drive in the standard configuration.
Fig.: front panel with power button and the Power, Activity, Fault and Locator LEDs.
Power Supply
The Sun Netra contains two hot-swappable 660 W AC/DC power supply units (PSUs) providing N+1 redundancy.
PCI Cards
In the upper part, up to 2 PCI-X 133 MHz (Peripheral Component Interconnect) cards and one PCI-E (PCI Express) card can be plugged in. In the lower part, 3 PCI-E cards can be plugged in.
Gigabit Ethernet
Beneath the middle PCI-E slot, 4 Gigabit Ethernet ports (RJ-45 connectors) are provided. They are used for cluster cross connections and connections to the different LANs (B&R, IMS LAN, Default).
Fig.: rear view — 2 power supplies, PCI-E slots 0 to 2, 4 GE interfaces, USB ports, serial management port, net management port, alarm port and TTYA serial interface.
CPU
Sun's T5220 system is based on the UltraSPARC T2 (UST2, original code name Turgo), which provides 8 hardware cores per CPU chip, each able to handle 8 parallel threads (i.e. 64 threads in total). It has a clock rate of 1.2 GHz.
Main Memory
16 slots that can be populated with one of the following types of fully buffered (FB) DIMMs:
1 GB (16 GB maximum)
2 GB (32 GB maximum)
4 GB (64 GB maximum) => used in our IMS configuration
Disk Drives
The T5220 is equipped with:
Four hot-pluggable 300 GB SAS drives (without a DVD-RW drive)
An integrated hard drive controller supporting RAID 0 and RAID 1 (IMS configuration)
DVD-ROM
The Sun Netra provides a DVD-ROM drive in the standard server configuration. In the IMS configuration it is not used.
USB Ports
There are two USB ports that can be used for a keyboard and mouse of a local terminal.
Power
There are two power supplies, each with an output of up to 660 W and an input of 100-240 V, operated in an N+1 redundancy mode.
Fig. 50 Example Configuration of a Single CSCF with one SUN Netra T5220
Fig.: cabling of a Sun Netra T5220 cluster element:
Integrated GE ports (GE 0..3):
"e1000g0" to the own LAN switch, "TSP Default LAN"
"e1000g1" to the own LAN switch, "IMS LAN1"
"e1000g2" to the own LAN switch, "TSP B&R LAN" (Backup and Restore server)
"e1000g3" to the same port of the partner Netra T5220 (cluster interconnect, cluster solution only; see cluster configuration)
PCI-E Quad GE card (slot 5):
"nxge0" to the TSP Default LAN, partner side
"nxge1" to the IMS LAN1, partner side
"nxge2" to the IMS LAN2, partner side
"nxge3" to the TSP Admin LAN, partner side
Second PCI-E Quad GE card:
"nxge4" cluster interconnection
"nxge5" B&R, partner side
"nxge6" IMS2, own side
"nxge7" (no assignment given)
PCI-X Dual Fibre Channel cards (slots 3 and 4):
"c0" to the ST 2540, own side; "c1" to the ST 2540, partner side
"c2" to the ST 2540, own side; "c3" to the ST 2540, partner side (in the second cluster element vice versa)
Other ports:
serial: connector that goes to the terminal concentrator (console), accessed via the "TSP Admin LAN" from a PC with a Telnet application
LAN: to the own LAN switch, "TSP Admin LAN"
NOTE
The CSCF in single node configuration as a low cost solution uses the 4 internal
HDDs of a T5220, each with 146 GB capacity.
IPMP groups
As we have seen, the CSCF as well as the HSS can be provided in a single-node and in a cluster configuration.
In case they are clustered, they are interconnected with a redundant Gigabit Ethernet connection between both cluster elements. One node element (NE) contains two cluster elements (CE), but this NE or cluster is regarded as one HSS or one CSCF. The node elements are thus clustered for redundancy, not to increase the number of network entities; in other words, a 1+1 redundancy solution was chosen.
The LAN environment comprises, on one side, a duplicated, redundant LAN via two redundant switches and, on the other side, a separated LAN, i.e. the LAN is separated into different LAN networks with different functions (TSP Admin LAN, IMS LAN, B&R LAN, ...).
Because of the LAN redundancy, each cluster element provides two redundant Ethernet ports per separated LAN; so 2 ports for the TSP Admin LAN, 2 ports for the IMS LAN etc. are available. One of these Ethernet ports is active, the other one is standby.
An IP address is allocated to each of the two Ethernet ports. These IP addresses are called IPMP (IP multipathing) addresses. These port-oriented IP addresses can be used, for example, for the supervision of the physical path, e.g. with pings.
In addition to these IPMP addresses, a physical address is assigned to a redundant pair of Ethernet ports. This address can be used to address a physical cluster element, i.e. it can be accessed via either port of the two redundant Ethernet ports.
To guarantee the service in case the active port or the corresponding LAN fails, both ports are placed in a so-called IPMP (IP multipathing) group with at least one common, so-called virtual IP address. This virtual IP address is allocated to the active Ethernet port of the IPMP group. In case this port fails, the IP address floats to the other port and the cluster element sends out a gratuitous ARP to inform the partner (L2 switch) about the new, changed MAC address. This IP address is supported by the NSN TSP software.
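A minimal simulation of this floating virtual IP address, assuming made-up port names, MAC addresses and IP addresses:

```python
# Toy model of the IPMP behaviour described above: the virtual IP floats
# to the surviving port and a gratuitous ARP announces the new MAC.
# All names, addresses and MACs are illustrative.

class IpmpGroup:
    def __init__(self, ports, virtual_ip):
        self.ports = ports              # [(name, ipmp_ip, mac), ...]
        self.failed = set()
        self.virtual_ip = virtual_ip
        self.active = ports[0][0]

    def port_failed(self, name):
        """Float the virtual IP to a surviving port of the group."""
        self.failed.add(name)
        survivors = [p for p in self.ports if p[0] not in self.failed]
        if not survivors:
            # as the WARNING below notes, no further switchover is foreseen
            raise RuntimeError("both ports down: whole LAN access fails")
        self.active = survivors[0][0]
        # gratuitous ARP: virtual IP is now answered with the new port's MAC
        return ("gratuitous ARP", self.virtual_ip, survivors[0][2])

grp = IpmpGroup([("ce0", "10.12.50.161", "00:aa:01"),
                 ("ce1", "10.12.50.162", "00:aa:02")], "10.12.50.163")
print(grp.port_failed("ce0"))   # ('gratuitous ARP', '10.12.50.163', '00:aa:02')
```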
WARNING
In case both redundant ports of an IPMP group fail, no switchover to the redundant cluster element is foreseen; i.e. in such an unlikely failure situation the whole LAN access fails.
Fig.: IPMP group example — each cluster element connects to the redundant LANs via an active and a standby port; the legend distinguishes the virtual IP address, the IPMP IP addresses, the physical IP address and the HIP IP address (addresses 10.12.50.x and 10.12.51.x in the example).
The @vantage node elements are based on the TSP7000 platform. This platform is also used for IN @vantage, IMS, HLRi etc. The TSP7000 platform allows the installation of the @vantage applications (CSCF, HSS) in a cluster configuration.
The TSP7000 platform provides several internal functions as well as interfaces to the OEM software and the application software.
Fig.: the @vantage application SW runs on top of the TSP7000 layer.
OEM components
The OEM components are Oracle (database), Sun Volume Manager 1.0 (disk and data management), NetWorker (backup and restore software), Netscape (Internet browser), Apache Tomcat (environment software to execute Java code) and the Apache Web Server.
The Volume Manager is a software RAID system and is used to manage the hard disks. Disk space and fault tolerance can be configured with this software.
Oracle is the database software which is used for the TSP7000 platform.
NetWorker is an online backup software and is used to back up the whole @commander and the connected @vantage NEs.
Fig.: SUN Cluster running on Solaris 10 on both cluster elements.
The following basic functions of TSP are used by the PCS applications:
Process Management
The Process Management starts processes during startup and monitors them during operation. It generates alarms in case of process failures.
Security Management
The Security function is used for user identification and authorization (i.e. creation, modification and deletion of users, password handling, handling of privileges etc.).
Performance Management
Performance management comprises the statistics counter management. Statistics counters are handled by the statistics manager.
Fault Management
The fault management is responsible for:
- event management
- alarm management
The TSP7000 platform supports central alarm surveillance. It collects alarm notifications from the hardware and operating system via the "syslog" listener and from Oracle via the "alert log" listener; all alarms are reported to this surveillance function. Via the SNMP interface all alarms are sent to the @vantage Commander and reported to the operator.
- trace management
The Trace Management can be started via the @vantage Commander: trace points can be set on specific processes or subsystems and the output is then written to log files. Error messages are always traced automatically, without starting a trace session.
- audit and recovery management
All critical resources are monitored and audited by the audit and recovery mechanism. Inside the system, threshold values exist which are used for a periodic comparison with specific system values. If the checked values are not inside their allowed range, audits are generated.
Web services
The web services are used to access internal TSP7000 functions like Trace Management and Alarm Surveillance in a more comfortable way.
Backup and Restore
The Backup and Restore function is implemented in the TSP7000 platform (in older versions it was implemented in the @vantage Commander software). It is a client used for the interconnection between the backup server and the Oracle database, and between the backup server and the @vantage Commander configuration.
6.3.3 Applications
On top of the TSP7000 SW the application SW is located: the application software for the HSS (CMS-8200) or for the CSCF with its different variants I-CSCF, P-CSCF and S-CSCF (CFX-5000).
Fig. 57 Applications
TIP
The CFX-5000 (CSCF) in single-node configuration for small customers is released on the Sun Netra T5220 without external storage, without PCS collocation and without IBCF functionality.
Fig. 58 Used HW Sun Netra T5220 and the LAN switch C4948
Fig.: CSCF cluster cabling (IMS 7.0) — two Sun Netra T5220 servers, each connected via the integrated GE ports (e1000g0..e1000g3), the PCI-E quad GE cards (nxge ports, slots 0-2) and the PCI-X Fibre Channel cards (c0..c3, slots 3 and 4) to LAN Switch 1 and LAN Switch 2 (with VLANs), carrying IMS LAN 4, IMS LAN 2, TSP Default LAN, IMS LAN 1, TSP B&R LAN and TSP Admin LAN; the switches are coupled via an ISL, CI marks the cluster interconnect, and the administrative terminal is reached through the console/server concentrator (eri0).
Fig.: CSCF single-node cabling (IMS 7.0) — one Sun Netra T5220 ("SN T5220 (1)"), connected via the integrated GE ports (e1000g0..e1000g3) and the PCI-E/PCI-X cards (nxge ports, slots 0-4) to LAN Switch 1 and LAN Switch 2 (with VLANs), carrying IMS LAN 5, IMS LAN 4, IMS LAN 2, TSP Default LAN, IMS LAN 1, TSP B&R LAN and TSP Admin LAN (switches coupled via an ISL); USB A/B, SC LAN and serial ports are available for local access, and the administrative terminal is reached through the console/server concentrator (eri0).
7 Exercise
Exercise 1
Title: LAN Networks
Pre-requisite: none
Task
Please answer the following questions
Query
Which one of the following statements is correct?
Exercise 2
Title: IMS Hardware/SW
Pre-requisite: none
Task
Please answer the following questions
Query
Which of the following statements are correct and which are incorrect?

In the cluster solution the two CPU blades are interconnected via the HUB blades. (Yes / No)
Each LAN is realized via a separate LAN cable connected to each CPU blade. (Yes / No)
The whole shelf-internal traffic is realized via the backplane. (Yes / No)
With the ATCA HW the HSS-FE is also realized as a cluster. (Yes / No)
With the ATCA HW installation parameters can now be modified via the TSP-WebGUI. (Yes / No)
With the NetAct the configuration parameters can be modified directly. (Yes / No)
With the LEMAF the configuration parameters can be modified directly. (Yes / No)
8 Solutions
Solution 1
Title: LAN Networks
Pre-requisite: none
Task
Please answer the following questions
Query
Which of the following statements are correct?

The Administration LAN (TSP Admin LAN) is used to interconnect the TIAMS with the terminal concentrator. (Yes / No)
The Administration LAN (TSP Admin LAN) is used to interconnect the TIAMS with the NetAct. (Yes / No)
The Administration LAN (TSP Admin LAN) is used to interconnect the TIAMS with the HSS-FE or CSCF to perform SW loading. (Yes / No)
The OAM LAN (TSP Default LAN) is used to interconnect the TIAMS with the NetAct. (Yes / No)
The OAM LAN (TSP Default LAN) is used to interconnect the NetAct with the node elements. (Yes / No)
The OAM LAN (TSP Default LAN) is used to interconnect the @vantage commander with the NetAct. (Yes / No)
The B&R LAN (TSP B&R LAN) is used to interconnect the TIAMS with the HSS or CSCF to restore the data in case of a SW crash. (Yes / No)
The IMS Traffic LAN (IMS LAN 1 or 2) is used for performance and traffic measurement. (Yes / No)
The IMS Traffic LAN (IMS LAN 1 or 2) is used for traffic on the Gm/Gq/Mw/ISC/Cx/Sh interfaces. (Yes / No)
Solution 2
Title: Hardware/SW
Pre-requisite: none
Task
Please answer the following questions
Query
Which of the following statements are correct and which are incorrect?

In the cluster solution the two CPU blades are interconnected via the two HUB blades. (Yes / No)
Each LAN is realized via a separate LAN cable connected to each CPU blade. (Yes / No)
The whole shelf-internal traffic is realized via the backplane. (Yes / No)
With the ATCA HW the HSS-FE is also realized as a cluster. (Yes / No)
With the ATCA HW installation parameters can now be modified via the TSP-WebGUI. (Yes / No)
With the NetAct the configuration parameters can be modified directly. (Yes / No)
With the LEMAF the configuration parameters can be modified directly. (Yes / No)