
Intel Expressway Service Gateway Installation Guide

Soft-Appliance Edition Version 2.8 September 2011

Order Number: 325745-001US

Disclaimer and Legal Information INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR. Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information. The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel's Web Site. Intel processor numbers are not a measure of performance. 
Processor numbers differentiate features within each processor family, not across different processor families. See http://www.intel.com/products/processor_number for details. Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. Intel Expressway Service Gateway, Intel Expressway Tokenization Broker, Intel Services Designer, Intel Expressway Service Gateway for Healthcare, Intel SOAE-H, and Intel are trademarks of Intel Corporation in the U.S. and other countries. * Other names and brands may be claimed as the property of others. Copyright 2011, Intel Corporation. All rights reserved.


Document Revision History

Date             Revision   Description
August 2011      001        Initial document published for Intel Expressway Service Gateway v2.8
September 2011   002        Document published for Intel Expressway Service Gateway v2.8

Contents

1.0 Introduction
    1.1 Supported Servers
    1.2 Hardware Requirements
    1.3 Software Requirements
    1.4 Support for Virtual Machines
    1.5 Installing Unlimited Strength Java* Cryptography Extension (JCE)
    1.6 Security Support
    1.7 Supported Transport Protocols
    1.8 Supported Authentication Protocols
2.0 Preparing Your System for ESG Installation
    2.1 Red Hat Enterprise Linux OS* AS5 Installation Requirements for ESG
    2.2 SUSE Linux Enterprise 11 OS* Installation and Configuration for ESG
    2.3 Enabling Ports
    2.4 Installing the Java Runtime Environment
    2.5 Setting the Path to the CLI
    2.6 Set Parameter Limits for ESG
3.0 Installation Procedure for ESG
    3.1 Permissions for Service Gateway
    3.2 Prerequisites
    3.3 Installing Service Gateway
        3.3.1 Example of Postinstalling ESG
    3.4 Starting and Stopping ESG Service
    3.5 Uninstall and Reinstall Service Gateway
    3.6 Making a Network Interface Active
    3.7 Making a Network Interface Inactive
4.0 Accessing the Management Console
    4.1 Logging into the Management Console
    4.2 Removing the Web Browser Security Warning Caused by the Management Console
5.0 Managing a Collection of Service Gateway Machines
    5.1 Hardware, Software, and Network Requirements for a Cluster
    5.2 Setting up a Service Gateway Cluster
        5.2.1 Example of Postinstalling Service Gateway on a Slave Node
    5.3 Cluster Operation, Communication, and Management
        5.3.1 Viewing the Status of a Node's Message Processing
        5.3.2 Managing Nodes in a Service Gateway Cluster
        5.3.3 Viewing a Node's Logs
        5.3.4 Message and File Transfer between Nodes
    5.4 Removing a Node from a Cluster
        5.4.1 Removing a Slave Node from a Cluster
        5.4.2 Removing a Master Node from a Cluster
    5.5 Changing the IP Address for a Node's Management Network Interface
        5.5.1 Changing the IP Address for a Slave Node's Management NIC
        5.5.2 Changing the IP Address for a Master Node's Management NIC
6.0 Front End Load Balancing for HTTP Traffic
    6.1 Prerequisites for Load Balancing
    6.2 Installing and Configuring a Load Balancer on a Service Gateway Cluster
    6.3 Starting, Stopping, or Uninstalling the Load Balancer
    6.4 Determining the Load Balancer Version
    6.5 Monitoring Traffic Handled by the Load Balancer
    6.6 Describing the Command Syntax for lbconfig
        6.6.1 Example of Executing lbconfig
    6.7 Defining Load Balancing Algorithms
    6.8 Using Connection Affinity
    6.9 Configuring an Application to use Front End Load Balancing
    6.10 Failover and Electing a Director
7.0 Integrating Hardware Cavium Cards with Service Gateway
    7.1 Prerequisites for Integrating a Cavium Card with Service Gateway
    7.2 Installing a Cavium Device Driver
    7.3 Removing a Cavium Device Driver
    7.4 Creating and Using a Backup of Cavium Device Driver
8.0 Upgrade Procedure
    8.1 Upgrade Command Syntax
    8.2 Back up Service Gateway Logs Before Upgrade
    8.3 Upgrading Service Gateway
        8.3.1 Example of an Upgrade
        8.3.2 Backing Out an Upgrade
    8.4 Check the Status of the Service Gateway
    8.5 Performing a Cluster-wide Upgrade
        8.5.1 Prerequisites for a Cluster-wide Upgrade
        8.5.2 Procedure for Upgrading a Cluster
        8.5.3 Backing Out a Cluster-wide Upgrade
9.0 Troubleshooting a Service Gateway Installation

1.0 Introduction
Intel Expressway Service Gateway (ESG), also known as Service Gateway, is a software appliance designed to simplify and secure application architecture, on premises or in the cloud. Service Gateway expedites deployments by addressing common security and performance challenges: it accelerates, secures, integrates, and routes XML, web services, and legacy data in a single, easy-to-manage software-appliance form factor. This document provides instructions for installing Service Gateway on the Linux* operating system.

1.1 Supported Servers

Service Gateway is a server-based product that performs best when the server is dedicated to it, although other software can run on the server if required. Service Gateway comes in a soft-appliance form factor: the ESG software installed on a customer-provided operating system and hardware, or in a virtual machine. The soft-appliance form factor is designed for Intel OEM servers. ESG can be installed on any supported Intel OEM server, including the following:

- Dell PowerEdge* 2950 (Quad-Core)
- HP ProLiant* DL380 G5 server (Dual-Core or Quad-Core)
- HP ProLiant* BL460C server (Dual-Core)

1.2 Hardware Requirements

The minimum processor and memory configuration for Service Gateway is a Pentium 4-class processor with 4 gigabytes of RAM. The recommended configuration is two quad-core processors (8 cores across 2 sockets) and 8 gigabytes of RAM.


1.3 Software Requirements

To install Service Gateway, the system must have the software listed in Table 1.

Table 1. ESG Software Environment

Item                        Software Version
Operating System            Red Hat Enterprise Linux* AS 5 64-bit, or
                            SUSE* Linux Enterprise Server 11 (SLES 11) 64-bit
Java Runtime Environment    JRE* 1.6.0_22 or greater

1.4 Support for Virtual Machines

You can run Service Gateway on the following virtual machines:

- Oracle Virtual Manager* v2.1.5
- VMWare ESX* 3.0
- VMWare ESXi* 3.5

To run Service Gateway on a virtual machine, the VM must meet the following requirements:

- 2 CPU cores
- 8 GB of RAM
- a minimum of 10 GB of free disk space

ESG performance scales roughly in proportion to the number of cores. For example, if the runtime processed 1000 msg/sec on a virtual machine with 2 CPU cores allocated, then allocating 4 CPU cores approximately doubles message throughput.
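Under that linear-scaling assumption, capacity planning is simple arithmetic. The sketch below reuses the figures from the example above; they are illustrative numbers from the text, not measured values for any particular host.

```shell
# Back-of-the-envelope throughput estimate under the guide's
# cores-scale-linearly assumption. Figures are from the example above.
baseline_msgs=1000     # msg/sec measured at the baseline core count
baseline_cores=2
target_cores=4
estimate=$(( baseline_msgs * target_cores / baseline_cores ))
echo "estimated throughput on ${target_cores} cores: ${estimate} msg/sec"
```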

1.5 Installing Unlimited Strength Java* Cryptography Extension (JCE)

You must install the unlimited strength JCE policy files in the JRE used by ESG. If you do not, the runtime cannot perform Java-based cryptographic functions. To install the unlimited JCE, perform the following steps.

1. If the ESG service is running, stop it now.
2. Verify that the Java Runtime Environment (JRE) is installed on your system and that ESG will use that JRE.
3. Go to the Download Java(TM) Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 6 web page.
4. On the web page, download jce_policy-6.zip.
5. Unzip jce_policy-6.zip.
6. Open the jce folder.
7. In the jce folder, open README.txt.
8. Follow the instructions in README.txt for installing the unlimited JCE into the JRE used by ESG.

9. Start up the ESG service.
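The README's procedure amounts to copying the two policy jars into the JRE's lib/security directory. The sketch below stages throwaway directories (via mktemp) as stand-ins for the real JRE_HOME and the unzipped jce folder, so it can be run safely anywhere; on a real system you would substitute the actual paths and the jars from jce_policy-6.zip.

```shell
# Sketch of the JCE policy install: copy local_policy.jar and
# US_export_policy.jar into $JRE_HOME/lib/security. The mktemp dirs are
# stand-ins so the sketch is safe to run; substitute real paths.
JRE_HOME="$(mktemp -d)/jre"        # stand-in for the JRE that ESG uses
mkdir -p "$JRE_HOME/lib/security"
JCE_DIR="$(mktemp -d)"             # stand-in for the unzipped jce folder
touch "$JCE_DIR/local_policy.jar" "$JCE_DIR/US_export_policy.jar"  # placeholders
cp "$JCE_DIR/local_policy.jar" "$JCE_DIR/US_export_policy.jar" \
   "$JRE_HOME/lib/security/"
ls "$JRE_HOME/lib/security"
```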

1.6 Security Support

Service Gateway supports cryptographic processing using either a Cavium hardware security card, for high-performance applications, or software-only OpenSSL*. For details about setting up Service Gateway to use Cavium hardware cards, refer to section 7.0, Integrating Hardware Cavium Cards with Service Gateway. When you install Service Gateway, the runtime uses the default version of OpenSSL for cryptographic processing, such as a WS-Security policy encrypting a message request. The version of OpenSSL used by Service Gateway therefore depends on the operating system that the runtime is installed on. On software-only versions of Service Gateway, the runtime supports OpenSSL 0.9.8o. If Service Gateway uses a Cavium security card for security offloading, the runtime also uses OpenSSL 0.9.8o.

1.7 Supported Transport Protocols

Table 2 lists the transport protocols supported by Service Gateway.

Table 2. ESG Transport Protocols

HTTP(S) v1.0 and v1.1
    Description: N/A
    Can modify metadata? Yes

JMS v1.1
    Description: ESG can communicate with any message broker that supports the standard JMS APIs. Service Gateway has been tested with Sun MQ*, Oracle AQ*, WebSphere MQ Series*, and Active MQ*.
    Can modify metadata? Yes

IBM MQ
    Description: N/A
    Can modify metadata? Yes

MLLP Release 1 and Release 2
    Description: Mirth v1.6.0, an implementation of MLLP Release 1.
    Can modify metadata? No

File Transfer Protocol (FTP)
    Description: RFC 959 File Transfer Protocol.
    Can modify metadata? Yes

Secure File Transfer Protocol (SFTP)
    Description: Network protocol that provides file access, file transfer, and file management functionality over an encrypted, reliable data stream.
    Can modify metadata? Yes

File
    Description: Treats a file system as an endpoint. ESG can get or put files on a file system that is accessible from the network, such as NFS.
    Can modify metadata? Yes

Raw TCP
    Description: N/A
    Can modify metadata? No


1.8 Supported Authentication Protocols

Table 3 lists the authentication protocols supported by ESG.

Table 3. ESG Authentication Protocols

Protocol                               Specific Version Tested
LDAP                                   6.0.4
CA SiteMinder*                         6.0
Tivoli Access Manager*                 6.0
Oracle Access Manager*                 10.14.01 with OID 10.14.01
Online Certificate Status Protocol     RFC 2560 and RFC 5019
Oracle Entitlements Server*            10.1.4.3.0 on top of WebLogic* 10.3 with Oracle* Database 10.2.10
WS-Trust                               Active Directory Federation Service 2.0 with WS-Trust 1.3

For information about the authentication protocols used in ESG, refer to the Security Reference Guide for Intel Expressway Service Gateway.

2.0 Preparing Your System for ESG Installation


This section documents the files and parameters required by ESG. If these files are absent or the parameters are not set when Service Gateway is installed, the installation will fail or the system will not function correctly.

2.1 Red Hat Enterprise Linux OS* AS5 Installation Requirements for ESG

Service Gateway requires certain features of the Linux* operating system that are not part of the default Red Hat Enterprise Linux OS* installation. When you install Red Hat Enterprise Linux OS* AS5, do the following:

1. For machine use, install with:
   - Software Development
   - Web Server
2. Select the Customize Later radio button.
3. Follow local administration guidelines for language and other options.
4. During the post-reboot installation stage of Red Hat Enterprise Linux OS* 5, we recommend that you select No Firewall. If you must enable the firewall, carefully follow the port-enabling setup for these two items, and then perform the procedure in section 2.3, Enabling Ports.
5. Perform the procedure in section 2.4, Installing the Java Runtime Environment.
6. Perform the procedure in section 2.6, Set Parameter Limits for ESG.
7. Perform the procedure in section 3.3, Installing Service Gateway.


2.2 SUSE Linux Enterprise 11 OS* Installation and Configuration for ESG

To install the SLES 11 64-bit operating system so that Service Gateway can run on it, perform the following steps.

1. Boot the SLES 11 installer. As a result, the Welcome screen displays.
2. In the Welcome screen, perform the following steps.
   a. In the Language drop-down menu, select English (US).
   b. In the Keyboard Layout drop-down menu, select English (US).
   c. Read the License Agreement and, if you agree, select the I Agree to the License Terms check box.
   d. Select the Next button.
3. In the Media Check screen, you can check the installation media to avoid install problems by selecting the Start Check button. Once you have checked the installation media, select the Next button.
4. In the Installation Mode screen, select the New Installation radio button and then select the Next button.
5. In the Clock and Time Zone screen, perform the following steps.
   a. In the Region drop-down menu, select the country or geographic location where the machine will reside.
   b. In the Time Zone drop-down menu, select the time zone where the machine will reside.


   c. Update the date and time by performing the following steps.
      i. Select the Change button.
      ii. In the Current Time field, enter the current time in UTC format.
      iii. In the Current Date field, enter the current date.
      iv. Select the Accept button.
   d. Verify that the Hardware Clock Set to UTC check box is selected.
   e. Select the Next button.
6. In the Server Base Scenario screen, perform the following steps.
   a. Select the Physical Machine (Also Fully Virtualized Guests) radio button.
   b. Select the Next button.
7. In the Installation Settings screen, verify that all the configuration options are correct and then select the Install button.
8. In the Confirm Package License dialog box, read the license and, if you agree with it, select the I Agree button.
9. In the Confirm Installation dialog box, select the Install button.
10. Wait several minutes for the installation to complete.
11. In the Password for the System Administrator root screen, perform the following steps.
    a. In the Password for Root User field, enter the root user's password.
    b. In the Confirm Password field, reenter the root user's password.


    c. Select the Next button.
12. In the Hostname and Domain Name screen, perform the following steps.
    a. In the Hostname field, enter the machine's hostname.
    b. In the Domain Name field, enter the machine's domain name.
    c. Clear the Change Hostname via DHCP check box.
    d. Select the Next button.
13. In the Network Configuration screen, select the Next button.
14. In the Test Internet Connection screen, select the Next button.
15. In the Network Services Configuration screen, select the Next button.
16. In the User Authentication Method screen, select the appropriate authentication method and then click the Next button. Note: If you select an authentication method other than Local, additional configuration steps may be needed.
17. In the New Local User screen, populate each field with the appropriate information and then select the Next button.
18. In the Release Notes screen, select the Next button.
19. Wait several minutes for hardware configuration to complete.
20. In the Hardware Configuration screen, select the Next button.
21. In the Installation Completed screen, select the Finish button. As a result, the login screen displays.
22. In the login screen, perform the following steps.
    a. In the Username field, enter root.
    b. Select the Log In button.
    c. In the Password field, enter the root user's password.
    d. Select the Log In button.
23. If you have enabled the Linux* operating system's firewall, perform the procedure in section 2.3, Enabling Ports.
24. Perform the procedure in section 2.4, Installing the Java Runtime Environment.
25. Perform the procedure in section 2.6, Set Parameter Limits for ESG.
26. Perform the procedure in section 3.3, Installing Service Gateway.

2.3 Enabling Ports

If you have enabled the Linux* operating system's firewall, the ports that ESG requires are disabled. You must ensure that four ports are open on the firewall for the following processes:

- A TCP port for the Management Console, which clients use to access the web interface. The default Management Console port is 8443.
- A TCP port for Operation, Administration, and Management (OAM) communication. The port that ESG uses for this is defined during the postinstall process. The default OAM communication port is 9443.
- A TCP port for exchanging files between nodes in an ESG cluster. The port that ESG uses for this is defined during the postinstall process. The default OAM file port is 9444.

- A UDP port for nodes to communicate about whether a cluster election needs to occur. The port that ESG uses for this is defined during the postinstall process. The default OAM cluster election port is 9445.
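If the firewall is iptables-based, rules along the following lines would open the four default ports. As a precaution, the sketch only prints the rules for review rather than applying them (applying requires root, and your port numbers may differ if you changed them during postinstall).

```shell
# Generate (print, do not apply) iptables rules for the default ESG ports.
# 8443/9443/9444 are TCP; the cluster-election port 9445 is UDP.
rules=""
for p in 8443 9443 9444; do
  rules="${rules}iptables -A INPUT -p tcp --dport $p -j ACCEPT
"
done
rules="${rules}iptables -A INPUT -p udp --dport 9445 -j ACCEPT"
printf '%s\n' "$rules"
```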

2.4 Installing the Java Runtime Environment

1. If you have not already done so, install a JRE on the Linux machine. The JRE must be version 1.6.0_22 or greater.
2. Set the environment variable JRE_HOME to point to the JRE. ESG uses the JRE_HOME environment variable to locate the JRE.
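A quick way to check an installed JRE against the 1.6.0_22 floor is a version-sort comparison. The helper below is a sketch: it assumes GNU `sort -V` is available and that the version string follows the `1.6.0_NN` style printed by `java -version`.

```shell
# Sketch: returns true if the given JRE version string is >= 1.6.0_22.
# Assumes GNU sort -V and "1.6.0_NN"-style strings from `java -version`.
jre_version_ok() {
  min="1.6.0_22"
  [ "$(printf '%s\n' "$min" "$1" | sort -V | head -n 1)" = "$min" ]
}
jre_version_ok "1.6.0_24" && echo "1.6.0_24: OK"
jre_version_ok "1.6.0_21" || echo "1.6.0_21: too old"
```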

2.5 Setting the Path to the CLI

To avoid specifying the full path every time you execute an ESG CLI command, perform the following steps.

1. Log into the Linux machine as the root user.
2. Open /etc/profile in a text editor.
3. Scroll to the bottom of the file and insert the following line: PATH=$PATH:/opt/scr/clibin
4. Save and close the profile.
5. Log out of the machine and log back in.
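The edit in step 3 can be rehearsed on a scratch copy of /etc/profile before touching the real file as root; the sketch below does exactly that, then shows the appended line.

```shell
# Rehearse the /etc/profile edit on a scratch copy first.
profile="$(mktemp)"
cp /etc/profile "$profile" 2>/dev/null || touch "$profile"
printf '%s\n' 'PATH=$PATH:/opt/scr/clibin' >> "$profile"
tail -n 1 "$profile"
```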

2.6 Set Parameter Limits for ESG

The Linux OS* has system-wide limits for a number of parameters that must be reset for ESG to install and run properly. To change these kernel parameters, perform the following steps.

1. Log on as a user with root authority.
2. Open /etc/sysctl.conf in a text editor.
3. In sysctl.conf, insert the following entries, each on its own line:
   kernel.msgmni=160
   kernel.shmmax=2684354560
   kernel.msgmnb=512000
   net.unix.max_dgram_qlen=1000
   net.core.optmem_max=20480
   net.core.wmem_default=135168
   net.core.wmem_max=135168
   net.core.rmem_default=135168
   net.core.rmem_max=135168
4. Save and close sysctl.conf.
5. Load the settings from /etc/sysctl.conf by executing the command: sysctl -p. This reloads the sysctl.conf parameters for the current session, so a reboot is not needed.
6. To view the updated parameters, execute the command: ipcs -l
7. To view all sysctl settings, execute the command: sysctl -a
8. Open /etc/security/limits.conf in a text editor.


9. In limits.conf, insert the following lines:
   <ESG> hard nofile 65536
   <ESG> soft nofile 65536
10. Change <ESG> to the user that ESG is installed under. Typically, the user is nobody.
11. Save the limits.conf file.
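Both edits can be staged in scratch files for review before appending to the real /etc/sysctl.conf and /etc/security/limits.conf as root. The sketch assumes nobody as the ESG user, per step 10.

```shell
# Stage the required entries in scratch files for review before appending
# them to /etc/sysctl.conf and /etc/security/limits.conf as root.
sysctl_frag="$(mktemp)"
cat > "$sysctl_frag" <<'EOF'
kernel.msgmni=160
kernel.shmmax=2684354560
kernel.msgmnb=512000
net.unix.max_dgram_qlen=1000
net.core.optmem_max=20480
net.core.wmem_default=135168
net.core.wmem_max=135168
net.core.rmem_default=135168
net.core.rmem_max=135168
EOF
limits_frag="$(mktemp)"
esg_user=nobody    # assumption: ESG runs under nobody (see step 10)
cat > "$limits_frag" <<EOF
$esg_user hard nofile 65536
$esg_user soft nofile 65536
EOF
wc -l < "$sysctl_frag"
```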

3.0 Installation Procedure for ESG


Service Gateway is installed using the RPM* tool.

3.1 Permissions for Service Gateway

For security reasons, it is recommended that ESG be installed under the user nobody. The user nobody does not have root access, nor does the user have a shell. Only people who need command-line access to ESG should have access to ESG's user ID. Installing ESG under an ID with low permissions means that application programmers cannot use privileged port numbers, which are those below 1024.
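A practical consequence: any listener an application configures must use a port above the privileged range. A one-line check like the sketch below can catch a bad choice early (the 8443 value is just the default Management Console port used as an example).

```shell
# Sketch: flag ports the unprivileged ESG user cannot bind (below 1024).
port=8443    # example: the default Management Console port
if [ "$port" -lt 1024 ]; then
  verdict="privileged - not bindable by nobody"
else
  verdict="unprivileged - OK"
fi
echo "port $port: $verdict"
```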

3.2 Prerequisites

Prior to installing ESG, you need the following:

- Administrator rights to install Service Gateway.
- On the machine where ESG will be installed, a NIC bound to an isolated network. An isolated network is one that does not permit external access of any kind. During ESG's postinstall, you will bind the service gateway's management traffic to this NIC.
- root access to the machine. root access is required to register Service Gateway as an OS service.
- esg-runtime-[os]-64bit-[rx_y_z].rpm, where [os] is the name of the Linux operating system and x_y_z is the release number of Service Gateway.
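Before copying the RPM over, it can be worth checking that the file name matches the pattern above. The glob in this sketch is an assumption generalized from the example name in section 3.3 and may need adjusting for other releases.

```shell
# Sketch: sanity-check an ESG RPM file name against the documented pattern
# esg-runtime-[os]-64bit-[rx_y_z].rpm. Glob inferred from the example name.
rpm_file="/tmp/esg-runtime-as5-64bit-r2_8_0.rpm"
case "$(basename "$rpm_file")" in
  esg-runtime-*-64bit-r*_*_*.rpm) name_ok=yes ;;
  *)                              name_ok=no ;;
esac
echo "RPM name check: $name_ok"
```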

3.3 Installing Service Gateway

To install Service Gateway, perform the following steps.

1. Copy the ESG RPM into a directory on the target system. Use SCP (secure copy) or FTP to do this.
2. Ensure that you have root privileges to do the RPM install. For security reasons, it is recommended that you install ESG under a non-root user.
3. If you plan on using load balancing for HTTP traffic, stop this procedure and refer to section 6.0, Front End Load Balancing for HTTP Traffic.
4. If you are installing this instance of ESG as a slave node in a cluster, stop performing this procedure and instead refer to section 5.0, Managing a Collection of Service Gateway Machines. If you are installing this as a master node or as a standalone instance of ESG, continue on.
5. Execute the following command to install ESG: rpm -i [name of rpm], where [name of rpm] is the absolute file path to the RPM. For example:
   rpm -i /tmp/esg-runtime-as5-64bit-r2_8_0.rpm


6. Determine how many network interfaces the ESG runtime needs to use. If the machine where ESG will be installed does not have enough NICs, you must either install the NICs now or install ESG on a different machine. CAUTION: For ESG to use NIC hardware installed after postinstallation, you must run another postinstall after the hardware is installed.
7. Start the postinstallation process by executing the following command: cli postinstall
   a. When asked if you want to postinstall, type yes and then press Enter.
   b. When asked to specify a value for JRE_HOME, use the default value by pressing Enter.
   c. When asked if you want to add this node to an existing cluster, type no and press Enter.
   d. When asked to enter the management interface from the list shown, type the name of the NIC bound to an isolated network and then press Enter.
   e. When asked to specify a userid or user name as which this software should run, accept the default by pressing Enter.
   f. When asked to specify a groupid or group name as which this software should run, accept the default by pressing Enter.
   g. When asked to enter a port number for the Web Interface, accept the default by pressing Enter.
   h. When asked to enter a name for this cluster, accept the default by pressing Enter.
   i. When asked to enter a port number for OAM cluster communication, accept the default by pressing Enter.
   j. When asked to enter a port number for OAM cluster file transfer, accept the default by pressing Enter.
   k. When asked to enter a port number for OAM cluster election, accept the default by pressing Enter.
   l. When asked "Are these OK", use the default answer of yes by pressing Enter.
8. Configure the ESG service so that it automatically starts each time the machine restarts by executing the following command: chkconfig --add soae
9. Start ESG by executing the following command: cli serviceStart. For additional details about starting and stopping the service, refer to section 3.4, Starting and Stopping ESG Service.

3.3.1 Example of Postinstalling ESG

The following is an example of postinstalling Service Gateway on a master node or standalone instance.

~> cd RPM
~> rpm -i /tmp/esg-runtime-as5-64bit-r2_08_0.rpm

ESG operational code will use openSSL libraries installed in /opt/scr-openssl/ssl/lib.


Next run the script: /opt/scr/clibin/cli postinstall and answer the questions.

/etc/init.d/soae has been installed. It supports chkconfig:
  chkconfig --add soae
  /opt/scr/clibin/cli serviceStart
or it can be manually linked into the desired rc initialization directories.

[root@iclab002 ~]# cli postinstall

Please enter value for JRE_HOME (default="/usr/java/latest/jre"):

Add this node to an existing cluster (y/n, or q to quit): n
Detecting network configuration

Intf   Address
-----  ----------------
eth0   10.203.43.59
eth1   10.1.133.64
lo     127.0.0.1

Enter the management interface from the above list [default=eth1]:
Selected eth1 for management interface
Enter a userid or user name as which this software should run (default=nobody):
Enter a groupid or group name as which this software should run (default=nobody):
Enter port number for Web Interface [8443]: Using 8443
Enter name for this cluster [ESG-cluster]: Using Cluster name ESG-cluster
Enter port number for OAM cluster communication [9443]: Using 9443
Enter port number for OAM cluster file transfer [9444]: Using 9444
Enter port number for OAM cluster election [9445]: Using 9445
Selected the following:
  cluster name: ESG-cluster
  OAM cluster communication port: 9443
  OAM cluster file transfer port: 9444


OAM cluster election port: 9445
Are these OK (yes or no) [yes]:
Using these values.
Successfully installed

3.4 Starting and Stopping ESG Service
To start the ESG, perform the following steps.
1. To automatically start Service Gateway when the Linux OS is restarted or rebooted, execute the command chkconfig --add soae.
2. To start the ESG, execute the command cli serviceStart.
3. To determine whether the ESG is running, execute the command cli status. If the string ACT is in the command output, then the service has started. The following is an example of this command's output.

CLUSTER:1(ESG-cluster) state=ACT
NODE:1-0(iclab002) state=ACT Service state=ACT Master=YES MasterName=iclab002 Mode=NORMAL
uptime: 3 days, 15 hours, 35 minutes, 1 seconds
Current Config: factory
1 TCAs (WARNING=1)
0 Alarms
0 Non-Act Managed Objects
0 Apps Deployed
*** status Sun Sep 19 12:22:02 CDT 2010 ***

4. To stop the ESG, execute the command cli serviceStop.
5. To determine whether the ESG has stopped, execute the command cli status. If the string Service state=OOS is in the command output, then the service has stopped. The following is an example of this command's output.

*** status Sun Sep 19 12:23:31 2010 ***
Node is down! Service state=OOS
*** status Sun Sep 19 12:23:31 2010 ***
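The status checks in steps 3 and 5 can be scripted. The helper below is a minimal sketch, assuming cli status prints "Service state=ACT" when the service is up and "Service state=OOS" when it is down, exactly as in the example output above; the sample strings are copied from that output.

```shell
# Minimal sketch: decide whether the ESG is up from `cli status` output.
# On a live system you would capture the output with: status="$(cli status)"
is_esg_active() {
  case "$1" in
    *"Service state=ACT"*) return 0 ;;  # service started
    *) return 1 ;;                      # stopped or unknown state
  esac
}

# Sample strings taken from the example output above.
sample_up="NODE:1-0(iclab002) state=ACT Service state=ACT Master=YES"
sample_down="Node is down! Service state=OOS"

is_esg_active "$sample_up"   && echo "ESG is running"
is_esg_active "$sample_down" || echo "ESG is stopped"
```

A wrapper like this is useful in init or monitoring scripts that must wait for the service to reach ACT before continuing.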

3.5 Uninstall and Reinstall Service Gateway
To uninstall and reinstall ESG, perform the following steps.
1. Execute the command cli serviceStop. As a result, the following output displays.
Stopping soaed: [ OK ]
2. Execute the command: rpm -e ESG.
3. Once the RPM is removed, reinstall ESG by performing the procedure in section 3.3 Installing Service Gateway.
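Taken together, the steps above form a short shell sequence. This is a hedged sketch, not a definitive script: it reuses the RPM file name from the example in section 3.3.1, which you should replace with your actual package.

```shell
cli serviceStop        # step 1: expect "Stopping soaed: [ OK ]"
rpm -e ESG             # step 2: remove the installed package
# Step 3: reinstall per section 3.3 (RPM name taken from the 3.3.1 example):
rpm -i /tmp/esg-runtime-as5-64bit-r2_08_0.rpm
cli postinstall        # section 3.3 requires a postinstall after the RPM install
```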


3.6 Making a Network Interface Active
If a NIC is installed prior to postinstallation and the NIC is down, you can activate the network interface and configure the ESG to use it by performing the following steps.
1. Stop the ESG by executing the command cli serviceStop.
2. Display a list of all the network interfaces installed on the machine by executing the command: ifconfig -a.
3. From the network interface list, identify the inactive NIC that you need to activate.
4. Verify that the NIC was installed prior to ESG postinstallation. If it was installed after a postinstall, then stop performing this procedure because the ESG cannot use the network interface.
5. Activate the NIC by executing the command: ifconfig [network interface] up.
6. Execute the command cli scanInterface --preserveOamInterface.
7. Start the ESG by executing the command cli serviceStart.
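Assembled into one sequence, the steps above look as follows; eth2 is a hypothetical interface name standing in for whichever inactive NIC you identified in step 3.

```shell
cli serviceStop                            # step 1: stop the ESG
ifconfig -a                                # step 2: list all interfaces
# Steps 3-4: pick the inactive NIC (here: hypothetical eth2) and confirm
#            it was present before the ESG postinstall.
ifconfig eth2 up                           # step 5: activate the NIC
cli scanInterface --preserveOamInterface   # step 6: rescan interfaces
cli serviceStart                           # step 7: restart the ESG
```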

3.7 Making a Network Interface Inactive
To make a NIC inactive and configure the ESG to stop using the inactive NIC, perform the following steps.
1. Stop the ESG by executing the command cli serviceStop.
2. Display a list of all the network interfaces installed on the machine by executing the command: ifconfig -a.
3. From the network interface list, identify the active NIC that you need to deactivate.
4. Deactivate the NIC by executing the command: ifconfig [network interface] down.
5. Execute the command cli scanInterface --preserveOamInterface.
6. Start the ESG by executing the command cli serviceStart.


4.0 Accessing the Management Console

The Management Console provides web-based access to the administrative functions of the Service Gateway runtime. The following sections explain how to log in to the Intel Expressway Service Gateway Management Console immediately after the ESG is installed or upgraded, and how to resolve the security warnings that occur when a user first logs in to the Management Console.

4.1 Logging into the Management Console
After you have installed Service Gateway, you can access the Management Console by performing the following steps.
1. Verify that you have the appropriate credentials to log in to the Management Console.
2. Open a web browser. The Management Console is only supported on Mozilla Firefox* 3.0 or higher and Internet Explorer* 6.0 or higher.
3. Verify that the web browser is configured with the following settings: JavaScript* is enabled, SSL is enabled, and popup windows are allowed.
4. In the web browser's address bar, type the following address and then press enter: https://[hostname]:[port number].

[hostname] is the name or IP address bound to the management network interface. The management network interface is a NIC bound to an isolated network, which is a network that does not permit external access. You specified the management NIC during the postinstall process.
[port number] is the web interface port specified during postinstallation. The default port number is 8443.

5. In the User name and Password fields, enter valid login credentials. If login credentials have not been set up yet, then you can use one of the following default usernames.

Table 4. Default Login Credentials for Management Console
User ID     Password    Privileges
admin       passwd      Security administration, Operator administration, and Configuration administration
opsadmin    passwd      Operator administration only
cfgadmin    passwd      Configuration administration only
secadmin    passwd      Security administration only

WARNING: THESE DEFAULT LOGIN CREDENTIALS SHOULD ONLY BE ALLOWED IN TESTING ENVIRONMENTS. ALLOWING THE USE OF THESE CREDENTIALS IN A PRODUCTION ENVIRONMENT IS INSECURE.

6. Select the Sign In button. As a result, the Management Console displays in your web browser. If your username has been assigned all the ESG roles, then the following page displays.

Note: A warning may display about security acceleration hardware. This warning only appears if you have Cavium network hardware cards installed on the same system as Service Gateway. To remove this warning, refer to the Installation Guide for Intel Expressway Service Gateway, which provides the integration procedure for ESG and Cavium cards.
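Before opening a browser, you can assemble and check the console address from a shell. The sketch below uses assumed example values (the management NIC address 10.1.133.64 from the postinstall transcript and the default Web Interface port 8443); the curl probe is commented out because it needs a live ESG and, with the self-signed certificate, the -k flag.

```shell
# Build the Management Console URL from the values chosen at postinstall.
# These are example values; substitute your management NIC address and port.
mgmt_host="10.1.133.64"
wi_port="8443"
console_url="https://${mgmt_host}:${wi_port}"
echo "$console_url"
# Live reachability probe (uncomment on a machine that can reach the ESG):
# curl -k -s -o /dev/null -w '%{http_code}\n' "$console_url"
```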


4.2 Removing the Web Browser Security Warning Caused by the Management Console
You can access and manage the ESG from any computer's Internet Explorer or Firefox web browser. A browser can only communicate with the Management Console over an SSL connection. To avoid certificate errors when the Management Console is loaded into a web browser, you must install a client certificate into the ESG and the issuer's certificate in the web browser. When you first access the Management Console, the web browser may display a security warning about the connection being untrusted. The following screenshot is of the security warning Firefox displays.

A browser can only communicate with the Management Console over an SSL connection. This SSL connection requires an X.509 certificate that identifies the Management Console. When the ESG is installed, a self-signed certificate is automatically generated and archived in a JKS-type keystore. This certificate is only valid for 3 months after the installation. Once the original SSL certificate expires, you must delete the expired certificate and then create a new one. You can use the keytool provided with your JRE installation to create, delete, and manage SSL certificates in this keystore.

Note: For details about managing SSL certificates in a keystore, refer to the following keytool documentation: http://download.oracle.com/javase/6/docs/technotes/tools/solaris/keytool.html.

To avoid certificate errors when the Management Console is loaded into a web browser, you must install a client certificate into the ESG and the issuer's certificate into the web browser. To install SSL certificates into the Management Console and the web browser, perform the following steps.
1. In a system where OpenSSL is installed, verify that you have root privileges.
2. Create the client's private key. For example: openssl genrsa -des3 -out client.key -passout pass:securityadmin. The output of this command is client.key, which is the client's private key.
3. Ensure that you retain the password that encrypted the key.
4. Generate a client certificate request using the client key. This certificate must identify the Management Console. For example: openssl req -new -key client.key -out client.csr. The output of this command is client.csr, which is the Certificate Signing Request (CSR) that will be sent to a Certificate Authority.
5. Send the CSR to a Trusted Root Certificate Authority. The Trusted Root CA signs the X.509 certificate and then returns this certificate to you. The signed X.509 certificate must be in PEM format and have the file extension crt.


6. If needed, obtain the CA Path that links the Trusted Root Certificate Authority who signed the X.509 certificate to the client certificate. The CA Path must be in PEM format.
7. If you have not already done so, install the issuer certificate into the web browser of each system where the Management Console will be accessed. If Firefox is used, then install the issuer certificate in the Certificate Manager's Authorities tab. If Internet Explorer is used, then install the issuer certificate in the Certificates dialog's Trusted Root Certification Authorities tab.

8. In the system where the ESG is installed, verify that you have root privileges. Then, create a folder named Cert.
9. Copy the following files into the Cert folder.
- CA Path file that contains a chain of PEM-format certificates, starting with the immediate CA certificate that signed the target certificate, following through any intermediate CA certificates if applicable, and ending with the high-level (root) CA. This file must be in PEM format.
- Client certificate: the X.509 certificate that identifies the Management Console and was signed by a CA. This must be in PEM format.
- Client certificate's private key, used by the ESG to decrypt data sent by a web browser. The web browser uses the client certificate to encrypt the data.

10. Verify that you have root privileges in the system where the ESG is installed.
11. In the system where the ESG is installed, execute the cli setWiCert command. To successfully execute the command, you must specify the absolute path to the CA Path file, the client key, and the client certificate. For example, if the certificates and key are located in /home/lablogin/cert and the certificate file name is client.pem, the key file name is client.key, and the CA Path file name is client_root.pem, then you execute the following command.

cli setWiCert -w /home/lablogin/cert/client.pem -k /home/lablogin/cert/client.key -c /home/lablogin/cert/client_root.pem
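The key and CSR generation in steps 2 through 4 can also be run non-interactively. This is a sketch under assumptions: the -subj fields and the 2048-bit key size are illustrative choices not taken from this guide, and the pass phrase matches the example in step 2.

```shell
# Step 2: create the client's private key, encrypted with the given pass phrase.
openssl genrsa -des3 -passout pass:securityadmin -out client.key 2048

# Step 4: generate the certificate request from that key. The subject below
# is a placeholder; use values that identify your Management Console host.
openssl req -new -key client.key -passin pass:securityadmin \
    -subj "/O=Intel/OU=Expressway/CN=esg-console.example.com" \
    -out client.csr

# client.csr is the CSR you send to the Trusted Root CA (step 5).
```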


5.0 Managing a Collection of Service Gateway Machines
A cluster is a group of linked servers that behave like a single server. The individual systems within the cluster are called nodes. The node that controls management and administration of the cluster is the master node. All other nodes are called slaves. The ESG cluster is a group of machines where Service Gateway is installed on each machine and all the ESG installations can be viewed and administered from the master node.

A Service Gateway cluster is a management cluster, which means it is used for simplifying management tasks across multiple machines (configuration changes in a master node are automatically propagated to the other nodes) and scalability (the more machines you have, the more messages a particular application can process on each machine). Out of the box, an ESG cluster is not a high availability cluster, meaning that it does not evenly distribute messages across the nodes in a cluster. If a client sends a message transaction to a particular node, then the whole transaction is handled by that node without any other node being involved in the message processing. Consequently, for redundancy, failover, and load balancing purposes, you may want to install a front end load balancer that evenly distributes message transactions across the cluster and will route messages away from a node if it fails.

The ESG nodes share the same system and application configuration data. If a configuration setting is changed in the master node, then that change is automatically replicated to all the other nodes in the cluster. In most cases, you can manage a cluster by making a change in the master node and then having the master node seamlessly propagate that change to all the other nodes. The data on the master node takes precedence over data in a slave node. When a slave node synchronizes data with a master node, all the data on the slave node that differs from data found on the master node is deleted on the slave node and replaced with the data from the master node. For example, if you uploaded and deployed an application called foobar to a master node, then the master node pushes this application to the slave nodes, which then deploy it in the same manner as the master node.

However, there are two exceptions to data being replicated across the nodes in a cluster: logs and statistical data. While logs and statistical data from all the nodes in a cluster can be viewed on the master node's Management Console, it does not mean the master node stores the slave nodes' logs or statistics. If a slave node dies, then the logs and statistical data stored on that machine can no longer be accessed by the master node.

A Service Gateway cluster provides the following features and functionality:
- In the master node, upload, deploy, and manage applications. When an application is deployed from the master node, it is deployed to all the slave nodes automatically.
- In the master node, manage system and application configurations and administration for all nodes in a cluster. When a particular application or system component is created, deleted, or modified in the master node, that change is propagated to all the other nodes in the cluster.


- To support scalability, the Management Console provides a single operational view across all members of the cluster. An attempt to access a slave node's Management Console causes an automatic redirect in a web browser to the master node's Management Console.
- If for any reason the master node's ESG service stops running, then a master election automatically takes place in the cluster. A master election is the process in which the slave nodes can no longer communicate with the master node's ESG service and as a consequence elect one of the remaining nodes to be the master. If the former master node's ESG service starts up again, it will automatically be added back into the cluster as a slave node.
- From the master node, you can collect statistics and debug application and system issues for all nodes in the cluster. The master node collects statistics, logs, message processing, and component status from all the nodes in the cluster and then presents that information within a single view in the master node's Management Console.
- Manual administrative changes are automatically executed across all nodes in the cluster. For example, if you deactivate an application configuration on the master node's Management Console, then the application configuration automatically becomes inactive on all the slave nodes in the cluster.
- From the master node's Management Console, the ability to execute cluster-wide commands that start, stop, and test components on all nodes in the cluster simultaneously.
- If a slave node fails, the cluster instantly identifies this failure and raises an alarm. Once identified, the cluster will not attempt to push any data to the slave node until the slave node starts up again. If the node that failed starts up again, the cluster immediately synchronizes all the data on the slave node with the data on the cluster. For example, if an application is deployed to the cluster while the slave node is down, when the slave node comes back up the cluster pushes that application onto the slave node.
- If a master node fails, the cluster performs a master election. A master election is the process by which a slave node becomes the master because the original master is no longer available. If the former master node comes back up, then it automatically rejoins the cluster as a slave node.
- If an ESG cluster processes HTTP message transactions, then you can use load balancing to intelligently distribute the messages across nodes. With clustering and load balancing combined, you can improve the availability and failover for applications deployed to the ESG cluster. When one node fails, the load balancer routes messages to another node, which will process the message the exact same way the failed node would have. For implementing load balancing, refer to section 6.0 Front End Load Balancing for HTTP Traffic.

5.1 Hardware, Software, and Network Requirements for a Cluster
In the ESG cluster, the nodes must conform to the following requirements.
- Currently, testing has been done on an 8-node cluster. This is the maximum number of recommended nodes in a cluster. If you create a cluster with more than 8 nodes, you may encounter issues related to data synchronization and internode communication.
- Must have a NIC available for the OAM process. All the nodes must use a network interface that is named the same and is bound to the same network for the ESG's Operation, Administration and Management (OAM) process. This is defined during the postinstall process. The default OAM NIC is eth0. For example, if the master node assigns the OAM process to a NIC named eth0 and the NIC is bound to the Acme network, then every other node must assign the OAM process to a NIC named eth0 and bind the NIC to the Acme network.
- Must have a TCP port available for OAM communication. All the nodes must have the same OAM communication port, which the nodes use to communicate with one another. This is defined during the postinstall process. The default OAM communication port is 9443.
- Must have a TCP port available for exchanging files between nodes. All the nodes must have the same OAM file port, which the nodes use to exchange files with one another. This is defined during the postinstall process. The default OAM file port is 9444.
- Must have a UDP port available for nodes to communicate about whether a cluster election needs to occur (i.e. nodes learn that the master node has died) and then, if necessary, which slave node will become the master. All the nodes must have the same OAM cluster election port. This is defined during the postinstall process. The default OAM cluster election port is 9445.
- If you have more than one ESG cluster on the same network, then the clusters cannot share the following ports: OAM communication port, OAM file port, and OAM cluster election port.
- If firewalls are erected between the nodes in a cluster, then the following ports must be opened in the firewalls: OAM communication port, OAM file port, and OAM cluster election port. The cluster election port is a UDP port and all other ports are TCP ports.
- If one node uses a security card for cryptographic acceleration, then every other node in the cluster must have one as well.
- If one node uses a Hardware Security Module, then every other node in the cluster must have one as well.
- You cannot cluster software installations and hardware appliances of the ESG together. The cluster must consist of either all software installations or all hardware appliances. However, an ESG cluster can contain both virtual machines and bare metal machines.
- The nodes should have the same ports in use at all times. For example, if the master node is using port 8443, then all the other nodes in the cluster should be using port 8443. You should avoid a situation where one node is using a port that no other node is using, or a node is not using a port that every other node is using.
- In the cluster, all the machines' clocks must be synchronized with one another to within a second. Before you create a cluster, it is highly recommended that you set up all the machines to use the same NTP time source and that the NTP time source have a low offset.
- Must have a TCP port available for the Management Console. All the nodes must provide access to the Management Console through the same port, and this port cannot be blocked by any node's firewall. The default Management Console port is 8443. If a firewall is erected between the master node and a user who is on a different network, then for the user to access the Management Console, the Management Console port must be opened on the firewall. This is a TCP port.
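Where firewalls sit between nodes, the ports named above would be opened along these lines. This iptables sketch is illustrative only and not taken from the guide; it assumes the default port numbers, which you should adjust if you chose different values during postinstall.

```shell
iptables -A INPUT -p tcp --dport 9443 -j ACCEPT   # OAM cluster communication
iptables -A INPUT -p tcp --dport 9444 -j ACCEPT   # OAM cluster file transfer
iptables -A INPUT -p udp --dport 9445 -j ACCEPT   # OAM cluster election (UDP)
iptables -A INPUT -p tcp --dport 8443 -j ACCEPT   # Management Console
```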


- Before setting up a cluster, verify that an appropriate hostname is assigned to each machine.
- It is highly recommended that all the members of a cluster have the same computing power, such as CPU and RAM. If they do not, then each node's runtime performance will differ from one another, such as message throughput and the size of the messages a node can process.
- If you have nodes with a number of CPU cores that differs from other nodes in the cluster, then you should always set the number of workflow threads to zero. If you set the workflow threads above zero, then you may degrade runtime performance because the ESG may use a number of threads that exceeds the number of CPU cores on one of the nodes.
- If you have machines with different amounts of disk space, then application design and deployment should be restricted based on the node with the lowest amount of disk space. For example, in a two-node cluster, node1 has 60 GB of disk space and node2 has 100 GB of disk space. In this scenario, you should design applications and file storage based on the limit of 60 GB.
- All nodes must run the same operating system and OS version.
- In order to identify the source of alarms, alerts, and logs, each node must have a unique name within the cluster. No node may have the same node name as another node in the cluster.
- On each node in the cluster, the JRE used by the ESG must have unlimited JCE installed.
- All the nodes must run the same version of Service Gateway.
- All machines should be either 32-bit or 64-bit machines. You should not combine 32- and 64-bit machines within a cluster.
- On a node, each NIC must be uniquely named; no two NICs may have the same logical name assigned to them.
- All the nodes must reside in the same timezone.
- Only static IP addresses should be assigned to each node's management network interface. If the IP address changes, then the node cannot communicate with any other node in the cluster until you manually update the IP on the node where it changed.
- When joining a node to a cluster, you may only join one node at a time.
- The network bound to each NIC on the master node must be the same network bound to the NIC of the same name on every other node in the cluster. For example, if you have a two-node cluster, the master node could have a NIC named eth1 on a network named Acme. Then, the slave node must have a NIC named eth1 on the same network named Acme.
- A master node will have a set of active NICs that are each assigned a name, such as eth1 and eth0. The slave nodes must have at least the same number of NICs with the same names as the master node. For example, if the master node has two active NICs named eth1 and eth0, then every slave node must have two NICs named eth1 and eth0.
- The number of network interfaces on a slave node can exceed the number of network interfaces on a master node. However, any additional network interfaces on the slave node will not be used in the cluster. For example, if the master node has two network interfaces and the slave node has three network interfaces, then only two network interfaces are used in the cluster.
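A quick way to compare candidate nodes against the checklist above is to print the relevant facts on each machine. The sketch below assumes a Linux system (it reads NIC names from /sys/class/net); run it on every node and compare the output by hand.

```shell
# Print the node facts that must match (or be unique) across the cluster.
echo "node name : $(hostname)"   # must be unique within the cluster
echo "timezone  : $(date +%Z)"   # must be the same on every node
echo "os        : $(uname -sr)"  # same OS and version on every node
for nic in /sys/class/net/*; do
  echo "nic       : $(basename "$nic")"   # names must match the master node
done
```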


5.2 Setting up a Service Gateway Cluster

To set up the ESG cluster, perform the following steps.
1. Determine whether you need to implement load balancing for HTTP traffic. If you do, then you must install and configure the load balancer on each node before you set up the cluster. To install and configure an ESG load balancer, refer to section 6.0 Front End Load Balancing for HTTP Traffic.
2. Obtain two or more machines where the ESG can be installed.
3. Verify that these machines conform to the requirements in the following sections:
- 1.1 Supported Servers
- 1.2 Hardware Requirements
- 1.3 Software Requirements
- 5.1 Hardware, Software, and Network Requirements for a Cluster

4. In the group of machines, select which one will be the master node.
5. Log into the master node via an SSH session. If you are not already root, then su to the root user id now.
6. In the machine which will become the master node, use the RPM to install the ESG. During the postinstall process, you must specify that the machine is NOT a node in a cluster. For the procedure about installing the ESG, refer to section 3.3 Installing Service Gateway.
7. Identify the name of the master node's management network interface, which is also known as the Operation, Administration, and Management (OAM) NIC. Each node has its own OAM network interface, which the node uses to communicate with every other node in the cluster and the master node uses to propagate configuration changes and application data to all other nodes in the cluster. To determine the name of the master node's management network interface, perform the following steps.
a. Log into the master node with a user account that has all the ESG roles assigned to it.
b. Execute the following command: cli moStatus -t intf. As a result, a list of all the network interfaces used by the ESG displays.
c. Execute the following command for one of the interfaces in the list: cli moDetails -t intf -n [name of network interface].
d. In the output of the moDetails command, search for the string Information specific to this object type.
e. In the Information specific to this object type section, search for the string Is OAM interface. If the string Is OAM interface = true, then this is the management interface. If the string Is OAM interface = false, then this is not the management interface.
f. Continue executing the cli moDetails command on each network interface until the output displays Is OAM interface = true.
8. Obtain the following information about the master node:
- URL to the Management Console, which includes the port number that the Management Console is listening on.
- Username and password that has full access to the instance of Service Gateway. This means all of the Service Gateway roles have been assigned to the username.
9. Verify that the system and application configurations are closed on the master node.
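The interface-by-interface check in step 7 can be scripted. The sketch below is hedged: it assumes cli moStatus -t intf lists one interface name per line after a header row, and that cli moDetails prints the literal string "Is OAM interface = true" for the management interface, as described above; the awk field choice is an assumption about the listing format.

```shell
for intf in $(cli moStatus -t intf | awk 'NR>1 {print $1}'); do
  if cli moDetails -t intf -n "$intf" | grep -q 'Is OAM interface = true'; then
    echo "OAM (management) interface: $intf"
    break
  fi
done
```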


10. In each slave node, perform the following steps.
a. Obtain an RPM of the ESG. The ESG version must be identical to the one installed on the master node.
b. Copy the ESG RPM into a directory on the target system. Use SCP (secure copy) or FTP to do this.
c. Ensure that you have root privileges to do the RPM install. For security reasons, it is recommended that you install the ESG under a non-root user.
d. Execute the following command to install the ESG: rpm -i [ESG rpm], where [ESG rpm] is the absolute file path to the ESG RPM.
e. Start the postinstallation process by executing the command: cli postinstall.
f. When asked if you want to postinstall, enter yes and then press enter.
g. When instructed to specify a value for JRE_HOME, either enter a directory location of the Java Runtime Environment or accept the default value. Then, press enter.
h. When asked to specify a management interface, enter the master node's management interface and then press enter.
i. When asked for a master node login and password, enter credentials that have full access to the master node's instance of Service Gateway. This means the user must have all the ESG roles assigned to it.
j. When asked if you accept the master node's X.509 certificate, enter yes and then press enter. The following command output displays if the node is successfully added to the cluster.
Successfully completed reload current
Successfully installed
k. To automatically start Service Gateway when the Linux OS is restarted or rebooted, execute the command chkconfig --add soae.
l. Start the ESG by executing the command cli serviceStart.

11. On the master node, execute the command cli status. Before you take any other action, the output of this command must display the string Service state=ACT. The following is an example of this output.

CLUSTER:1(ESG-cluster) state=ACT_DGRD
NODE:1-1(icbl021) state=ACT_DGRD Service state=ACT Master=NO MasterName=iclab002 Mode=INIT
uptime: 23 seconds
Current Config: HTTP
1 TCAs (WARNING=1)
2 Alarms (WARNING=2)
6 Non-Act Managed Objects (ACT_DGRD=3,OOS_AUTO=2,OOS_AUTO_START=1)
1 Apps Deployed (ACT_DGRD=1)
icbl021 view of other nodes in cluster:
NODE:1-0(iclab002) state=ACT
*** status Sun Sep 19 17:17:22 CDT 2010 ***


5.2.1 Example of Postinstalling Service Gateway on a Slave Node
The following is an example of postinstalling Service Gateway on a slave node.

cli postinstall
This command will reset the software package and delete any configs that may have been created.
Are you sure you want to postinstall (yes|no)? yes
You answered yes

Please enter value for JRE_HOME (default="/usr/java/latest/jre"):

Add this node to an existing cluster (y/n, or q to quit): yes

please input url of master node: https://intel002:8443
Detecting network configuration

Intf    Address
----    ----------------
eth0    10.203.43.59
eth1    10.1.133.64
lo      127.0.0.1

Enter the management interface from the above list [default=eth0]:
Selected eth0 for management interface
Master node login: admin
password:
[
[
  Version: V3
  Subject: CN=foobar, OU=Expressway, O=Intel
  Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5

  Key:  Sun RSA public key, 1024 bits
  modulus: 111400538014406489819022678058520748621231110454030743744078891355630332279362310240269499357704095494366156599741360612218430441653731771708414100629171160051422272436661896161290032264381790995380647091272486143721944408164849096079043119855400246809539051312275249617431894768579703614143032300551415556179
  public exponent: 65537
  Validity: [From: Mon Sep 13 20:29:50 CDT 2010,
             To: Thu Sep 10 20:29:50 CDT 2020]


  Issuer: CN=iclab002, OU=Expressway, O=Intel
  SerialNumber: [    4c8ed00e]
]
  Algorithm: [SHA1withRSA]
  Signature: (128-byte RSA signature hex dump omitted)
]
Do you accept the above certificate, y/n? (n)y
Successfully completed reload current
Successfully installed

5.3  Cluster Operation, Communication, and Management


Clustering is transparent to the administrator. However, each node within a cluster can be viewed individually on the Management Console, allowing you to determine the status of each individual machine. Any time a change is made to the master node, that change is propagated to all the nodes in the cluster. The following changes are propagated from one node to another:

- Application changes
- Application configuration changes
- System configuration changes
- ESG software changes

The following subsections explain how to use the master node's Management Console to view and manage all the nodes in a cluster.

5.3.1  Viewing the Status of a Node's Message Processing


Even if the cluster is behind a load balancer, each node will process different message transactions at different times. On some nodes, the message transactions may be processed successfully each time. On other nodes, all the message transactions may fail. In addition, the message throughput of one node may differ from another. The Management Console's Dashboard provides a filter that lets you see message processing for each node or for the entire cluster.

To view message transaction information about each node in a cluster, perform the following steps.

1. Log in to the Management Console with a user account that has the operation admin role assigned to it.
2. Select the Dashboard tab.
3. In the Dashboard tab, click the Node Selector drop-down menu. As a result, the drop-down menu displays options for the cluster and each node in the cluster.
4. To view data about messages processed by a particular node, select that node from the Node Selector drop-down menu. As a result, all the information displayed in the dashboard comes from the message processing performed by the node you selected.
5. In the Graph drop-down menu, select one of the following options.
   - Requests processed: tracks whether message transactions processed by a node were successful or failed.
   - Requests latency: latency is the time that elapses between Service Gateway receiving a message from a client and the runtime returning a message response to the client. This option tracks the latency for the node.
   - Open Transactions: tracks how many message transactions the node is currently processing.
6. To collect detailed statistics about the messages processed by the node, select Detailed from the Collect metrics drop-down menu.
7. To view only data about a particular application or operation that is being processed by the node, select the appropriate option from the Service Selector drop-down menu.


5.3.2  Managing Nodes in a Service Gateway Cluster


1. Log in to the Management Console with a user account that has the operations admin role assigned to it.
2. Select the Components tab.
3. In the Components tab, select the Nodes option.
4. In the Nodes table, consider the following information.
   - Name: the string the cluster uses to identify the node.
   - Host Name: identifies the hostname of the machine where the ESG is installed.
   - Role: indicates whether the node is the master node or a slave node.
   - State: indicates whether the master node can communicate with the slave node. If everything is functioning as expected, then the string ACTIVE appears in the State column. If communication between the master and slave node is failing, then the string COMMUNICATION_PROBLEM displays in the State column.
5. To stop, restart, or test the node, select the appropriate link in the Operations column.
6. To view interval alerts and the node's IP address, select the node's arrow in the Nodes table.
7. If alerts appear in the Interval Alerts table, you can remove them by selecting the Dismiss link.

5.3.3  Viewing a Node's Logs


By default, the Management Console provides a cluster-wide view of the logs generated from each node. This means that while you see all the logs generated from the cluster, you cannot tell which node the logs came from. To view only the logs generated from a particular node, perform the following steps in the Management Console.


1. Select the Logs tab.
2. In the Logs tab, click the Node Selector drop-down menu.
3. In the Node Selector drop-down menu, select the node that you want to view logs from.
4. Perform a log search for transactions, exceptions, commands, or alerts. As a result, logs display only if they were generated by the node chosen from the Node Selector drop-down menu.

5.3.4  Message and File Transfer between Nodes


In a cluster, all message and file transfers between nodes are implemented via secure communications. Message transfer is done via SSL over TCP. File transfer is done via SCP (Secure Copy) or SFTP (Secure File Transfer Protocol). When performing a postinstall on the master node, the following cluster communication settings are defined:

- Management interface: the network interface that exchanges instructions and information for Operation, Administration, and Management (OAM). This is also known as the OAM network interface. Each node has its own OAM network interface, which the node uses to communicate with every other node in the cluster. This is the interface that the master node uses to propagate changes and data to all other nodes in the cluster.
- OAM cluster communication port: the node's port where instructions and information for OAM are sent and received. By default, it is 9443.
- OAM cluster file transfer port: the port where files are exchanged between the nodes. By default, it is 9444.
- OAM cluster election port: if the master fails, this is the port where communication is exchanged to determine which node becomes the master. By default, it is 9445.

CAUTION: If you have more than one ESG cluster on the same network, then the clusters cannot share the following ports: the OAM communication port, the OAM file transfer port, and the OAM cluster election port.
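The three OAM ports described above must be reachable between every pair of nodes, and must not collide with another cluster on the same network. A minimal reachability sketch, assuming the default ports and using only the Python standard library (the helper names are ours, not part of the ESG CLI):

```python
import socket

# Default OAM cluster ports from the postinstall (adjust if your
# cluster was configured with different values).
OAM_PORTS = {"communication": 9443, "file_transfer": 9444, "election": 9445}

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_oam_ports(host):
    """Probe each OAM port on a peer node; returns {port name: reachable?}."""
    return {name: port_open(host, port) for name, port in OAM_PORTS.items()}
```

Running check_oam_ports against each peer node's management interface flags any port that is blocked by a firewall or already claimed by another cluster.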


5.4  Removing a Node from a Cluster


For a variety of reasons, such as replacing a node's hardware, a node may need to be removed from the cluster. You can remove slave and master nodes, but the procedure for each is different.

5.4.1  Removing a Slave Node from a Cluster


To remove a slave node from a cluster, perform the following steps.

1. Determine which node you need to remove from the cluster.
2. Obtain the node's name. For details about retrieving this information, refer to section 5.3.2 Managing Nodes in a Service Gateway Cluster.
3. Access both the master and slave nodes from CLI windows.
4. In the master node, perform the following steps.
   a. Before a node can be removed, all the nodes in a cluster must be in the active state. To determine the state of all nodes, execute the cli status command. If the service state for each node is state=ACT, then all the nodes are active and you can remove the node.
   b. To remove the node, execute the command cli removeNode -n [nodename]. As a result, the following output should display: Successfully deleted the node '[node name]'.
5. In the node that you removed from the cluster, execute the cli status command. The output of the cli status command must contain the following strings:
   Service state=ACT
   Master=YES
6. Even though the node is removed from the cluster, it still considers itself a part of a cluster in which it cannot communicate with any of the other nodes. If you need the node to be a completely standalone machine, then perform the following steps.
   a. Note that running the postinstall command will delete all your application, security, log, and system data. To save application configurations, you must export them from the Management Console. For the procedure about exporting configurations, refer to the Operation and Administration Guide for Intel Expressway Service Gateway.
   b. Execute the cli postinstall command.
   c. During postinstall, you will be asked if the node should be added to an existing cluster. Answer no.
   d. Start the service by executing cli serviceStart.
   e. In the Management Console, import the application configurations that you exported. For the procedure about importing configurations, refer to the Operation and Administration Guide for Intel Expressway Service Gateway.

5.4.2  Removing a Master Node from a Cluster


To remove a master node from a cluster, perform the following steps.

1. Obtain the master node's name. For details about retrieving this information, refer to section 5.3.2 Managing Nodes in a Service Gateway Cluster.
2. Log in to the master node.


3. In the master node, shut down the ESG service by executing the command cli serviceStop. As a result, after several minutes, another node in the cluster will be elected as master.
4. Log in to another node in the cluster.
5. In that node, execute the command cli status. In the command output, verify that another node has been elected master.
6. If it has, then log in to the former master node.
7. In the former master node, start the ESG service by executing the command cli serviceStart. As a result, the former master node is added back to the cluster as a slave node.
8. Access both the former master node and the current master node from CLI windows.
9. In the current master node, perform the following steps.
   a. Before a node can be removed, all the nodes in a cluster must be in the active state. To determine the state of all nodes, execute the cli status command. If the service state for each node is state=ACT, then all the nodes are active and you can remove the node.
   b. To remove the former master node, execute the command cli removeNode -n [nodename], where [nodename] is the name of the former master node. As a result, the following output should display: Successfully deleted the node '[node name]'.
10. In the node that you removed from the cluster, execute the cli status command. The output of the cli status command must contain the following strings:
    Service state=ACT
    Master=YES
11. Even though the node is removed from the cluster, it still considers itself a part of a cluster in which it cannot communicate with any of the other nodes. If you need the node to be a completely standalone machine, then perform the following steps.
    a. Note that running the postinstall command will delete all your application, security, log, and system data. To save application configurations, you must export them from the Management Console. For the procedure about exporting configurations, refer to the Operation and Administration Guide for Intel Expressway Service Gateway.
    b. Execute the cli postinstall command.
    c. During postinstall, you will be asked if the node should be added to an existing cluster. Answer no.
    d. Start the service by executing cli serviceStart.
    e. In the Management Console, import the application configurations that you exported. For the procedure about importing configurations, refer to the Operation and Administration Guide for Intel Expressway Service Gateway.


5.5  Changing the IP Address for a Node's Management Network Interface


The management network interface exchanges instructions and information for Operation, Administration, and Management (OAM). This is also known as the OAM network interface. Each node has its own OAM network interface, which the node uses to communicate with every other node in the cluster. This is the interface that the master node uses to propagate changes and data to all other nodes in the cluster. Only static IP addresses should be assigned to each node, and those addresses should never change. If the IP address changes for a node's management network interface, then the node will be unable to communicate with any of the nodes in the cluster. To address this issue, perform a procedure in one of the following subsections.

5.5.1  Changing the IP Address for a Slave Node's Management NIC


1. Determine which slave node has lost the ability to communicate with the cluster.
2. Verify that the issue is that the IP address for the node's management NIC has changed. To determine which NIC is the management network interface, perform the following steps in the node.
   a. Execute the following command: cli moStatus -t intf. As a result, a list of all the network interfaces used by the ESG displays.
   b. Execute the following command for one of the interfaces in the list: cli moDetails -t intf -n [name of network interface].
   c. In the output of the moDetails command, search for the string Information specific to this object type.
   d. In that section, search for the string Is OAM interface. If the string is Is OAM interface = true, then this is the management interface. If the string is Is OAM interface = false, then this is not the management interface.
   e. Continue executing the cli moDetails command on each network interface until the output displays Is OAM interface = true.

3. Obtain the node's name. For details about retrieving this information, refer to section 5.3.2 Managing Nodes in a Service Gateway Cluster.
4. In the master node, execute the following command:
   cli editNodeOamIp --nodename [Node Name] --oam_ip [new IP address for OAM NIC]
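The interface search described above can be scripted. This is a sketch that scans captured cli moDetails output for the Is OAM interface flag; the output format is taken from the steps above, and the function name is ours:

```python
def is_oam_interface(mo_details_output: str) -> bool:
    """Return True when the moDetails output for an interface contains
    a line like 'Is OAM interface = true'."""
    for line in mo_details_output.splitlines():
        if "Is OAM interface" in line and "=" in line:
            # Compare the value after '=' case-insensitively.
            return line.split("=", 1)[1].strip().lower() == "true"
    return False
```

Feed it the output of cli moDetails -t intf -n [name] for each interface listed by cli moStatus -t intf; the first interface for which it returns True is the management (OAM) interface.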

5.5.2  Changing the IP Address for a Master Node's Management NIC


1. Verify that the master node has lost the ability to communicate with the cluster.
2. Verify that the issue is that the IP address for the node's management NIC has changed. To determine which NIC is the management interface, perform the following steps in the node.
   a. Execute the following command: cli moStatus -t intf. As a result, a list of all the network interfaces used by the ESG displays.
   b. Execute the following command for one of the interfaces in the list: cli moDetails -t intf -n [name of network interface].
   c. In the output of the moDetails command, search for the string Information specific to this object type.
   d. In that section, search for the string Is OAM interface. If the string is Is OAM interface = true, then this is the management interface. If the string is Is OAM interface = false, then this is not the management interface.
   e. Continue executing the cli moDetails command on each network interface until the output displays Is OAM interface = true.
3. Obtain the node's name. For details about retrieving this information, refer to section 5.3.2 Managing Nodes in a Service Gateway Cluster.
4. Determine which slave node was elected as the master.
5. In the new master node, execute the following command:
   cli editNodeOamIp --nodename [Node Name] --oam_ip [new IP address for OAM NIC]


6.0  Front End Load Balancing for HTTP Traffic


Due to the importance and high volume of messages that Service Gateway receives from an HTTP client, you may need to set up a load balancer that distributes message requests to nodes in a Service Gateway cluster. By distributing messages this way, the ESG supports failover, maximizes message throughput, minimizes response time, and avoids overloading any particular ESG instance. To avoid the expense of a dedicated load balancer, the ESG provides an implementation of Linux Virtual Server* (LVS) that is highly scalable and available. The LVS load balancer can distribute messages to up to six nodes in an ESG cluster. Figure 1, Front End Load Balancing in an ESG Cluster, demonstrates how load balancing works in a three-node cluster.

Figure 1. Front End Load Balancing in an ESG Cluster

The following step-by-step process describes how load balancing works in an ESG cluster.

1. Only one node in the load balancer group accepts incoming connections from a client. This node is called the Director. During initial setup of the load balancer, the first real server that is started becomes the Director. If two nodes start at the same time, then the node with the lowest IP address becomes the Director.
2. All interfaces are on the same network. The interfaces labeled F1, F2, and F3 are front end interfaces. They are used to hold the VIP when a node becomes the Director. The interfaces labeled B1, B2, and B3 are back end interfaces. They are used by the load-balancing software to distribute network traffic.


3. The Director is assigned the Virtual IP address (VIP). The Director binds the VIP to an external interface (F1). If another node takes the Director role, the VIP is moved to its external interface. A VIP is an IP address that is not connected to a physical network interface card (NIC). The VIP is bound to one physical NIC on the node, such as eth0. The binding is based on the NIC's name. The physical NIC that the VIP is bound to is determined during the setup of the load balancer.
4. Messages are sent to the VIP, which is bound to a physical NIC.
5. Based on a load balancing algorithm, the Director determines what node will receive the message request.
6. Once that decision is made, the message is sent to the back end NIC of the Director's node.
7. From the Director's back end NIC, the message is sent to the receiving node's loopback network interface, also known as the LO. LO is a virtual network interface that is not connected to any hardware but is fully integrated into the system's internal network infrastructure.
8. The receiving node processes the message request. Then, the node sends a message response directly to the client.
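Step 1 of the process above states that when nodes start simultaneously, the node with the lowest IP address becomes the Director. The comparison is numeric per octet, not lexicographic, which is what Python's ipaddress module provides; a sketch of that tie-break (the function name is ours):

```python
import ipaddress

def elect_director(candidate_ips):
    """Return the candidate with the numerically lowest IP address,
    mirroring the tie-break rule described in the text."""
    return min(candidate_ips, key=ipaddress.ip_address)
```

Note that a plain string comparison would get this wrong: "10.0.10.102" sorts before "10.0.10.9" as text, but 10.0.10.9 is the lower address.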

6.1  Prerequisites for Load Balancing


In the load balanced environment, the ESG machines must conform to the following requirements.

- You must obtain the load balancer installer from the Service Gateway Customer Support Portal.
- The load balancer must be installed and configured on each machine that will load balance messages before the ESG postinstall is performed on those machines.
- Two IP addresses must be assigned to each machine in the cluster. One IP address is assigned to the front end network interface, which is used to communicate with the Director and hold the VIP if the node becomes the Director. The other IP address is assigned to the back end network interface, which is used to transmit messages to external endpoints.
- Each machine requires two network interfaces, one for the front end and the other for the back end. The NICs must be on the same subnet.
- You need one floating IP address, known as a VIP (virtual IP address), in addition to the two unique ones assigned to each machine. This is the IP address that a client uses to access the ESG. The subnet of the VIP must be different from all the IP addresses assigned to the machines in the cluster.
- All the IP addresses assigned to the machines must be static and never change.
- The machines must be in the ESG cluster. You must install, configure, and start the load balancer before you set up clustering. However, clustering must be set up right after that. To set up an ESG cluster, refer to section 5.0 Managing a Collection of Service Gateway Machines.
- No more than 6 nodes can be in a cluster.
- Each node must conform to the requirements in section 5.1 Hardware, Software, and Network Requirements for a Cluster.


6.2  Installing and Configuring a Load Balancer on a Service Gateway Cluster


1. Log in to a machine with root privileges.
2. Copy the load balancer installer to the machine.
3. To install the load balancer, execute the following command on the load balancer installer file: sh [loadbalancer].sh
4. By default, to execute the load balancer configuration commands, you must provide the absolute file path to the load balancer's bin directory: /opt/scr-lb/bin. To avoid specifying the full path, execute the following command: PATH=$PATH:/opt/scr-lb/bin
5. Familiarize yourself with how to use the lbconfig command. The lbconfig command sets up the load balancer on a machine by allowing you to specify settings, such as the VIP, the back end IP addresses that messages will be routed to, and the load balancing algorithms used by the Director. For information about lbconfig, refer to sections 6.6 Describing the Command Syntax for lbconfig and 6.6.1 Example of Executing lbconfig.
6. Execute the lbconfig command with the command options appropriate for your load balanced environment. The following is an example.
   /opt/scr-lb/bin/lbconfig --vip=vip:eth0:10.0.10.100/24 --backend_server=10.0.10.102:1,10.0.10.103:2 --lb_algo=wrr
7. You must configure the load balancer so that it runs automatically after the machine reboots. To do this, execute the following command: chkconfig --add soae_lb
8. You must encrypt communication between the nodes in a load balanced cluster by creating a VRRP password for the production system. To do this, perform the following steps.
   a. Execute the command /opt/scr-lb/bin/lbpasswd. The default password is passwd.
   b. When the Old password prompt displays, enter passwd.
   c. When the New password prompt displays, enter a 14-character alphanumeric string.
   d. When the Re-enter new password prompt displays, enter the same 14-character alphanumeric string that you specified in the New password prompt.

9. Start the load balancer service by executing the following command: service soae_lb start
10. Perform steps 1 through 9 on each machine that will be part of the load balanced cluster. On each machine, you must do the following:
    - The lbconfig command must be executed with the same command options on each machine. For example, if the lbconfig command option is --lb_algo=wrr on one machine, then the load balancing algorithm must be weighted round robin on every other machine in the load balanced cluster.
    - The VRRP password must be the same on each machine.

WARNING: In a load balanced cluster, if the lbconfig command is executed with command options on one machine that differ from the command options used on another machine, then the load balancer cannot function correctly.

WARNING: If the machines do not all use the same VRRP password, then the machines cannot decrypt communication from one another.


11. Place the machines where the load balancer was installed and configured into the ESG cluster. For instructions about setting up a cluster, refer to section 5.0 Managing a Collection of Service Gateway Machines.

6.3  Starting, Stopping, or Uninstalling the Load Balancer


1. To start the load balancer service, execute the following command: service soae_lb start
2. To stop the load balancer service, execute the following command: service soae_lb stop
3. To uninstall the load balancer service, execute the following command: /opt/scr-lb/bin/lbuninstall

6.4  Determining the Load Balancer Version


To view the load balancer version, execute the following command: /opt/scr-lb/bin/lbversion

6.5  Monitoring Traffic Handled by the Load Balancer


To view the load balancer status, the number of connections, and the size of messages handled by the load balancer, execute the following command: /opt/scr-lb/bin/lbstatus

As a result, output similar to the following displays:

VIP: VIP1, status: VIP Director
10.0.10.100:5000

Address:Port          Conn      InPkts     InBytes
10.0.10.100:55555        0           0           0
  10.0.10.103:55555      0           0           0
  10.0.10.102:55555      0           0           0
10.0.10.100:55556    27257    55189036      10227M
  10.0.10.103:55556     148      296740    54987180
  10.0.10.102:55556   27109    54892298      10172M


6.6  Describing the Command Syntax for lbconfig


The lbconfig command sets up the load balancer on a machine by allowing you to specify settings, such as the VIP and the load balancing algorithms that the Director uses. To configure the load balancer, use the command options in Table 5, List of lbconfig Command Options, with the lbconfig executable.

Table 5. List of lbconfig Command Options

--vrrp_if
  Specifies the network interface used for sending and receiving VRRP advertisements. If the VRRP network interface is eth0, then the command option looks like this: lbconfig --vrrp_if=eth0
  The ESG load balancer uses Virtual Router Redundancy Protocol (VRRP) to monitor the status of all the nodes in a load balanced cluster. If a node fails, then VRRP informs the LVS not to route messages to that node. In addition, if the Director fails, then VRRP informs the LVS of the failure and elects a new node as the Director. It is highly recommended that the VRRP network interface be identical to the physical network interface associated with the VIP. This is an optional command option. The default VRRP network interface is eth0. The VRRP advertisement broadcast interval is 1 second.

--backend_server
  Comma-separated list of the real servers' IP addresses (rs stands for real server). The real servers are the nodes to which the Director sends messages. The list must include the IP address of the node that holds the role of Director. If load balancing is implemented in a three-node cluster and the IP addresses are 10.0.10.100, 10.0.10.101, and 10.0.10.102, then the command option looks like this: lbconfig --backend_server=10.0.10.100,10.0.10.101,10.0.10.102
  You can add a weight parameter to each IP address that indicates the relative priority with which a node serves a request. The weights are used by the wrr and wlc balancing algorithms. If you use the weight parameter, then the command option looks like this: lbconfig --backend_server=10.0.10.100:3,10.0.10.101:5,10.0.10.102:3
  The weights are implemented as a ratio. For example, if two servers are set as weight 3 and weight 5, the first receives three connections for each five connections sent to the other.

--vip
  Defines the VIP, the physical NIC that the VIP is bound to, and the virtual IP address and netmask. The VIP name can consist of letters, digits, and underscores. The name of the VIP is represented as a network interface in the Service Gateway. If the VIP is 10.0.10.100/24 and the physical NIC is eth0, then the command option looks like this: --vip=vip:eth0:10.0.10.100/24
  To use load balancing in an HTTP application, you must select the VIP network interface from the input server's network interface drop-down menu.
  Note: This argument must precede any reference to the VIP in a --port argument.

Table 5. List of lbconfig Command Options (continued)

--last
  Displays the last lbconfig command executed. Useful for replicating lbconfig commands on other nodes in a cluster.

--lb_algo
  Specifies the default load balancing algorithm that the Director uses to distribute messages to the real servers. To specify the algorithm, you must enter the load balancing algorithm acronym. For a description of each algorithm, refer to Table 6, Front End Load Balancing Algorithms. The following list identifies what each load balancing algorithm acronym stands for.
  - rr = round robin
  - wrr = weighted round robin
  - lc = least-connection
  - wlc = weighted least-connection
  - sh = source hashing
  - sed = shortest expected delay
  - nq = never queue
  For example, if you need the Director to distribute messages using round robin, then the command option looks like this: lbconfig --lb_algo=rr

--port
  Port where a particular load balancing algorithm is used. This command option requires the name of the VIP's network interface, defined by the --vip argument typed before this argument. This is the format: --port=[name of VIP]:[port number]:[load balancing algorithm]. The port command option allows the Director to use different load balancing algorithms at once. If an HTTP application's input server uses the port number specified in the lbconfig command option, then messages are distributed based on the port's load balancing algorithm. For example:
  1. Execute the lbconfig command with these options: lbconfig --vip=vip:eth0:10.0.10.100/24 --backend_server=10.0.10.102:1,10.0.10.103:2 --port=vip:5555:sed --lb_algo=nq
  2. In an HTTP application, set the input server's port number to 5555.
  3. The Director then uses the sed load balancing algorithm to route messages.
  4. If any other port number is specified in the input server, then the Director uses the never queue load balancing algorithm for routing messages.

--persistence_timeout
  If the client establishes a connection to a node, any additional connections made by the client within a user-defined interval are routed to the same node. The persistence timeout is the period of time during which messages are routed to the same real server. The unit of time is seconds.

--lb_group_id
  If you have multiple load balancing groups, then you must specify a unique identifier for each group.

6.6.1  Example of Executing lbconfig


Before executing the lbconfig command, assume the system has the following requirements:


- This is a two-node cluster.
- The network interface for VRRP is eth0.
- The physical NIC that the VIP is associated with is eth0.
- The virtual IP address and subnet mask is 10.0.10.100/24.
- The real servers' IP addresses are 10.0.10.102 and 10.0.10.103. 10.0.10.102 has the highest priority.
- In general, the load balancing algorithm the Director must use is weighted round robin. In some rare cases, the Director must use the weighted least-connection algorithm.
- If multiple messages are sent by the same client within a 5 second window, then all the messages must be sent to the same real server.

If the system has the requirements described above, then you would execute the following command:

/opt/scr-lb/bin/lbconfig --vip=vip:eth0:10.0.10.100/24 --vrrp_if=eth0 --backend_server=10.0.10.102:2,10.0.10.103:1 --port=vip:5555:wlc --lb_algo=wrr --persistence_timeout=5

6.7  Defining Load Balancing Algorithms


Table 6, Front End Load Balancing Algorithms, lists and defines all the load balancing algorithms supported by the ESG.

Table 6. Front End Load Balancing Algorithms

round robin (rr)
  Sends requests to each server in sequential order on a first come, first served basis. For example, if three messages are sent to the VIP and there are three nodes in the cluster, then the Director sends a message to each node in sequential order: message1 goes to node1, message2 goes to node2, and message3 goes to node3.

weighted round robin (wrr)
  Behaves like round robin but is designed to better handle servers with different processing capacities. wrr routes messages based on the weights assigned to the real servers. Weights are assigned when the lbconfig command is executed. Servers with higher weights receive more connections first and process more messages than servers with lower weights. Servers with equal weights process an equal number of messages.

least-connection (lc)
  The Director sends messages to the server with the least number of established HTTP connections.


Table 6.
Algorithm

Front end Load Balancing Algorithms


Command Option Definition Behaves like least connection but is designed to better handle servers with different processing capacities. wlc routes messages based on the weights assigned to the real servers. Weights are assigned when the lbconfig command was executed. Servers with higher weights receive more connections than servers with lesser weights. Servers with equal weights get an equal number of connections. Generates a hash from the clients IP address which is then saved to a hash table. Then, whenever a client with a particular IP address sends a message, that message is always routed to the same server. Director sends messages to the server with the shortest expected delay. The servers expected delay is calculated as follows: (C + 1) / U Where C is the number of connections to the server and U is the weight of the server. If an idle server available, Director sends message to idle server. If all servers are unavailable, then the message is sent to the server with the shortest expected delay.

weighted leastconnection

wlc

source hashing

sh

shortest expected delay

sed

never queue

nq
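The shortest-expected-delay calculation from Table 6 can be sketched in a few lines. This is an illustrative model only, not the ESG implementation (the real scheduler runs inside the Director's load balancing service); the server tuples and the pick_sed helper are hypothetical names introduced for the example.

```python
# Hypothetical model of the shortest expected delay (sed) algorithm from
# Table 6: each server is a (name, connections, weight) tuple, and the
# Director would pick the server minimizing (C + 1) / U.

def pick_sed(servers):
    """Return the name of the server with the smallest (C + 1) / U."""
    return min(servers, key=lambda s: (s[1] + 1) / s[2])[0]

servers = [
    ("node1", 2, 2),  # (2 + 1) / 2 = 1.5
    ("node2", 1, 1),  # (1 + 1) / 1 = 2.0
]
print(pick_sed(servers))  # node1: the higher-weight server wins despite more connections
```

This also shows why weights matter: node1 holds more connections than node2 but still has the shorter expected delay because its weight is higher.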

6.8

Using Connection Affinity


Connection affinity is the process by which a group of messages is assigned a unique identifier and, based on that identifier, routed to the same real server. In the ESG, you can use the following types of connection affinity.

Source hashing: Generates a hash from the client's IP address, which is saved to a hash table. Whenever a client with a particular IP address sends a message, the Director uses the IP address to route the message to the same server. If the client's IP address changes, the messages are no longer routed to the same server. Source hashing is a load balancing algorithm.

Persistence timeout: If a client establishes a connection to a node, any additional connections made by that client within a user-defined interval are routed to the same node. The persistence timeout is the period of time, in seconds, during which messages are routed to the same real server. Persistence timeout is a command option that can be set when executing lbconfig.
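As a sketch of how source hashing keeps a client pinned to one node, the following maps a client IP to a stable server index. The hash-and-modulo scheme and the pick_sh helper are simplifications for illustration; the Director's actual hash table is internal to the load balancing service.

```python
# Illustrative sketch of source hashing (sh): a client's IP address hashes to
# a stable index, so every message from that IP goes to the same real server.
import hashlib

def pick_sh(client_ip, servers):
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["node1", "node2", "node3"]
first = pick_sh("10.0.0.7", servers)
# Repeat messages from the same IP always land on the same node...
assert pick_sh("10.0.0.7", servers) == first
# ...but if the client's IP address changes, affinity is not preserved.
print(first, pick_sh("10.0.0.8", servers))
```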

6.9

Configuring an Application to use Front End Load Balancing


In the Management Console, perform the following steps.

1. Upload an HTTP proxy application into an application configuration.
2. In the application configuration, open the application's input server for editing.
3. In the input server, select the name of the VIP from the Network Interface dropdown menu.

4. When lbconfig was executed, you may have specified that a particular load balancing algorithm be used when a particular port is specified in the input server's Port field. If necessary, enter that port number in the input server's Port field.
5. Activate the application configuration.

6.10

Failover and Electing a Director


Two types of checks are periodically performed to address an application or Director failing.

VRRP (Virtual Router Redundancy Protocol) heartbeat: Each node in the cluster sends a VRRP advertisement every second to every other node in the cluster. The heartbeat checks whether the Director has failed and whether the remaining nodes in the cluster are active. If the Director has failed, the other nodes in the cluster elect another Director.

Healthcheck: The Director sends a heartbeat every two seconds to each application that uses load balancing. If an application is not functioning, the Director stops sending connections to that application.
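The healthcheck above can be modeled as a periodic probe that drops unresponsive nodes from rotation. Everything here is a hedged sketch: the probe callable, node names, and the healthy_nodes helper are hypothetical; only the two-second interval comes from the text.

```python
# Hedged sketch of the Director's healthcheck described above: every cycle
# (the text says every two seconds), each load-balanced application is probed,
# and connections stop flowing to any node whose probe fails.
HEALTHCHECK_INTERVAL_S = 2  # from the text: one heartbeat every two seconds

def healthy_nodes(nodes, probe):
    """Keep only nodes whose probe succeeds; the Director would stop
    sending connections to the rest until they recover."""
    return [n for n in nodes if probe(n)]

up = {"node1": True, "node2": False, "node3": True}
print(healthy_nodes(["node1", "node2", "node3"], lambda n: up[n]))  # ['node1', 'node3']
```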

CAUTION:

IF A NODE FAILS, THEN EXISTING CONNECTIONS AND DATA ARE LOST. UNTIL THE DIRECTOR NOTICES THE NODE IS DOWN, ANY CONNECTIONS DIRECTED TO IT ARE LOST. THE HEALTH CHECKER PERFORMS A CONNECTION TEST ON EACH NODE EVERY TWO SECONDS, SO THERE IS A TWO-SECOND WINDOW IN WHICH DATA COULD BE LOST DUE TO A NODE FAILING.

The VIP is bound to one physical NIC on the Director's node, such as eth0. Each node in the cluster associates the VIP with the same NIC name, even when that node is not the Director and does not possess the VIP. For example, in a three-node cluster where node1 is the Director and binds the VIP to eth0, node2 and node3 also associate the VIP with eth0. Consequently, if the Director node fails, the Director role, with its VIP, moves to a different node. The new Director already knows which NIC the VIP is bound to based on the network interface's name, even though the network interface's IP address has changed. For example, consider a two-node cluster where node1 is the Director and the VIP is bound to eth0. If node1 fails and node2 becomes the Director, node2 binds the VIP to eth0.

If the Director node fails, such as the machine crashing or the ESG service stopping, a Director election automatically takes place. A Director election is the process by which the load balancing service elects one of the nodes to be the Director.
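The election described above can be pictured as choosing one surviving node to take over the VIP. The priorities, node names, and elect_director helper below are entirely hypothetical; the real election is handled internally by the ESG load balancing service via VRRP, and its selection rule is not documented here.

```python
# Illustrative sketch of a Director election after a failure: among the
# surviving nodes, one is elected Director and takes over the VIP on the
# same NIC name (e.g. eth0). Priorities and the max-priority rule are
# assumptions made for this example, not the ESG's actual mechanism.

def elect_director(alive_nodes, priority):
    """Pick the surviving node with the highest priority as the new Director."""
    return max(alive_nodes, key=lambda n: priority[n])

priority = {"node1": 3, "node2": 2, "node3": 1}
alive = ["node2", "node3"]  # node1 (the old Director) has failed
print(elect_director(alive, priority))  # node2: it takes over the VIP on eth0
```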


7.0

Integrating Hardware Cavium Cards with Service Gateway


Service Gateway supports a Cavium Networks hardware card for cryptographic processing and acceleration. The ESG can offload security processing to the security card, which speeds up message throughput. To learn which versions of the Cavium card Service Gateway supports, contact customer support.

7.1

Prerequisites for Integrating a Cavium Card with Service Gateway


Before you can set up Service Gateway to use a Cavium card, note the following prerequisites.

- The Cavium card must be installed on the same system as Service Gateway.
- To integrate Service Gateway with the Cavium security processors, you must obtain a specially built tar file provided by Intel. To use the hardware for security processing, the ESG requires the OpenSSL libraries and device drivers contained in this tar file.

7.2

Installing a Cavium Device Driver


To integrate the ESG with a Cavium security card, you must install the Cavium driver on the system where the ESG is installed. To do this, perform the following steps.

1. Obtain the secCardDrivers-r[version_number].tgz file from the Salesforce website. This tar file contains the OpenSSL libraries and device drivers that the ESG requires in order to use the hardware for security processing.
2. On the system where the ESG is installed, create a folder named SecCardDriver. Copy the secCardDrivers-r[version_number].tgz into the SecCardDriver folder.
3. Stop the ESG service by executing the command cli serviceStop.
4. Verify that a Cavium card is installed by executing the command cli secCardDriverInfo. The command output must display the following statement:
   Found security card cavium...
   If that statement does not display, then you do not have a security card installed that is supported by the ESG and you cannot perform this procedure.
5. Install the Cavium device drivers by executing the secCardDriverInstall command, where for the -d, --driverbundle option you enter the absolute file path to the secCardDrivers-r[version_number].tgz. The following command is an example.
   cli secCardDriverInstall -d /home/lablogin/secCardDriver/secCardDrivers-r2.5.1.tgz
6. If a security card device driver and/or OpenSSL libraries were already installed, then the command fails with the following error message:
   There is a previous instance of openssl libraries and/or drivers installed. Please use --olddriverbackup to specify a file to save backup.
   If you receive this error message, execute the secCardDriverInstall command again with the -b, --olddriverbackup option, which specifies the absolute file path to which a backup of the existing driver is written before the new driver is installed. This backup file must be in tgz format. The following command is an example.
   cli secCardDriverInstall -d /home/lablogin/secCardDriver/secCardDrivers-r2.5.1.tgz -b /home/lablogin/secCardDriver/secCardDrivers_Backup.tgz
7. Start the ESG service by executing the command cli serviceStart.

7.3

Removing a Cavium Device Driver


After you perform the procedure in section 7.2, Installing a Cavium Device Driver, the Service Gateway is configured to use the Cavium card for cryptographic processing and acceleration. In some situations, such as the Cavium card failing and needing to be replaced, you may need to configure the ESG to stop using the Cavium card and instead use only the OpenSSL libraries installed with the software for security processing. To do this, perform the following steps.

CAUTION:

IF YOU EXECUTE THE SECCARDDRIVERIGNORE COMMAND, THE ESG WILL NOT USE THE SECURITY CARD FOR ANY CRYPTOGRAPHIC PROCESSING OR SECURITY ACCELERATION.

1. Stop the ESG service by executing the command cli serviceStop.
2. Configure the ESG to stop using the Cavium security card by executing the secCardDriverIgnore command. When executing this command, you must use the -b, --olddriverbackup option, which specifies the absolute file path to which a backup of the existing driver is written before it is removed. This backup file must be in tgz format. The following command is an example.
   cli secCardDriverIgnore -b /home/lablogin/secCardDriver/secCardDrivers_Backup.tgz
3. Start the ESG by executing the command cli serviceStart.

To reconfigure the ESG to use the security card, perform the procedure in section 7.4, Creating and Using a Backup of Cavium Device Driver.

7.4

Creating and Using a Backup of Cavium Device Driver


If you executed either the secCardDriverInstall or secCardDriverIgnore command, then a backup is usually created that contains the device drivers and/or OpenSSL libraries that were replaced by that command. The secCardDriverRestore command removes the existing device drivers and/or OpenSSL libraries and replaces those files with the drivers and libraries from the backup. To do this, perform the following steps.

1. Execute the secCardDriverInstall or secCardDriverIgnore command. Regardless of which command you execute, you must use the -b, --olddriverbackup option with that command. This generates the tar file that contains the backup.
2. Replace the existing device drivers and/or OpenSSL libraries with the ones in the backup tar file by executing the secCardDriverRestore command. When executing this command, you must use the -b, --olddriverbackup option, which specifies the absolute file path to the backup of the previously installed drivers. This backup file must be in tgz format. The following command is an example.
   cli secCardDriverRestore -b /home/lablogin/secCardDriver/secCardDrivers_Backup.tgz
3. Start the ESG service by executing the command cli serviceStart.


8.0

Upgrade Procedure
The Service Gateway runtime is upgraded using the Command Line Interface (CLI). When you have a working ESG, you should always perform an upgrade rather than a reinstallation except under extreme conditions, such as disk corruption. An upgrade allows you to go from any lower release to any higher release of the ESG. The upgrade command converts your existing configurations to the new release. A fresh installation completely deletes your existing application configurations. You may want to reinstall the current release for some reason.

You can back out an upgrade with the -b option on the cli installUpgrade command. To do this, you must have the tgz file generated by executing the upgrade_save command. You also need the RPM of the release you wish to back out to.

You can upgrade the ESG and move it to new hardware if the new hardware has the same set of network interfaces. For example, if the current hardware has eth0 and eth1, then the new machine must also have eth0 and eth1. This can be done by using the cli upgrade_save and cli installUpgrade commands.

a. Run cli upgrade_save on the current machine.
b. Copy the tgz bundle that contains the ESG from the current machine to the new machine.
c. Run cli installUpgrade using the tgz bundle from step b.

8.1

Upgrade Command Syntax


The full syntax for the upgrade command is:

cli upgrade -r rpm [-d upgrade_save_dir] [--upgradeCluster]

Where:

rpm: the name of the new RPM you are upgrading to
upgrade_save_dir: the directory where you want to save the backup of your currently installed RPM
--upgradeCluster: upgrades all nodes of an entire cluster at the same time

8.2

Back up Service Gateway Logs Before Upgrade


When you perform an upgrade, all logs are automatically deleted. This occurs because the log format changes from release to release, and an older log format cannot be read or searched by a newer version of the ESG. You can preserve the logs by copying them to a secure directory. To back up the logs, execute the following CLI command:

cli saveLogFiles -f logs.tgz

Where logs.tgz is the tgz file that contains all of the ESG logs. If you do not specify an absolute file path in the -f argument, the tgz file is saved to the directory where saveLogFiles is executed.

8.3

Upgrading Service Gateway


To upgrade Service Gateway, perform the following steps.

1. Determine whether you are upgrading a cluster. If you are, do not perform this procedure; instead perform the procedure in section 8.5, Performing a Cluster-wide Upgrade.
2. While upgrading the ESG is a stable process, it is possible that an error could occur or an application configuration could become corrupted. If that occurs, it may be necessary to back out the upgrade and restore the previous version of the ESG with the deployed applications and application configurations created in that release. To do this, you must first store the application configuration data in a tarball. Create this tarball now by executing the upgrade_save command. The following is an example.
   /tmp>cli upgrade_save /tmp/save
   upgrade tar file is: /tmp/save/save.tgz
   upgrade_save successfully completed
   /tmp>
   As a result, you now have the option to back out an upgrade and restore the previous version of the ESG with your application configurations. For details, refer to section 8.3.2, Backing Out an Upgrade.
3. Copy the new RPM to the ESG machine. This example assumes the RPM is in the directory /tmp.
4. Make sure all application and system configurations are saved and closed. The upgrade cannot proceed if any configurations are open for edit.
5. Log in to the ESG machine with root privileges.
6. If the ESG is running, execute the following command:
   cli serviceStop
7. Run the CLI upgrade command, substituting the name of your ESG RPM for name:
   cli upgrade -r /tmp/name.rpm
   where the name contains the version type and the version number. In the upgrade command, you must specify the full directory path to the RPM or the command cannot execute.
8. If necessary, configure the ESG service so that it automatically starts each time the machine restarts by executing the following command:
   chkconfig --add soae
9. Start the upgraded ESG by executing the command:
   cli serviceStart
10. To back out the upgrade, perform the procedure in section 8.3.2, Backing Out an Upgrade.

If this node was part of a cluster, then after the upgrade the node is removed from the cluster. A cluster cannot contain nodes with different versions of the ESG. Once the node is removed from the cluster, it searches for another upgraded member of the cluster. If it finds a node that has the same version of the ESG and both nodes were part of the same cluster in a previous release, they form a cluster; otherwise, the node elects itself as master.


8.3.1

Example of an Upgrade
The following is an example of upgrading to a new RPM. In this example, the system being upgraded is either a standalone system or the first node of a cluster to be upgraded. Note that you must issue a cli serviceStop command before the upgrade can run successfully, and you must issue a cli serviceStart command once the upgrade is completed to start the ESG again.

/tmp>cli saveLogFiles -f logs.tgz
/tmp>cli upgrade -r /root/esg-runtime-as5-64bit-r2_8_0.rpm
Warning: This command will delete all logs. If you would like to save a backup of these logs, execute command "cli saveLogFiles".

You are about to upgrade the software, do you want to continue 'yes|no'? yes
You answered yes
Pre-retrofit check of config cluster
Pre-retrofit check of config current
Pre-retrofit check of config factory
test RPM for validity
upgrade tar file is: /tmp/retrofit.esg.intel/systemBackup.tgz
upgrade_save successfully completed
Prepare Upgrade
Stopping soaed: [ OK ]

Please wait while upgrade continues
soae service will be stopped and uninstalled
Please Wait .[root@iclab002 ~]# ..installing new esg rpm: /root/esg-runtime-as5-64bit-r2_8_0.rpm
.........begin upgrade_finish
configured uid/gid will be used
Forcing service state to new state = POSTINSTALLING.
esg patch in progress
Installing with uid: 99 gid: 99
Forcing service state to new state = OOS.
esg completed (upgrade) successfully.
You are now ready to start the soae service.
Hit Return to continue


8.3.2

Backing Out an Upgrade


While upgrading the ESG is a stable process, it is possible that an error could occur or an application configuration could become corrupted. If that occurs, it may be necessary to back out the upgrade and restore the previous version of the ESG with the deployed applications and application configurations created in that previous release. To back out to a previous release, perform the following steps.

1. Log in as root to the machine where the ESG is installed.
2. Obtain the ESG RPM of the version that you want to downgrade to. For example, if you upgraded to 2.8 from 2.7 and need to revert back to 2.7, you'd obtain the 2.7 RPM.
3. cd to /tmp/retrofit.esg.intel/.
4. Execute the following command:
   ./restoreSystem -r [absolute path to ESG RPM]

8.4

Check the status of the Service Gateway


Once the upgrade is complete, check the status of the ESG by executing the command:

cli status -v

The -v option stands for verbose and gives details on any problems encountered.

8.5

Performing a Cluster-wide Upgrade


You can execute a single command to simultaneously upgrade all the nodes in a cluster. During a cluster-wide upgrade, the following occurs.

- A configuration audit runs between the master and the other nodes to ensure all nodes have a consistent set of configurations.
- The RPM is copied to all nodes of the cluster. If the copy to any node fails, the command does not execute.
- When the upgrade is initiated, communication between the nodes stops. Each node then upgrades on its own.
- The upgrade forces the master to upgrade first. This makes it highly likely that the master node will complete its upgrade first and become the master node in the new release. However, if another node completes upgrading before the master, it becomes the master, and the master from the previous release becomes a slave in the new release.
- As each node completes the upgrade, it attempts to rejoin the cluster. If any node fails to join the cluster after an upgrade, the cause is not reported automatically. The ESG generates an alarm stating there is a communication problem for any node that fails to come up on the new release. For such nodes, the administrator must investigate and resolve the problem manually.

8.5.1

Prerequisites for a Cluster-wide Upgrade


Before upgrading a cluster, verify that the cluster meets the following requirements.

- All nodes must be in the ACTIVE state. If a node is not in the ACTIVE state or cannot communicate with the cluster, the upgrade cannot start.
- Application and system configurations cannot be open for edit on any node.
- None of the nodes may have an alarm or Threshold Crossing Alert (TCA), except if the TCA is caused by the factory configuration being active.

8.5.2

Procedure for Upgrading a Cluster


To upgrade an entire cluster using a single CLI command, perform the following steps.

1. Verify that the cluster conforms to the requirements in section 8.5.1, Prerequisites for a Cluster-wide Upgrade.
2. Log in to the master node as root.
3. Apply the upgrade by executing the following command:
   cli upgrade --upgradeCluster -r /root/esg-runtime-as5-64bit-r2_8_0.rpm
   Where -r specifies the RPM you are upgrading to. You must specify the absolute file path to the RPM.

8.5.3

Backing out a Cluster-wide upgrade


To back out the upgrade, perform the following steps.

1. Log in to the master node as root.
2. Obtain the ESG RPM of the version that you want to downgrade to. For example, if you upgraded to 2.8 from 2.7 and need to revert back to 2.7, you'd obtain the 2.7 RPM.
3. cd to /tmp/retrofit.esg.intel/.
4. Execute the following command:
   ./restoreSystem -r [absolute path to ESG RPM] -c


9.0

Troubleshooting a Service Gateway Installation


This section lists typical problems you may encounter when installing a Service Gateway using the RPM* tool. Errors that can cause the ESG install to fail are listed in Table 7.

Table 7. Install problems and solutions

Error: JRE dir not available
Solution: Set the JRE_HOME environment variable.

Error: The number of IPC queues has not been set to 120 or more
Solution: Set the kernel.msgmni parameter as shown in Preparing Your System for ESG Installation.

Error: Default route not set and is equal to 0.0.0.0
Solution: Use the command:

route add default gw a.b.c.d

where a.b.c.d is the IP address of your gateway.
