

HUAWEI

iManager M2000 Two-node Cluster Hot Backup System Installation Manual V100R005

iManager M2000 Two-node Cluster Hot Backup System Installation Manual


Manual Version: T2-030436-20040916-C-1.50
Product Version: V100R005
BOM: 31041536

Huawei Technologies Co., Ltd. provides customers with comprehensive technical support and service. Please feel free to contact our local office or company headquarters.

Huawei Technologies Co., Ltd.


Address: Administration Building, Huawei Technologies Co., Ltd., Bantian, Longgang District, Shenzhen, P. R. China
Postal Code: 518129
Website: http://www.huawei.com
Email: support@huawei.com

Copyright © 2004 Huawei Technologies Co., Ltd.

All Rights Reserved


No part of this manual may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks

HUAWEI, C&C08, EAST8000, HONET, ViewPoint, INtess, ETS, DMC, TELLIN, InfoLink, Netkey, Quidway, SYNLOCK, Radium, M900/M1800, TELESIGHT, Quidview, Musa, Airbridge, Tellwin, Inmedia, VRP, DOPRA, iTELLIN, HUAWEI OptiX, C&C08 iNET, NETENGINE, OptiX, iSite, U-SYS, iMUSE, OpenEye, Lansway, SmartAX, infoX, TopEng are trademarks of Huawei Technologies Co., Ltd.

All other trademarks mentioned in this manual are the property of their respective holders.

Notice
The information in this manual is subject to change without notice. Every effort has been made in the preparation of this manual to ensure the accuracy of its contents, but the statements, information, and recommendations in this manual do not constitute a warranty of any kind, express or implied.

About This Manual


Release Notes
This manual applies to the iManager M2000 V100R005.

Related Manuals
The related manuals are listed in the following table.

Manual | Content
iManager M2000 Technical Manual | Introduces the system structure, software functions, hardware functions and networking applications of the iManager M2000 Integrated Network Management System.
iManager M2000 Operation Manual | Assists users with the GUIs of the iManager M2000 Integrated Network Management System.
iManager M2000 Installation Manual | Provides information for the system installation, including the installation of the hardware, server software and client software, and network applications.
iManager M2000 Maintenance Manual | Introduces common problems and troubleshooting in the iManager M2000, as well as the commands of the Solaris operating system and the Sybase database.
iManager M2000 Intelligent Report System User Manual | Provides an overall introduction to the iManager M2000 Intelligent Report System, including the system architecture, client operation, template customization, software installation, maintenance and examples.
iManager M2000 Two-node Cluster Hot Backup System Installation Manual | Introduces the basic features, system architecture and software & hardware configuration of the iManager M2000 two-node cluster hot backup system, and then takes the Sun Fire V880 as an example to illustrate the installation and configuration procedures of the system.

Organization
This manual first introduces the basic features, system architecture and software & hardware configuration of the iManager M2000 two-node cluster hot backup system. Then it takes Sun Fire V880 as an example to illustrate the installation and configuration procedures of the system. This manual comprises the following chapters:

Chapter 1 Introduction to the Hardware of Hot Backup System. It covers the basic features, system architecture and software and hardware configuration of the hot backup system.
Chapter 2 Connecting the System Hardware. It covers the connection of the two disk arrays, and that of the hot backup system.
Chapter 3 Preparation for Installation. It covers the software and hardware preparation, brief installation procedure and system planning.
Chapter 4 Installing the Sun Solaris 8. It introduces how to install the operating system for the hot backup system.
Chapter 5 Installing the Sun Cluster. It describes how to install the cluster management software for the hot backup system.
Chapter 6 Installing the Volume Manager. It describes how to install the volume management software for the hot backup system.
Chapter 7 Configuring the Cluster. It describes how to mirror the root disk, create the m2000_rg resource group, and check the installation.
Chapter 8 Installing Sybase Database. It describes how to install the Sybase system for the hot backup system.
Chapter 9 Installing the M2000 Application. It describes how to install the iManager M2000 server applications for the hot backup system.
Chapter 10 Registering Sybase and M2000 Applications. It describes how to register the applications of Sybase and M2000, and check the configuration.
Appendix A Modifying IP Addresses. It describes how to modify the IP addresses of the hot backup system.

Intended Audience
The manual is intended for the following readers:
Installation engineers and technicians
Operation and maintenance personnel

Conventions
The manual uses the following conventions:

I. General conventions
Arial: Normal paragraphs are in Arial.
Arial Narrow: Warnings, Cautions, Notes and Tips are in Arial Narrow.
Boldface: Headings are in Boldface.
Courier New: Terminal display is in Courier New.

II. Command conventions


Boldface: The keywords of a command line are in Boldface.
italic: Command arguments are in italic.
[ ]: Items (keywords or arguments) in square brackets [ ] are optional.
{ x | y | ... }: Alternative items are grouped in braces and separated by vertical bars. One is selected.
[ x | y | ... ]: Optional alternative items are grouped in square brackets and separated by vertical bars. One or none is selected.
{ x | y | ... } *: Alternative items are grouped in braces and separated by vertical bars. A minimum of one or a maximum of all can be selected.
[ x | y | ... ] *: Optional alternative items are grouped in square brackets and separated by vertical bars. Many or none can be selected.

III. GUI conventions


< >: Button names are inside angle brackets. For example, click the <OK> button.
[ ]: Window names, menu items, data table and field names are inside square brackets. For example, pop up the [New User] window.
/: Multi-level menus are separated by forward slashes. For example, [File/Create/Folder].

IV. Keyboard operation


<Key>: Press the key with the key name inside angle brackets. For example, <Enter>, <Tab>, <Backspace>, or <A>.
<Key1+Key2>: Press the keys concurrently. For example, <Ctrl+Alt+A> means the three keys should be pressed concurrently.
<Key1, Key2>: Press the keys in turn. For example, <Alt, A> means the two keys should be pressed in turn.

V. Mouse operation
Click: Press the left button or right button quickly (left button by default).
Double-click: Press the left button twice continuously and quickly.
Drag: Press and hold the left button and drag it to a certain position.

VI. Remark
//: Double slashes are in front of command explanations, option explanations, procedure explanations, and other remarks.

VII. Symbols
Eye-catching symbols are also used in the manual to highlight the points worthy of special attention during the operation. They are defined as follows:

Caution, Warning, Danger: Indicates that the reader should be extremely careful during the operation.
Note, Comment, Tip, Knowhow, Thought: Indicates a complementary description.

Environmental Protection
This product has been designed to comply with the requirements on environmental protection. For the proper storage, use and disposal of this product, national laws and regulations must be observed.

Table of Contents

Chapter 1 Introduction to the Hardware of Hot Backup System ........ 1-1
1.1 Features of the Hot Backup System ........ 1-1
1.2 System Networking ........ 1-1
1.3 Hardware Configuration ........ 1-2
1.4 System Hardware Description ........ 1-3
Chapter 2 Connecting the System Hardware ........ 2-1
2.1 Connecting Disk Arrays ........ 2-1
2.2 Connecting the Hot Backup System ........ 2-1
Chapter 3 Preparation for Installation ........ 3-1
3.1 Installation Procedure ........ 3-1
3.2 Software Configuration ........ 3-1
3.3 Checks Before Installation ........ 3-2
3.3.1 Checking the Solaris 8 ........ 3-2
3.3.2 Checking the Sun Cluster ........ 3-5
3.3.3 Checking the Volume Manager ........ 3-5
3.3.4 Checking the Configuration of Cluster ........ 3-5
3.3.5 Checking the Sybase Database ........ 3-5
3.4 Planning System Resources ........ 3-7
3.4.1 Planning Installation Resources ........ 3-7
3.4.2 Planning Sun Cluster Environment ........ 3-11
3.4.3 Planning Global Devices and the Cluster File System ........ 3-12
3.4.4 Planning Volume Management ........ 3-12
3.5 Configuring the Hardware ........ 3-13
3.5.1 Configuring the TC ........ 3-13
3.5.2 Configuring the Sun StorEdge3310s ........ 3-17
3.5.3 Preparing the Sun Fire V880 ........ 3-36
3.6 Software Preparations ........ 3-37
3.6.1 Sun Solaris 8 Operating System Software ........ 3-37
3.6.2 Volume Management Software ........ 3-37
3.6.3 Cluster Software ........ 3-37
3.6.4 Sybase Database Software ........ 3-38
3.6.5 M2000 Server Application ........ 3-38
3.6.6 Patch Package ........ 3-38
3.6.7 Script Package ........ 3-38
Chapter 4 Installing the Sun Solaris 8 ........ 4-1
4.1 Preparations Before Installation ........ 4-1
4.2 Installation Procedures ........ 4-1
4.3 Installing Patches of the Solaris 8 ........ 4-11
4.4 Configuring the Sun Solaris Operating System ........ 4-17
4.5 Checking Installation Result ........ 4-18
Chapter 5 Installing the Sun Cluster ........ 5-1
5.1 Checks Before Installation ........ 5-1
5.2 Installing the Sun Cluster on Master Node ........ 5-1
5.3 Installing the Sun Cluster on the Slave Node ........ 5-11
5.4 Checking Sun Cluster Installation ........ 5-17
5.5 Configuring the File /etc/inet/ntp.conf.cluster ........ 5-19
5.6 Configuring NAFO Group ........ 5-19
5.6.1 Introduction to NAFO ........ 5-19
5.6.2 Creating an NAFO Group ........ 5-20
5.6.3 Checking Configuration of NAFO Group ........ 5-21
5.7 Configuring Quorum Devices ........ 5-21
5.8 Checking Installation Result ........ 5-24
Chapter 6 Installing the Volume Manager ........ 6-1
6.1 Checks Before Installation ........ 6-1
6.2 Installing the Volume Manager ........ 6-2
6.3 Checking Volume Manager Installation ........ 6-8
Chapter 7 Configuring the Cluster ........ 7-1
7.1 Mirroring the Root Disk ........ 7-1
7.2 Creating m2000_rg ........ 7-2
7.3 Checking Installation Result ........ 7-13
Chapter 8 Installing Sybase Database ........ 8-1
8.1 Checks Before Installation ........ 8-1
8.2 Preparations Before Installation ........ 8-1
8.2.1 Modifying the File .rhosts ........ 8-1
8.2.2 Creating the User Sybase ........ 8-1
8.2.3 Setting the Properties of Raw Devices ........ 8-2
8.2.4 Switching to the User Sybase and Editing the File .profile ........ 8-2
8.2.5 Modifying the File /etc/system ........ 8-3
8.2.6 Rebooting the Cluster ........ 8-3
8.3 Installing Sybase Database on the Master Node ........ 8-3
8.3.1 Preparations Before Installation ........ 8-3
8.3.2 Disk Array Partition Plan ........ 8-4
8.3.3 Installation Procedure ........ 8-4
8.4 Installing Sybase Database on the Slave Node ........ 8-7
8.4.1 Installation Procedure ........ 8-7
8.4.2 Checking Installation Result ........ 8-8
Chapter 9 Installing the M2000 Application ........ 9-1
9.1 Preparations Before Installation ........ 9-1
9.1.1 Checking for Existing M2000 System ........ 9-1
9.1.2 Checking the User m2000 ........ 9-1
9.2 Installing the M2000 Application on the Master Node ........ 9-2
9.2.1 Switching the m2000_rg Resource Group to the Master Node ........ 9-2
9.2.2 Preparing the M2000 Software Package ........ 9-3
9.2.3 Starting the Sybase Database Server ........ 9-3
9.2.4 Installing the M2000 Application ........ 9-4
9.2.5 Checking Installation Result ........ 9-9
9.3 Installing M2000 Application on the Slave Node ........ 9-10
9.4 Installing the Adapter Monitor and Adapters ........ 9-11
Chapter 10 Registering Sybase and M2000 Applications ........ 10-1
10.1 Installing the Application Package ........ 10-1
10.2 Registering the Applications on the Master Node ........ 10-1
10.3 Checking the Registration ........ 10-1
10.3.1 Checking Resource Registration ........ 10-1
10.3.2 Checking Sybase Processes ........ 10-2
10.3.3 Checking M2000 Processes ........ 10-3
10.3.4 Checking the Switchover Function ........ 10-3
Appendix A Modifying IP Addresses ........ A-1
A.1 Introduction ........ A-1
A.2 Modifying the IP Address of Solaris 8, Sybase and M2000 ........ A-2
A.3 Modifying Two IP Addresses of the Same Server ........ A-3
A.4 Precautions ........ A-3

List of Figures

Figure 1-1 Networking of the M2000 two-node cluster hot backup system ........ 1-2
Figure 1-2 Rear view of the Sun Fire V880 ........ 1-6
Figure 1-3 Partial zoom-in rear view of the Sun Fire V880 ........ 1-6
Figure 1-4 Rear view of the Sun StorEdge3310 ........ 1-7
Figure 1-5 Front view of the S3026 ........ 1-7
Figure 2-1 Connecting the two disk arrays ........ 2-1
Figure 3-1 Connecting the administrative console to the TC ........ 3-14
Figure 3-2 Front panel of the TC ........ 3-14
Figure 3-3 Configuring the IP address for the Sun StorEdge3310 ........ 3-18
Figure 3-4 Setting the IP address and the mask ........ 3-19
Figure 3-5 Configuring the SCSI channel ........ 3-20
Figure 3-6 Selecting the logical drive ........ 3-21
Figure 3-7 Selecting the type of logical drive ........ 3-21
Figure 3-8 Selecting drives ........ 3-22
Figure 3-9 Selecting whether to create logical drives ........ 3-22
Figure 3-10 Creating logical drives ........ 3-23
Figure 3-11 Completion of the RAID5 mirroring ........ 3-23
Figure 3-12 Selecting logical drives ........ 3-24
Figure 3-13 Selecting of the partition logical drive ........ 3-24
Figure 3-14 Confirming the partitioning ........ 3-25
Figure 3-15 The first partition ........ 3-25
Figure 3-16 Size of the first partition ........ 3-26
Figure 3-17 Confirming the partitioning ........ 3-26
Figure 3-18 The second partition ........ 3-27
Figure 3-19 Size of the second partition ........ 3-27
Figure 3-20 Confirming the partitioning ........ 3-28
Figure 3-21 Main menu of the Sun StorEdge3310 ........ 3-28
Figure 3-22 Selecting channel 1 ........ 3-29
Figure 3-23 Selecting the logical drive ........ 3-30
Figure 3-24 Selecting the first LUN ........ 3-30
Figure 3-25 Selecting the logical drive ........ 3-31
Figure 3-26 Selecting the partition of the logical drive ........ 3-31
Figure 3-27 Mapping the LUN ........ 3-32
Figure 3-28 Confirming the LUN mapping ........ 3-32
Figure 3-29 Selecting the second LUN ........ 3-33
Figure 3-30 Selecting the logical drive ........ 3-33
Figure 3-31 Selecting the partition of the logical drive ........ 3-34
Figure 3-32 Mapping the LUN ........ 3-34
Figure 3-33 Confirming the mapping ........ 3-35
Figure 3-34 Selecting channel 3 ........ 3-35
Figure 5-1 A sample of two-node cluster configuration ........ 5-20

List of Tables

Table 1-1 Hardware configuration of the Sun Fire V880 M2000 hot backup system ........ 1-3
Table 1-2 Type and components of the server ........ 1-3
Table 1-3 Specifications of the Sun Fire V880 ........ 1-4
Table 1-4 Type and components of the disk array ........ 1-5
Table 1-5 Introduction to the Sun StorEdge 3310 ........ 1-5
Table 1-6 Arrangement of the 24 network interfaces of the S3026 ........ 1-7
Table 1-7 Meanings of S3026 status indicators ........ 1-7
Table 2-1 Label description of HA cables ........ 2-1
Table 3-1 Installation procedure of the iManager M2000 hot backup system ........ 3-1
Table 3-2 Lists of non-integral time zones ........ 3-3
Table 3-3 IP addresses of the iManager M2000 hot backup system ........ 3-8
Table 3-4 Hard disk partitions and IP planning of host A (Sun Fire V880) ........ 3-8
Table 3-5 Partition description of the first hard disk of the m2000svr-1 ........ 3-8
Table 3-6 Hard disk partition and IP planning of the host B (Sun Fire V880) ........ 3-9
Table 3-7 IP planning ........ 3-11
Table 3-8 Names of cluster components ........ 3-11
Table 3-9 Statuses of indicators in the TC configuration mode ........ 3-15
Table 3-10 Statuses of TC fault indicators ........ 3-15
Table 4-1 Partitions of system disk 1 ........ 4-8
Table 8-1 Partitions of the Sun StorEdge3310 ........ 8-4
Table 9-1 Files and directories generated after installation ........ 9-9
Table A-1 IP address planning (default) ........ A-1


Chapter 1 Introduction to the Hardware of Hot Backup System


This chapter introduces the features, networking, and hardware configuration of the hot backup system.

1.1 Features of the Hot Backup System


The iManager M2000 hot backup system is a combination of the iManager M2000 and the Sun Cluster. The system has the following features:
Minimizes system downtime caused by software or hardware faults. Such faults usually bring the system down when a single-host server is configured.
Improves system maintainability and allows in-service maintenance of the system.
The combination of the software and hardware of the iManager M2000 hot backup system guarantees its high availability. The redundant cluster interconnection, memory, and public network prevent single-point faults. The hot backup system monitors the system software and hardware continuously. It automatically performs a switchover or restarts the M2000 system when a fault occurs.

1.2 System Networking


Figure 1-1 shows the system networking.

Figure 1-1 Networking of the M2000 two-node cluster hot backup system

The networking is as follows:
Two M2000 servers (high-performance Sun servers) act as the two nodes of the cluster. The Sun Fire V880 is taken as an example here. The master and slave nodes are named m2000svr-1 and m2000svr-2 respectively. The network interface names vary with the network adapters configured.
Two disk arrays (Sun StorEdge3310) act as mirrors of each other.
The terminal concentrator (TC) connects the administrative console with the two nodes.
The administrative console performs single-node management on the cluster system and database, and handles faults in the cluster system.

1.3 Hardware Configuration


Table 1-1 lists the hardware configuration of the Sun Fire V880 M2000 two-node cluster hot backup system.


Table 1-1 Hardware configuration of the Sun Fire V880 M2000 hot backup system
Hardware | Type | Configuration
M2000 server | Sun Fire V880 server x 2 | CPU: 1050 MHz x 4; Memory: 8 GB; Hard disk: 73 GB x 2; Tape: DDS4; SCSI card: PCI Dual Ultra3 SCSI host adapter x 2; Network adapter: 4-port network adapter x 2; no video adapter, keyboard, mouse, or audio adapter
Disk arrays | Sun StorEdge3310 | Sun StorEdge3310 (8 x 36 GB) x 2; a control module is needed
TC | Nortel Micro Annex | -
Management console | Sun Blade 150 | CPU: 650 MHz; Memory: 512 MB; Hard disk: 40 GB
LAN Switch | Quidway S3026 | Quidway S3026 x 3
Accessories | Straight-through network cable | 7
Accessories | Crossover network cable | 4
Accessories | RS232-RJ45 cable | 2
Accessories | SCSI cable | 4
Accessories | Connection board | 3
Accessories | Host or console power cable | -
Accessories | DC power connector | -

1.4 System Hardware Description


This section describes the hardware of the system.

I. Introduction to the Sun Fire V880


The Sun Fire V880 is an enterprise-level server from Sun. Table 1-2 lists the type and components of the server.

Table 1-2 Type and components of the server
Type of the server: V880 (DC)
Server name: m2000svr-1; m2000svr-2
I/O slot | Component order No. | Component PN No. | Component description
PCI8 | X6758A | 375-3057 | PCI Dual Ultra3 SCSI
PCI7 | X6758A | 375-3057 | PCI Dual Ultra3 SCSI
PCI6 | X2222A | 501-5727 | Dual FastEthernet + Dual SCSI
PCI5 | X2222A | 501-5727 | Dual FastEthernet + Dual SCSI

Table 1-3 lists the specifications of the Sun Fire V880.

Table 1-3 Specifications of the Sun Fire V880
Physical dimensions:
Height: 714 mm (28.1 in.)
Depth: 836 mm (32.9 in.)
Width: 480 mm (18.9 in.)
Weight: 130.9 kg (288.6 lb)
Requirements of power supply:
AC input power: 100 V - 240 V, 47 Hz - 63 Hz
Hot swap power supplies: 2 included
Redundant AC input power: 2 required
Current under 220 V: 7 A
Rated power: 1,515 W
Maximum power: 3,000 W
Environment requirements:
Temperature: 5°C - 35°C (41°F - 95°F) in operation; -20°C - +60°C (-4°F - 140°F) in storage
Relative humidity: 20% - 80% in operation; 5% - 95%, without condensation, in storage
Altitude: <3 km in operation; <12 km in storage
Cooling: well ventilated or air-conditioned

II. Introduction to the Sun StorEdge 3310


The storage medium is the Sun StorEdge 3310. Table 1-4 lists the type and components of the disk array.

Table 1-4 Type and components of the disk array
Type of the disk array: 3310 (1-AC)
Name of the disk array: 3310-1, 3310-2
Slot of the hard disk | Component order No. | Component PN No. | Component description
1-5 | XTA-3310-36GB-10K | 540-5522 | 36 GB Sun StorEdge3310 hard disk

Table 1-5 lists the specifications of the Sun StorEdge 3310.

Table 1-5 Introduction to the Sun StorEdge 3310
Physical dimensions:
Height: 133 mm (5.24 in.)
Depth: 469.9 mm (18.5 in.)
Width: 444.5 mm (17.5 in.)
Weight: 30.39 kg (67 lb)
Requirements of power supply (AC):
Input voltage: 90 V AC - 264 V AC, 47 Hz - 63 Hz (single-phase)
Maximum input current: 5 A
Output voltage: +5 V DC and +12 V DC
DC power:
Input voltage: -48 V DC (-36 V DC to -72 V DC)
Environment requirements:
Temperature: 5°C - 35°C (41°F - 95°F) in operation; -40°C - +65°C (-40°F - 149°F) in storage
Humidity: 10% - 90% (without condensation) in operation; 0% - 93% (without condensation) in storage
Altitude: 0 km - 3 km in operation; 0 km - 12 km in storage
Ventilation: well ventilated or air-conditioned

III. Rear View of the Sun Fire V880


Figure 1-2 shows the rear view of the Sun Fire V880.


Figure 1-2 Rear view of the Sun Fire V880

Figure 1-3 shows the partial zoom-in rear view of the Sun Fire V880.

The callouts in Figure 1-3 identify SCSI port 1 on the cards in slots PCI8 and PCI7, network interfaces 0 (ce2) and 1 (ce3) on the card in slot PCI6, and network interfaces 0 (ce0) and 1 (ce1) on the card in slot PCI5.

Figure 1-3 Partial zoom-in rear view of the Sun Fire V880

IV. Rear View of the Sun StorEdge3310


Figure 1-4 shows the rear view of the Sun StorEdge3310.

(1) Power/fan module  (2) Power socket  (3) CH1  (4) CH2  (5) COM  (6) 10/100 Base-T  (7) CH3  (8)/(9) EMU

Figure 1-4 Rear view of the Sun StorEdge3310

V. Introduction to the Quidway S3026 (LAN Switch)


The Quidway S3026 has twenty-four 10Base-T/100Base-TX Ethernet ports. Figure 1-5 shows the front panel of the S3026.

Figure 1-5 Front view of the S3026 (callouts: power indicator, console port)

Table 1-6 lists the arrangement of the 24 network interfaces of the S3026 (front view).

Table 1-6 Arrangement of the 24 network interfaces of the S3026
Upper row: 2 4 6 8 10 12 14 16 18 20 22 24
Lower row: 1 3 5 7 9 11 13 15 17 19 21 23

The Ethernet interface is an RJ45 connector. Status indicators are on the front panel. Table 1-7 explains these indicators.

Table 1-7 Meanings of S3026 status indicators
Indicator | Identifier | Status | Meaning
Power indicator | POWER | ON | The LAN Switch is powered on.
Power indicator | POWER | OFF | The LAN Switch is powered off.
10Base-T/100Base-TX interface indicator | LINK/ACTIVE (Orange) | ON | The connection is normal.
10Base-T/100Base-TX interface indicator | LINK/ACTIVE (Orange) | OFF | The connection is not set up.
10Base-T/100Base-TX interface indicator | LINK/ACTIVE (Orange) | Flashing | Data is being received or transmitted.
10Base-T/100Base-TX interface indicator | SPEED (Green) | ON | 100 Mbit/s
10Base-T/100Base-TX interface indicator | SPEED (Green) | OFF | 10 Mbit/s

The 10Base-T/100Base-TX Ethernet port can work in half-duplex, full-duplex, or auto-negotiation mode. It can negotiate with other network equipment to choose the best working mode and transfer rate.


Chapter 2 Connecting the System Hardware


This chapter introduces the connections of the system hardware.

2.1 Connecting Disk Arrays


Figure 2-1 shows the connections of the disk arrays.

Figure 2-1 Connecting the two disk arrays

2.2 Connecting the Hot Backup System


Connect the cables, especially the SCSI cables, of the M2000 hot backup system correctly. Any incorrect connection may cause serious consequences. Before making the connections, prepare labels according to the specifications and attach them to both ends of the cables. Table 2-1 describes the labels.

Table 2-1 Label description of HA cables
SN | Connection type | Source equipment | Source equipment port | Target equipment | Target equipment port | Remarks
01 | 1000 M Ethernet | Host 1 | Network interface 0 in slot PCI5 (ce0) | Active LAN Switch | - | Public network
02 | 1000 M Ethernet | Host 1 | Network interface 0 in slot PCI6 (ce2) | Standby LAN Switch | - | Public network
03 | 1000 M Ethernet | Host 2 | Network interface 0 in slot PCI5 | Active LAN Switch | - | Public network
04 | 1000 M Ethernet | Host 2 | Network interface 0 in slot PCI6 | Standby LAN Switch | - | Public network
05 | 1000 M Ethernet (crossover network cable) | Host 1 | Network interface 1 in slot PCI5 (ce1) | Host 2 | Network interface 1 in slot PCI5 (ce1) | Private network
06 | 1000 M Ethernet (crossover network cable) | Host 1 | Network interface 1 in slot PCI6 (ce3) | Host 2 | Network interface 1 in slot PCI6 (ce3) | Private network
07 | SCSI cable | Host 1 | SCSI 1 in slot PCI7 | 3310-1 | CH1 | -
08 | SCSI cable | Host 1 | SCSI 1 in slot PCI8 | 3310-2 | CH1 | -
09 | SCSI cable | Host 2 | SCSI 1 in slot PCI7 | 3310-1 | CH3 | -
10 | SCSI cable | Host 2 | SCSI 1 in slot PCI8 | 3310-2 | CH3 | -
11 | SCSI cable | 3310-1 | Single Bus port | 3310-1 | CH0 | -
12 | SCSI cable | 3310-2 | Single Bus port | 3310-2 | CH0 | -
13 | Serial port cable | Host 1 | Serial port A | TC | Port 2 | -
14 | Serial port cable | Host 2 | Serial port A | TC | Port 3 | -


Chapter 3 Preparation for Installation


The installation preparation of the iManager M2000 hot backup system involves the following tasks:
1) Verify that the hardware is connected correctly.
2) Plan the installation, including the host names, the IP addresses, and the disk space.
3) Commission the TC.
4) Configure the Sun StorEdge3310 disk arrays.
5) Prepare the related software.

3.1 Installation Procedure


The installation of the iManager M2000 hot backup system involves the procedures listed in Table 3-1.

Table 3-1 Installation procedure of the iManager M2000 hot backup system
Step | Tasks | Estimated time (hours)
1 | Prepare for the installation: plan the system | 0.5
1 | Prepare for the installation: install the hardware of the hot backup system | 1.0
1 | Prepare for the installation: configure the hardware | 1.0
2 | Install the Solaris 8 operating system and patch: install the operating system | 1.5
2 | Install the Solaris 8 operating system and patch: install the operating system patch | 1.0
3 | Install the Sun Cluster | 1.0
4 | Install the volume management software (VERITAS Volume Manager) | 1.0
5 | Configure the cluster | 0.5
6 | Install the Sybase database | 3.0
7 | Install the iManager M2000 application: install the application on the master node | 1.0
7 | Install the iManager M2000 application: install the application on the slave node | 2.0
8 | Register the Sybase and M2000 applications | 1.0
9 | Check the installation result | -

3.2 Software Configuration


The following is the software configured for the system.


Operating system: Solaris 8 (02/04 SPARC Platform Edition)
Volume management software: VERITAS Volume Manager 3.2
Cluster software: Sun Cluster 3.0
Database software: Sybase Adaptive Server Enterprise 11.9.2
NMS software: iManager M2000
Patch package of the system: Patch 3.1
Installation and monitoring script package: M2000-scripts-xx.tar.gz

Note: The xx in the name of the script package file indicates the version of the script package. Select the corresponding script package according to the M2000 system. The latest script package version is M2000-scripts-10.tar.gz.
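A quick way to confirm that the listed software is already present on a node is to query the Solaris package database. This is only a rough check, and it assumes the packages were installed with pkgadd; the package name prefixes VRTS (VERITAS) and SUNWsc (Sun Cluster) are conventional prefixes given here as assumptions, not the exact package names on your installation CDs.
# uname -r              //5.8 corresponds to Solaris 8
# pkginfo | grep VRTS   //VERITAS Volume Manager packages (prefix assumed)
# pkginfo | grep SUNWsc //Sun Cluster 3.0 packages (prefix assumed)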

3.3 Checks Before Installation


Because other installations are completed before the delivery of the system, you only need to install the M2000 applications on the master node on site. However, you need to check the following to ensure the correctness of the installation.

3.3.1 Checking the Solaris 8


Check the following items to see if the Solaris 8 operating system is correctly installed on both nodes.

I. Checking Whether the Operating System is Installed


After the server is started, if the login dialog box appears, it indicates that the operating system is installed. If the operating system is not installed, see Chapter 4 "Installing the Sun Solaris 8" and install it before proceeding with the following checks.

II. Checking the Version of the Operating System


Execute the following command to check the version of the operating system:
$ uname -a
SunOS SUN2000 5.8 Generic_108528-19 sun4u sparc SUNW,Fire V880
The third field in the returned result indicates that the version of the operating system is SunOS 5.8. The M2000 server runs on this version. Install SunOS 5.8 if the OS version is different.


If the patch number is earlier than 19, obtain the Solaris Patch3.0 under the directory "software\Mobile Communication\01-OMC and M2000\Patch\Solaris2.8 Patch3.0(M2000)\SUN\PATCH\8" at http://support.huawei.com and then install it.
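If you prefer to check the patch level directly instead of reading it from the uname output, the installed revisions of kernel patch 108528 can also be listed as follows; this is only a supplementary check.
# showrev -p | grep 108528   //lists the installed revisions of kernel patch 108528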

III. Checking the Time Zone of the Operating System


Execute the following command to check the time zone:
$ echo $TZ
Compare the result with the time zone applicable to your country. If they are different, change the TZ value in /etc/TIMEZONE to the correct value.
To change to an integral time zone:
1) Set the TZ value in the /etc/TIMEZONE file to the correct time zone.
# vi /etc/TIMEZONE
2) Reboot the server.
# sync;sync;sync;reboot
For a non-integral time zone, follow the method below. The following takes Calcutta as an example. For other cities, change the state name and area name accordingly.
To change to a non-integral time zone:
1) Check whether the following file exists.
# ls -l /usr/share/lib/zoneinfo/Asia/Calcutta
2) If the file does not exist, execute the following commands as the super user:
# cd /usr/share/lib/zoneinfo
# zic src/asia
Then repeat step 1 to check whether the file /usr/share/lib/zoneinfo/Asia/Calcutta exists.
3) If the file /usr/share/lib/zoneinfo/Asia/Calcutta exists, change TZ to Asia/Calcutta in the file /etc/TIMEZONE manually.
# vi /etc/TIMEZONE
Table 3-2 lists some non-integral time zones.

Table 3-2 Lists of non-integral time zones
Time zone (Windows) | City | Path
GMT-03:30 | Newfoundland | /usr/share/lib/zoneinfo/Canada/Newfoundland
GMT+03:30 | Tehran | /usr/share/lib/zoneinfo/Asia/Tehran
GMT+04:30 | Kabul | /usr/share/lib/zoneinfo/Asia/Kabul
GMT+05:30 | Calcutta | /usr/share/lib/zoneinfo/Asia/Calcutta
GMT+05:45 | Katmandu | /usr/share/lib/zoneinfo/Asia/Katmandu
GMT+06:30 | Rangoon | /usr/share/lib/zoneinfo/Asia/Rangoon
GMT+09:30 | Adelaide | /usr/share/lib/zoneinfo/Australia/Adelaide
GMT+09:30 | Darwin | /usr/share/lib/zoneinfo/Australia/Darwin

Note: The TZ value in the Solaris 8 system differs from that in the Windows system. Therefore, observe the following rules when setting the TZ in the Solaris system:
If the local time zone is GMT+3 in Windows, set the TZ to GMT-3 in Solaris.
Similarly, if the time zone is GMT-5 in Windows, set the TZ to GMT+5 in Solaris.
Violation of these rules may cause exceptions in the clock synchronization system and result in an inaccurate system clock.

IV. Checking the Current Time of the System


Execute the following command to check the current time of the system:
$ date
If the output is inconsistent with the local time, correct it. For example, to change the time to 14:53:43 on March 28, 2003, execute the following command:
# date 0328145303.43
Fri Mar 28 14:53:43 GMT 2003

V. Checking IP Address
Execute the following command to check the IP address of the server:
# ifconfig -a
If the IP address does not meet the requirement of the LAN, change it. You need not reinstall the operating system.
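On Solaris 8 the address of a network interface is normally taken from a small set of configuration files, so changing it means editing those files and rebooting rather than reinstalling. The interface name ce0 below is only an example; for the hot backup system, follow Appendix A, because the cluster, the Sybase database and the M2000 application also reference these addresses.
# cat /etc/hostname.ce0   //host name bound to interface ce0
# vi /etc/hosts           //map that host name to the new IP address
# vi /etc/netmasks        //subnet mask of the new network segment
# vi /etc/defaultrouter   //default gateway, if it also changes
# sync;sync;sync;reboot   //reboot for the change to take effect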

VI. Checking Partitions


Execute the following command to check whether the partitions meet the requirement of the Sybase installation:
# format
Specify disk (enter its number): 0 (or another disk number)
//partition: Define the partition table.
//print: List the current partition table.
Check whether the partitions are the same as those in Table 3-4. If they are different, reinstall the operating system.

3.3.2 Checking the Sun Cluster


Check the Cluster status and the configuration of the NAFO group to see if the Sun Cluster is correctly installed. For details, see section 5.8 "Checking Installation Result".
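As a minimal sketch of the checks that section 5.8 describes in full, the cluster state and the NAFO group status can be viewed with the standard Sun Cluster 3.0 commands; run them as root on either node.
# scstat       //overall cluster status: nodes, transport paths, quorum and resource groups
# pnmstat -l   //status of the configured NAFO groups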

3.3.3 Checking the Volume Manager


Check the following items to see if the Volume Manager is correctly installed:
Whether the system can be started normally.
Whether the statuses of all volumes are "ACTIVE".
Whether the root disk is encapsulated.
For details, see section 6.3 "Checking Volume Manager Installation".
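Assuming VERITAS Volume Manager 3.2 as listed in section 3.2, the items above can be checked briefly with the following commands; section 6.3 remains the authoritative procedure.
# vxprint -ht   //volume records should show the state ACTIVE
# vxdisk list   //after encapsulation the root disk appears as a disk in the rootdg disk group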

3.3.4 Checking the Configuration of Cluster


Check the following items to see if the Cluster is correctly configured:
Mirroring of the root disk
Mirroring of the M2000 volume
Mount status of the volume fsdata
Switchover of the m2000_rg resource group
For details, see section 7.3 "Checking Installation Result".
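In particular, the switchover of the m2000_rg resource group can be exercised with the Sun Cluster commands sketched below. The node name m2000svr-2 is only an example, and the switchover should be performed only when it is safe to do so; section 7.3 gives the complete check.
# scstat -g                              //shows the node on which m2000_rg is currently online
# scswitch -z -g m2000_rg -h m2000svr-2  //switches m2000_rg to the other node
# scstat -g                              //confirm that the resource group is online on the new node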

3.3.5 Checking the Sybase Database


Check the following items to see if the Sybase database is correctly installed.

I. Checking Whether the Sybase Database is Installed


Proceed as follows to check whether the Sybase database is installed. If the Sybase database is not installed, install it according to the instructions in Chapter 8 "Installing Sybase Database".
1) Switch to the user sybase.
# su - sybase
If the system responds very slowly and displays "su: Unknown id: sybase", it indicates that the sybase user does not exist. In this case, install the Sybase database.
2) Check the processes.


$ps -ef|grep sybase


sybase .. /export/home/sybase/bin/dataserver -ssybserver .. sybase .. /export/home/sybase/bin/backupserver -Ssybserver_back ..

If the result contains the processes "dataserver" and "backupserver", the Sybase database is installed. Otherwise, start the Sybase servers by executing the following commands:
$ cd /export/home/sybase/install
$ startserver -f RUN_sybserver -f RUN_sybserver_back
Check the processes again. If you cannot access the directory or the above two processes still do not exist, reinstall the Sybase database.
3) Log in to isql as sa.
$ isql -Usa -Pserver1234
1>
If you can log in, the Sybase database is installed.

II. Checking the Version of the Sybase Database


Execute the following commands to check the version of the Sybase database: 1> select @@version 2> go
Adaptive Server Enterprise/11.9.2/1031/P/Sun_svr4/OS 5.5.1/FBO/Fri Aug 14 06:26:45 1998 (1 row affected)

1>
"11.9.2" in the above result indicates the version of Sybase. If the installed version is earlier than 11.9.2, remove it and install Sybase version 11.9.2.

III. Checking the Equipment Information of the Database


Execute the following commands to check the equipment information of the Sybase database: 1> sp_helpdevice 2> go
device_name  physical_name     description                                        status  cntrltype  device_number  low       high
-----------  ----------------  -------------------------------------------------  ------  ---------  -------------  --------  --------
data_dev     /dev/md/rdsk/d32  special, default disk, physical disk, 7000.00 MB   3       0          3              50331648  62914559
log_dev      /dev/md/rdsk/d35  special, physical disk, 7000.00 MB                 2       0          4              67108864  79691775
1>
If the result contains the devices "data_dev" and "log_dev", you need not reinstall the Sybase database.

3.4 Planning System Resources


This section introduces the system planning.

3.4.1 Planning Installation Resources


Note: In the following planning, two Sun Fire V880 servers are named host A and host B.

Proceed as follows to plan resources for the installation:

I. Naming Servers
Name the host A m2000svr-1 and the host B m2000svr-2.

II. IP Address Planning


The iManager M2000 hot backup system is commissioned before delivery, and the IP addresses of the two servers are specified at that time. The IP addresses described in this manual are those specified before delivery. You can modify them as needed during the installation of the system by referring to Appendix A "Modifying IP Addresses".
Figure 1-1 shows the logical connections of the iManager M2000 hot backup system. Each of the two V880 servers has its own IP address. When the two servers run normally, they use one logical IP address for external communications, so the hot backup system also needs one logical IP address. The TC communicates with the two servers through serial ports and needs an IP address so that other terminals can access the TC and communicate with the servers through it. Therefore, the M2000 hot backup system must have at least four IP addresses. Table 3-3 describes the assignment of the IP addresses.


Table 3-3 IP addresses of the iManager M2000 hot backup system
Device                               IP address     Mask
m2000svr-1                           192.168.8.11   255.255.255.0
m2000svr-2                           192.168.8.12   255.255.255.0
Logical IP of the hot backup system  192.168.8.10   255.255.255.0
Terminal concentrator (TC)           192.168.8.244  255.255.255.0

III. Partitions of the Built-in Hard Disks


The following describes how to partition the built-in hard disks of host A and host B.
Built-in hard disk partitions of host A
Table 3-4 provides the hard disk partition and IP planning information of host A (m2000svr-1).
Table 3-4 Hard disk partitions and IP planning of host A (Sun Fire V880)
Host name: m2000svr-1    IP address: 192.168.8.11    Netmask: 255.255.255.0    Boot disk: c1t0d0
Disk c1t0d0:
0  /               10240 MB
1  swap            500 MB
2  overlap         -
3  /opt            43000 MB
4  /globaldevices  16384 MB
Disk c1t1d0:
0-4  overlap (mirror of c1t0d0, no separate partitioning)

Table 3-5 describes the partitions of the first hard disk of the m2000svr-1.
Table 3-5 Partition description of the first hard disk of the m2000svr-1
c1t0d0s0 (root partition): It is used to install the Solaris 8 operating system and manage files.
c1t0d0s1 (swap): It is the Solaris data exchange area. (It is recommended that this partition is one or two times greater than the memory.)
c1t0d0s2 (overlap): It is the mirrored disk of the operating system. (This partition is unavailable and need not be partitioned.)
c1t0d0s3 (/opt): It is used to install the system software and third-party software, including Sun Cluster, VERITAS Volume Manager, the Sybase database, and drivers. The /opt partition separates the software from the operating system to improve security.
c1t0d0s4 (/globaldevices): It is a special partition assigned during the installation of the Sun Cluster 3.x. (Assign this partition in advance.)
c1t0d0s5, c1t0d0s6, c1t0d0s7: These three partitions are spare. Two of them are reserved for the mirroring of the system disk.
Reserve the rest of the space (about 1,100 MB) as the overhead for the system disk mirroring. The second built-in hard disk in the m2000svr-1 acts as the mirror of the first one and needs no partitioning.

Note: The names of the two built-in hard disks (73 GB) vary with actual situations. Here, c1t0d0 and c1t1d0 are assumed disk names.
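To verify that an installed system matches this plan, you can list the slice table of the boot disk (a minimal check only; c1t0d0 is the assumed disk name mentioned in the note above):
# prtvtoc /dev/rdsk/c1t0d0s2     //Print the slice sizes of the whole boot disk
# df -k / /opt /globaldevices    //Confirm the sizes of the mounted /, /opt, and /globaldevices partitions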

Built-in hard disk partitions of host B
Table 3-6 provides the hard disk partition and IP planning information of host B.
Table 3-6 Hard disk partitions and IP planning of host B (Sun Fire V880)
Host name: m2000svr-2    IP address: 192.168.8.12    Netmask: 255.255.255.0    Boot disk: c1t0d0
Disk c1t0d0:
0  /               10240 MB
1  swap            500 MB
2  overlap         -
3  /opt            43000 MB
4  /globaldevices  16384 MB
Disk c1t1d0:
0-4  overlap (mirror of c1t0d0, no separate partitioning)


IV. Disk Array Planning


In the iManager M2000 hot backup system, two Sun StorEdge3310 disk arrays are used to realize the RAID5+1 redundancy strategy. That is, the disks within each Sun StorEdge3310 are configured as RAID5, and the two Sun StorEdge3310s are mirrored against each other as RAID1.

Note: A 3310 disk array contains eight 36-GB hard disks. One of them serves as a global hot backup disk. The other seven hard disks work in the 6+1 protection mode with a RAID5 configuration; that is, among the seven hard disks, the capacity of six is available at the same time while the remaining one is used for parity check. When one disk is damaged, the data can be recovered automatically. RAID1 is configured across the two 3310 disk arrays; that is, mirroring protection is implemented between them, and host data is read from or written to both arrays at the same time to ensure system safety. In summary, the two 3310 disk arrays in the iManager M2000 hot backup system provide a 36 x 6 space (about 201 GB) to the system.

The space allocation of the Sun StorEdge3310 refers to the allocation of the 201 GB space.

Note: The VERITAS Volume Manager manages disk arrays. It is only required to partition one of the two Sun StorEdge3310s. The other Sun StorEdge3310 acts as the mirror of the first one through the Volume Manager.

The Sun StorEdge3310 is managed by its built-in operating system. Perform the following operations on both disk arrays through the built-in system so that the two partitions can be accessed from the V880 server.
1) Divide the eight hard disks in the RAID 5+1 hot backup mode to create a logical disk (about 201 GB).
2) Partition the logical disk into two partitions: one of 200 MB and the other of about 201 GB.
3) Map these two partitions to the Host LUNs.
4) Reboot the Sun StorEdge3310s.

For details of the Sun StorEdge3310 disk arrays, see section 3.5.2 "Configuring the Sun StorEdge3310s".


3.4.2 Planning Sun Cluster Environment


This section introduces the planning of the Sun cluster environment.

I. IP Planning
Set IP addresses for the Sun Cluster components according to the cluster configuration. In the cluster configuration, each node must connect to the public network. In addition, these IP addresses must be added to the file /etc/hosts. Table 3-7 lists the planned IP addresses necessary for the installation.
Table 3-7 IP planning
Item                    IP planning                   IP address and subnet mask
Administrative console  One IP for each subnet        192.168.8.236; 255.255.255.0
TC                      One IP for each subnet        192.168.8.244; 255.255.255.0
Logical address         One IP for each logical node  192.168.8.10; 255.255.255.0

II. Components Planning


The cluster consists of a series of cluster components.
Names of cluster components
Table 3-8 lists the names of the cluster components.
Table 3-8 Names of cluster components
Name of cluster component    Description                                              Configuration
Cluster name                 Unique in an enterprise                                  M2000Cluster
Name of the master node      Specified during the installation of the Solaris OS     m2000svr-1
Name of the slave node       Specified during the installation of the Solaris OS     m2000svr-2
Name of the logical node     Specified during the configuration of the Cluster       m2000svr
TC name                      Name of the serial port TC                               TC
Administrative console name  Name of the workstation of the administrative console   m2000-console
Other cluster components to be planned include the cluster private network, cluster interconnection, cluster private host name, cluster public network, cluster disk device group, and cluster quorum device.

3.4.3 Planning Global Devices and the Cluster File System


Note the following points when planning global devices and the cluster file system:

I. High-availability Global Devices and the Cluster File System


Observe the following principles to configure high-availability global devices and the cluster file system:
Mirroring: Configure RAID1 or RAID5 for all global devices.
Disks: Arrange disks properly so that mirroring is possible across disk expansion units.
Availability: Global devices must have physical connections to multiple nodes in the cluster. Such configurations can tolerate the failure of a single node. Global devices with only one physical connection are supported, but if a global device connects to only one node and that node fails, you cannot access the device through other nodes.

II. Mount Point of the Cluster File System


Observe the following principles to create mount points of the cluster file system:
Create the mount point under the /global directory unless forbidden by some software. The /global directory makes it easy to recognize cluster file systems, which are globally available.
Avoid nesting the mount points of the cluster file system. For example, do not create one file system at /global/a and another at /global/a/b. Otherwise, availability and node boot-order problems may occur because the parent mount point may not exist. The only exception to this principle is when the devices of the two file systems connect to the same physical node (for example, different slices of the same disk). A sample mount entry is given below.
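The following /etc/vfstab entry is only an illustration of a cluster file system mounted under /global (the disk group name m2000dg is an assumption for this sketch, and fsdata is the volume name referred to in section 3.3.4; adjust both to the actual configuration):
/dev/vx/dsk/m2000dg/fsdata  /dev/vx/rdsk/m2000dg/fsdata  /global/fsdata  ufs  2  yes  global,logging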

3.4.4 Planning Volume Management


The cluster adopts the volume management software to divide disks into some disk device groups and to manage these device groups as a unit. Currently, the iManager M2000 hot backup system employs the VERITAS Volume Manager for volume management.

I. Volume Management Planning


The volume management planning covers the following contents:
Mirroring of multiple host disks: configure the two Sun StorEdge3310s as RAID1.
Root disk mirroring: configure RAID1 for the two local disks.
Unique name
Node list
Multi-port disk
Disk for hot backup

II. VERITAS Volume Manager Planning


The VERITAS Volume Manager planning covers the following contents:
Root disk group: create a default root device group (rootdg) on each node. The rootdg is a local disk group for the node.
Encapsulation: encapsulate the whole disk. Two slices of the disk must be idle.

3.5 Configuring the Hardware


The hardware to be configured includes:
TC
Two Sun StorEdge3310s
Sun Fire V880

3.5.1 Configuring the TC


Proceed as follows to configure the TC:

I. Connecting the TC
To connect the TC, proceed as follows:
1) Connect COM port A of the administrative console to Port 1 of the TC with a network cable (with DB9/DB25-RJ45 connectors).
2) Connect the Ethernet port of the TC to the administrative console with a network cable, as shown in Figure 3-1.


Figure 3-1 Connecting the administrative console to the TC

II. Configuring the IP Address of the TC


To configure the IP address of the TC, proceed as follows:
1) On the administrative console, add the following statement to the end of the file /etc/remote:
tc:\
:dv=/dev/term/a:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:
Establish the connection between the administrative console and the TC using the following command:
#tip tc
//Enter the TC configuration mode.
2) Configure the TC.
If the TC is off, power it on. Within five seconds, press and hold <Test> for a second and then release it.
If the TC is on, press <Test> until the power indicator flashes. About one second later, press <Test> again.
Upon completion of the above operations, the TC starts its self-test, which lasts about 30 seconds. After the self-test, you can see the information of the TC on the monitoring terminal of the administrative console. Figure 3-2 shows the front panel of the TC.

Figure 3-2 Front panel of the TC
If the statuses of the indicators on the TC front panel are the same as those listed in Table 3-9, and the monitoring terminal of the administrative console displays


"monitor::", it indicates that the TC is connected correctly and you can start the configuration. Table 3-9 Statuses of indicators in the TC configuration mode Indicator Status Power (Green) ON Unit (Green) ON Net (Green) ON Attn (Yellow) ON Load (Green) OFF Active (Green) Flash irregularly Test (Orange) ON

Otherwise, locate and clear the fault according to the descriptions in Table 3-10 and the installation and maintenance manual delivered with the TC.
Table 3-10 Statuses of TC fault indicators
Fault                                                            Power (Green)  Unit (Green)  Net (Green)  Attn (Yellow)  Load (Green)  Active (Green)
Hardware failure                                                 ON             Flash         OFF          Flash          OFF           OFF
Network test failure                                             ON             ON            Flash        OFF            OFF           Flash irregularly
Network test interruption or network command execution failure  ON             ON            OFF          Flash          OFF           Flash irregularly
Using the wrong mirror for booting                               ON             ON            ON           Flash          OFF           OFF

Execute the command addr to configure the IP address, subnet mask, and network addresses of the TC.
monitor::addr
Enter Internet address [<uninitialized>]::192.168.8.244 Internet address: 192.168.8.244 Enter Subnet mask [255.255.0.0]:: 255.255.255.0 Subnet mask: 255.255.255.0 Enter Preferred load host Internet address [<any host>]:: 192.168.8.244 *** Warning: Load host and Internet address are the same *** Preferred load host address: 192.168.8.244 Enter Broadcast address [0.0.0.0]:: 192.168.8.255 Broadcast address: 192.168.8.255 Enter Preferred dump address [0.0.0.0]:: 192.168.8.244 Preferred dump address: 192.168.8.244 Select type of IP packet encapsulation (ieee802/ethernet) [<ethernet>]:: Press <Enter>. Type of IP packet encapsulation: <ethernet>



Load Broadcast Y/N [Y]: n Load Broadcast: N


3) Reboot the TC.
Power off the TC and then power it on. During the start process, the Load and Active indicators flash for a while; then the Load indicator goes off and the Active indicator flashes irregularly.
4) Test the network connectivity of the TC.
On the administrative console, execute the ping command to test the connectivity.
#ping 192.168.8.244
192.168.8.244 is alive

If the above information is displayed, it indicates the administrative console connects to the TC. If the displayed information is "no answer from TC ip address", it indicates the administrative console fails to connect to the TC. Check the network. If the network is normal, reconfigure the TC.

Note: The IP address after ping is that of the TC set in step 2.

5)

Exit the tip tool. At the prompt of the tip tool, enter the following contents: ~.
~ [EOT] #

III. Testing IP Address of the TC


To test the IP address of the TC, power off the TC and then reboot it. If you can telnet to the TC from other hosts, it indicates that the IP address is correct.

IV. Configuring Parameters of the TC Ports


To configure parameters of the TC ports, execute the following commands: #telnet 192.168.8.244
Trying 192.168.8.244... Connected to 192.168.8.244 Escape character is ^].


Press <Enter>.
Rotaries Defined: cli Enter Annex port name or number:cli

Annex Command Line Interpreter * Copyright 1991 Xylogics, Inc. annex:su Password:

//Type the password. The default password is the IP address of the TC.
annex# admin Annex administration MICRO-XL-UX R7.0.1, 8 ports admin : set port=1-8 type dial_in imask_7bits Y You may need to reset the appropriate port, Annex subsystem or reboot the Annex for changes to take effect. admin : set port=2-7 mode slave admin : quit annex# boot bootfile: <Enter> warning: <Enter>

After configuring the TC, remove the serial port cable between the administrative console and TC. During the above operation, you may type command "?" or "help" to obtain the help information.

Note: If a Sun workstation is unavailable for the on-site deployment, you can use a terminal emulation program such as HyperTerminal on a PC to configure the TC. For details, see the relevant documents of the HyperTerminal software.

3.5.2 Configuring the Sun StorEdge3310s


There are two Sun StorEdge3310 disk arrays. Configure them in the same way.

I. Preparations
Proceed as follows before configuring the Sun StorEdge3310. 1) Connect Com port B of the administrative console to the Serial A/B port of the Sun StorEdge3310 controller with a serial port cable (DB9-DB25).


2)

Set parameters for serial port B at the administrative console.

Add the following statements to the file /etc/remote:


s3310:\ :dv=/dev/term/b:br#38400:el=^C^S^Q^U^D:ie=%$:oe=^D:
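For reference, the fields of this entry (and of the similar tc and V880 entries in sections 3.5.1 and 3.5.3) have the following meanings according to the standard remote(4) conventions; this is background information, not an extra step:
:dv=/dev/term/b:   //Serial device on the administrative console used for the connection
:br#38400:         //Baud rate of the serial line
:el=^C^S^Q^U^D:    //Characters treated as end-of-line marks
:ie=%$:oe=^D:      //Input and output end-of-file marks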

II. Configuring the Disk Arrays


To configure the disk arrays, proceed as follows:
1) Power on the Sun StorEdge3310.
2) Log in to the Sun StorEdge3310 in the tip mode from the administrative console.
#tip s3310
3) Press <Ctrl+L> to refresh the screen. The system enters the initial configuration interface.
4) Select [Terminal (VT 100 Mode)] and press <Enter> to display the main menu.
5) Set the IP address of the Sun StorEdge3310.
In the main menu, click [View and Edit Configuration Parameters/Communication Parameters/Internet Protocol (TCP/IP)/LAN0/Set IP Address]. Figure 3-3 shows the interface.

Figure 3-3 Configuring the IP address for the Sun StorEdge3310
Type the IP address, subnet mask, and network gateway IP address in the window, as shown in Figure 3-4.


Figure 3-4 Setting the IP address and the mask
Press <ESC>. The system asks whether to set the IP address. If you select "Yes", the system asks whether to reboot the controller. Select "Yes" to reboot the controller. After the Sun StorEdge3310 reboots, log in to it.
#telnet IP address of the disk array
Trying IP address of the disk array... Connected to IP address of the disk array. Escape character is ^].

Press <Ctrl+L>. The initial configuration interface appears.
6) Configure the SCSI channel of the Sun StorEdge3310.
Click [Terminal (VT100 Mode)] in the menu. Press <Enter> to display the main menu.


Figure 3-5 Configuring the SCSI channel
In the main menu, click [view and edit Scsi channels] and then edit the SCSI channels according to Figure 3-5. Set CHL1 and CHL3 as Host, that is, the Primary Controller. Press <ESC> to return to the main menu. Select "Yes" when the system asks whether to reboot the controller. Telnet to the Sun StorEdge3310 again after it reboots.
#telnet IP address of the disk array
Trying IP address of the disk array... Connected to IP address of the disk array. Escape character is ^].

Press <Ctrl+L>. The initial configuration interface appears again.
7) Configure logical drives for the Sun StorEdge3310.
Click [Terminal (VT100 Mode)]. Press <Enter> to enter the main menu. In the main menu, click [view and edit Logical drives] and then press <Enter> to configure the logical drives.

Note: New disks have no logical drives. That is, in Figure 3-6, "NONE" is displayed for logical disks. If a logical drive exists, delete it first. If the LUN is mapped, the system prompts the failure of deleting the logical drive. In this case, unmap the Host LUN and then delete the logical drive.


Select the first row and press <Enter>. Select "Yes" to create a logical drive. Press <Enter>, as shown in Figure 3-6.

Figure 3-6 Selecting the logical drive Select "RAID5" and press <Enter>, as shown in Figure 3-7.

Figure 3-7 Selecting the type of logical drive Select the first seven disks for RAID5 mirroring. The "*" appears in the Slot column, as shown in Figure 3-8.


Figure 3-8 Selecting drives Press <ESC> to confirm the selection. Select "Yes" and then press <Enter> to create a logical drive, as shown in Figure 3-9.

Figure 3-9 Selecting whether to create logical drives The percentage displayed on the upper left corner indicates the mirroring progress, as shown in Figure 3-10.


Figure 3-10 Creating logical drives You can press <Ctrl+L> to refresh the interface during the process. When the percentage disappears, it indicates the RAID5 mirroring is completed, as shown in Figure 3-11.

Figure 3-11 Completion of the RAID5 mirroring Press <ESC> twice. Select the prepared RAID5 logical drives, as shown in Figure 3-12.


Then press <Enter>.

Figure 3-12 Selecting logical drives 8) Partition the logical drive. Select [Partition logical drive], as shown in Figure 3-13, then press <Enter> to start partitioning.

Figure 3-13 Selecting of the partition logical drive Select "Yes" as shown in Figure 3-14 and press <Enter>.


Figure 3-14 confirming the partitioning Press <Enter> as shown in Figure 3-15.

Figure 3-15 The first partition Enter "200" after "Partition Size (MB):", as shown in Figure 3-16.


Figure 3-16 Size of the first partition Press <Enter>. Select "Yes" and the press <Enter> to allocate space for the first partition, as shown in Figure 3-17.

Figure 3-17 Confirming the partitioning Select the second row, as shown in Figure 3-18.


Figure 3-18 The second partition Press <Enter>. Enter "103228" at the prompt "Partition Size (MB):", as shown in Figure 3-19. Then press <Enter>.

Figure 3-19 Size of the second partition Select "Yes" and press <Enter> to start allocate space for the second partition, as shown in Figure 3-20.


Figure 3-20 Confirming the partitioning
Press <ESC> twice to return to the main menu.
9) Map the logical partitions to the Host LUNs.
Partition 0: 200 MB
Partition 1: 201 GB
The size of each partition in the following figures is just an example. In the main menu, click [view and edit Host luns], as shown in Figure 3-21.

Figure 3-21 Main menu of the Sun StorEdge3310



Press <Enter> to map the logical partition to the LUN. Click [CHL 1 ID 0 (Primary Controller)] and then press <Enter>, as shown in Figure 3-22.

Figure 3-22 Selecting channel 1

Caution: Make sure that CHL 1 is the Primary Controller. If [CHL1 ID 0 (Secondary Controller)] is displayed, delete this channel first and then add [CHL 1 ID 0 (Primary Controller)].

Click [Logical Drive] and then press <Enter>, as shown in Figure 3-23.


Figure 3-23 Selecting the logical drive Select Row 0 and then press <Enter> to map the first LUN, as shown in Figure 3-24.

Figure 3-24 Selecting the first LUN Select the prepared RAID5 logical drive and press <Enter>, as shown in Figure 3-25.


Figure 3-25 Selecting the logical drive Select Row 0 and press <Enter>, as shown in Figure 3-26.

Figure 3-26 Selecting the partition of the logical drive Click [Map Host LUN] and press <Enter>, as shown in Figure 3-27.


Figure 3-27 Mapping the LUN Select "Yes" and press <Enter>, as shown in Figure 3-28.

Figure 3-28 Confirming the LUN mapping Select Row 1 and then press <Enter> to map the second LUN, as shown in Figure 3-29.


Figure 3-29 Selecting the second LUN Select the prepared RAID5 logical drive and press <Enter>, as shown in Figure 3-30.

Figure 3-30 Selecting the logical drive Select Row 1 and press <Enter>, as shown in Figure 3-31.


Figure 3-31 Selecting the partition of the logical drive Click [Map Host LUN] and press <Enter>, as shown in Figure 3-32.

Figure 3-32 Mapping the LUN Select "Yes" and press <Enter>, as shown in Figure 3-33.


Figure 3-33 Confirming the mapping Press <ESC> to return to channel selection interface, as shown in Figure 3-34.

Figure 3-34 Selecting channel 3 Map the LUN of channel 3 in the same way as that of channel 1. Press <ESC> to display the initial configuration interface again after the configuration is completed. Press <Ctrl+]> and the screen prompts:



telnet>quit


Enter "quit" to exit the StorEdge3310 configuration interface.

3.5.3 Preparing the Sun Fire V880


There are two Sun Fire V880 minicomputers. Perform the same operations on them. Proceed as follows to prepare the Sun Fire V880 minicomputer:

I. Connecting the Administrative Console to the TC


If they are already connected (the administrative console was connected to the TC when configuring the TC), skip this step.

II. Connecting the TC to the V880 Minicomputer


Connect the Sun Fire V880 to the TC with a network cable (with DB25-RJ45 connectors). Insert the DB25 connector of the cable into the serial port of the Sun Fire V880 and connect the RJ45 connector to a serial port on the TC (port 2 for the master node and port 3 for the slave node).

III. Editing the /etc/remote File on the Administrative Console


Add the following statement to the end of the /etc/remote file: V880:\ :dv=/dev/term/b:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:

IV. Connecting the Sun Fire V880


Execute the following command on the administrative console:
#tip V880
ok

V. Configuring the Sun Fire V880


To configure the Sun Fire V880, proceed as follows:
1) Set the standard output.
ok setenv output-device ttya
2) Set the standard input.
ok setenv input-device ttya
3) Set each network adapter to use its local MAC address.
ok setenv local-mac-address? true
4) Set no automatic boot for the Sun Fire V880.
ok setenv auto-boot? false
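Before resetting the server, you can confirm the settings by listing the variables (printenv is a standard OpenBoot command and is shown here only as a quick check):
ok printenv output-device
ok printenv input-device
ok printenv local-mac-address?
ok printenv auto-boot?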


VI. Resetting the V880 Minicomputer


Execute the following command: ok reset-all

VII. Exiting the Tip Tool


Execute the following command: ~.
~ [EOT]

3.6 Software Preparations


The software you need to prepare includes:
Sun Solaris 8 operating system
Volume management software
Cluster software
Database software
iManager M2000 server application
Patch package
Script package

3.6.1 Sun Solaris 8 Operating System Software


Check the following items:
Whether the version of the Solaris 8 is 02/04 SPARC Platform.
Whether the installation disks of Solaris 8 are all ready. If the medium is DVD, there is only one disk. If the medium is CD-ROM, there are three disks: Solaris 8 Software 1 of 2, Solaris 8 Software 2 of 2, and Solaris 8 Languages.

3.6.2 Volume Management Software


Check the following items:
Whether the volume management software is VERITAS Volume Manager.
Whether the software version is 3.2.
Whether the licenses (two) of the software are ready.

3.6.3 Cluster Software


Check the following items:
Whether the cluster software is Sun Cluster.
Whether the version is 3.0.

3.6.4 Sybase Database Software


Check the following items:
Whether the database software is Sybase Adaptive Server Enterprise.
Whether the software version is 11.9.
Whether the license of the software is ready. The license is in the plastic file package in the packing box of the software.

3.6.5 M2000 Server Application


Check the following items:
Whether the iManager M2000 server application is iManager M2000V100RxxxBxxxDxxx.
Whether the version is correct.

3.6.6 Patch Package


Check whether the patch package (disk) is Patch 3.1 provided by Sun (The patch package contains various patches for the system).

3.6.7 Script Package


Note: You can obtain the installation script "M2000-scripts-xx.tar.gz" from the website http://support.huawei.com. The script is under the directory: Software\Mobile Communication\90-OS PATCH\Solaris\solaris\M2000V1 Server Installation Script Package.

Check the following items:
Whether the version of the script package M2000-scripts-xx.tar.gz matches the version of the installed iManager M2000 system.
Whether the script package is complete.

Note: If the package can be unpacked normally, it indicates that the script package is complete.
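A quick way to perform this check without unpacking any files is to list the archive and discard the listing (the file name follows the naming convention described above):
# gzcat M2000-scripts-xx.tar.gz | tar tvf - > /dev/null
If the command finishes without errors, the package is complete.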


Chapter 4 Installing the Sun Solaris 8


This chapter introduces how to install the Sun Solaris 8 operating system.

Note: The SolarisTM 8 operating system is a product of Sun Microsystems, Inc. For detailed specifications, see related technical documents. Sun Microsystems, Inc. reserves all the rights of the company name, trademarks, and manuals involved in this chapter.

4.1 Preparations Before Installation


Check the following before installing the Sun Solaris 8 operating system:
Whether the TC is correctly configured.
Whether the Sun StorEdge3310 is properly configured.
Whether the Sun Fire V880 is properly installed and configured.
Whether the Sun Fire V880, the Sun StorEdge3310, the TC, and the administrative console are correctly connected and powered on.
Whether the installation disks of the Solaris 8 are ready.

4.2 Installation Procedures


The following operations only involve the master node m2000svr-1. The installation of Sun Solaris 8 on the slave node m2000svr-2 is the same as that on the master node m2000svr-1. The installation of the Sun Solaris 8 operating system includes the following tasks:
1) Booting and identifying the system (about 10 minutes)
2) Setting network parameters (about 10 minutes)
3) Planning the Solaris 8 (about 10 minutes)
4) Installing the Sun Solaris 8 operating system (about 50 minutes)

I. Booting and Identifying the System


To boot and identify the system, proceed as follows:
1) Connect the administrative console to the Sun Fire V880 through the TC.
#telnet IP address of tc
Trying IP address of tc...



Connected to IP address of tc. Escape character is ^].


Press <Enter>
Rotaries Defined: cli Enter Annex port name or number: 2

Note: Enter "3" for the slave node.

Attached to port 2

Press <Enter>.
ok

Note: If the Sun Fire V880 is used for the first time, the ok prompt appears on the screen. If the Sun Fire V880 is already installed with an operating system, enter the ok state by carrying out the following operations:
Enter "Ctrl+]".
telnet> send brk
If the system does not shield the break signal, you are now in the ok state. If the system shields the break signal, continue with the following operations: log in to the operating system and enter init 0 at the prompt. The system shuts down and enters the ok state.

2) Put the Solaris 8 Software 1 of 2 (2/02 SPARC Platform Edition) CD into the CD-ROM drive.
3) Enter the following command at the ok prompt:
ok boot cdrom
The system reboots. In about five minutes, the system asks you to select the installation language.

4)

Select the installation language.


Select a Language 0. English 1. French



9. Traditional Chinese


Please make a choice (0 - 9), or press h or ? for help: 0

Select "0 English" and press <Enter>. 5) Select the locale.


Select a Locale 0. English (C - 7-bit ASCII) 1. Albania (ISO8859-2) 18. Finland (ISO8859-15 - Euro)

Press Return to show more choices. Please make a choice (0 - 51), or press h or ? for help:

Select "0. English (C 7-bit ASCII)" and press <Enter>. Click <Continue> when the system prompts that the identification of the system starts. 6) Select the terminal.
What type of terminal are you using? 1) ANSI Standard CRT 2) DEC VT52 3) DEC VT100 4) Heathkit 19 5) Lear Siegler ADM31 6) PC Console 7) Sun Command Tool 8) Sun Workstation 9) Televideo 910 10) Televideo 925 11) Wyse Model 50 12) X Terminal Emulator (xterms) 13) Other Type the number of your choice and press Return: 3 The system is coming up. Please wait.

Select the terminal of "DEC VT100".

II. Setting Network Parameters


To set network parameters, proceed as follows: 1) Specify whether the system connects to the network.
Networked [X] Yes [ ] No

Select "Yes" and click <Continue> to continue the installation.


2)

Specify whether to use the DHCP.


Use DHCP [ ] Yes [X] No

Select "No" and click <Continue>. 3) Select the primary network interface.
Primary network interface [X] eri0 [ ] ce0 [ ] ce1 [ ] ce2 [ ] ce3 [ ] fcip0

Select "eri0" as the primary network interface and then click <Continue>. 4) Enter the host name. Host name: m2000svr-1 (according to actual situations) Click <Continue>.

Note: The host name of the master node is m2000svr-1 and that of the slave node is m2000svr-2.

5)

Enter the IP address.


IP address: 192.168.8.11 //Modify the IP address according to actual situations.

Click <Continue>.

Note: The IP address of the master node is 192.168.8.11 and that of the slave node is 192.168.8.12.

6)

Specify whether the host is in a certain subnet.


System part of a subnet [X] Yes [ ] No

Select "Yes" and click <Continue>. 7) Enter the subnet mask. Netmask: 255.255.255.0 //Set the subnet mask according to actual situations.


Click <Continue>. 8) Specify whether to enable the IPv6.


Enable IPv6 [ ] Yes [X] No

Select "No" and then click <Continue>. Then the system summarizes the above results. Confirm the results and click <Continue> to go on with the installation. 9) Specify whether to configure the Kerberos Security.
Configure Kerberos Security [ ] Yes [X] No

Select "No" and click <Continue>.


Configure Kerberos Security: No

Confirm the above result and click <Continue>. 10) Specify whether to provide the Name Service.
Name service [ ] NIS+ [ ] NIS [ ] DNS [ ] LDAP [X] None

Select "None" and click <Continue>.


Name service: None

Confirm the above result and click <Continue>. 11) Specify the time zone.
Specify timezone by: [ ] Geographic region [X] Offset from GMT [ ] Time zone file

Select "Offset from GMT" and click <set> to continue the installation.
Hours offset: -8

//Enter the time zone according to the actual situation (such as GMT-8 for China), and then click <Continue>.
Date and time: 2003-12-15 16:44
Year (4 digits) : 2003
Month (1-12)    : 12
Day (1-31)      : 15
Hour (0-23)     : 16
Minute (0-59)   : 44

If the date or time is incorrect, modify it with the help of the up and down arrows. After the modification, click <Continue>.
Time zone: GMT-8 Date and time: 2003-12-15 16:44

The system then summarizes the information of the time zone and system time. Click <Continue>.

Note: As mentioned above, if the time zone of a local area is GMT+8 in the Windows, set the time zone to GMT-8 in the Solaris. Similarly, if the time zone is GMT-5 in the Windows, set the time zone to GMT+5 in the Solaris. Violation of the above rules can cause exceptions of the clock synchronization system and result in an inaccurate system clock.

III. Planning the Solaris 8


Set the following items for the Solaris 8 operating system: 1) Select initial installation.
F2_Upgrade F4_Initial F5_Exit F6_Help

Press <F4> to select the initial installation. 2) Select the standard installation.
F2_Standard F3_Go Back F4_Flash F5_Exit F6_Help

Press <F2> to select the standard installation. 3) Select geographic regions.


Select the geographic regions
[ ] Eastern
[X] Asia
    [ ] PC Kanji (ja_JP.PCK)
    [X] Simplified Chinese EUC
    [X] Simplified Chinese GBK
    [X] Simplified Chinese UTF-8
[X] North America
    [ ] Canada-English (ISO8859-1)
    [X] U.S.A. (en_US.ISO8859-1)
[ ] Europe


Select the following regions:
Asia
--Simplified Chinese EUC
--Simplified Chinese GBK
--Simplified Chinese UTF-8
North America
--U.S.A. (en_US.ISO8859-1)
Click <Continue>.
4) Select the installation type of the Sun Solaris software package.
[X] Entire Distribution plus OEM Custom [ ] Entire Distribution 64-bit ................. 1763.00 MB [ ] Developer System Support 64-bit ............ 1713.00 MB [ ] End User System Support 64-bit ............. 1285.00 MB [ ] Core System Support 64-bit ................. 441.00 MB support 64-bit 1798.00 MB (F4 to

Select "Entire Distribution Plus OEM support 64-bit". Click <Continue>. 5) Select the hard disk available for the installation.
[X] c1t0d0 [ ] c1t1d0 (34730 MB) (34730 MB) boot disk 34730 MB (F4 to edit)

Select c1t0d0 and click <Continue>. 6) Specify whether to preserve data on the hard disk.
Preserve Data? ... Continue Go Back Preserve Exit Help

Click <Continue> not to preserve data. 7) Specify whether to lay out system files automatically.
Auto Layout Go Back Manual Layout Exit Help

Click <Auto Layout>. 8) Select "/, /opt, /usr, swap, and /var", and then click <Continue>.
[X] [X] / /opt

[ ] /usr [ ] /usr/openwin [ ] /var [X] swap


9)

Select the mode for layout of the file system and hard disk.
Continue Go Back Customize Exit Help

Click <Customize>. Adjust the partitions of the hard disk according to Table 4-1 and then click <OK> to continue the installation. Confirm the result and click <Continue>. Table 4-1 lists the partition layout of system disk 1.
Table 4-1 Partitions of system disk 1
Host name: m2000svr-1    IP address: 192.168.8.11    Netmask: 255.255.255.0    Boot disk: c1t0d0
Disk c1t0d0:
0  /               10,240 MB
1  swap            500 MB
2  overlap         -
3  /opt            43,000 MB
4  /globaldevices  16,384 MB
Disk c1t1d0:
0-4  overlap

The other hard disk c1t1d0 is used for mirroring. 10) Specify whether to mount the remote file system.
Mount Remote File Systems? Continue Go Back Remote Mounts Exit Help

Click <Continue> to continue the installation without mounting remote file systems. 11) Display the configuration.
Continue Change Exit Help

Confirm the results and click <Continue>. 12) Reboot the system.
Auto Reboot Manual Reboot

Click <Auto Reboot>.

IV. Installing the Solaris 8


To install the Sun Solaris 8 operating system, proceed as follows: 1) Install the Solaris 8 Software 1 of 2 (2/02 SPARC Platform Edition).


The system displays the installation progress. The entire installation process lasts about 25 minutes and the system reboots automatically after the installation. 2) Enter the root password. Press <Enter> and confirm the password.
Root password:****** Press Return to continue. Re-enter your root password.

Note: Keep the password in mind. It is recommended to use "root" as the password to facilitate the remote maintenance.

3)

Select the installation medium of Solaris 8 system.


Media: [X] CD/DVD [ ] Network File System

Next

Skip

Select <CD/DVD> and press <Enter> to continue the installation. 4) Insert the Solaris 8 Software 2 (2/02 SPARC Platform Edition) into the CD-ROM drive.
Please insert the CD/DVD for Solaris 8 Software 2 of 2 (2/02 SPARC Platform Edition). After you insert the disc, please press Enter.

Insert the Solaris 8 Software 2 (2/02 SPARC Platform Edition) as prompted and then press <Enter> to continue the installation. The screen displays the installation progress.
Installation Summary: Product Status Information

Solaris 8 packages (part 2)

Installed

Detail

Click <Next> to continue the installation.


Media: [X] CD/DVD [ ] Network File System



Next Skip


Select <CD/DVD> and then click <Next> to continue the installation (about 15 minutes).
Installing Solaris 8 packages (part 2) |-1%--------25%---------50%----------75%--------100%|

5)

Insert the CD for Solaris 8 Languages (2/02 SPARC Platform Edition).


Please insert the CD/DVD for Solaris 8 Languages (2/02 SPARC Platform Edition). After you insert the disc, please press Enter.

Insert the CD for Solaris 8 Languages (2/02 SPARC Platform Edition) as prompted. Press <Enter> to start installation.
Ready to Install Back Install Now Exit

Click <Install Now>.

Installation Summary: Product Status Information

Solaris 8 Simplified Chinese Localization

Installed

Detail

Press <Enter> to continue the installation (about 10 minutes). 6) Reboot the system.
Reboot Press Reboot Now to continue

Click <Reboot Now> to reboot the computer. The system reboots automatically. Now the installation of the operating system is completed. 7) Edit the file /etc/hosts. This file contains the names and corresponding IP addresses of all hosts on the network. Add the character string "Server name.net" behind loghost. Press <Tab> to separate them. For example:
127.0.0.1 localhost 192.168.8.11 m2000svr-1 loghost m2000svr-1.net

8)

Update the system time.



Caution: Update the system time after the installation. Ensure that the system time is the same as the current time after the installation. Otherwise, billing problems may occur owing to an inaccurate clock.

Update the current system time using the command date.
Command format: date mmddHHMM[[cc]yy][.SS]
For example, to update the time to 17:30:01 on February 13, 2003, enter the following command as root:
#date 021317302003.01
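As a breakdown of the same example (not an additional step), the fields of the command map as follows:
# date 021317302003.01
#      mm dd HH MM ccyy .SS    //02=February, 13=day of month, 17:30=hours and minutes, 2003=year, .01=seconds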

4.3 Installing Patches of the Solaris 8


The following operations only involve m2000svr-1. The installation of the Sun Solaris 8 patches on m2000svr-2 is the same as that on m2000svr-1. The installation of the Solaris 8 patches involves the following major procedures:
1) Installing the standard .profile file (0.5 hour)
2) Installing the recommended patch and extended patch of the Sun Solaris operating system (1.5 hours)
3) Installing the patch of the SUNWsan software (0.5 hour)
4) Rebooting the system

I. Installing the Standard .profile File


Log in to the system as root and perform the following operations:
1) Put the CD of Patch 3.1 into the CD-ROM drive of the host or the administrative console.
2) Install the installation kit of the patch.
#cd /cdrom/cdrom0/sun/install/
#./setup-standard.sh
Running setup-standard with EIS-CD Vn 02-JAN-02 ........ Is the power saving daemon to be disabled? [y/n]: y ........ Sun keyboard Power button to be disabled? [y/n]: y ........ Are the buttons & dials to be disabled? [y/n]: y ........ Are remote root logins to be allowed? [y/n]: y ........


Do you wish to add the SFK & SNIA packages? [y/n]: y ........ Do you wish to ENABLE the alternate break sequence? [E/n]: n ........ Do you want to install the ACT package? [y/n]: y ........ enter base directory for install (Default /opt/CTEact)? [?,q] <Enter> Do you wish to enable the email of panic facility [y] [y,n,?] n ........ Do you want this directory created now [y,n,?,q] y ........ Do you want to continue with the installation of <CTEact> [y,n,?] y ........ Press RETURN to continue...<Enter> ........ Agree to these terms? [y,n] y ........ Where should this package be installed? [/opt/SUNWexplo] <Enter> Company name []: <Enter> Contract ID []: <Enter> System serial number: []: <Enter> Contact name []: <Enter> Contact email address []: <Enter> Phone number []: <Enter> Address(line 1) []: <Enter> Address(line 2) []: <Enter> City []: <Enter> State []: <Enter> Zip []: <Enter> Please select your geo from this list 1) AMERICAS 2) EMEA 3) APAC [AMERICAS]

Note: The above three regions stand for America, Europe, and Asia-Pacific. Select the region according to actual situations. The selection here determines the country to be selected next.


Please Enter the two character Country code or enter ? for a list of countries. []: CN

Note: Enter a two-digit country code according to actual situations. For example, the country code for China is CN. You may key in "?" to query the country code.

........ Would you like all explorer output to be send to: explorer-database-apac@sun.com at the completion of explorer when mail or e is specified? [y,n] n ........ To enter multiple addresses, separate them with a comma(,). []: <Enter> Return address for explorer email output [] :<Enter> To receive email notification that your explorer output has been loaded into the repository, enter the email address to send the notification to. If notification is not wanted, enter a single . []: <Enter> ........ Are these values okey? [y,n]: y ........ Do you wish to run explorer once a week? [y,n]: n ....... Would you like to do this now? [y,n] n ........ Do you want this directory created now [y,n,?,q] y Do you want to continue with the installation of <SUNWexplo> [y,n,?] y ........ setup-standard completed.

II. Installing the Recommended Patch and Extended Patch of Sun Solaris Operating System
To install the recommended patch and extended patch of Sun Solaris operating system, 1) Execute the following command to validate the new environment variable. # . /.profile


2)

Insert the CD for Patch 3.1. Unpack the patches of the Sun Solaris operating system to the directory /tmp/8 (the default path).
root@m2000svr-1# cd /cdrom/cdrom0/sun/patch/8 root@m2000svr-1# unpack-patches Are you ready to unpack patches into /tmp/8? [y/n]: y Unpacking.....

3) Execute the command #cd /;eject to take out the CD.
4) Install the patches. Run the patch installation script under the corresponding directory according to the system prompts until the installation completes.
root@m2000svr-1# cd /tmp/8 root@m2000svr-1# ./install_all_patches

III. Installing the SUNWSan Software Package


Install the SAN software before installing the Volume Manager. The SAN software package is installed during the installation of the Sun Solaris Patch 3.1. If you need to install it separately, perform the following operations.
1) Put the CD of the SAN software into the CD-ROM drive.
2) Log in to the system as root.
3) Open the corresponding directory.
# mkdir -p /opt/sun/SAN
# cd /cdrom/cdrom0/sun/progs/SAN/4.1
4) Copy the installation package to the target directory.
# cp Solaris_8_SFK_packages.tar.gz /opt/sun/SAN
5) Unpack the installation package.
# cd /opt/sun/SAN
# gzcat Solaris_8_SFK_packages.tar.gz | tar xvf -
6) Install the package.
# cd Solaris_8_SFK_packages
# pkgadd -d .
Parameters to be selected are as follows:
Select package(s) you wish to process (or 'all' to process all packages). (default: all) [?,??,q]: 1,2,3 Do you want to continue with the installation of <SUNWcfpl>[y,n,?] y Do you want to install these conflicting files [y,n,?,q] y Do you want to continue with the installation of <SUNWsan> [y,n,?] y

After installing the package, select "q" to exit.


IV. Installing the Patch of the SUNWSan


To install the patch of the SUNWsan, proceed as follows:
1) Put the CD of the patch into the CD-ROM drive.
2) Log in to the system as root and open the corresponding directory.
# cd /cdrom/cdrom0/sun/patch/SAN/8
3) Unpack the patch package.
# unpack-patches
Are you ready to unpack patches into /tmp/8? [y/n]: y Target for unpacking patches /tmp/8 exists! May I delete /tmp/8? [y/n] y

4) Execute the command #cd /;eject to take out the CD.
5) Run the patch installation script in the corresponding directory according to the system prompts.
# cd /tmp/8
# ./install_patches
Are you ready to continue with install? [y/n]: y

V. Rebooting the Operating System


Execute the following commands to reboot the operating system:
# sync;sync;sync;sync
# shutdown -y -g0 -i6

VI. Installing the Driver of the SCSI Card


Execute the following commands to install the driver of the SCSI card:
# mkdir -p /opt/sun/driver
# cd /cdrom/cdrom0/sun/progs/SUNWqus/8
# cp SUNWqus_8_all.tar.gz /opt/sun/driver
# cd /opt/sun/driver
# gzcat SUNWqus_8_all.tar.gz | tar xvf -
# pkgadd -d .
Select package(s) you wish to process (or all to process all packages). (default: all) [?,??,q]:

Press <Enter>
........ Do you want to continue with the installation of <SUNWqus> [y,n,?] y ........ Do you want to continue with the installation of <SUNWqusux> [y,n,?] y



........


Select package(s) you wish to process (or all to process all packages). (default: all) [?,??,q]:q

VII. Installing the Patch of the SCSI Card Driver


Execute the following commands to install the patch of the SCSI card driver: # cd /cdrom/cdrom0/sun/patch/SUNWqus/8 # unpack-patches
Are you ready to unpack patches into /tmp/8? [y/n]: y Target for unpacking patches /tmp/8 exists! May I delete /tmp/8? [y/n]: y

# cd /tmp/8 # ./install_patches
Are you ready to continue with install? [y/n] y

VIII. Installing Scripts for the Sun Two-node Cluster Hot Backup System
Put the CD of the script software into the CD-ROM drive and execute the following commands:
# cp M2000-scripts-xx.tar.gz /opt/
# cd /opt
# gzcat M2000-scripts-xx.tar.gz | tar xvf -

Note: You can obtain the mirror installation script "M2000-scripts-xx.tar.gz" by accessing the website http://support.huawei.com. The script is under the directory: software\Mobile Communication\90-OS PATCH\Solaris\solaris\M2000V1 Server Installation Script Package. "XX" stands for the version. For example, the latest version is 1.0, so the name of the script package is M2000-scripts-10.tar.gz.

IX. Configuring the Sun Solaris 8 Kernel


Execute the following commands:
# cd /opt/m2kv1/system/kernel
# cp sd.conf /kernel/drv
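For reference, sd.conf entries of the following form tell the sd driver which SCSI targets and LUNs to probe; the lines below only illustrate the syntax (the actual contents are supplied by the sd.conf file in the script package):
name="sd" class="scsi" target=0 lun=0;
name="sd" class="scsi" target=0 lun=1;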

X. Rebooting the System


Execute the following commands to reboot the system: # sync;sync;sync;sync;sync


# reboot -- -r

4.4 Configuring the Sun Solaris Operating System


To configure the Sun Solaris operating system, proceed as follows:
1) Configure the file /etc/hosts.
After configuration, the contents of the file /etc/hosts on the master node are as follows:
# # Internet host table # 127.0.0.1 localhost 192.168.8.11 192.168.8.12 192.168.8.10 192.168.8.244 192.168.8.236 m2000svr-1 loghost m2000svr-1.net m2000svr-2 m2000svr tc m2000-console

After configuration, the contents in the file /etc/hosts of the slave node are as follows:
# # Internet host table # 127.0.0.1 localhost 192.168.8.11 192.168.8.12 192.168.8.10 192.168.8.244 192.168.8.236 m2000svr-1 m2000svr-2 loghost m2000svr tc m2000-console m2000svr-2.net

Note: The file /etc/hosts contains IP addresses of all nodes and logical nodes in the cluster: 192.168.8.11 is the IP address of the master node. 192.168.8.12 is the IP address of the slave node. 192.168.8.10 is the IP address of m2000svr logical node. 192.168.8.244 is the IP address of the TC. 192.168.8.236 is the IP address of the administrative console. Enter specific IP addresses according to the IP planning shown in Table 3-3.

2)

Configuring the File /etc/system.


Add a line "exclude: lofs" at the end of the file /etc/system: 3) Rebooting the Server. Execute the following commands to reboot the server: #sync;sync;sync;sync;sync #shutdown -y -g0 -i6

4.5 Checking Installation Result


After installing the Solaris operating system, check:
Whether the Solaris operating system can start normally
Whether the disks on the Sun StorEdge3310 can be found
Execute the following command to check the disks:
# format
Searching for disks...done C3t0d0: configured with capacity of 198.00MB C3t0d1: configured with capacity of 100.80GB c4t0d0: configured with capacity of 198.00MB c4t0d1: configured with capacity of 100.80GB

AVAILABLE DISK SELECTIONS: 0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107> /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cfc33819,0 1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107> /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cfc33722,0 2. c3t0d0 <SUN-StorEdge3310-0325 cyl 198 alt 2 hd 64 sec 32> /pci@8,700000/pci@2/scsi@4/sd@0,0 3. c3t0d1 <SUN-StorEdge3310-0325 cyl 51612 alt 2 hd 64 sec 64> /pci@8,700000/pci@2/scsi@4/sd@0,1 4. c4t0d0 <SUN-StorEdge3310-0325 cyl 198 alt 2 hd 64 sec 32> /pci@8,700000/pci@2/scsi@5/sd@0,0 5. c4t0d1 <SUN-StorEdge3310-0325 cyl 51612 alt 2 hd 64 sec 64> /pci@8,700000/pci@2/scsi@5/sd@0,1

Specify disk (enter its number): <Ctrl+D>
When the command format is executed, the system displays the above information: six disks in total (two local disks and four logical drives on the disk arrays).
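When the same check needs to be scripted, format can also be run without interaction by closing its standard input; this is a common convention rather than part of the original procedure. Four StorEdge3310 logical drives (two per disk array) should be listed:
# format < /dev/null 2>/dev/null | grep StorEdge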


Note: The M2000 system uses the larger logical drives, while the smaller ones (198 MB) are reserved. c3t0d1 and c4t0d1 are the larger drives and belong to two different disk arrays respectively. These two drives are used to create the M2000 volumes and volume mirrors.


Chapter 5 Installing the Sun Cluster


This chapter introduces how to install the Sun Cluster.

Note: The Sun Cluster management software is a product of Sun Microsystems, Inc. For detailed instructions, see the related technical documents about the Sun cluster. Sun Microsystems, Inc. reserves all rights of the company name, trademarks, and manual names involved in this chapter.

5.1 Checks Before Installation


Note: Unless otherwise specified, perform the following operations on both the master and slave nodes.

Before installing the Sun Cluster, check whether:
The Solaris operating system is installed on the master and slave nodes.
The patches of the Solaris operating system are installed on the master and slave nodes.
The Sun StorEdge3310 disk arrays can be detected in the Solaris operating system.
The files /etc/hosts and /etc/system are modified.

5.2 Installing the Sun Cluster on Master Node


To install the Sun cluster on the master node, proceed as follows:

I. Installing and Configuring Sun Cluster


1) Insert the CD of the Sun Cluster into the CD-ROM.
2) Execute the following commands to install and configure the Sun Cluster:

#cd /cdrom/cdrom0/SunCluster_3.0/Sol_8/Tools
#./scinstall


*** Main Menu ***


Please select from one of the following (*) options:

* 1) Establish a new cluster using this machine as the first node
* 2) Add this machine as a node in an established cluster
  3) Configure a cluster to be JumpStarted from this install server
  4) Add support for new data services to this cluster node
  5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option:1

*** Establishing a New Cluster ***

This option is used to establish a new cluster using this machine as the first node in that cluster.

Once the cluster framework software is installed, you will be asked for the name of the cluster. Then, sccheck(1M) is run to test this machine for basic Sun Cluster pre-configuration requirements.

After sccheck(1M) passes, you will be asked for the names of the other nodes which will initially be joining that cluster. In addition, you will be asked to provide certain cluster transport configuration information.

Press Ctrl-d at any time to return to the Main Menu.

Do you want to continue (yes/no) [yes]?yes

>>> Software Package Installation <<<

Installation of the Sun Cluster framework software packages will take a few minutes to complete.

Is it okay to continue (yes/no) [yes]?yes

** Installing SunCluster 3.1 framework ** SUNWscr.....done SUNWscu.....done


SUNWscnm....done SUNWscdev...done SUNWscgds...done SUNWscman...done SUNWscsal...done SUNWscsam...done SUNWscvm....done SUNWmdm.....done SUNWscva....done SUNWscvr....done SUNWscvw....done SUNWfsc.....done SUNWfscvw...done SUNWjsc.....done SUNWjscman..done SUNWjscvw...done SUNWkscvw...done SUNWcsc.....done SUNWcscvw...done SUNWhscvw...done


Hit ENTER to continue: //Press <Enter> to continue.

>>> Cluster Name <<<

Each cluster has a name assigned to it. The name can be made up of any characters other than whitespace. It may be up to 256 characters in length. And, you may want to assign a cluster name which will be the same as one of the failover logical host names in the cluster. Create each cluster name to be unique within the namespace of your enterprise.

What is the name of the cluster you want to establish? M2000Cluster

>>> Check <<<

This step runs sccheck(1M) to verify that certain basic hardware and software pre-configuration requirements have been met. If sccheck(1M) detects potential problems with configuring this machine as a cluster node, a list of warnings is printed.


Hit ENTER to continue: //Press <Enter> to continue


Running sccheck ... done All sccheck tests passed. Hit ENTER to continue: //Press <Enter> to continue

>>> Cluster Nodes <<<

This release of Sun Cluster supports a total of up to 16 nodes.

Please list the names of the other nodes planned for the initial cluster configuration. List one node name per line. When finished, type Control-D:

Node name:

m2000svr-2

Note: The node name above is that of the slave node.

Node name (Ctrl-D to finish):<Ctrl+D>

This is the complete list of nodes:

m2000svr-1 m2000svr-2

Is it correct (yes/no) [yes]?yes

>>> Authenticating Requests to Add Nodes <<<

Once the first node establishes itself as a single node cluster, other nodes attempting to add themselves to the cluster configuration must be found on the list of nodes you just provided. The list can be modified once the cluster has been established using scconf(1M) or other tools.

By default, nodes are not securely authenticated as they attempt to add themselves to the cluster configuration. This is generally considered adequate, since nodes which are not physically connected to the private cluster interconnect will never be able to actually


join the cluster. However, DES authentication is available. If DES authentication is selected, you must configure all necessary encryption keys before any node will be allowed to join the cluster (see keyserv(1M), publickey(4)).

Do you need to use DES authentication (yes/no) [no]?no

>>> Network Address for the Cluster Transport <<<

The private cluster transport uses a default network address of 172.16.0.0. But, if this network address is already in use elsewhere within your enterprise, you may need to select another address from the range of recommended private addresses (see RFC 1597 for details).

If you do select another network address, please bear in mind that the Sun Clustering software requires that the rightmost two octets always be zero.

The default netmask is 255.255.0.0; you may select another netmask, as long as it minimally masks all bits given in the network address and does not contain any "holes".

Is it okay to accept the default network address (yes/no) [yes]?yes

Caution: The private network address for the cluster is 172.16.0.0 in this step. If this address is already in use, select "no" in this step and configure another address as prompted. The rightmost two octets of the address must be "0".

Is it okay to accept the default netmask (yes/no) [yes]?yes

>>> Point-to-Point Cables <<<

The two nodes of a two-node cluster may use a directly-connected interconnect. That is, no cluster transport junctions are configured. However, when there are greater than two nodes, this interactive form of scinstall assumes that there will be exactly two cluster transport junctions.


Does this two-node cluster use transport junctions (yes/no) [yes]?yes

>>> Cluster Transport Junctions <<<

All cluster transport adapters in this cluster must be cabled to a transport junction, or "switch". And, each adapter on a given node must be cabled to a different junction. Interactive scinstall requires that you identify two switches for use in the cluster and the two transport adapters on each node to which they are cabled.

What is the name of the first junction in the cluster [switch1]? Hit ENTER to continue. //Press <Enter> to continue

What is the name of the second junction in the cluster [switch2]? Hit ENTER to continue. //Press <Enter> to continue

>>> Cluster Transport Adapters and Cables <<<

You must configure at least two transport adapters on each node which serve as connection points to the private cluster transport. More than two connection points are allowed, but this interactive form of scinstall assumes exactly two.

Note that interactive scinstall does not allow you to specify any special transport adapter properties settings. If your adapters have special properties which must be set, you may need to use non-interactive scinstall by specifying a complete set of command line options. For more information, please refer to the man pages for your adapters in the scconf_transp_adap family of man pages (e.g., scconf_transp_adap_hme(1M)).

Hit ENTER to continue: //Press <Enter> to continue

Select the first cluster transport adapter to use:

1) ce3
2) ce2
3) ce1
4) ce0
5) Other

Option: 3


Note: The network adapter may vary with the configurations of the computer. Select the correct network adapter according to the actual situation.

Adapter "ce1" is an ethernet adapter.

Searching for any unexpected network traffic on "ce1" ... done Verification completed. No traffic was detected over a 10 second sample period.

All transport adapters support the "dlpi" transport type. Ethernet adapters are supported only with the "dlpi" transport; however, other adapter types may support other types of transport. For more information on which transports are supported with which adapters, please refer to the scconf_transp_adap family of man pages (scconf_transp_adap_hme(1M), ...).

The "dlpi" transport type will be set for this cluster.

Name of the junction to which "ce1" is connected [switch1]? Hit ENTER to continue. //Press <Enter> to continue

Each adapter is cabled to a particular port on a transport junction. And, each port is assigned a name. You may explicitly assign a name to each port. Or, for ethernet switches, you may allow scinstall to assign a default name for you. The default port name assignment sets the name to the node number of the node hosting the transport adapter at the other end of the cable.

For more information regarding port naming requirements, refer to the scconf_transp_jct family of man pages (e.g., scconf_transp_jct_dolphinswitch(1M)).

Use the default port name for the "ce1" connection (yes/no) [yes]?yes

Select the second cluster transport adapter to use:

1) ce3
2) ce2
3) Other


Option:1


Note: The network adapter may vary with the configurations of the computer. Select the correct network adapter according to the actual situation.

Adapter "ce3" is an ethernet adapter.

Searching for any unexpected network traffic on "ce3" ... done Verification completed. No traffic was detected over a 10 second sample period.

Name of the junction to which "ce3" is connected [switch2]?

Press <Enter> to continue


Use the default port name for the "ce3" connection (yes/no) [yes]?yes

>>> Global Devices File System <<<

Each node in the cluster must have a local file system mounted on /global/.devices/node@<nodeID> before it can successfully participate as a cluster member. Since the "nodeID" is not assigned until scinstall is run, scinstall will set this up for you. However, in order to do this, you must supply the name of either an already-mounted file system or raw disk partition at this time. This file system or partition should be at least 512 MB in size.

If an already-mounted file system is used, the file system must be empty. If a raw disk partition is used, a new file system will be created for you.

The default is to use /globaldevices.

Is it okay to use this default (yes/no) [yes]?yes

>>> Automatic Reboot <<<

Once scinstall has successfully installed and initialized the Sun Cluster software for this machine, it will be necessary to reboot. After the reboot, this machine will be established as the first node


in the new cluster.


Do you want scinstall to reboot for you (yes/no) [yes]?no

You will need to manually reboot this node in "cluster mode" after scinstall successfully completes.

Hit ENTER to continue: //Press <Enter> to continue

>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -ik \
   -C M2000Cluster \
   -F \
   -T node=m2000svr-1,node=m2000svr-2,authtype=sys \
   -A trtype=dlpi,name=ce1 -A trtype=dlpi,name=ce3 \
   -B type=switch,name=switch1 -B type=switch,name=switch2 \
   -m endpoint=:ce1,endpoint=switch1 \
   -m endpoint=:ce3,endpoint=switch2

Are these the options you want to use (yes/no) [yes]?yes

Do you want to continue with the install (yes/no) [yes]?yes

Checking device to use for global devices file system ... done

Initializing cluster name to "M2000Cluster" ... done Initializing authentication options ... done Initializing configuration for adapter "ce1" ... done Initializing configuration for adapter "ce3" ... done Initializing configuration for junction "switch1" ... done Initializing configuration for junction "switch2" ... done Initializing configuration for cable ... done Initializing configuration for cable ... done

Setting the node ID for "m2000svr-1" ... done (id=1)

Setting the major number for the "did" driver ... done "did" driver major number set to 300 Checking for global devices global file system ... done


Updating vfstab ... done


Verifying that NTP is configured ... done Installing a default NTP configuration ... done Please complete the NTP configuration after scinstall has finished.

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done Adding the "cluster" switch to "hosts" in nsswitch.conf ... done

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done Unconfiguring power management ... done /etc/power.conf has been renamed to /etc/power.conf.101001001139 Power management is incompatible with the HA goals of the cluster. Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done Network routing has been disabled on this node by creating /etc/notrouter. Having a cluster node act as a router is not supported by Sun Cluster. Please do not re-enable network routing.

Hit ENTER to continue: //Press <Enter> to continue

*** Main Menu ***

Please select from one of the following (*) options:

  1) Establish a new cluster using this machine as the first node
  2) Add this machine as a node in an established cluster
  3) Configure a cluster to be JumpStarted from this install server
* 4) Add support for new data services to this cluster node
* 5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option:q


II. Installing the Patch of the Sun Cluster


To install the patch of the Sun Cluster, proceed as follows:
1) Insert the CD of the patch 3.1.
2) Log in to the system as root.
3) Execute the following commands:
#cd /cdrom/cdrom0/sun/patch/SunCluster/3.0/8
#unpack-patches
Are you ready to unpack patches into /tmp/8? [y/n]: y

#cd /tmp/8
#./install_patches


Are you ready to continue with install? [y/n]: y

III. Rebooting the Server


Execute the following commands to reboot the server:
#sync;sync;sync;sync;sync
#shutdown -y -g0 -i6

5.3 Installing the Sun Cluster on the Slave Node


To install the Sun Cluster on the slave node, proceed as follows:
1) Insert the CD of the Sun Cluster into the CD-ROM.
2) Execute the following commands to install and configure the Sun Cluster:
#cd /cdrom/cdrom0/SunCluster_3.0/Sol_8/Tools
#./scinstall
*** Main Menu ***

Please select from one of the following (*) options:

* 1) Establish a new cluster using this machine as the first node
* 2) Add this machine as a node in an established cluster
  3) Configure a cluster to be JumpStarted from this install server
  4) Add support for new data services to this cluster node
  5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option:2


*** Adding a Node to an Established Cluster ***

This option is used to add this machine as a node in an already established cluster. If this is an initial cluster install, there may only be a single node which has established itself in the new cluster.

Once the cluster framework software is installed, you will be asked to provide both the name of the cluster and the name of one of the nodes already in the cluster. Then, sccheck(1M) is run to test this machine for basic Sun Cluster pre-configuration requirements.

After sccheck(1M) passes, you may be asked to provide certain cluster transport configuration information.

Press Ctrl-d at any time to return to the Main Menu.

Do you want to continue (yes/no) [yes]?yes

>>> Software Package Installation <<<

Installation of the Sun Cluster framework software packages will take a few minutes to complete.

Is it okay to continue (yes/no) [yes]?yes

** Installing SunCluster 3.1 framework ** SUNWscr.....done SUNWscu.....done SUNWscnm....done SUNWscdev...done SUNWscgds...done SUNWscman...done SUNWscsal...done SUNWscsam...done SUNWscvm....done SUNWmdm.....done SUNWscva....done SUNWscvr....done SUNWscvw....done SUNWfsc.....done SUNWfscvw...done


SUNWjsc.....done SUNWjscman..done SUNWjscvw...done SUNWkscvw...done SUNWcsc.....done SUNWcscvw...done SUNWhscvw...done


Hit ENTER to continue: //Press <Enter> to continue

>>> Sponsoring Node <<<

For any machine to join a cluster, it must identify a node in that cluster willing to "sponsor" its membership in the cluster. When configuring a new cluster, this "sponsor" node is typically the first node used to build the new cluster. However, if the cluster is already established, the "sponsoring" node can be any node in that cluster.

Already established clusters can keep a list of hosts which are able to configure themselves as new cluster members. This machine should be in the join list of any cluster which it tries to join. If the list does not include this machine, you may need to add it using scconf(1M) or other tools.

And, if the target cluster uses DES to authenticate new machines attempting to configure themselves as new cluster members, the necessary encryption keys must be configured before any attempt to join.

What is the name of the sponsoring node?

m2000svr-1

Note: The node name above is that of the master node.

>>> Cluster Name <<<

Each cluster has a name assigned to it. When adding a node to the cluster, you must identify the name of the cluster you are attempting to join. A sanity check is performed to verify that the "sponsoring"


node is a member of that cluster.


What is the name of the cluster you want to join?

M2000Cluster

Attempting to contact "m2000svr-1" ... done

Cluster name "M2000Cluster" is correct.

Hit ENTER to continue: //Press <Enter> to continue

>>> Check <<<

This step runs sccheck(1M) to verify that certain basic hardware and software pre-configuration requirements have been met. If sccheck(1M) detects potential problems with configuring this machine as a cluster node, a list of warnings is printed.

Hit ENTER to continue: //Press <Enter> to continue

Running sccheck ... done All sccheck tests passed.

Hit ENTER to continue: //Press <Enter> to continue

>>> Autodiscovery of Cluster Transport <<<

If you are using ethernet adapters as your cluster transport adapters, autodiscovery is the best method for configuring the cluster transport.

Do you want to use autodiscovery (yes/no) [yes]?yes

Probing

.......

The following connections were discovered:

    m2000svr-1:ce1    switch1    m2000svr-2:ce1
    m2000svr-1:ce3    switch2    m2000svr-2:ce3

Is it okay to add these connections to the configuration (yes/no) [yes]?yes

>>> Global Devices File System <<<

Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate


as a cluster member. Since the "nodeID" is not assigned until scinstall is run, scinstall will set this up for you. However, in order to do this, you must supply the name of either an already-mounted file system or raw disk partition at this time. This file system or partition should be at least 512 MB in size.

If an already-mounted file system is used, the file system must be empty. If a raw disk partition is used, a new file system will be created for you.

The default is to use /globaldevices.

Is it okay to use this default (yes/no) [yes]?yes

>>> Automatic Reboot <<<

Once scinstall has successfully installed and initialized the Sun Cluster software for this machine, it will be necessary to reboot. The reboot will cause this machine to join the cluster for the first time.

Do you want scinstall to reboot for you (yes/no) [yes]?no

You will need to manually reboot this node in "cluster mode" after scinstall successfully completes.

Hit ENTER to continue: //Press <Enter> to continue

>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -ik \
   -C M2000Cluster \
   -N m2000svr-1 \
   -A trtype=dlpi,name=ce1 -A trtype=dlpi,name=ce3 \
   -m endpoint=:ce1,endpoint=switch1 \
   -m endpoint=:ce3,endpoint=switch2

Are these the options you want to use (yes/no) [yes]?yes

Do you want to continue with the install (yes/no) [yes]?yes


Checking device to use for global devices file system ... done

Adding node "m2000svr-2" to the cluster configuration ... done Adding adapter "ce1" to the cluster configuration ... done Adding adapter "ce3" to the cluster configuration ... done Adding cable to the cluster configuration ... done Adding cable to the cluster configuration ... done

Copying the config from "m2000svr-1" ... done

Setting the node ID for "m2000svr-2" ... done (id=2)

Setting the major number for the "did" driver ... Obtaining the major number for the "did" driver from "m2000svr-1" ... done "did" driver major number set to 300

Checking for global devices global file system ... done Updating vfstab ... done

Verifying that NTP is configured ... done Installing a default NTP configuration ... done Please complete the NTP configuration after scinstall has finished.

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done Adding the "cluster" switch to "hosts" in nsswitch.conf ... done

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done Unconfiguring power management ... done /etc/power.conf has been renamed to /etc/power.conf.101001022213 Power management is incompatible with the HA goals of the cluster. Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done Network routing has been disabled on this node by creating /etc/notrouter. Having a cluster node act as a router is not supported by Sun Cluster. Please do not re-enable network routing.


Hit ENTER to continue: //Press <Enter> to continue

*** Main Menu ***

Please select from one of the following (*) options:

  1) Establish a new cluster using this machine as the first node
  2) Add this machine as a node in an established cluster
  3) Configure a cluster to be JumpStarted from this install server
* 4) Add support for new data services to this cluster node
* 5) Print release information for this cluster node

* ?) Help with menu options
* q) Quit

Option: q

3) Execute the following commands to install the patch of the Sun Cluster:
# cd /cdrom/cdrom0/sun/patch/SunCluster/3.0/8
#unpack-patches
Are you ready to unpack patches into /tmp/8? [y/n]: y

#cd /tmp/8
#./install_patches


Are you ready to continue with install? [y/n]: y

4) Execute the following commands to reboot the server:
#sync;sync;sync;sync;sync
#shutdown -y -g0 -i6

5.4 Checking Sun Cluster Installation


To verify the installation of the Sun Cluster, execute the following command on both the master and slave nodes and check the output on the screen:
#scstat
------------------------------------------------------------------
-- Cluster Nodes --

                     Node name      Status
                     ---------      ------
  Cluster node:      m2000svr-1     Online
  Cluster node:      m2000svr-2     Online

------------------------------------------------------------------
-- Cluster Transport Paths --

                     Endpoint           Endpoint           Status
                     --------           --------           ------
  Transport path:    m2000svr-1:ce3     m2000svr-2:ce3     Path online
  Transport path:    m2000svr-1:ce1     m2000svr-2:ce1     Path online

------------------------------------------------------------------
-- Quorum Summary --

  Quorum votes possible:    1
  Quorum votes needed:      1
  Quorum votes present:     1

-- Quorum Votes by Node --

                     Node Name      Present  Possible  Status
                     ---------      -------  --------  ------
  Node votes:        m2000svr-1     1        1         Online
  Node votes:        m2000svr-2     0        0         Online

-- Quorum Votes by Device --

                     Device Name    Present  Possible  Status
                     -----------    -------  --------  ------

------------------------------------------------------------------
-- Device Group Servers --

                     Device Group   Primary   Secondary
                     ------------   -------   ---------

-- Device Group Status --

                     Device Group   Status
                     ------------   ------

------------------------------------------------------------------

Note: Check whether all the status items on the screen are "Online". If they are, the installation of the Sun Cluster is successful and you can proceed with the configurations in the following sections. If not, the installation has failed and you need to reinstall the Sun Cluster.
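For a quicker look at a single component, scstat in Sun Cluster 3.x also accepts per-component options (shown here only as a convenience; the full output above remains the authoritative check):
# scstat -n     //node status only
# scstat -W     //cluster transport paths only
# scstat -q     //quorum status only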


5.5 Configuring the File /etc/inet/ntp.conf.cluster


Note: Configure the file /etc/inet/ntp.conf.cluster on both the master and slave nodes.

To configure the file /etc/inet/ntp.conf.cluster, proceed as follows:
1) Edit the file /etc/inet/ntp.conf.cluster.
2) Add "#" in front of each line "peer clusternode?-priv" (where "?" stands for 3, 4, 5, ..., 16) to comment it out, so that only the entries for the two cluster nodes remain active.
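A sketch of the same edit performed non-interactively with sed, assuming the default file contains peer entries for clusternode1-priv through clusternode16-priv (adjust the ranges if your file differs):
# cp /etc/inet/ntp.conf.cluster /etc/inet/ntp.conf.cluster.orig
# sed -e 's/^peer clusternode\([3-9]\)-priv/#peer clusternode\1-priv/' \
      -e 's/^peer clusternode\(1[0-6]\)-priv/#peer clusternode\1-priv/' \
      /etc/inet/ntp.conf.cluster.orig > /etc/inet/ntp.conf.cluster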

5.6 Configuring NAFO Group


This section introduces how to configure the network adapter fail over (NAFO) group on the master and slave nodes.

5.6.1 Introduction to NAFO


The hot backup system consists of the following hardware components:
Cluster nodes with local disks (unshared)
Multi-host storage (disks shared between nodes)
Removable media (tapes and CD-ROM)
Cluster interconnect
Public network interfaces
Client systems
Administrative console
Console access devices
The SunPlex system enables you to combine these components into a variety of configurations. Figure 5-1 shows a sample of such a cluster configuration.


(NAFO: Network Adapter Fail Over)
Figure 5-1 A sample of two-node cluster configuration

A NAFO group consists of the two network interfaces connected with the public network, as shown in Figure 5-1. These two network interfaces work in the active/standby mode.

5.6.2 Creating an NAFO Group


To create a NAFO group, execute the following command as root:
# pnmset
In the following, you will be prompted to do configuration for network adapter failover

Do you want to continue ... [y/n]: y

How many NAFO groups to configure [1]:

Enter NAFO group number [0]:
Enter space-separated list of adapters in nafo0: ce0 ce2
//Specify ce0 and ce2 as the network interfaces of the group. These two interfaces work in the active/standby mode.

Checking configuration of nafo0:
Testing active adapter ce0...
Testing adapter ce2...


NAFO configuration completed


5.6.3 Checking Configuration of NAFO Group


To check whether the NAFO group is created successfully, execute the following command as root:
# pnmstat -l
group     adapters     status    fo_time    act_adp
nafo0     ce0:ce2      OK        NEVER      ce0

If the output status is "OK", it indicates that the NAFO group is created successfully.

5.7 Configuring Quorum Devices


Note: Perform the operations described in this section on the master node.

To configure quorum devices on the master node, proceed as follows:
1) Execute the following command to view the disk devices in the cluster:
#scdidadm -L
1    m2000svr-1:/dev/rdsk/c0t6d0    /dev/did/rdsk/d1
2    m2000svr-1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d2
3    m2000svr-1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d3
4    m2000svr-1:/dev/rdsk/c3t1d0    /dev/did/rdsk/d4
4    m2000svr-2:/dev/rdsk/c3t0d0    /dev/did/rdsk/d4
5    m2000svr-1:/dev/rdsk/c3t1d1    /dev/did/rdsk/d5
5    m2000svr-2:/dev/rdsk/c3t0d1    /dev/did/rdsk/d5
6    m2000svr-1:/dev/rdsk/c4t1d0    /dev/did/rdsk/d6
6    m2000svr-2:/dev/rdsk/c4t0d0    /dev/did/rdsk/d6
7    m2000svr-1:/dev/rdsk/c4t1d1    /dev/did/rdsk/d7
7    m2000svr-2:/dev/rdsk/c4t0d1    /dev/did/rdsk/d7
8    m2000svr-2:/dev/rdsk/c0t6d0    /dev/did/rdsk/d8
9    m2000svr-2:/dev/rdsk/c1t0d0    /dev/did/rdsk/d9
10   m2000svr-2:/dev/rdsk/c1t1d0    /dev/did/rdsk/d10


Note: Only disks shared by the master and slave nodes can serve as a Quorum device. From the output on the screen, you can see that the master and slave nodes share the four disks /dev/did/rdsk/d4, /dev/did/rdsk/d5, /dev/did/rdsk/d6, and /dev/did/rdsk/d7. You can select any one from these four disks as a Quorum device. In the example, /dev/did/rdsk/d7 is selected as a Quorum device.
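The interactive scsetup session in the next step ultimately issues the scconf commands that it echoes. For reference, an equivalent non-interactive sketch for this two-node case, using the d7 device selected above, would be (scsetup also takes care of the remaining installmode bookkeeping):
# scconf -a -q globaldev=d7      //add the shared disk as a quorum device
# scconf -c -q reset             //reset installmode and activate the quorum settings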

2) Execute the following command to configure the quorum device:
#scsetup


>>> Initial Cluster Setup <<<

This program has detected that the cluster "installmode" attribute is still enabled. As such, certain initial cluster setup steps will be performed at this time. This includes adding any necessary quorum devices, then resetting both the quorum vote counts and the "installmode" property.

Please do not proceed if any additional nodes have yet to join the cluster.

Is it okay to continue (yes/no) [yes]?yes

Do you want to add any quorum disks (yes/no) [yes]?yes

Dual-ported SCSI-2 disks may be used as quorum devices in two-node clusters. However, clusters with more than two nodes require that SCSI-3 PGR disks be used for all disks with more than two node-to-disk paths. You can use a disk containing user data or one that is a member of a device group as a quorum device.

Each quorum disk must be connected to at least two nodes. Please refer to the Sun Cluster documentation for more information on supported quorum device topologies.

Which global device do you want to use (d<N>)?d7


Note: The "d7" here represents the shared disk /dev/did/rdsk/d7, which is selected as a Quorum device in the above operation.

Is it okay to proceed with the update (yes/no) [yes]?yes

scconf -a -q globaldev=d7

Command completed successfully.

Hit ENTER to continue: //Press <Enter> to continue

Do you want to add another quorum disk (yes/no)?no

Once the "installmode" property has been reset, this program will skip "Initial Cluster Setup" each time it is run again in the future. However, quorum devices can always be added to the cluster using the regular menu options. Resetting this property fully activates quorum settings and is necessary for the normal and safe operation of the cluster.

Is it okay to reset "installmode" (yes/no) [yes]?yes

scconf -c -q reset
scconf -a -T node=.

Cluster initialization is complete.

Type ENTER to proceed to the main menu: //Press <Enter> to continue

*** Main Menu ***

Please select from one of the following options:

1) Quorum
2) Resource groups
3) Cluster interconnect
4) Device groups and volumes
5) Private hostnames
6) New nodes
7) Other cluster properties

?) Help with menu options
q) Quit

Option:q

3) Execute the following commands to reboot the system:
#sync;sync;sync;sync;sync
#scshutdown -y -g0
The master and slave nodes come to the "ok" prompt after shutdown. Execute the following command on both nodes to boot the system:
ok boot

5.8 Checking Installation Result


To check the installation result, proceed as follows:

I. Checking Cluster Status


Execute the following command:
#scstat
The system displays the following information if the cluster is in normal status:
------------------------------------------------------------------
-- Cluster Nodes --

                     Node name      Status
                     ---------      ------
  Cluster node:      m2000svr-1     Online
  Cluster node:      m2000svr-2     Online

------------------------------------------------------------------
-- Cluster Transport Paths --

                     Endpoint           Endpoint           Status
                     --------           --------           ------
  Transport path:    m2000svr-1:ce3     m2000svr-2:ce3     Path online
  Transport path:    m2000svr-1:ce1     m2000svr-2:ce1     Path online

------------------------------------------------------------------
-- Quorum Summary --

  Quorum votes possible:    3
  Quorum votes needed:      2
  Quorum votes present:     3

-- Quorum Votes by Node --

                     Node Name      Present  Possible  Status
                     ---------      -------  --------  ------
  Node votes:        m2000svr-1     1        1         Online
  Node votes:        m2000svr-2     1        1         Online

-- Quorum Votes by Device --

                     Device Name             Present  Possible  Status
                     -----------             -------  --------  ------
  Device votes:      /dev/did/rdsk/d7s2      1        1         Online

------------------------------------------------------------------
-- Device Group Servers --

                     Device Group   Primary   Secondary
                     ------------   -------   ---------

-- Device Group Status --

                     Device Group   Status
                     ------------   ------

------------------------------------------------------------------

II. Checking the Configuration of NAFO Group


To check whether the configuration of the NAFO group on the master node is correct, proceed as follows:
1) Check the IP addresses of the active and standby network adapters:
# ifconfig -a
ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
     inet 192.168.8.11 netmask ffffff00 broadcast 192.168.8.255
     ether 0:3:ba:28:5c:ab

The above result indicates that the IP address of m2000svr-1 is bound to the active network adapter ce1.
2) Remove the network cable from the active network adapter, and check the IP addresses of the active and standby network adapters about ten seconds later:
# ifconfig -a
ce3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
     inet 192.168.8.11 netmask ffffff00 broadcast 192.168.8.255
     ether 0:3:ba:17:45:56

At this time, the IP address of m2000svr-1 is bound to the standby network adapter ce3, which indicates that the IP address has been successfully switched to the standby network adapter.
3) Connect the network cable to the active network adapter again and remove the cable from the standby network adapter.
4) Check the IP addresses of the active and standby network adapters by executing the command ifconfig -a about ten seconds later. If the execution result is the same as that in step 1, the IP address has been switched back to the active network adapter.

Note: You can use the same method to check the configuration of the NAFO on the slave node.


Chapter 6 Installing the Volume Manager


Note: The Volume Manager is a product of VERITAS Software Corporation. For detailed instructions, see the related technical documents of VERITAS Volume Manager. VERITAS Software Corporation reserves all the rights of the company name, trademarks, and manual names involved in this chapter.

This chapter introduces how to install the Volume Manager on the master and slave nodes.

Caution Do not install the VERITAS Volume Manager on the master and slave nodes simultaneously. Install it on the master node first, and then on the slave node.

6.1 Checks Before Installation


Note: Perform the operations described in this section on the master and slave nodes.

Before installing the VERITAS Volume Manager, check the following:
Whether the Sun Cluster is correctly installed. Check the status of the cluster and the configuration of the NAFO group. For details, see section 5.8 "Checking Installation Result".
Whether the SUNWsan software package is installed. Execute the following command:
#pkginfo | grep SUNWsan
system SUNWsan SAN Foundation Kit

If the system displays the above information, it indicates that the software package is installed.


Whether the patches of the VERITAS Volume Manager are installed. Execute the following commands:
#showrev -p | grep 111413
#showrev -p | grep 111412
#showrev -p | grep 111095
If each command displays output on the screen, the corresponding patch is installed.
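The same check can be collapsed into a small Bourne-shell loop that flags any missing patch (a convenience sketch using the patch IDs listed above):
# for p in 111413 111412 111095; do showrev -p | grep $p > /dev/null || echo "patch $p is not installed"; done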

6.2 Installing the Volume Manager


Note: Unless otherwise specified, perform the operations described in this section on the master and slave nodes.

Proceed as follows to install the VERITAS Volume Manager:
1) Insert the CD of the VERITAS Volume Manager into the CD-ROM.
2) Execute the following commands to install the VERITAS Volume Manager:
# cd /cdrom/cdrom0
# scvxinstall
Do you want Volume Manager to encapsulate root [no]?yes

Where is the Volume Manager cdrom?/cdrom/cdrom0
Disabling DMP.
Installing packages from /cdrom/cdrom0/pkgs.
Installing VRTSvlic.
Installing VRTSvxvm.
Installing VRTSvmdev.
Installing VRTSvmman.
Obtaining the clusterwide vxio number...
Using 272 as the vxio major number.
Volume Manager installation is complete.

Please enter a Volume Manager license key: 7234 4164 7106 8669 8949 077

Note: The license here is the DEMO license. Apply for an official license from Sun Microsystems, Inc. when installing the VERITAS Volume Manager on site.


Installing Volume Manager license. Verifying encapsulation requirements.


The Volume Manager root disk encapsulation step will begin in 20 seconds.
Type Ctrl-C to abort ....................
Arranging for Volume Manager encapsulation of the root disk.
The vxconfigd daemon has been started and is in disabled mode...
Reinitialized the volboot file...
Created the rootdg...
Added the rootdisk to the rootdg...
The setup to encapsulate the rootdisk is complete...
Updating /global/.devices entry in /etc/vfstab.

This node will be re-booted in 20 seconds. Type Ctrl-C to abort....................

Then the system reboots automatically twice.
3) Execute the following commands to install the other software packages of the VERITAS Volume Manager:
# cd /cdrom/cdrom0/pkgs
# pkgadd -d . VRTSvmsa VRTSfsdoc
Processing package instance <VRTSvmsa> from

</opt/sun/software/foundationproduct3.4sunw/pkgs>

VERITAS Volume Manager Storage Administrator (sparc) 3.2,REV=07.27.2001.19.47 Copyright (C) 2001 VERITAS Software Corporation. All Rights Reserved.

VERITAS, VERITAS SOFTWARE, the VERITAS logo, Business Without Interruption, VERITAS The Data Availability Company and Volume Manager are trademarks or registered trademarks of VERITAS Software Corporation in the US and/or other countries. Other product names mentioned herein may be trademarks or registered trademarks of their respective companies.

Where should this package be installed? (default: /opt/VRTSvmsa) [?,q]
Hit ENTER to continue: //Press <Enter> to continue.

Should the Volume Manager Storage Administrator Server be installed on this system? (The Volume Manager Storage Administrator Client will

be installed regardless) (default: y) [y,n,?,q] y

The selected base directory </opt/VRTSvmsa> must exist before installation is attempted. Do you want this directory created now [y,n,?,q] y


Using </opt/VRTSvmsa> as the package base directory. ## Processing package information. ## Processing system information. 4 package pathnames are already properly installed. ## Verifying disk space requirements. ## Checking for conflicts with packages already installed. ## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user permission during the process of installing this package.

Do you want to continue with the installation of <VRTSvmsa> [y,n,?] y .................

Installation of <VRTSvmsa> was successful.

Processing package instance <VRTSfsdoc> from </opt/sun/software/foundationproduct3.4sunw/pkgs>

VERITAS File System Documentation (sparc) 3.4,REV=GA03 Copyright (c) 1991 - 2000 VERITAS SOFTWARE CORP. ALL RIGHTS RESERVED.

THIS SOFTWARE IS THE PROPERTY OF AND IS LICENSED BY VERITAS SOFTWARE, AND/OR ITS SUPPLIERS. Using </opt> as the package base directory. ## Processing package information. ## Processing system information. ## Verifying disk space requirements. ## Checking for conflicts with packages already installed. ## Checking for setuid/setgid programs.

Installing VERITAS File System Documentation as <VRTSfsdoc>

## Installing part 1 of 1. /opt/VRTSfsdoc/install/fsinstall.pdf /opt/VRTSfsdoc/install/fsinstall.ps /opt/VRTSfsdoc/sys_admin/fssag.pdf /opt/VRTSfsdoc/sys_admin/fssag.ps [ verifying class <all> ] Installation of <VRTSfsdoc> was successful.

4) Execute the following command to disable DMP:
# vxdiskadm
Volume Manager Support Operations Menu: VolumeManager/Disk

 1      Add or initialize one or more disks
 2      Encapsulate one or more disks
 3      Remove a disk
 4      Remove a disk for replacement
 5      Replace a failed or removed disk
 6      Mirror volumes on a disk
 7      Move volumes from a disk
 8      Enable access to (import) a disk group
 9      Remove access to (deport) a disk group
 10     Enable (online) a disk device
 11     Disable (offline) a disk device
 12     Mark a disk as a spare for a disk group
 13     Turn off the spare flag on a disk
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 16     Make a disk available for hot-relocation use
 17     Prevent multipathing/Suppress devices from VxVM's view
 18     Allow multipathing/Unsuppress devices from VxVM's view
 19     List currently suppressed/non-multipathed devices
 20     Change the disk naming scheme
 21     Get the newly connected/zoned disks in VxVM view
 list   List disk information

 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus

Select an operation to perform: 17

Volume Manager Installation Menu: VolumeManager/Disk/ExcludeDevices

This operation might lead to some devices being suppressed from VxVM's view or prevent them from being multipathed by vxdmp (This operation can be reversed using the vxdiskadm command). Do you want to continue ? [y,n,q,?] (default: y) y

Volume Manager Device Operations


Menu: VolumeManager/Disk/ExcludeDevices

 1      Suppress all paths through a controller from VxVM's view
 2      Suppress a path from VxVM's view
 3      Suppress disks from VxVM's view by specifying a VID:PID combination
 4      Suppress all but one paths to a disk
 5      Prevent multipathing of all disks on a controller by VxVM
 6      Prevent multipathing of a disk by VxVM
 7      Prevent multipathing of disks by specifying a VID:PID combination
 8      List currently suppressed/non-multipathed devices

 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus

Select an operation to perform: 5

Exclude controllers from DMP Menu: VolumeManager/Disk/ExcludeDevices/CTLR-DMP

Use this operation to exclude all disks on a controller from being multipathed by vxdmp.

As a result of this operation, all disks having a path through the specified controller will be claimed in the OTHER_DISKS category and hence, not multipathed by vxdmp. This operation can be reversed using the vxdiskadm command.

You can specify a controller name at the prompt. A controller name is of the form c#, example c3, c11 etc. Enter 'all' to exclude all paths on all the controllers on the host. To see the list of controllers on the system, type 'list'.

Enter a controller name [<ctlr-name>,all,list,list-exclude,q,?] all

No disk will be multipathed by vxdmp as a result of this operation !

Continue operation? [y,n,q,?] (default: n) y

This operation will take effect only after a reboot. Hit RETURN to continue. //Press <Enter> to continue

Volume Manager Device Operations


Menu: VolumeManager/Disk/ExcludeDevices

 1      Suppress all paths through a controller from VxVM's view
 2      Suppress a path from VxVM's view
 3      Suppress disks from VxVM's view by specifying a VID:PID combination
 4      Suppress all but one paths to a disk
 5      Prevent multipathing of all disks on a controller by VxVM
 6      Prevent multipathing of a disk by VxVM
 7      Prevent multipathing of disks by specifying a VID:PID combination
 8      List currently suppressed/non-multipathed devices

 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus

Select an operation to perform: q

Volume Manager Support Operations Menu: VolumeManager/Disk

 1      Add or initialize one or more disks
 2      Encapsulate one or more disks
 3      Remove a disk
 4      Remove a disk for replacement
 5      Replace a failed or removed disk
 6      Mirror volumes on a disk
 7      Move volumes from a disk
 8      Enable access to (import) a disk group
 9      Remove access to (deport) a disk group
 10     Enable (online) a disk device
 11     Disable (offline) a disk device
 12     Mark a disk as a spare for a disk group
 13     Turn off the spare flag on a disk
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 16     Make a disk available for hot-relocation use
 17     Prevent multipathing/Suppress devices from VxVM's view
 18     Allow multipathing/Unsuppress devices from VxVM's view
 19     List currently suppressed/non-multipathed devices
 20     Change the disk naming scheme
 21     Get the newly connected/zoned disks in VxVM view
 list   List disk information

 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus

Select an operation to perform: q

The system must be shut down and rebooted for the device suppression/ unsuppression operations you have performed to take effect. To shutdown your system, cd to / and type

shutdown -g0 -y -i6

Do not attempt to use the device suppression/unsuppression operations again before the system is rebooted.

Goodbye.

5) Insert the CD of the patch 3.1, log in to the system as root, and execute the following commands to install the patches of the VERITAS Volume Manager:
# cd /cdrom/cdrom0/sun/patch/veritas-vm/3.2/8
# unpack-patches
Are you ready to unpack patches into /tmp/8? [y/n]: y

# cd /tmp/8
# ./install_patches
Are you ready to continue with install? [y/n]: y

6) Execute the following commands to reboot the system:
# sync;sync;sync;sync;sync
# shutdown -y -g0 -i6

6.3 Checking Volume Manager Installation


Note: Perform the operations described in this section on both the master and slave nodes.

After installing the VERITAS Volume Manager, check the following:
Whether the system can be started normally.
Whether the statuses of all volumes are "ACTIVE". Execute the following command:
#vxprint
Disk group: rootdg

TY NAME            ASSOC        KSTATE   LENGTH    PLOFFS   STATE    TUTIL0  PUTIL0
dg rootdg          rootdg       -        -         -        -        -       -
dm rootdisk_1      c1t0d0s2     -        71124291  -        -        -       -

v  node@1          fsgen        ENABLED  1025595   -        ACTIVE   -       -
pl node@1-01       node@1       ENABLED  1025595   -        ACTIVE   -       -
sd rootdisk_1-03   node@1-01    ENABLED  1025595   0        -        -       -

v  opt             fsgen        ENABLED  16779312  -        ACTIVE   -       -
pl opt-01          opt          ENABLED  16779312  -        ACTIVE   -       -
sd rootdisk_1-04   opt-01       ENABLED  16779312  0        -        -       -

v  rootvol         root         ENABLED  35211132  -        ACTIVE   -       -
pl rootvol-01      rootvol      ENABLED  35211132  -        ACTIVE   -       -
sd rootdisk_1-B0   rootvol-01   ENABLED  1         0        -        Block0  -
sd rootdisk_1-02   rootvol-01   ENABLED  35211131  1        -        -       -

v  swapvol         swap         ENABLED  16779312  -        ACTIVE   -       -
pl swapvol-01      swapvol      ENABLED  16779312  -        ACTIVE   -       -
sd rootdisk_1-01   swapvol-01   ENABLED  16779312  0        -        -       -
Whether the root disk is encapsulated. Execute the following command:
#df -k
If the root disk is encapsulated, the system displays the following information. The devices for the /, /opt, and /global/.devices file systems must all contain /dev/vx/dsk/.
Filesystem                  kbytes     used     avail  capacity  Mounted on
/dev/vx/dsk/rootvol       17331224  1300892  15857020        8%  /
/proc                            0        0         0        0%  /proc
fd                               0        0         0        0%  /dev/fd
mnttab                           0        0         0        0%  /etc/mnttab
swap                       9824664      136   9824528        1%  /var/run
swap                       9824552       24   9824528        1%  /tmp
/dev/vx/dsk/opt            8258597   316749   7859263        4%  /opt
/dev/vx/dsk/node@1          481316     3594    429591        1%  /global/.devices/node@1
/dev/vx/dsk/rootdisk_24vol  481316     3594    429591        1%  /global/.devices/node@2


Note: The output shown above is for the master node. On the slave node, rootdisk_1 in the output appears as rootdisk_2 instead.


Chapter 7 Configuring the Cluster


This chapter describes how to configure the cluster.

7.1 Mirroring the Root Disk


Note: Unless otherwise specified, perform the operations described in this section on the master and slave nodes.

Proceed as follows to mirror the root disk:
1) Execute the following command to view the disk information:
#vxdisk list
DEVICE       TYPE      DISK          GROUP      STATUS
c1t0d0s2     sliced    rootdisk_1    rootdg     online
c1t1d0s2     sliced    -             -          error
c3t1d0s2     sliced    -             -          error
c3t1d1s2     sliced    -             -          error
c4t1d0s2     sliced    -             -          error
c4t1d1s2     sliced    -             -          error

Select the disk according to the actual situation. In this example, the second local disk, c1t1d0s2, is selected as the mirror of the root disk.
2) Execute the following commands to initialize the mirror disk:
#vxdiskunsetup -C c1t1d0
#vxdisksetup -i c1t1d0
#vxdisk list
DEVICE       TYPE      DISK          GROUP      STATUS
c1t0d0s2     sliced    rootdisk_1    rootdg     online
c1t1d0s2     sliced    -             -          online
c3t1d0s2     sliced    -             -          error
c3t1d1s2     sliced    -             -          error
c4t1d0s2     sliced    -             -          error
c4t1d1s2     sliced    -             -          error

The status of c1t1d0s2 is online now.


3) Add the mirror disk to rootdg. Execute the following command:
On the master node:
#vxdg -g rootdg adddisk mirrdisk_1=c1t1d0
On the slave node:
#vxdg -g rootdg adddisk mirrdisk_2=c1t1d0

4) Mirror the root disk. Execute the following command:
On the master node:
#vxmirror rootdisk_1 mirrdisk_1
On the slave node:
#vxmirror rootdisk_2 mirrdisk_2
After the above operations, the terminal of the master node displays the following prompts:
! vxassist -g rootdg mirror rootvol mirrdisk_1
! vxassist -g rootdg mirror swapvol mirrdisk_1
! vxassist -g rootdg mirror node@1 mirrdisk_1
! vxassist -g rootdg mirror opt mirrdisk_1
! vxbootsetup

When the vxbootsetup prompt appears, the mirroring of the root disk is complete. The mirroring process takes 30 to 40 minutes. The way to check whether the mirroring of the root disk on the slave node is complete is the same as on the master node.
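An additional optional check, assuming the disk media names mirrdisk_1 (master node) and mirrdisk_2 (slave node) used above, is to list the rootdg records that reside on the mirror disk; rootvol, swapvol, opt, and the /global/.devices volume should each show a plex and subdisk built from it:
# vxprint -g rootdg -ht | grep mirrdisk_1
On the slave node, grep for mirrdisk_2 instead.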

7.2 Creating m2000_rg


Note: Unless otherwise specified, perform the operations described in this section only on the master node.

Proceed as follows to create m2000_rg.

I. Viewing the Disk Information


Execute the following command:
#vxdisk list
DEVICE       TYPE      DISK          GROUP      STATUS
c1t0d0s2     sliced    rootdisk_1    rootdg     online
c1t1d0s2     sliced    mirrdisk_1    rootdg     online
c3t1d0s2     sliced    -             -          error
c3t1d1s2     sliced    -             -          error
c4t1d0s2     sliced    -             -          error
c4t1d1s2     sliced    -             -          error

Among the above disks, the capacity of c3t1d1 and c4t1d1 is 100.80 GB. Select these two disks for the m2000dg disk group.

Note: The disk name may vary with the device. Select the disk according to actual situations.

II. Creating the m2000dg Disk Group and Volume


To create the m2000dg disk group and volumes, proceed as follows:
1) Execute the following commands to create the disk group and volumes:
# cd /opt/m2kv1/volume/n20/cluster
# ./mkvol.sh c3t1d1 c4t1d1
2) Execute the following command to check whether the disk group is created:
# vxdisk -g m2000dg list
DEVICE       TYPE      DISK          GROUP      STATUS
c3t1d1s2     sliced    m2000dg01     m2000dg    online
c4t1d1s2     sliced    m2000dg02     m2000dg    online

3) Execute the following command to check whether the volumes are created:
# vxprint -g m2000dg
TY NAME             ASSOC        KSTATE   LENGTH     PLOFFS   STATE    TUTIL0  PUTIL0
dm m2000dg01        c3t1d1s2     -        211394560  -        -        -       -
dm m2000dg02        c4t1d1s2     -        211394560  -        -        -       -

v  data             gen          ENABLED  37748736   -        ACTIVE   -       -
pl data-01          data         ENABLED  37748736   -        ACTIVE   -       -
sd m2000dg01-03     data-01      ENABLED  37748736   0        -        -       -

v  fsdata           fsgen        ENABLED  131072000  -        ACTIVE   -       -
pl fsdata-01        fsdata       ENABLED  131072000  -        ACTIVE   -       -
sd m2000dg01-05     fsdata-01    ENABLED  131072000  0        -        -       -

v  log              gen          ENABLED  37748736   -        ACTIVE   -       -
pl log-01           log          ENABLED  37748736   -        ACTIVE   -       -
sd m2000dg01-04     log-01       ENABLED  37748736   0        -        -       -

v  master           gen          ENABLED  2457600    -        ACTIVE   -       -
pl master-01        master       ENABLED  2457600    -        ACTIVE   -       -
sd m2000dg01-01     master-01    ENABLED  2457600    0        -        -       -

v  sysproc          gen          ENABLED  409600     -        ACTIVE   -       -
pl sysproc-01       sysproc      ENABLED  409600     -        ACTIVE   -       -
sd m2000dg01-02     sysproc-01   ENABLED  409600     0        -        -       -

If the system displays the above execution result, the volumes (data, log, master, sysproc, and fsdata) are successfully created. Otherwise, find out the reason and recreate the M2000 disk group and volumes.
4) Execute the following command to mirror the volumes:
# ./mirvol.sh
5) The mirroring operation is then performed inside the system (the operation may last 1.5 to 2 hours).
6) View the mirroring progress using the following command:
# vxtask list
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
164           ATCOPY/R    00.40%  0/131072000/527248 PLXATT fsdata fsdata-02
165           ATCOPY/R    18.15%  0/2457600/446152 PLXATT master master-02
167           ATCOPY/R    01.14%  0/37748736/432016 PLXATT log log-02
168           ATCOPY/R    01.14%  0/37748736/431520 PLXATT data data-02

The PCT column shows the current mirroring progress of each volume. When vxtask list no longer displays any mirroring task, the mirroring operation is complete.

Caution: The script mkvol.sh is used to create the disk group m2000dg and the necessary volumes, taking the two disks as parameters. The script mirvol.sh is used to mirror the volumes; the second disk (on the second disk array) given in the parameters serves as the mirror disk. The paths of these two scripts are:
Path of the Netra20 script: /opt/m2kv1/volume/n20/cluster
Path of the V880 script: /opt/m2kv1/volume/v880/cluster
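While the mirroring started by mirvol.sh runs, a small Bourne-shell sketch can be used to wait until all mirroring tasks have finished before continuing (a convenience only; checking vxtask list by hand as above works just as well):
# while vxtask list | grep ATCOPY > /dev/null; do sleep 300; done; echo "volume mirroring finished"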


III. Adding m2000dg to the Disk Group of the Cluster


Execute the following command to register the M2000 disk group with the cluster as a device group:
# scsetup
*** Main Menu ***

Please select from one of the following options:

1) Quorum 2) Resource groups 3) Cluster interconnect 4) Device groups and volumes 5) Private hostnames 6) New nodes 7) Other cluster properties

?) Help with menu options q) Quit

Option:4

*** Device Groups Menu ***

Please select from one of the following options:

1) Register a VxVM disk group as a device group 2) Synchronize volume information for a VxVM device group 3) Unregister a VxVM device group 4) Add a node to a VxVM device group 5) Remove a node from a VxVM device group 6) Change key properties of a device group

?) Help q) Return to the Main Menu

Option:1

>>> Register a VxVM Disk Group as a Device Group <<<

VERITAS Volume Manager disk groups are always managed by the cluster as cluster device groups. This option is used to register a VxVM disk group with the cluster as a cluster device group.

Is it okay to continue (yes/no) [yes]?yes

Name of the VxVM disk group you want to register?m2000dg

Primary ownership of a device group is determined by either specifying or not specifying a preferred ordering of the nodes that can own the device group. If an order is specified, this will be the order in which nodes will attempt to establish ownership. If an order is not specified, the first node that attempts to access a disk in the device group becomes the owner.

Do you want to configure a preferred ordering (yes/no) [yes]?yes

Are both nodes attached to all disks in this group (yes/no) [yes]?yes

Which node is the preferred primary for this device group?

m2000svr-1

Enable "failback" for this disk device group (yes/no) [no]?no

Is it okay to proceed with the update (yes/no) [yes]?yes

scconf -a -D type=vxvm,name=m2000dg,nodelist=m2000svr-1:m2000svr-2,preferenced=true

Command completed successfully.

Hit ENTER to continue: //Press <Enter> to continue

*** Device Groups Menu *** Please select from one of the following options:

1) Register a VxVM disk group as a device group 2) Synchronize volume information for a VxVM device group 3) Unregister a VxVM device group 4) Add a node to a VxVM device group 5) Remove a node from a VxVM device group 6) Change key properties of a device group

?) Help q) Return to the Main Menu Option:2


>>> Synchronize Volume Information for a VxVM Device Group <<<

VERITAS Volume Manager disk groups are always managed by the cluster as cluster device groups. This option is used to synchronize volume information for a VxVM device group between the VxVM software and the clustering software. It should be selected anytime a volume is either added to or removed from a VxVM disk group. Otherwise, the cluster will be unaware of the changes.

Is it okay to continue (yes/no) [yes]?yes

Name of the VxVM device group you want to synchronize?

m2000dg

Is it okay to proceed with the update (yes/no) [yes]?

yes

scconf -c -D name=m2000dg,sync

Command completed successfully.

Hit ENTER to continue: //Press <Enter> to continue

*** Device Groups Menu *** Please select from one of the following options:

1) Register a VxVM disk group as a device group 2) Synchronize volume information for a VxVM device group 3) Unregister a VxVM device group 4) Add a node to a VxVM device group 5) Remove a node from a VxVM device group 6) Change key properties of a device group ?) Help q) Return to the Main Menu

Option:q

*** Main Menu ***

Please select from one of the following options:

1) Quorum 2) Resource groups 3) Cluster interconnect 4) Device groups and volumes
5) Private hostnames 6) New nodes 7) Other cluster properties

?) Help with menu options q) Quit

Option:q

IV. Creating the New File System in the Volume fsdata


Execute the following command to create the new file system on the volume fsdata:
# newfs /dev/vx/rdsk/m2000dg/fsdata
newfs: construct a new file system /dev/vx/rdsk/m2000dg/fsdata: (y/n)? y

V. Creating the Mount Point


Execute the following commands on both the master and slave nodes:
# mkdir -p /export/home
# vi /etc/vfstab
Add the following line to the /etc/vfstab file:
/dev/vx/dsk/m2000dg/fsdata /dev/vx/rdsk/m2000dg/fsdata /export/home ufs 2 no -
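After editing the file, you can verify the entry and the mount point on each node, for example:

# grep m2000dg /etc/vfstab     //the new fsdata line should be displayed
# ls -ld /export/home          //the mount point directory must exist on both nodes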

VI. Creating the m2000_rg Resource Group


Execute the following command: # scsetup
*** Main Menu ***

Please select from one of the following options:

1) Quorum 2) Resource groups 3) Cluster interconnect 4) Device groups and volumes 5) Private hostnames 6) New nodes 7) Other cluster properties

?) Help with menu options q) Quit

Option:

*** Resource Group Menu ***

Please select from one of the following options:

1) Create a resource group 2) Add a network resource to a resource group 3) Add a data service resource to a resource group

?) Help q) Return to the previous Menu

Option:

>>> Create a Resource Group <<<

This option is used to create a new resource group. It also supports the creation of new resources for the new group.

A resource group is a container into which you can place resources of various types, such as network and data service resources. The cluster uses resource groups to manage its resource types. There are but two types of resource groups, failover and scalable.

Only failover resource groups may contain network resources. A network resource, in turn, is either a LogicalHostname or SharedAddress resource.

It is important to remember that each scalable resource group depends upon a failover resource group which contains one or more network resources of the SharedAddress resource type.

Is it okay to continue (yes/no) [yes]?

YES

Select the type of resource group you want to add:

1) Failover Group 2) Scalable Group

Option: 1

What is the name of the group you want to add?

m2000_rg

Do you want to add an optional description (yes/no) [no]?

Since this cluster has two nodes, the new resource group will be configured to be hosted by both cluster nodes.

Which is the preferred node for hosting this group?

m2000svr-1

Some types of resources (e.g., HA for NFS) require the use of an area in a global file system for storing configuration data. If any of the resources that will later be added to this group will require such support, you can specify the full directory path name now.

Do you want to specify such a directory now (yes/no) [no]? no

Is it okay to proceed with the update (yes/no) [yes]? yes

scrgadm -a -g m2000_rg -h m2000svr-1,m2000svr-2

Command completed successfully.

Hit ENTER to continue: Do you want to add any network resources now (yes/no) [yes]? yes

Select the type of network resource you want to add:

1) LogicalHostname 2) SharedAddress

Option:

If a failover resource group contains SharedAddress resources, the most common configuration is to have one SharedAddress resource for each subnet. Therefore, scsetup(1M) assumes that there will be one such resource for each subnet.

For how many subnets do you want to add such a resource [1]? 1 Is it okay to configure the first logicalhostname resource (yes/no) [yes]? yes


Each network resource manages a list of one or more logical hostnames for a single subnet. This is true whether the resource is a LogicalHostname or SharedAddress resource type. The most common configuration is to assign a single logical hostname to each network resource for each subnet. Therefore, scsetup(1M) only supports this configuration. If you need to support more than one hostname for a given subnet, the additional support can be added using scrgadm(1M).

Before scsetup(1M) can create a network resource for any logical hostname, that hostname must first be specified in the hosts(4) data for each node in the cluster. In addition, NAFO-configured network adapters must be actively available on each of the nodes.

What logical hostname do you want to add?

m2000svr

Is it okay to proceed with the update (yes/no) [yes]? yes

scrgadm -a -L -g m2000_rg -l m2000svr

Command completed successfully.

Hit ENTER to continue:

Do you want to add any additional data service resources (yes/no) [no]? y

Please wait - looking up resource types .........

Please select the type of resource you want to add: 1) SUNW.HAStorage HA Storage Resource Type

2) SUNW.HAStoragePlus HA Storage Plus - A Resource Type which subsumes HA Storage 3) SUNW.RGOffload 4) SUNW.gds Offload Resource Group Generic Data Service for Sun Cluster

Option:

What is the name of the resource you want to add?

m2kdata_rs

Please wait - looking up resource properties .........................

Some resource types support the setting of certain extension properties. Please check the documentation for your data service to determine whether or not you need to set any extension properties for the resource you are adding.

Any extension properties you would like to set (yes/no) [yes]? yes

Here are the extension properties for this resource:

Property Name               Default Setting
=============               ===============
GlobalDevicePaths           <NULL>
FilesystemMountPoints       <NULL>
AffinityOn                  True
FilesystemCheckCommand      <NULL>
RunBeforeStartMethod        <NULL>
RunAfterStartMethod         <NULL>
RunBeforeStopMethod         <NULL>
RunAfterStopMethod          <NULL>

Please enter the list of properties you want to set: (Type Ctrl-D to finish OR "?" for help)

Property name:          FilesystemMountPoints
Property description:   The list of file system mountpoints
Property value:         /export/home

Property name:

^D

Here is the list of extension properties you entered:

FilesystemMountPoints=/export/home

Is it correct (yes/no) [yes]?

Yes

Is it okay to proceed with the update (yes/no) [yes]? yes

scrgadm -a -j m2kdata_rs -g m2000_rg -t SUNW.HAStoragePlus -x FilesystemMountPoints=/export/home

Command completed successfully.

Hit ENTER to continue: Do you want to add any additional data service resources (yes/no) [no]? no


Do you want to bring this resource group online now (yes/no) [yes]? yes

scswitch -Z -g m2000_rg

Command completed successfully.

Hit ENTER to continue:

*** Resource Group Menu ***

Please select from one of the following options:

1) Create a resource group 2) Add a network resource to a resource group 3) Add a data service resource to a resource group

?) Help q) Return to the previous Menu

Option:

*** Main Menu ***

Please select from one of the following options: 1) Quorum 2) Resource groups 3) Cluster interconnect 4) Device groups and volumes 5) Private hostnames 6) New nodes 7) Other cluster properties

?) Help with menu options q) Quit

Option:
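If you prefer the command line, the following commands summarize what scsetup executed for this resource group. They are listed for reference only; the resource-type registration line is included in case SUNW.HAStoragePlus has not been registered on the cluster yet:

# scrgadm -a -t SUNW.HAStoragePlus                  //register the resource type if necessary
# scrgadm -a -g m2000_rg -h m2000svr-1,m2000svr-2   //create the failover resource group
# scrgadm -a -L -g m2000_rg -l m2000svr             //add the logical hostname resource
# scrgadm -a -j m2kdata_rs -g m2000_rg -t SUNW.HAStoragePlus -x FilesystemMountPoints=/export/home
# scswitch -Z -g m2000_rg                           //bring the resource group online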

7.3 Checking Installation Result


After configuring the cluster, proceed as follows to check the installation result:


I. Checking the Mirroring of the Root Disk (the Status of the Disk Volume)
Execute the following command to check the mirroring of the root disk: # vxprint -g rootdg
TY NAME            ASSOC        KSTATE    LENGTH    PLOFFS   STATE    TUTIL0  PUTIL0
dg rootdg          rootdg       -         -         -        -        -       -

dm mirrdisk_1      c1t1d0s2     -         71124291  -        -        -       -
dm rootdisk_1      c1t0d0s2     -         71124291  -        -        -       -

v  node@1          fsgen        ENABLED   1025595   -        ACTIVE   -       -
pl node@1-01       node@1       ENABLED   1025595   -        ACTIVE   -       -
sd rootdisk_1-03   node@1-01    ENABLED   1025595   0        -        -       -
pl node@1-02       node@1       ENABLED   1025595   -        ACTIVE   -       -
sd mirrdisk_1-03   node@1-02    ENABLED   1025595   0        -        -       -

v  opt             fsgen        ENABLED   16779312  -        ACTIVE   -       -
pl opt-01          opt          ENABLED   16779312  -        ACTIVE   -       -
sd rootdisk_1-04   opt-01       ENABLED   16779312  0        -        -       -
pl opt-02          opt          ENABLED   16779312  -        ACTIVE   -       -
sd mirrdisk_1-04   opt-02       ENABLED   16779312  0        -        -       -

v  rootvol         root         ENABLED   35211132  -        ACTIVE   -       -
pl rootvol-01      rootvol      ENABLED   35211132  -        ACTIVE   -       -
sd rootdisk_1-B0   rootvol-01   ENABLED   1         0        -        -       Block0
sd rootdisk_1-02   rootvol-01   ENABLED   35211131  1        -        -       -
pl rootvol-02      rootvol      ENABLED   35211132  -        ACTIVE   -       -
sd mirrdisk_1-01   rootvol-02   ENABLED   35211132  0        -        -       -

v  swapvol         swap         ENABLED   16779312  -        ACTIVE   -       -
pl swapvol-01      swapvol      ENABLED   16779312  -        ACTIVE   -       -
sd rootdisk_1-01   swapvol-01   ENABLED   16779312  0        -        -       -
pl swapvol-02      swapvol      ENABLED   16779312  -        ACTIVE   -       -
sd mirrdisk_1-02   swapvol-02   ENABLED   16779312  0        -        -       -

The KSTATE of all volumes must be "ENABLED" and the STATE must be "ACTIVE".
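For example, the following command lists any rootdg volume whose KSTATE is not "ENABLED" or whose STATE is not "ACTIVE"; empty output means all volumes are healthy. The column positions assume the vxprint output format shown above:

# vxprint -g rootdg -v | awk '$1 == "v" && ($4 != "ENABLED" || $7 != "ACTIVE")'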

II. Checking the Mirroring of the M2000 Volume (Mirroring of Disk Array)
Execute the following command to check the mirroring of the M2000 volume: # vxprint -g m2000dg
TY NAME            ASSOC        KSTATE    LENGTH     PLOFFS   STATE    TUTIL0  PUTIL0
dg m2000dg         m2000dg      -         -          -        -        -       -

dm m2000dg01       c3t1d1s2     -         211394560  -        -        -       -
dm m2000dg02       c4t1d1s2     -         211394560  -        -        -       -

v  data            gen          ENABLED   37748736   -        ACTIVE   -       -
pl data-01         data         ENABLED   37748736   -        ACTIVE   -       -
sd m2000dg01-03    data-01      ENABLED   37748736   0        -        -       -
pl data-02         data         ENABLED   37748736   -        ACTIVE   -       -
sd m2000dg02-05    data-02      ENABLED   37748736   0        -        -       -

v  fsdata          fsgen        ENABLED   131072000  -        ACTIVE   -       -
pl fsdata-01       fsdata       ENABLED   131072000  -        ACTIVE   -       -
sd m2000dg01-05    fsdata-01    ENABLED   131072000  0        -        -       -
pl fsdata-02       fsdata       ENABLED   131072000  -        ACTIVE   -       -
sd m2000dg02-01    fsdata-02    ENABLED   131072000  0        -        -       -

v  log             gen          ENABLED   37748736   -        ACTIVE   -       -
pl log-01          log          ENABLED   37748736   -        ACTIVE   -       -
sd m2000dg01-04    log-01       ENABLED   37748736   0        -        -       -
pl log-02          log          ENABLED   37748736   -        ACTIVE   -       -
sd m2000dg02-04    log-02       ENABLED   37748736   0        -        -       -

v  master          gen          ENABLED   2457600    -        ACTIVE   -       -
pl master-01       master       ENABLED   2457600    -        ACTIVE   -       -
sd m2000dg01-01    master-01    ENABLED   2457600    0        -        -       -
pl master-02       master       ENABLED   2457600    -        ACTIVE   -       -
sd m2000dg02-02    master-02    ENABLED   2457600    0        -        -       -

v  sysproc         gen          ENABLED   409600     -        ACTIVE   -       -
pl sysproc-01      sysproc      ENABLED   409600     -        ACTIVE   -       -
sd m2000dg01-02    sysproc-01   ENABLED   409600     0        -        -       -
pl sysproc-02      sysproc      ENABLED   409600     -        ACTIVE   -       -
sd m2000dg02-03    sysproc-02   ENABLED   409600     0        -        -       -

The KSTATE of all volumes must be "ENABLED" and the STATE must be "ACTIVE".

III. Checking the Mount Status of the Volume fsdata


To check the mount status of the volume fsdata, proceed as follows:
1) Execute the following command on the master node:
# df -k
Filesystem                  kbytes    used     avail     capacity  Mounted on
/dev/vx/dsk/rootvol         17331224  1300967  15856945    8%      /
/proc                       0         0        0           0%      /proc
fd                          0         0        0           0%      /dev/fd
mnttab                      0         0        0           0%      /etc/mnttab
swap                        9798024   136      9797888     1%      /var/run
swap                        9798264   376      9797888     1%      /tmp
/dev/vx/dsk/opt             8258597   316749   7859263     4%      /opt
/dev/vx/dsk/node@1          481316    3596     429589      1%      /global/.devices/node@1
/dev/vx/dsk/rootdisk_24vol  481316    3596     429589      1%      /global/.devices/node@2
/dev/vx/dsk/m2000dg/fsdata  64511275  9        63866154    1%      /export/home

The disk device /dev/vx/dsk/m2000dg/fsdata must be mounted to the directory /export/home, as shown in the last line above.
2) Execute the following command on the slave node:
# df -k
Filesystem                  kbytes    used     avail     capacity  Mounted on
/dev/vx/dsk/rootvol         17331224  1311970  15845942    8%      /
/proc                       0         0        0           0%      /proc
fd                          0         0        0           0%      /dev/fd
mnttab                      0         0        0           0%      /etc/mnttab
swap                        9803992   136      9803856     1%      /var/run
swap                        9803912   56       9803856     1%      /tmp
/dev/vx/dsk/opt             8258597   351734   7824278     5%      /opt
/dev/vx/dsk/node@1          481316    3596     429589      1%      /global/.devices/node@1
/dev/vx/dsk/rootdisk_24vol  481316    3596     429589      1%      /global/.devices/node@2

On the slave node, the disk device /dev/vx/dsk/m2000dg/fsdata is not mounted to the directory /export/home.

IV. Checking the Switchover of the m2000_rg Resource Group


To check the switchover of the m2000_rg resource group, proceed as follows:
1) Execute the following command on the master node to switch the m2000_rg resource group to the slave node:
# scswitch -z -g m2000_rg -h m2000svr-2
2) After the switchover, execute the following command on the slave node:
# df -k
Filesystem                  kbytes    used     avail     capacity  Mounted on
/dev/vx/dsk/rootvol         17331224  1311988  15845924    8%      /
/proc                       0         0        0           0%      /proc
fd                          0         0        0           0%      /dev/fd
mnttab                      0         0        0           0%      /etc/mnttab
swap                        9802008   136      9801872     1%      /var/run
swap                        9801936   64       9801872     1%      /tmp
/dev/vx/dsk/opt             8258597   351734   7824278     5%      /opt
/dev/vx/dsk/node@1          481316    3596     429589      1%      /global/.devices/node@1
/dev/vx/dsk/rootdisk_24vol  481316    3596     429589      1%      /global/.devices/node@2
/dev/vx/dsk/m2000dg/fsdata  64511275  9        63866154    1%      /export/home

The disk device /dev/vx/dsk/m2000dg/fsdata is mounted to the directory /export/home of the slave node.
3) Execute the following command on the master node:
# df -k
Filesystem                  kbytes    used     avail     capacity  Mounted on
/dev/vx/dsk/rootvol         17331224  1300975  15856937    8%      /
/proc                       0         0        0           0%      /proc
fd                          0         0        0           0%      /dev/fd
mnttab                      0         0        0           0%      /etc/mnttab
swap                        9798976   136      9798840     1%      /var/run
swap                        9799208   368      9798840     1%      /tmp
/dev/vx/dsk/opt             8258597   316749   7859263     4%      /opt
/dev/vx/dsk/node@1          481316    3596     429589      1%      /global/.devices/node@1
/dev/vx/dsk/rootdisk_24vol  481316    3596     429589      1%      /global/.devices/node@2

The disk device /dev/vx/dsk/m2000dg/fsdata is not mounted to the directory /export/home.
4) Execute the following command on the slave node to switch the m2000_rg resource group back to the master node:
# scswitch -z -g m2000_rg -h m2000svr-1
5) After the switchover, execute the command df -k again. Check whether the disk device /dev/vx/dsk/m2000dg/fsdata is mounted to the directory /export/home of the master node. If so, the installation is successful. Otherwise, you need to reinstall the cluster.


Chapter 8 Installing Sybase Database


This chapter introduces how to install the Sybase database.

8.1 Checks Before Installation


Before installing the Sybase database, perform the following checks:
1) Check whether the Sybase database has already been installed.
2) Check the version of the Sybase database.
3) Check the device information of the database.

For details, see section 3.3.5 "Checking the Sybase Database".

8.2 Preparations Before Installation


Perform the following operations on both the master and slave nodes.

8.2.1 Modifying the File .rhosts


Add the following line to the file /.rhosts of the master node:
m2000svr-2 root

Add the following line to the file /.rhosts of the slave node:
m2000svr-1 root

Note: The command rcp is used to copy data between the two nodes when synchronizing the software. The file /.rhosts must be modified on both the master and slave nodes so that the command rcp can take effect.
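You can verify that rcp works in both directions before continuing. The destination file name used below is only an example:

# rcp /etc/hosts m2000svr-2:/tmp/rcp_test    //run on the master node; must complete without a password prompt
# rcp /etc/hosts m2000svr-1:/tmp/rcp_test    //run on the slave node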

8.2.2 Creating the User Sybase


You can use either of the following methods to create the user sybase:
Log in to the system as root and run the admintool. Options for the new user: sybase (user ID: 101; shell: ksh; user directory: /opt/sybase; group: staff).
Or execute the following command:
# useradd -d /opt/sybase -u 101 -g staff -m -k /etc/skel -s /bin/ksh sybase


Note: The IDs of the user sybase on the master and slave nodes must be the same. By default, set both to "101". If "101" is already assigned to another user, you can select a different ID, but the IDs of the user sybase on the two servers must be identical.
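For example, you can confirm that the IDs match by running the following command on both nodes and comparing the output:

# id sybase
uid=101(sybase) gid=10(staff)    //example output; the uid must be identical on both nodes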

8.2.3 Setting the Properties of Raw Devices

Caution: Perform the following operation only on the master node. Before setting the properties of the raw devices of the Sybase database, make sure the m2000_rg resource group is active on the master node. (You can check this using the command scstat | grep m2000_rg.) If it is not, switch the m2000_rg resource group to the master node using the command scswitch -z -g m2000_rg -h m2000svr-1, and then execute the following commands.

Execute the following commands to set the properties of the raw devices:
# vxedit -g m2000dg -v set user=sybase group=staff mode=664 master
# vxedit -g m2000dg -v set user=sybase group=staff mode=664 sysproc
# vxedit -g m2000dg -v set user=sybase group=staff mode=664 data
# vxedit -g m2000dg -v set user=sybase group=staff mode=664 log
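You can then verify the new owner, group, and permissions of the raw devices, for example:

# ls -lL /dev/vx/rdsk/m2000dg/master /dev/vx/rdsk/m2000dg/sysproc /dev/vx/rdsk/m2000dg/data /dev/vx/rdsk/m2000dg/log
Each device should be owned by sybase:staff with mode 664 (crw-rw-r--), matching the vxedit commands above.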

8.2.4 Switching to the User Sybase and Editing the File .profile

Caution: Comment out the contents between the "if" statement and the "fi" statement in the file .profile. Otherwise, the server will exit the Sybase database automatically after startup. In the following statements, ${HOME} stands for /opt/sybase.

Execute the following commands to switch to the user sybase and edit the file /opt/sybase/.profile:
# su - sybase
$ vi .profile

PATH=/usr/bin:/usr/ucb:/etc:${HOME}/bin:${HOME}/install:.
export PATH
LANG=C
SYBASE=${HOME}
DSQUERY=sybserver
DSLISTEN=sybserver
DISPLAY=m2000-console:0.0
export LANG SYBASE DSQUERY DSLISTEN DISPLAY

8.2.5 Modifying the File /etc/system


Log in to the system as root and add the following contents at the end of the file /etc/system:
set shmsys:shminfo_shmmax=2147483648

//Calculation formula: 2147483648 = 2048 x 1024 x 1024 bytes, where 2048 is the memory size in MB (2 GB in this example). Calculate the value according to the actual memory size during the installation.
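If you want to derive the value from the memory actually installed, the following commands are one possible way to calculate it. They assume the "Memory size: ... Megabytes" line that prtconf normally prints on Solaris:

# MEM_MB=`prtconf | awk '/Memory size/ {print $3}'`
# expr $MEM_MB \* 1024 \* 1024       //for 2048 MB of memory this prints 2147483648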

8.2.6 Rebooting the Cluster


Execute the following command on either server to reboot the cluster after you complete the above operations:
# scshutdown -y -g0
Now, the master and slave nodes enter the "ok" state. Enter the command boot on the master and slave nodes to boot the system.
ok boot

8.3 Installing Sybase Database on the Master Node


This section introduces how to install the Sybase database on the master node.

8.3.1 Preparations Before Installation


Before installing the Sybase database on the master node, switch the m2000_rg resource group to the master node as follows:
1) Check which node is currently active:
# scstat -g
-- Resource Groups and Resources --

            Group Name     Resources
            ----------     ---------
 Resources: m2000_rg       m2000svr m2kdata_rs

-- Resource Groups --

            Group Name     Node Name      State
            ----------     ---------      -----
     Group: m2000_rg       m2000svr-1     Offline
     Group: m2000_rg       m2000svr-2     Online

If the status is "online", it indicates that the resource group is activated on the server. The above result shows the currently activated node is m2000svr-2 instead of m2000svr-1. 2) Execute the following command to switch the m2000_rg resource group to the m2000svr-1 server: # scswitch -S -h m2000svr-2

8.3.2 Disk Array Partition Plan


Table 8-1 provides the detailed disk array partition plan.

Table 8-1 Partitions of the Sun StorEdge3310
Partition No.  Partition name  Size (MB)  rdsk (raw device)/logical volume   Disk group
1              /export/home    64,000     /dev/vx/rdsk/m2000dg/fsdata        m2000dg
2              master          1,200      /dev/vx/rdsk/m2000dg/master        m2000dg
3              sysproc         200        /dev/vx/rdsk/m2000dg/sysproc       m2000dg
4              data            32,768     /dev/vx/rdsk/m2000dg/data          m2000dg
5              log             32,768     /dev/vx/rdsk/m2000dg/log           m2000dg

8.3.3 Installation Procedure


Proceed as follows to install the Sybase database on the master node:

I. Installing the Sybase Database


To install the Sybase database, proceed as follows:
1) Log in to the system as the user sybase and insert the CD of the Sybase database.
# su - sybase
2) Copy the files on the CD to the local host (m2000svr-1) and start installing the Sybase database.
$ /cdrom/cdrom0/sybload -D
Enter Y and press <Enter>.


Y, press <Enter>
>L, press <Enter> (local installation)
3) Open the MML window of another terminal and obtain the password from the CAS file in the directory /cdrom/cdrom0/install/. Execute the following commands:
#cd /cdrom/cdrom0/install
#more CAS
Suppose the obtained password is "bkbkbk".
4) Enter the installation file name.
/cdrom/cdrom0/sybimage, press <Enter>
Y, press <Enter>
Enter the password (bkbkbk) obtained in Step 3.
5) Select the software packages to be installed.
Select > 1 (press <Enter>) > 2(press <Enter>) > 3 (press <Enter>) > 4 (press <Enter>) > 5 (press <Enter>) > 6 (press <Enter>) > 7 (press <Enter>)

6) Confirm your selection.
Y (press <Enter>)

II. Creating the Sybase Database Server


To create the Sybase database server, proceed as follows:
1) In the MML window of another terminal, execute the following command as root (m2000svr is the host name):
# xhost +m2000svr
The system displays the following information:
access control disabled, clients can connect from any host

Then execute the following commands as sybase in the above window:
$ cd /opt/sybase/bin
$ srvbuild
2) Select all servers and change the adaptive server name to sybserver. In the dialog box, set the parameters according to the following table.


Item                                         Value
Master device: device path                   /dev/vx/rdsk/m2000dg/master
Master device: device size (MB)              1200
Master device: database size (MB)            300
sybsystemprocs: device path                  /dev/vx/rdsk/m2000dg/sysproc
sybsystemprocs: device size (MB)             200
sybsystemprocs: database size (MB)           200
Error log path                               /opt/sybase/install/sybserver.log (default value)
Host name                                    m2000svr

Note: The size of the database device (200 MB set in the above dialog box) depends on the size of the disk array partition in Table 8-1. The size varies with the partition. Besides, the host name in the subsequent dialog boxes must be set to m2000svr.

Click <Edit Advanced Adaptive Server Attributes>. Enter /opt/sybase/phase2.dat in the textbox Sybsystemdb (two-phase commit), and keep the default values in the other textboxes. Set the maximum number of network connections to 100, the maximum number of servers to 100, and the maximum number of connections to 100.
3) Click <OK>, and then click <build server> in the pop-up window to create the Sybase server.

III. Configuring the Database Environment


To configure the database environment, proceed as follows:
1) Set a password for the user sa.
2) Execute the following command as sybase in the terminal window:
$ isql -Usa
The system asks you to enter the password. Enter the password and press <Enter>.
1> sp_password null, server1234, sa
2> go
3) Create the m2000 database devices.


1> disk init name = "data_dev", physname = "/dev/vx/rdsk/m2000dg/data", vdevno = 3, size = 16384000
2> go
1> disk init name = "log_dev", physname = "/dev/vx/rdsk/m2000dg/log", vdevno = 4, size = 16384000
2> go

Caution: The size of the database device (16384000 above, expressed in 2 KB pages, which is about 32 GB) also depends on the size of the disk array partition in Table 8-1. It varies with the size of the partition.
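The "size" value is expressed in 2 KB pages, so one possible way to convert a partition size in MB into this value is the following command (32000 MB is only an example):

# expr 32000 \* 512     //1 MB = 512 pages of 2 KB; this prints 16384000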

4)

Set the default device to data_dev. 1 > sp_diskdefault data_dev, defaulton 2 > go

5)

Extend the size of tempdb to 1 GB. 1> alter database tempdb on master = 860 2> go

6)

Stop the Sybase server processes. 1> shutdown SYB_BACKUP 2> go 1> shutdown 2> go $

8.4 Installing Sybase Database on the Slave Node


This section introduces how to install the Sybase database on the slave node.

8.4.1 Installation Procedure


Proceed as follows to install the Sybase database on the slave node:
1) Pack the Sybase database on the master node and copy it to the slave node. Log in to the system as root and execute the following commands on the master node:
# cd /opt
# tar cvf sybase.tar sybase
# gzip sybase.tar
# rcp sybase.tar.gz m2000svr-2:`pwd`
2) Unpack the Sybase software on the slave node. Execute the following commands on the slave node:
# cd /opt
# gzcat sybase.tar.gz | tar xvf -

8.4.2 Checking Installation Result


After installing Sybase database on the slave node, check whether the installation is correct.

I. Switching the m2000_rg Resource Group to the Slave Node


Log in to the slave node as root and execute the following commands:
# scswitch -S -h m2000svr-1
# scstat -g
-- Resource Groups and Resources --

            Group Name     Resources
            ----------     ---------
 Resources: m2000_rg       m2000svr m2kdata_rs

-- Resource Groups --

            Group Name     Node Name      State
            ----------     ---------      -----
     Group: m2000_rg       m2000svr-1     Offline
     Group: m2000_rg       m2000svr-2     Online

If the system displays the above result, it indicates that the m2000_rg resource group is switched to the slave node successfully.

II. Starting the Sybase Database


Execute the following commands to start the Sybase database:
# su - sybase
$ cd ./install
$ startserver -f RUN_sybserver -f RUN_sybserver_back

III. Checking Whether the Sybase Processes are Normal


To check whether the Sybase processes are normal, proceed as follows:
1) Query the Sybase processes.
$ ps -ef|grep sybase


sybase .. /opt/sybase/bin/dataserver -Ssybserver .. sybase .. /opt/sybase/bin/backupserver -Ssybserver_back ..

If the system displays the above result, it indicates that the Sybase processes are successfully started.
2) Execute the following commands to check whether the Sybase SQL server can be connected:
$ isql -Usa -Pserver1234
1>
If the system displays "1>", it indicates that the Sybase server processes run normally on the slave node and the Sybase database is installed on the slave node. Otherwise, find out the failure cause and reinstall the Sybase database on the slave node.
3) Stop the Sybase processes and exit Sybase. Execute the following commands:
1> shutdown SYB_BACKUP
2> go
1> shutdown
2> go
$ exit


Chapter 9 Installing the M2000 Application


The installation of the M2000 application involves the following tasks:
1) Preparing for the installation of the M2000 application
2) Installing the M2000 application on the master node
3) Installing the M2000 application on the slave node
4) Installing the AdapterMonitor and Adapters

9.1 Preparations Before Installation


Perform the operations in this section on both the master and slave nodes.

9.1.1 Checking for Existing M2000 System


To check for an existing M2000 system, proceed as follows:
1) Log in to the system as root and check whether the M2000 system is installed.
# pkginfo |grep HWM2000
application HWM2000 Huawei iManager M2000

If the system displays the above result, it indicates that the M2000 system is installed.
2) Execute the following command to uninstall the software package of the M2000 server:
# pkgrm HWM2000
Do you want to remove this package? y Do you want to continue with the removal of this package [y,n,?,q] y

9.1.2 Checking the User m2000


To check whether the user m2000 is created, proceed as follows:

I. Checking Whether the User m2000 Is Created


Execute the following command: # cat /etc/passwd|grep "^m2000:"
m2000:x:102:10::/export/home/m2000:/bin/ksh

If there is no output for the above operation, it indicates that the user m2000 is not created. In this case, proceed as follows to create the user m2000.


II. Creating the User m2000 on the Master Node


To create the user m2000 on the master node, proceed as follows:
1) Switch the m2000_rg resource group to the master node. For details, see section 8.3.1 "Preparations Before Installation".
2) Use one of the following methods to create the user m2000 on the master node:
Log in to the system as root and run the admintool. Options for the new user are as follows: m2000 (user ID: 102; shell: ksh; user directory: /export/home/m2000; create home dir: yes; group: staff).
Or execute the following command:
# useradd -d /export/home/m2000 -u 102 -g staff -m -k /etc/skel -s /bin/ksh m2000

III. Creating the User m2000 on the Slave Node


Use one of the following methods to create the user m2000 on the slave node:
Log in to the system as root and run the admintool. Options for the new user are: m2000 (user ID: 102; shell: ksh; user directory: /export/home/m2000; create home dir: no; group: staff).
Or execute the following command:
# useradd -d /export/home/m2000 -u 102 -g staff -s /bin/ksh m2000

Caution: IDs of the user m2000 on the master and slave servers must be the same. To ensure this consistency, set both IDs to "102". If "102" is assigned to other users, select a different ID.

9.2 Installing the M2000 Application on the Master Node


This section introduces how to install the iManager M2000 system on the master node.

9.2.1 Switching the m2000_rg Resource Group to the Master Node


For details, see section 8.3.1 "Preparations Before Installation".


9.2.2 Preparing the M2000 Software Package


To prepare the M2000 software package, proceed as follows:
1) Download the installation software from the server to a PC.
2) Send the M2000 application package from the PC to the directory /export/home/m2000 through FTP (in bin mode).
3) Execute the following commands to decompress the software package:
# cd /export/home/m2000
# gzcat XXXX.tar.Z | tar xvf -    //XXXX is the version No., for example, "iManagerM2000V100R005B01D006"

Note: If the software of the M2000 server is on another PC, download the software package and send X.tar or X.tar.Z to the directory /export/home/m2000 through FTP (in bin mode). For example, send help.tar.Z from the directory C:\file of a PC to the directory /export/home/m2000 of the M2000 server using the following commands:
ftp> bin
ftp> lcd c:\file
ftp> cd /export/home/m2000
ftp> put help.tar.Z

9.2.3 Starting the Sybase Database Server


Proceed as follows to start the Sybase database server: 1) Execute the following command to check whether the sybase is running: # ps -ef|grep sybase
sybase .. /opt/sybase/bin/dataserver -Ssybserver .. sybase .. /opt/sybase/bin/backupserver -Ssybserver_back ..

If the output contains the two processes "dataserver" and "backupserver", the Sybase server is running. Otherwise, start the Sybase server processes.
2) Execute the following commands to start the Sybase processes:
# su - sybase
$ cd ./install
$ ./startserver -f ./RUN_sybserver -f ./RUN_sybserver_back


9.2.4 Installing the M2000 Application


During the installation of the M2000 application, you can use the graphic installation tool or the MML. Pay attention to the following information during installation:
Please input the local site's logical host ip address, it is 192.168.8.11 by default. Press return if you agree, >>>> Warning <<<< Please assure your input is correct!! The address like 127.0.0.1 is not recommended! [?,q] 192.168.8.10

Note: During the installation, you need to set the IP address of the server. The default IP address is the physical IP address of the local node, that is, 192.168.8.11. Change it to the logical IP address of the two nodes, that is, 192.168.8.10.

Besides, you need to prevent the boot script of the M2000 application in the directory /etc/rc2.d/ from running at startup, because the cluster is responsible for starting all the M2000 applications. For safety, it is recommended to rename the boot script rather than delete it, so that it no longer starts automatically. For example, rename S98m2000svr to s98m2000svr using the following commands:
#cd /etc/rc2.d
#mv S98m2000svr s98m2000svr

Caution: During the installation, the system prompts some questions about system setting. You can press <Enter> to select the default answers for these questions. Then the system copies files to complete the setting and sets up the database. Do not enter <y> for those questions that are not answered with yes or no. Otherwise, the character "y" instead of the default value is set.

To install the iManager M2000 application, proceed as follows: 1) Copy files to the server. #cd /export/home/m2000


#pkgadd -d . HWM2000 Then the system prompts the following information:


Processing package instance <HWM2000> from </export/home/m2000> Huawei iManager M2000 (sparc) release 1.0 Copyright 2001-2004 Huawei Technologies Co., Ltd. ****************************************************************** * * * * ******************************************************** NOW INSTALL HWM2000 SERVER version:iManager M2000V100R005BXXDXXX * * * *

2)

Select the mode of installation.


Installation mode: 1, initial install 2, update install please input [1,2] 1

Enter 1 or 2 as needed. If you are installing M2000 on your server for the first time, choose "1". If you want to keep the data of the previous installation, choose "2".
---- your choice is "initial install" ---Are you sure? [ no,yes] (Which default is yes, Press Return if you agree)

3)

Select the version type.


Please select version type: 1, English 2, Chinese Please input [1, 2] 1 ---- Your choice is "English version" ---Are you sure? [no, yes] (Which default is yes, Press Return if you agree)

Press <Enter>. 4) Select the ME type.


Please select ME type: 1, CDMA 2, GSM 3, CDMA&GSM Please input [1, 2, 3] 1 ---- Your choice is "CDMA" ---Are you sure? [ no, yes] (Which default is yes, Press Return if you agree)


5)

Select the installation module.


Please select module that you want to install: 1, switch 2, commonsvr 3, configsvr 4, alarmsvr 5, pmsvr 6, timersvr 7, ALL Press "1-7" to select,"return" to finish. >7

Select the modules to be installed and press <Enter> to complete the selection.
Your choice is below: 1, switch 2, commonsvrd 3, configsvrd 4, alarmsvrd 5, pmsvrd 6, timersvrd Are you sure? [no,yes] (Which default is yes, Press Return if you agree)

6)

Select the database installation environment.


Please provide database installation environment: 1, Lab install 2, Formal install please input [1,2] 2

Select the installation mode of the formal implementation environment.


---- your choice is "Formal install" ---Are you sure? [ no,yes] (Which default is yes, Press Return if you agree)

Press <Enter> to choose formal installation environment.

Caution: For the lab installation environment, reserve 2 GB of disk space for the database. For the formal installation environment, reserve 12 GB of disk space.

7)

Enter the IP address of the local site.


Please provide the local site's ip,
it is 192.168.8.10 by default. Press <Enter> if you agree, [?,q]

//To exit the installation, press "q". To choose the default value, press <Enter>. To modify the default value, enter a new value. Note that any other string except "q" and "Return" is accepted as the new value.
>>>>Warning<<<< You cant choose such IP as 127.0.0.1![?,q] Please input sybase sa's password Press return if your password is server1234 [?,q]

Press <enter>. 8) Select the NTP server mode.


Select ntp server mode: 1 -- master 2 -- slave >

Note: Here, the system asks you to select the NTP Server mode. The NTP Server can act as master server or slave server. The master server provides the NTP services and the slave server obtains the clock from the master server and synchronizes itself using this clock. If the external clock equipment such as BITS is available to provide the NTP service and the M2000 system must synchronize to the clock provided by this external clock equipment, select "2 slave". Otherwise, select "1 master".

If "2 -- slave" is selected, the system prompts:


Provide the IP address of BITS >

Note: You need to enter the IP address of the NTP master server (such as BITS) only when you select the "2 slave".

9)

Choose the installation method.


Please choose between typical installation and custom installation. >>>> Warning <<<< You can choose typical installation only if your install user is m2000.

Choices: t -- typical  c -- custom  [?,q] t

Select "t" to perform typical installation; "c" to perform customized installation; "q" to exit the installation. Press <Enter> to choose default value "typical".

Caution: Here, the system asks whether to choose default or custom installation. If you select the default values in all the previous settings, press <Enter> here to perform typical installation.

The following information is displayed:


Using </export/home/m2000> as the package base directory. ## Processing package information. ## Processing system information. //Package processing is on-going. //System information processing is on-going.

## Verifying disk space requirements. ## Checking for conflicts with packages already installed. ## Checking for setuid/setgid programs. This package contains scripts which will be executed with super-user permission during the process of installing this package.

If you select the custom installation, you need to answer the following questions:
Please provide the install path of software package, It is /export/home/m2000 by default. Press <Enter> if you agree [?,q] Please provide the sybase directory, It is /export/home/sybase by default. Press <Enter> if you agree [?,q] Please provide the sybase server's name, It is sybserver by default. It is in the user sybase's profile. Press <Enter> if you agree [?,q] Please provide the port number of switch, It is 6688 by default. Press <Enter> if you agree [?,q] Please provide the IP address of switch, It is 192.168.8.10 by default. Press <Enter> if you agree [?,q] Which path do you want to place your database backup files? It is /export/home/m2000/backup by default. [?,q]


The following prompts are the same for both typical and custom installations:
Using </export/home/m2000> as the package base directory. ## Processing package information. ## Processing system information. ## Verifying disk space requirements. ## Checking for conflicts with packages already installed. ## Checking for setuid/setgid programs. This package contains scripts which will be executed with super-user Permission during the process of installing this package. Do you want to continue with the installation of <HWM2000> [y,n,?] y Installing Huawei iManager M2000 as <HWM2000>

After the database is re-established, the system prompts that the installation of the M2000 server is successful. The installation of the M2000 server lasts about 150 minutes. Now, you can go on with the installation of the Adapter Monitor and Adapter of each NE. For details, see section 9.4 "Installing the Adapter Monitor and Adapters".
10) Check the installation results. After the installation, the system creates some files and directories for the m2000 user. See Table 9-1.

Table 9-1 Files and directories generated after installation
File or directory    Description
profile              User environment parameter definition file
bin                  Executable file directory
etc                  Configuration file directory. It is unchanged while the system is running and is not restored after an upgrade.
data                 Data file directory. It is changed while the system is running and must be restored after an upgrade.
log                  Log file directory
script               Database installation script file directory

11) After the installation, restart the server. #sync ; sync ; reboot

9.2.5 Checking Installation Result


After installing the iManager M2000 system, check whether the installation is correct. 1) Log in to the system as root and execute the following commands to start the M2000 server processes:


# cd /etc/init.d # ./m2000svr start 2) Check whether the server processes are successfully started using the following command: # ps -ef |grep m2000
root@m2000svr-1 # ps -ef |grep m2000
m2000 .. /export/home/m2000/bin/pmsvrd
m2000 .. /export/home/m2000/bin/configsvrd
m2000 .. /export/home/m2000/bin/timesvrd
m2000 .. /export/home/m2000/bin/timersvrd
m2000 .. /export/home/m2000/bin/timesvrd
m2000 .. /export/home/m2000/bin/smsvrd
m2000 .. /export/home/m2000/bin/switch
m2000 .. /export/home/m2000/bin/switch
m2000 .. /export/home/m2000/bin/commonsvrd
m2000 .. /export/home/m2000/bin/alarmsvrd
m2000 .. /export/home/m2000/bin/alarmsend
m2000 .. /export/home/m2000/bin/alarmsend

If the above processes exist, the M2000 system is successfully installed.

Note: After checking the installation of the M2000 system, stop the M2000 server processes and the Sybase server processes so that the applications can be registered later.

3)

Stop the M2000 server processes by executing the following command: # ./m2000svr stop

4)

Stop the Sybase server processes by executing the following commands: # su - sybase $ isql -Usa -Pserver1234 1> shutdown SYB_BACKUP 2> go 1> shutdown 2> go

9.3 Installing M2000 Application on the Slave Node


To install the iManager M2000 application on the slave node, proceed as follows:
1) Log in to the system as root.
2) Execute the following commands on the master node to copy the M2000 boot script from the master node to the slave node:
# cd /etc/init.d
# rcp m2000svr m2000svr-2:`pwd`

3)

After installing the iManager M2000 on the slave node, check whether the installation is correct as follows:
a) Switch the m2000_rg resource group to the slave node. See section 8.4.2 I. "Switching the m2000_rg Resource Group to the Slave Node".
b) Start the Sybase server processes. See section 8.4.2 II. "Starting the Sybase Database".
c) Check whether the M2000 is installed correctly. See section 9.2.5 "Checking Installation Result".

Note: After checking the installation of the M2000 system, stop the M2000 server processes and the Sybase server processes so that the applications can be registered later.

9.4 Installing the Adapter Monitor and Adapters


Note: Install the Adapter Monitor and adapters on both the master and slave nodes. The installation method on the master node is similar to that on the slave node.

Install the Adapter Monitor and Adapters on the master node by referring to Chapter 5, "Installation of Adapter Software" of iManager M2000 Installation Manual. You need to create the corresponding users on the slave node for the Adapter Monitor and Adapters of network elements (NEs). The user ID on the slave node must be the same as that on the master node. For example, after you install the Adapter of the cBSC on the master node, the installation program creates a user cbssadapter on the master node. Suppose the user ID is 60004 (queried by using the command id cbssadapter), create the same user on the slave node using the following command:


# useradd -d /export/home/cbssadapter -g staff -u 60004 -s /bin/ksh cbssadapter

Caution: During the installation of the AdapterMonitor and Adapters, enter m2000svr as the server name, and 192.168.8.10 as the logical IP address where needed.


Chapter 10 Registering Sybase and M2000 Applications


After installing the Sybase and M2000 applications, you need to register them on the SC.

10.1 Installing the Application Package


Execute the following commands on the master and slave nodes to install the application package: # cd /opt/m2kv1/cluster # ./install
Do you want to continue with the installation of <HWM2kapp> [y,n,?] y Do you want to continue with the installation of <HWSybase> [y,n,?] y

10.2 Registering the Applications on the Master Node


Execute the following commands on the master node: # cd /opt/m2kv1/cluster # ./register
Registering resource type HW.M2kapp .... Registering resource type HW.Sybase .... Registering resource m2kapp_rs ... Registering resource sybase_rs ... Enable Sybase application... Enable M2000 application... Resource registration finished.

10.3 Checking the Registration


After registering the Sybase and M2000 applications, check whether the registration is successful.

10.3.1 Checking Resource Registration


To check whether the resources are registered, query the status of the current resources using the following command:

# scstat -g
-- Resource Groups and Resources --

            Group Name      Resources
            ----------      ---------
 Resources: m2000_rg        m2000svr m2kdata_rs m2kapp_rs sybase_rs

-- Resource Groups --

            Group Name      Node Name      State
            ----------      ---------      -----
     Group: m2000_rg        m2000svr-1     Offline
     Group: m2000_rg        m2000svr-2     Online

-- Resources --

            Resource Name   Node Name      State      Status Message
            -------------   ---------      -----      --------------
  Resource: m2000svr        m2000svr-1     Offline    Offline - LogicalHostname offline.
  Resource: m2000svr        m2000svr-2     Online     Online - LogicalHostname online.
  Resource: m2kdata_rs      m2000svr-1     Offline    Offline
  Resource: m2kdata_rs      m2000svr-2     Online     Online
  Resource: m2kapp_rs       m2000svr-1     Offline    Offline
  Resource: m2kapp_rs       m2000svr-2     Online     Online
  Resource: sybase_rs       m2000svr-1     Offline    Offline
  Resource: sybase_rs       m2000svr-2     Online     Online

The above output shows that the Sybase and M2000 applications are successfully registered and that their resources are online (on m2000svr-2 in this example).

10.3.2 Checking Sybase Processes


Note: Check whether the Sybase processes are running on the node where the resource sybase_rs is currently online. In the example in section 10.3.1 "Checking Resource Registration", the current active node is m2000svr-2.


Perform the following operations on the current active node m2000svr-2.

I. Checking Whether the Sybase Processes Are Running


Execute the following commands to check the Sybase processes: # su - sybase $ ps -ef|grep sybase
sybase .. /opt/sybase/bin/dataserver -ssybserver .. sybase .. /opt/sybase/bin/backupserver -Ssybserver_back ..

II. Checking Whether You can Log In to the Sybase SQL Server
If the system displays the above execution result, it indicates the Sybase processes are successfully started. Then execute the following commands to check whether the Sybase SQL server can be connected: $ isql -Usa -Pserver1234 1> If "1>" appears, it indicates the Sybase server processes run normally on the slave node, and the Sybase is installed on the slave node. Exit the Sybase SQL server. 1> exit

10.3.3 Checking M2000 Processes


Note: Check whether the M2000 processes are running on the node where the M2000 resource m2kapp_rs is currently online. In the example in section 10.3.1 "Checking Resource Registration", the current active node is m2000svr-2.

Check the M2000 processes on m2000svr-2 according to the instructions in section 9.2.5 "Checking Installation Result".

10.3.4 Checking the Switchover Function


Proceed as follows to check the switchover function.

I. Performing the Switchover Operation


Switch the resource group from the active node (m2000svr-2) to the slave node (m2000svr-1) using the following command: # scswitch -S -h m2000svr-2


After the operation, check the status of the current resources using the following command:
# scstat -g

-- Resource Groups and Resources --

            Group Name      Resources
            ----------      ---------
 Resources: m2000_rg        m2000svr m2kdata_rs m2kapp_rs sybase_rs

-- Resource Groups --

            Group Name      Node Name      State
            ----------      ---------      -----
     Group: m2000_rg        m2000svr-1     Online
     Group: m2000_rg        m2000svr-2     Offline

-- Resources --

            Resource Name   Node Name      State      Status Message
            -------------   ---------      -----      --------------
  Resource: m2000svr        m2000svr-1     Online     Online - LogicalHostname online.
  Resource: m2000svr        m2000svr-2     Offline    Offline - LogicalHostname offline.
  Resource: m2kdata_rs      m2000svr-1     Online     Online
  Resource: m2kdata_rs      m2000svr-2     Offline    Offline
  Resource: m2kapp_rs       m2000svr-1     Online     Online
  Resource: m2kapp_rs       m2000svr-2     Offline    Offline
  Resource: sybase_rs       m2000svr-1     Online     Online
  Resource: sybase_rs       m2000svr-2     Offline    Offline

In the above result, the status of m2000svr-1 is "online". It indicates the resource group is successfully switched over.

II. Checking the Applications on m2000svr-1 after the Switchover


Check whether the Sybase processes run normally on m2000svr-1. See section 10.3.2 "Checking Sybase Processes". Check whether the M2000 processes run normally on m2000svr-1. See section 10.3.3 "Checking M2000 Processes".


III. Checking the Applications on m2000svr-2 after the Switchover


Execute the following command to check whether the Sybase processes are stopped:
$ ps -ef|grep sybase
If the output still contains the following processes, the Sybase processes are not stopped after the switchover. Find out the reason and reinstall the Sybase applications.
sybase .. /opt/sybase/bin/dataserver -ssybserver .. sybase .. /opt/sybase/bin/backupserver -Ssybserver_back ..

Execute the following command to check whether the M2000 processes are stopped:
# ps -ef |grep m2000
If the output still contains the following processes, or some of them, the M2000 processes are not stopped after the switchover. Find out the reason and reinstall the M2000 applications.
m2000 .. /export/home/m2000/bin/pmsvrd m2000 .. /export/home/m2000/bin/configsvrd m2000 .. /export/home/m2000/bin/timesvrd m2000 .. /export/home/m2000/bin/timersvrd m2000 .. /export/home/m2000/bin/smsvrd m2000 .. /export/home/m2000/bin/switch m2000 .. /export/home/m2000/bin/commonsvrd m2000 .. /export/home/m2000/bin/alarmsvrd m2000 .. /export/home/m2000/bin/alarmsend


Appendix A Modifying IP Addresses


This appendix introduces the methods to plan and modify IP addresses and precautions during modification.

A.1 Introduction
The iManager M2000 hot backup system is commissioned before delivery. The IP addresses of the hot backup servers used in the following description are allocated during the pre-installation. If you need to modify the IP addresses according to the situation on site, execute the command ifconfig -a to obtain the current IP addresses of the system. If the IP addresses do not meet the requirements of the LAN, modify them without reinstalling the operating system. For details, see section A.2 "Modifying the IP Address of Solaris 8, Sybase and M2000".

I. Planning the IP Addresses


Plan the IP addresses of the hot backup system according to the network configuration of the operator. You need to modify the IP address/subnet mask column in Table A-1 according to the network planning of the operator. The example in the table gives the default configuration.

Table A-1 IP address planning (default)
Item                      IP address planning                                        IP address/subnet mask
Administrative console    One IP address for each subnet                             192.168.8.236, 255.255.255.0
TC                        One IP address for each subnet                             192.168.8.244, 255.255.255.0
Logical IP address        The master and slave nodes share one logical IP address.   192.168.8.10, 255.255.255.0
Master node IP address    Physical IP address of the master node                     192.168.8.11, 255.255.255.0
Slave node IP address     Physical IP address of the slave node                      192.168.8.12, 255.255.255.0

II. Modifying the IP Addresses


Modify the IP addresses according to the IP address plan. You need to modify the following files:


The file /etc/hosts of the master and slave nodes, where the corresponding host names must not be modified.
The file /opt/sybase/interfaces of the master and slave nodes, where the IP address must be the logical IP address.
The related configuration files of the M2000, where the IP address must be the logical IP address.
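For reference, a minimal /etc/hosts sketch using the default addresses from Table A-1 is shown below. The node names m2000svr-1 and m2000svr-2 are the ones used elsewhere in this manual; the logical host name m2000svr is only an illustrative assumption. The existing host names on site must be kept unchanged, so only the address column is edited.
# Illustrative /etc/hosts entries (host names are examples only)
127.0.0.1       localhost
192.168.8.10    m2000svr      # logical IP address shared by the two nodes
192.168.8.11    m2000svr-1    # physical IP address of the master node
192.168.8.12    m2000svr-2    # physical IP address of the slave node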

A.2 Modifying the IP Address of Solaris 8, Sybase and M2000


Perform the following operations only when it is necessary to modify the IP addresses of the server. If the IP addresses change, modify the IP addresses of the server in the operating system, the Sybase database, and the M2000 application first. Then you can install the server.

I. Modifying the IP Address of the Solaris 8 Operating System (for the Master and Slave Nodes)
To modify the IP address in the Solaris 8 operating system,
1) Modify the IP address in the file /etc/hosts.
2) Modify the subnet mask in the file /etc/netmasks.

Note: These are static configuration files, so the modified IP address and subnet mask take effect only after the computer reboots.
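As a minimal sketch, and assuming the default class C subnet from Table A-1, the corresponding /etc/netmasks entry would be the single line below; replace the network number and mask with the values planned for the site.
# Illustrative /etc/netmasks entry (network number, then netmask)
192.168.8.0     255.255.255.0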

II. Modifying the IP Address in Sybase Database (for the Master and Slave Nodes)
To modify the IP address in the Sybase database,
1) Modify the file /opt/sybase/interfaces.
Modify the corresponding IP address in the file /opt/sybase/interfaces. Otherwise, the database system cannot connect to the M2000 system normally. To be specific, the IP address is stored in hexadecimal format on the lines containing the character string /dev/tcp: the eight hexadecimal characters starting from the ninth character after \x encode the IP address.
For example, to edit the interfaces file, execute the following command:
$ vi interfaces
master tli tcp /dev/tcp \x000210040a811b760000000000000000
query tli tcp /dev/tcp \x000210040a811b760000000000000000


Here, 0a811b76 is the hexadecimal form of 10.129.27.118. If you want to change the IP address to 10.130.5.1, modify 0a811b76 to 0a820501.
2) Reboot the Sybase database.
After modifying the interfaces file, reboot the Sybase database:
# /etc/rc2.d/S97sybsvr stop
# /etc/rc2.d/S97sybsvr start
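If you prefer not to convert each octet by hand, the standard printf utility available in the Solaris shell can produce the hexadecimal string. This is only a convenience sketch, using the example address 10.130.5.1 from above:
$ printf '%02x%02x%02x%02x\n' 10 130 5 1
0a820501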

III. Modifying the IP Address in M2000 System (for the Current Active Node)
To modify the IP address in the M2000 system,
1) Execute the following command:
# su m2000
2) Modify the following line in the file .profile:
OMCSWITCHIP=10.129.27.118
3) Execute the following commands:
# su adaptermonitor
$ cd etc
Then modify the following line in the file adaptermonitor.cfg:
SwitchIP = 10.129.27.118
4) Modify the SwitchIP in the file adapter.cfg of all the NE Adapters to the correct IP address.
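If you want to script these edits instead of editing the files by hand, the following is a minimal sketch. It assumes the m2000 user's home directory is /export/home/m2000 (as elsewhere in this manual), that the hypothetical new logical IP address is 10.130.5.1, and that each command is run as the user owning the file, after backing it up; Solaris sed has no in-place option, so the output is written to a temporary file first.
$ sed 's/^OMCSWITCHIP=.*/OMCSWITCHIP=10.130.5.1/' /export/home/m2000/.profile > /tmp/.profile.new
$ cp /tmp/.profile.new /export/home/m2000/.profile
$ sed 's/^SwitchIP = .*/SwitchIP = 10.130.5.1/' adaptermonitor.cfg > /tmp/adaptermonitor.cfg.new
$ cp /tmp/adaptermonitor.cfg.new adaptermonitor.cfg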

IV. Rebooting the System


Execute the following command to reboot the system:
# sync;sync; scshutdown -y -g0
The two nodes enter the ok state. Enter boot at the ok prompt on both the master and slave nodes to start the hot backup system.
ok boot <Enter>

A.3 Modifying Two IP Addresses of the Same Server


Sometimes, the same server may have two IP addresses: one connects to the public network and the other to the private network. In this case, change all the IP address fields in the file /opt/sybase/interfaces to 0s.
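Following the hexadecimal format described in section A.2, the resulting entries would look like the sketch below; the port field shown (1004) is the one from the earlier example and may differ on site, and only the eight IP characters are set to 0.
master tli tcp /dev/tcp \x00021004000000000000000000000000
query tli tcp /dev/tcp \x00021004000000000000000000000000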

A.4 Precautions
Pay attention to the following two aspects.


After installing the M2000 server, do not delete the user m2000 unless necessary. Otherwise, the server processes may be abnormal. For further information, contact the technical support engineers.
After installing the M2000 server, you need to add the workstation (the M2000 server) on the BAM of the NEs that support MML commands, and allocate the highest authority to the workstation. Otherwise, the M2000 centralized configuration console cannot obtain all the configuration data from the BAM.
To complete the above tasks on the BAM,
1) Add the workstation on the BAM.
Execute the command ADD WS to add the workstation. Specify the name and IP address of the M2000 server in the list box.
2) Allocate the highest authority to the workstation on the BAM.
Execute the command SET WSCG: WS="***", CG=G_SYS as the system administrator. //*** stands for the name of the M2000 server.


Index

A
adding m2000dg to disk group, 7-5

C
checking cluster status, 5-24
checking database equipment information, 3-6
checking disk array mirroring, 7-14
checking disk volume status, 7-14
checking for existing M2000 system, 9-1
checking M2000 application installation result, 9-9
checking M2000 volume mirroring, 7-14
checking mount status volume fsdata, 7-15
checking NAFO group configuration, 5-25
checking NAFO group creation, 5-21
checking resource registration, 10-1
checking root disk mirroring, 7-14
checking Solaris 8, 3-2
checking Sun Cluster installation, 5-17
checking Sun Solaris 8 installation result, 4-18
checking switchover m2000_rg resource group, 7-16
checking Sybase database, 3-5
checking Sybase database version, 3-6
checking Sybase process, 10-2
checking Volume Manager installation, 6-8
configuring cluster
  checking installation result, 7-13
configuring database environment, 8-6
configuring disk array, 3-18
configuring NAFO group, 5-19
configuring quorum device, 5-21
configuring Sun Fire V880, 3-36
configuring Sun Solaris 8 kernel, 4-16
configuring Sun Solaris operating system, 4-17
configuring Sun StorEdge3310s, 3-17
configuring TC, 3-13
configuring TC IP address, 3-14
configuring TC port parameter, 3-16
connecting disk array, 2-1
connecting hot backup system, 2-1
connecting TC, 3-13
creating m2000_rg
  viewing disk information, 7-2
creating m2000_rg resource group, 7-8
creating m2000dg disk group, 7-3
creating m2000dg disk volume, 7-3
creating mount point, 7-8
creating NAFO group, 5-20
creating new file system volume fadata, 7-8
creating Sybase database server, 8-5
creating user sybase, 8-1

D
disk array partition plan, 8-4

E
editing file .profile, 8-2

H
hardware configuration, 1-2
hardware description
  Quidway S3026, 1-7
  Sun Fire V880, 1-3
  Sun StorEdge 3310, 1-4
hot backup system
  feature, 1-1
  installation procedure, 3-1
  networking, 1-1

I
installing adapter, 9-11
installing adapter monitor, 9-11
installing application package, 10-1
installing extended Sun Solaris patch, 4-13
installing patches of Solaris 8
  rebooting operating system, 4-15
  rebooting system, 4-16
installing script, 4-16
installing SCSI card driver, 4-15
installing SCSI card driver patch, 4-16
installing Solaris 8, 4-8
installing standard .profile file, 4-11
installing Sun Cluster
  checking installation result, 5-24
  configuring file, 5-19
installing Sun Cluster patch, 5-11
installing Sun Solaris 8
  booting system, 4-1
  identifying system, 4-1
  installation procedure, 4-1
installing Sun Solaris recommended patch, 4-13
installing SUNWSan patch, 4-15
installing SUNWSan software package, 4-14
installing Volume Manager, 6-2
introduction to NAFO, 5-19

L
label description of HA cable, 2-1

M
master node
  configuring Sun Cluster, 5-1
  creating user m2000, 9-2
  installing M2000 application, 9-4
  installing Sun Cluster, 5-1
  installing Sybase database, 8-4
mirroring root disk, 7-1
modifying file .rhosts, 8-1
modifying IP address
  M2000, A-3
  rebooting system, A-3
  Solaris 8, A-2
  Sybase database, A-2
modifying two IP addresses, A-3

N
naming server, 3-7

P
performing switchover operation, 10-3
planning component, 3-11
planning disk array, 3-10
planning hard disk partition, 3-8
planning installation resource
  planning IP address, 3-7
planning Solaris 8, 4-6
planning Sun Cluster environment
  planning IP, 3-11
preparation before Sybase database installation
  modifying file, 8-3
preparing Sun Fire V880, 3-36

R
rebooting cluster, 8-3
registering application, 10-1

S
setting network parameter, 4-3
setting raw device property, 8-2
slave node
  checking Sybase process, 8-8
  creating user m2000, 9-2
  installing M2000 application, 9-10
  installing Sun Cluster, 5-11
  switching m2000_rg resource group, 8-8
  Sybase database installation procedure, 8-7
software configuration, 3-1
software preparations, 3-37
starting Sybase database, 8-8
starting Sybase database server, 9-3
switching to user sybase, 8-2

T
testing TC IP address, 3-16

